diff --git a/DOCS.md b/DOCS.md
new file mode 100644
index 0000000..6c08a85
--- /dev/null
+++ b/DOCS.md
@@ -0,0 +1,171 @@
+# VisionForge Documentation
+
+This directory contains the complete user documentation for VisionForge, built with MkDocs.
+
+## 📚 Documentation Overview
+
+The documentation provides comprehensive guides for building neural networks visually using VisionForge's drag-and-drop interface.
+
+### 🎯 Key Sections
+
+- **Getting Started** - Installation and quick start guide
+- **Architecture Design** - Creating diagrams and connection rules
+- **Layer Reference** - Complete guide to all available layers
+- **Examples** - Step-by-step tutorials for common architectures
+- **Code Generation** - Export to PyTorch and TensorFlow
+- **API Reference** - Backend and frontend API documentation
+- **Advanced Topics** - Group blocks, AI assistant, sharing
+- **Troubleshooting** - Common issues and solutions
+
+## 🚀 Quick Start
+
+### 1. Install Dependencies
+
+```bash
+pip install mkdocs mkdocs-material mkdocs-mermaid2-plugin mkdocs-video mkdocs-glightbox
+```
+
+### 2. Serve Documentation
+
+```bash
+mkdocs serve
+```
+
+The documentation will be available at `http://127.0.0.1:8000`.
+
+### 3. Build for Production
+
+```bash
+mkdocs build
+```
+
+Built files will be in the `site/` directory.
+
+## 📖 Documentation Structure
+
+```
+docs/
+├── index.md                     # Main landing page
+├── getting-started/
+│   ├── installation.md          # Setup instructions
+│   ├── quickstart.md            # First neural network
+│   └── interface.md             # UI overview
+├── architecture/
+│   ├── creating-diagrams.md     # Visual building guide
+│   ├── connection-rules.md      # Layer compatibility
+│   ├── shape-inference.md       # Tensor dimensions
+│   └── validation.md            # Error checking
+├── layers/
+│   ├── input.md                 # Input layer types
+│   ├── core.md                  # Basic neural network layers
+│   ├── activation.md            # Activation functions
+│   ├── pooling.md               # Pooling operations
+│   ├── merge.md                 # Combining paths
+│   └── advanced.md              # Specialized layers
+├── examples/
+│   ├── simple-cnn.md            # Basic CNN tutorial
+│   ├── resnet.md                # Skip connections
+│   ├── lstm.md                  # Sequence modeling
+│   └── group-blocks.md          # Custom components
+├── codegen/
+│   ├── pytorch.md               # PyTorch export
+│   ├── tensorflow.md            # TensorFlow export
+│   └── custom-templates.md      # Custom code generation
+├── api/
+│   ├── rest-api.md              # Backend API
+│   └── node-definitions.md      # Layer specifications
+├── advanced/
+│   ├── group-blocks.md          # Reusable components
+│   ├── ai-assistant.md          # Natural language help
+│   └── sharing.md               # Project collaboration
+└── troubleshooting/
+    ├── common-issues.md         # Frequently asked questions
+    ├── validation-errors.md     # Architecture validation
+    └── performance.md           # Optimization tips
+```
+
+## 🎨 Features
+
+### Interactive Elements
+- **Mermaid diagrams** for architecture visualization
+- **Code highlighting** for generated examples
+- **Responsive design** for all devices
+- **Search functionality** for quick navigation
+
+### Navigation
+- **Tabbed navigation** by topic
+- **Expandable sections** for detailed content
+- **Breadcrumbs** for location tracking
+- **Related links** for guided learning
+
+### Visual Aids
+- **Color-coded layers** by category
+- **Connection diagrams** showing valid paths
+- **Shape progression tables** for tensor tracking
+- **Validation indicators** for error checking
+
+## 🔧 Configuration
+
+The documentation uses the Material theme with these features:
+
+- **Dark/light mode** toggle
+- **Code syntax highlighting**
+- **Mermaid diagram support**
+- **Responsive design**
+- **Search functionality**
+- **Social links**
+
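
These features map onto `mkdocs.yml` roughly as follows. This is a sketch only; the authoritative options live in this repository's `mkdocs.yml`, and the plugin names here are assumptions:

```yaml
theme:
  name: material
  palette:
    - scheme: default          # light mode
      toggle:
        icon: material/brightness-7
        name: Switch to dark mode
    - scheme: slate            # dark mode
      toggle:
        icon: material/brightness-4
        name: Switch to light mode
  features:
    - navigation.tabs
    - navigation.sections
    - search.suggest

markdown_extensions:
  - pymdownx.highlight         # code syntax highlighting
  - pymdownx.superfences       # required for Mermaid fences

plugins:
  - search
  - mermaid2
```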
+## 📝 Contributing
+
+When updating documentation:
+
+1. **Follow the existing style** and structure
+2. **Use Mermaid diagrams** for visual explanations
+3. **Include code examples** with proper highlighting
+4. **Add cross-references** to related topics
+5. **Test all links** and examples
+
+### Style Guidelines
+
+- Use **clear headings** and subheadings
+- Include **emoji** for visual hierarchy
+- Provide **step-by-step** instructions
+- Add **validation checklists** where appropriate
+- Include **common pitfalls** and solutions
+
+## 🚀 Deployment
+
+### GitHub Pages
+```bash
+# gh-deploy ships with MkDocs itself; no extra package is required
+mkdocs gh-deploy
+```
+
+### Custom Domain
+Update `mkdocs.yml` with your domain:
+```yaml
+site_url: https://your-domain.com/docs
+```
+
+## 📊 Analytics
+
+Add Google Analytics by updating `mkdocs.yml`:
+```yaml
+extra:
+  analytics:
+    provider: google
+    property: G-XXXXXXXXXX
+```
+
+## 🔗 External Resources
+
+- [MkDocs Documentation](https://www.mkdocs.org/)
+- [Material Theme](https://squidfunk.github.io/mkdocs-material/)
+- [Mermaid Diagram Syntax](https://mermaid-js.github.io/)
+
+---
+
+**For VisionForge usage**, see the [main documentation](docs/index.md)
diff --git a/docs/architecture/connection-rules.md b/docs/architecture/connection-rules.md
new file mode 100644
index 0000000..5541102
--- /dev/null
+++ b/docs/architecture/connection-rules.md
@@ -0,0 +1,306 @@
+# Layer Connection Rules
+
+Understanding which layers can connect to each other is crucial for building valid neural network architectures. This guide covers all connection rules and shape compatibility requirements.
+
+## 🎯 Overview
+
+VisionForge enforces strict connection rules to ensure architectural validity. Connections are validated based on:
+- **Tensor shape compatibility**
+- **Layer type constraints**
+- **Framework-specific requirements**
+
+## 📊 Tensor Dimension Notation
+
+We use the following notation for tensor shapes:
+
+| Dimension | Meaning | Example |
+|-----------|---------|---------|
+| **N** | Batch size | 32, 64, 1 |
+| **C** | Channels | 3 (RGB), 64 (feature maps) |
+| **H** | Height | 224, 512 |
+| **W** | Width | 224, 512 |
+| **D** | Depth | 16 (for 3D conv) |
+| **L** | Sequence Length | 128, 256 |
+| **F** | Features | 512, 1024 |
+
+## 🔗 Core Connection Rules
+
+### 1. Input Layer Rules
+
+**Input → Convolutional**
+```
+Input: [N, C_in, H, W] → Conv2D: [N, C_out, H', W']
+```
+✅ **Valid**: Any 4D tensor
+- `C_in` must match input channels
+- `H, W` can be any size
+- Output computed from kernel, stride, padding
+
+**Input → Linear**
+```
+Input: [N, F_in] → Linear: [N, F_out]
+```
+✅ **Valid**: 2D tensor [batch, features]
+❌ **Invalid**: 4D tensor (needs Flatten first)
+
+**Input → LSTM/GRU**
+```
+Input: [N, L, F_in] → LSTM: [N, L, F_hidden]
+```
+✅ **Valid**: 3D sequence tensor
+- `L` = sequence length
+- `F_in` = input features
+
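The input rules above can be sanity-checked directly in PyTorch. A minimal sketch (the layer sizes are illustrative, not VisionForge defaults):

```python
import torch
import torch.nn as nn

# 4D image tensor -> Conv2D
x_img = torch.randn(1, 3, 224, 224)              # [N, C, H, W]
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
y_img = conv(x_img)                              # [1, 64, 224, 224]

# 2D feature tensor -> Linear (a 4D tensor would need Flatten first)
x_flat = torch.randn(1, 512)                     # [N, F]
fc = nn.Linear(512, 10)
y_flat = fc(x_flat)                              # [1, 10]

# 3D sequence tensor -> LSTM
x_seq = torch.randn(1, 128, 32)                  # [N, L, F]
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
y_seq, _ = lstm(x_seq)                           # [1, 128, 64]

print(y_img.shape, y_flat.shape, y_seq.shape)
```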
+### 2. Convolutional Layer Rules
+
+**Conv2D → Conv2D**
+```
+Conv2D: [N, C_in, H, W] → Conv2D: [N, C_out, H', W']
+```
+✅ **Valid**: Same number of dimensions
+- `C_in` must match previous `C_out`
+- Spatial dims can change based on kernel/stride
+
+**Conv2D → Activation**
+```
+Conv2D: [N, C, H, W] → ReLU: [N, C, H, W]
+```
+✅ **Valid**: Element-wise operations preserve shape
+
+**Conv2D → Pooling**
+```
+Conv2D: [N, C, H, W] → MaxPool2D: [N, C, H', W']
+```
+✅ **Valid**: Same channel count
+- Spatial dims reduced by pooling
+
+**Conv2D → Flatten**
+```
+Conv2D: [N, C, H, W] → Flatten: [N, C×H×W]
+```
+✅ **Valid**: Any 4D tensor
+- Collapses all but batch dimension
+
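One way to verify these rules is to chain the layers in PyTorch and watch the shapes evolve (sizes here are illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)

conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)    # 3 -> 64 channels
conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)  # C_in matches previous C_out
pool = nn.MaxPool2d(kernel_size=2, stride=2)                    # halves H and W
flatten = nn.Flatten()                                          # collapses all but batch dim

x = torch.relu(conv1(x))   # [1, 64, 224, 224] (activation preserves shape)
x = pool(x)                # [1, 64, 112, 112]
x = torch.relu(conv2(x))   # [1, 128, 112, 112]
x = flatten(x)             # [1, 1605632] = [1, 128*112*112]
print(x.shape)
```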
+### 3. Linear Layer Rules
+
+**Linear → Linear**
+```
+Linear: [N, F_in] → Linear: [N, F_out]
+```
+✅ **Valid**: `F_in` must match previous `F_out`
+
+**Linear → Activation**
+```
+Linear: [N, F] → ReLU: [N, F]
+```
+✅ **Valid**: Element-wise preserves shape
+
+**Linear → Dropout**
+```
+Linear: [N, F] → Dropout: [N, F]
+```
+✅ **Valid**: Preserves shape during training
+
+### 4. Recurrent Layer Rules
+
+**LSTM → LSTM**
+```
+LSTM: [N, L, F_in] → LSTM: [N, L, F_out]
+```
+✅ **Valid**: Same sequence length
+- `F_in` must match previous hidden size
+
+**LSTM → Linear**
+```
+LSTM: [N, L, F] → Linear: [N, L, F_out]
+```
+✅ **Valid**: Apply to each time step
+- Or use only last time step
+
+**Embedding → LSTM**
+```
+Embedding: [N, L] → LSTM: [N, L, F_emb]
+```
+✅ **Valid**: Indices to dense vectors
+- `F_emb` = embedding dimension
+
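The Embedding → LSTM rule can be sketched in PyTorch like this (vocabulary and hidden sizes are arbitrary examples):

```python
import torch
import torch.nn as nn

tokens = torch.randint(0, 10000, (1, 128))     # [N, L] integer token indices
embed = nn.Embedding(num_embeddings=10000, embedding_dim=256)
lstm = nn.LSTM(input_size=256, hidden_size=512, batch_first=True)

emb = embed(tokens)          # [1, 128, 256]: indices become dense vectors
out, (h_n, c_n) = lstm(emb)  # [1, 128, 512]: input_size matches embedding_dim
print(emb.shape, out.shape)
```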
+## 🔄 Merge Operation Rules
+
+### Add Operation
+**Requirements**:
+- Same tensor shape
+- Element-wise addition
+
+```mermaid
+graph LR
+ A[Conv2D: N,C,H,W] --> C[Add: N,C,H,W]
+ B[Conv2D: N,C,H,W] --> C
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e8f5e8,stroke:#4caf50
+```
+
+✅ **Valid**: Same shape tensors
+❌ **Invalid**: Different shapes or dimensions
+
+### Concatenate Operation
+**Requirements**:
+- Same dimensions except concat axis
+- Specified concat dimension
+
+```mermaid
+graph LR
+ A[Conv2D: N,64,H,W] --> C[Concat: N,128,H,W]
+ B[Conv2D: N,64,H,W] --> C
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e8f5e8,stroke:#4caf50
+```
+
+✅ **Valid**: Concat along channel dimension
+✅ **Valid**: Concat along feature dimension
+❌ **Invalid**: Different spatial dimensions
+
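Both merge rules can be demonstrated in a few lines of PyTorch:

```python
import torch

a = torch.randn(1, 64, 56, 56)
b = torch.randn(1, 64, 56, 56)

added = a + b                      # Add: shapes must match exactly -> [1, 64, 56, 56]
merged = torch.cat([a, b], dim=1)  # Concat along channels -> [1, 128, 56, 56]

# Mismatched spatial dimensions are rejected
c = torch.randn(1, 64, 28, 28)
try:
    torch.cat([a, c], dim=1)
except RuntimeError as err:
    print("invalid concat:", err)
```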
+## 📋 Connection Validity Matrix
+
+| From \ To | Input | Conv2D | Linear | LSTM | Add | Concat | Flatten |
+|-----------|-------|--------|--------|------|-----|---------|---------|
+| **Input** | ❌ | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ |
+| **Conv2D** | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ |
+| **Linear** | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ |
+| **LSTM** | ❌ | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **Add** | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **Concat** | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **Flatten** | ❌ | ❌ | ✅ | ❌ | ✅ | ✅ | ❌ |
+
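The matrix above can be encoded as a simple lookup table. This is a hypothetical helper mirroring the table, not VisionForge's actual validation API; note it checks layer types only, while shape compatibility is validated separately:

```python
# Hypothetical type-level connection table mirroring the matrix above
VALID_TARGETS = {
    "Input":   {"Conv2D", "Linear", "LSTM"},
    "Conv2D":  {"Conv2D", "Add", "Concat", "Flatten"},
    "Linear":  {"Linear", "Add", "Concat"},
    "LSTM":    {"Linear", "LSTM", "Add", "Concat", "Flatten"},
    "Add":     {"Conv2D", "Linear", "LSTM", "Add", "Concat", "Flatten"},
    "Concat":  {"Conv2D", "Linear", "LSTM", "Add", "Concat", "Flatten"},
    "Flatten": {"Linear", "Add", "Concat"},
}

def can_connect(src: str, dst: str) -> bool:
    """Type-level check only; shapes are validated separately."""
    return dst in VALID_TARGETS.get(src, set())

print(can_connect("Conv2D", "Linear"))   # False: needs Flatten in between
print(can_connect("Flatten", "Linear"))  # True
```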
+## 🚨 Common Connection Errors
+
+### Shape Mismatch
+```
+❌ Conv2D([N,64,224,224]) → Linear([N,1000])
+   Expected: 2D tensor [N, features], Got: 4D tensor [N,64,224,224]
+```
+**Solution**: Add a Flatten layer before Linear
+
+### Channel Mismatch
+```
+❌ Conv2D(out_channels=128) → Conv2D(in_channels=64)
+ Expected: 128 channels, Got: 64 channels
+```
+**Solution**: Match input/output channels
+
+### Dimension Mismatch
+```
+❌ LSTM([N,L,F]) → Conv2D([N,C,H,W])
+ Expected: 4D tensor, Got: 3D tensor
+```
+**Solution**: Use appropriate layer types
+
+### Sequence Length Mismatch
+```
+❌ LSTM(seq_len=128) → LSTM(seq_len=256)
+ Expected: 128, Got: 256
+```
+**Solution**: Match sequence lengths
+
+## 🎯 Special Cases
+
+### Multi-input Networks
+```mermaid
+graph LR
+ A[Image Input] --> C[Concat]
+ B[Text Input] --> C
+ C --> D[Fusion Layer]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e8f5e8,stroke:#4caf50
+ style D fill:#fff3e0,stroke:#ff9800
+```
+
+### Skip Connections
+```mermaid
+graph LR
+ A[Input] --> B[ConvBlock] --> C[Add] --> E[Output]
+ A -- identity --> C
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style E fill:#e3f2fd,stroke:#2196f3
+```
+
+### Residual Networks
+- **Identity mapping**: Input shape must equal output shape
+- **Projection shortcut**: Use 1x1 conv to match dimensions
+
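A projection shortcut can be sketched in PyTorch as follows (channel counts and strides are illustrative): the 1×1 convolution matches both the channel count and, via its stride, the spatial size, so the residual Add sees identical shapes.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)

main = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),  # downsamples to 28x28
    nn.ReLU(),
    nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1),
)
shortcut = nn.Conv2d(64, 128, kernel_size=1, stride=2)  # 64->128 channels, 56->28

out = main(x) + shortcut(x)  # both branches are [1, 128, 28, 28]
print(out.shape)
```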
+## 🔧 Framework-Specific Rules
+
+### PyTorch Specific
+- **BatchNorm2D**: Expects [N, C, H, W]
+- **Dropout**: Training/inference mode affects behavior
+- **LayerNorm**: Normalizes across specified dimensions
+
+### TensorFlow Specific
+- **BatchNormalization**: Different default behavior
+- **Dropout**: Rate parameter (0.0-1.0)
+- **Conv2D**: Data format (NHWC vs NCHW)
+
+## 📚 Advanced Connection Patterns
+
+### Dense Connections (DenseNet)
+```mermaid
+graph LR
+ A[Input] --> B[Conv1]
+ A --> C[Conv2]
+ B --> C
+ B --> D[Conv3]
+ C --> D
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#e8f5e8,stroke:#4caf50
+ style D fill:#e8f5e8,stroke:#4caf50
+```
+
+### Multi-Head Attention
+```mermaid
+graph LR
+ A[Query] --> D[Attention]
+ B[Key] --> D
+ C[Value] --> D
+ D --> E[Output]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e3f2fd,stroke:#2196f3
+ style D fill:#e8f5e8,stroke:#4caf50
+ style E fill:#fff3e0,stroke:#ff9800
+```
+
+## ✅ Validation Checklist
+
+Before finalizing your architecture:
+
+- [ ] All connections are green (valid)
+- [ ] Input shapes are correctly specified
+- [ ] No circular dependencies
+- [ ] All required parameters are configured
+- [ ] Merge operations have compatible inputs
+- [ ] Output layer matches task requirements
+- [ ] No orphaned blocks (unless intentional)
+
+## 🚀 Next Steps
+
+Now that you understand connection rules:
+1. Practice with [Simple CNN Example](../../examples/simple-cnn.md)
+2. Learn about [Shape Inference](shape-inference.md)
+3. Study [Advanced Architectures](../../examples/)
+
+---
+
+**Need help?** Check [Validation Errors Guide](../../troubleshooting/validation-errors.md)
diff --git a/docs/architecture/creating-diagrams.md b/docs/architecture/creating-diagrams.md
new file mode 100644
index 0000000..9d71be2
--- /dev/null
+++ b/docs/architecture/creating-diagrams.md
@@ -0,0 +1,233 @@
+# Creating Architecture Diagrams
+
+Learn how to build neural network architectures visually using VisionForge's drag-and-drop interface.
+
+## 🎯 Overview
+
+VisionForge provides a visual canvas where you can design neural networks by dragging and connecting layer blocks. The interface automatically handles tensor shape inference and validates connections in real-time.
+
+## 🖥️ Interface Components
+
+```mermaid
+graph TB
+ A[Block Palette] --> B[Canvas Area]
+ C[Properties Panel] --> B
+ D[Validation Panel] --> B
+ E[Export Options] --> B
+
+ style A fill:#f3e5f5,stroke:#9c27b0
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#ffebee,stroke:#f44336
+ style E fill:#e3f2fd,stroke:#2196f3
+```
+
+### 1. Block Palette (Left Sidebar)
+Contains all available layers organized by category:
+- **Input** - Data input layers
+- **Basic** - Core neural network layers
+- **Advanced** - Specialized operations
+- **Merge** - Combining multiple paths
+- **Output** - Loss and output layers
+
+### 2. Canvas Area (Center)
+The main workspace where you:
+- Drag and drop blocks
+- Create connections
+- Arrange your architecture
+- Visualize data flow
+
+### 3. Properties Panel (Right)
+Configure selected layer parameters:
+- Layer-specific settings
+- Shape information
+- Validation status
+
+### 4. Validation Panel (Bottom)
+Real-time feedback on:
+- Connection validity
+- Shape compatibility
+- Configuration errors
+
+## 🎨 Building Your First Architecture
+
+### Step 1: Add Input Layer
+1. Open the **Input** category in the palette
+2. Drag **Input** block to canvas
+3. Configure input shape:
+ ```json
+ {
+ "inputShape": {
+ "dims": [1, 3, 224, 224] // [batch, channels, height, width]
+ }
+ }
+ ```
+
+### Step 2: Add Core Layers
+1. From **Basic** category, drag **Conv2D**
+2. Position it to the right of input
+3. Configure parameters:
+ ```json
+ {
+ "out_channels": 64,
+ "kernel_size": 3,
+ "stride": 1,
+ "padding": 1
+ }
+ ```
+
+### Step 3: Create Connections
+1. Hover over the output port of Input layer
+2. Click and drag to the input port of Conv2D
+3. Release to create connection
+4. **Green line** = Valid connection
+5. **Red line** = Invalid connection
+
+### Step 4: Add Activation
+1. Drag **ReLU** from **Basic** category
+2. Connect Conv2D output to ReLU input
+3. No configuration needed for basic activations
+
+### Step 5: Complete the Network
+Continue adding layers:
+- **MaxPool2D** for downsampling
+- **Flatten** for dimensionality reduction
+- **Linear** for classification
+- **Softmax** for output probabilities
+
+## 🔗 Connection Types
+
+### Standard Connections
+Most layers have single input/output ports:
+```mermaid
+graph LR
+ A[Input] --> B[Conv2D] --> C[ReLU] --> D[Output]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#f3e5f5,stroke:#9c27b0
+```
+
+### Merge Operations
+Some layers accept multiple inputs:
+```mermaid
+graph LR
+ A[Conv2D] --> C[Add]
+ B[Conv2D] --> C
+ C --> D[ReLU]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e8f5e8,stroke:#4caf50
+ style D fill:#fff3e0,stroke:#ff9800
+```
+
+### Skip Connections
+Create ResNet-style architectures:
+```mermaid
+graph LR
+ A[Input] --> B[ConvBlock] --> C[Add] --> D[Output]
+ A --> C
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#f3e5f5,stroke:#9c27b0
+```
+
+## ⚙️ Advanced Features
+
+### Group Blocks
+Create reusable components:
+1. Select multiple blocks
+2. Right-click → "Create Group"
+3. Define input/output ports
+4. Save as custom block
+
+### Copy/Paste
+- **Ctrl+C** - Copy selected blocks
+- **Ctrl+V** - Paste blocks
+- Connections are preserved within copied blocks
+
+### Undo/Redo
+- **Ctrl+Z** - Undo last action
+- **Ctrl+Y** - Redo action
+- Full history maintained
+
+### Canvas Navigation
+- **Mouse wheel** - Zoom in/out
+- **Click + drag** - Pan canvas
+- **Double-click** - Fit to screen
+
+## 🎯 Best Practices
+
+### Organization
+1. **Left to right flow** - Input on left, output on right
+2. **Group related layers** - Use alignment guides
+3. **Consistent spacing** - Leave room for connections
+4. **Label important layers** - Use descriptive names
+
+### Validation
+1. **Watch connection colors** - Green = valid, red = invalid
+2. **Check shape compatibility** - Hover over ports to see shapes
+3. **Fix errors early** - Address validation warnings immediately
+4. **Test incrementally** - Validate after each major addition
+
+### Performance
+1. **Minimize connections** - Avoid unnecessary complexity
+2. **Use group blocks** - Reduce canvas clutter
+3. **Optimize layout** - Reduce connection crossing
+
+## 🔍 Real-time Feedback
+
+### Shape Inference
+VisionForge automatically computes tensor shapes:
+```
+Input: [1, 3, 224, 224]
+ ↓ Conv2D(64, 3x3, stride=1, padding=1)
+Conv2D: [1, 64, 224, 224]
+ ↓ MaxPool2D(2x2, stride=2)
+MaxPool: [1, 64, 112, 112]
+ ↓ Flatten
+Flatten: [1, 802816]
+```
+
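The same progression can be reproduced in PyTorch to double-check the inferred shapes:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)
x = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)(x)  # [1, 64, 224, 224]
x = nn.MaxPool2d(kernel_size=2, stride=2)(x)                 # [1, 64, 112, 112]
x = nn.Flatten()(x)                                          # [1, 802816]
print(x.shape)
```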
+### Validation Messages
+- ✅ **Valid connections** - Green highlight
+- ⚠️ **Warnings** - Yellow indicators (e.g., unused blocks)
+- ❌ **Errors** - Red indicators (e.g., incompatible shapes)
+
+### Tooltips
+Hover over any element to see:
+- Layer descriptions
+- Shape information
+- Connection details
+- Configuration hints
+
+## 🎨 Visual Customization
+
+### Block Colors
+Layers are color-coded by category:
+- 🔵 **Input** - Blue
+- 🟢 **Basic** - Green
+- 🟡 **Advanced** - Yellow
+- 🟣 **Merge** - Purple
+- 🔴 **Output** - Red
+
+### Connection Styles
+- **Solid line** - Standard connection
+- **Dashed line** - Conditional connection
+- **Thick line** - High-dimensional data flow
+
+## 🚀 Next Steps
+
+Now that you understand how to create diagrams:
+1. Learn [Layer Connection Rules](connection-rules.md)
+2. Study [Shape Inference](shape-inference.md)
+3. Try [Example Architectures](../../examples/)
+4. Export your first [PyTorch model](../../codegen/pytorch.md)
+
+---
+
+**Need help?** Check our [Troubleshooting Guide](../../troubleshooting/common-issues.md)
diff --git a/docs/examples/simple-cnn.md b/docs/examples/simple-cnn.md
new file mode 100644
index 0000000..46dd9b5
--- /dev/null
+++ b/docs/examples/simple-cnn.md
@@ -0,0 +1,316 @@
+# Simple CNN Example
+
+Build a complete image classification network from scratch using VisionForge's visual interface.
+
+## 🎯 Overview
+
+This tutorial walks you through creating a simple Convolutional Neural Network (CNN) for image classification. You'll learn:
+- How to arrange layers properly
+- Connection best practices
+- Parameter configuration
+- Exporting to PyTorch code
+
+## 🏗️ Architecture Overview
+
+We'll build this CNN architecture:
+
+```mermaid
+graph TB
+ A[Input<br/>224x224x3] --> B[Conv2D<br/>64 filters, 3x3]
+ B --> C[ReLU]
+ C --> D[MaxPool2D<br/>2x2]
+ D --> E[Conv2D<br/>128 filters, 3x3]
+ E --> F[ReLU]
+ F --> G[MaxPool2D<br/>2x2]
+ G --> H[Flatten]
+ H --> I[Linear<br/>512 units]
+ I --> J[ReLU]
+ J --> K[Dropout<br/>0.5]
+ K --> L[Linear<br/>10 classes]
+ L --> M[Softmax]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#f3e5f5,stroke:#9c27b0
+ style E fill:#e8f5e8,stroke:#4caf50
+ style F fill:#fff3e0,stroke:#ff9800
+ style G fill:#f3e5f5,stroke:#9c27b0
+ style H fill:#ffebee,stroke:#f44336
+ style I fill:#e8f5e8,stroke:#4caf50
+ style J fill:#fff3e0,stroke:#ff9800
+ style K fill:#9e9e9e,stroke:#424242
+ style L fill:#e8f5e8,stroke:#4caf50
+ style M fill:#fff3e0,stroke:#ff9800
+```
+
+**Target Task**: 10-class image classification (e.g., CIFAR-10)
+**Input Size**: 224×224×3 RGB images
+**Output**: 10 class probabilities
+
+## 📝 Step-by-Step Guide
+
+### Step 1: Set Up Input Layer
+
+1. **Add Input Block**
+ - Drag **Input** from the **Input** category
+ - Place it on the left side of the canvas
+
+2. **Configure Input Shape**
+ ```json
+ {
+ "inputShape": {
+ "dims": [1, 3, 224, 224]
+ }
+ }
+ ```
+ - **Batch size**: 1 (can be changed later)
+ - **Channels**: 3 (RGB)
+ - **Height**: 224 pixels
+ - **Width**: 224 pixels
+
+### Step 2: First Convolutional Block
+
+3. **Add Conv2D Layer**
+ - Drag **Conv2D** from **Basic** category
+ - Position it to the right of Input
+
+4. **Configure Conv2D**
+ ```json
+ {
+ "out_channels": 64,
+ "kernel_size": 3,
+ "stride": 1,
+ "padding": 1
+ }
+ ```
+ - **Output channels**: 64 feature maps
+ - **Kernel size**: 3×3 convolution
+ - **Stride**: 1 (no downsampling)
+ - **Padding**: 1 (preserves spatial size)
+
+5. **Add ReLU Activation**
+ - Drag **ReLU** from **Basic** category
+ - Connect Conv2D → ReLU
+
+6. **Add MaxPool2D**
+ - Drag **MaxPool2D** from **Pooling** category
+ - Configure:
+ ```json
+ {
+ "kernel_size": 2,
+ "stride": 2
+ }
+ ```
+ - Output shape: [1, 64, 112, 112]
+
+### Step 3: Second Convolutional Block
+
+7. **Add Second Conv2D**
+ - Drag another **Conv2D**
+ - Configure:
+ ```json
+ {
+ "out_channels": 128,
+ "kernel_size": 3,
+ "stride": 1,
+ "padding": 1
+ }
+ ```
+ - Input: [1, 64, 112, 112]
+ - Output: [1, 128, 112, 112]
+
+8. **Add ReLU and MaxPool2D**
+ - Add **ReLU** after Conv2D
+ - Add **MaxPool2D** (2×2, stride=2)
+ - Final shape: [1, 128, 56, 56]
+
+### Step 4: Classification Head
+
+9. **Add Flatten Layer**
+ - Drag **Flatten** from **Basic** category
+ - Input: [1, 128, 56, 56]
+ - Output: [1, 401408] (128 × 56 × 56)
+
+10. **Add First Linear Layer**
+ - Drag **Linear** from **Basic** category
+ - Configure:
+ ```json
+ {
+ "out_features": 512
+ }
+ ```
+ - Input: [1, 401408]
+ - Output: [1, 512]
+
+11. **Add ReLU and Dropout**
+ - Add **ReLU** activation
+ - Add **Dropout** with rate 0.5:
+ ```json
+ {
+ "p": 0.5
+ }
+ ```
+
+### Step 5: Output Layer
+
+12. **Add Final Linear Layer**
+ - Drag **Linear** layer
+ - Configure:
+ ```json
+ {
+ "out_features": 10
+ }
+ ```
+ - Input: [1, 512]
+ - Output: [1, 10] (logits)
+
+13. **Add Softmax**
+ - Drag **Softmax** from **Activation** category
+ - Configure:
+ ```json
+ {
+ "dim": 1
+ }
+ ```
+ - Output: [1, 10] (probabilities)
+
+## 🔗 Complete Connection Flow
+
+Verify all connections are in order:
+
+```
+Input → Conv2D → ReLU → MaxPool2D → Conv2D → ReLU → MaxPool2D
+ → Flatten → Linear → ReLU → Dropout → Linear → Softmax
+```
+
+All connections should show **green** lines indicating valid connections.
+
+## 📊 Shape Progression
+
+Track how tensor shapes change through the network:
+
+| Layer | Input Shape | Output Shape | Transformation |
+|-------|-------------|--------------|----------------|
+| Input | - | [1, 3, 224, 224] | User defined |
+| Conv2D | [1, 3, 224, 224] | [1, 64, 224, 224] | 3→64 channels |
+| ReLU | [1, 64, 224, 224] | [1, 64, 224, 224] | Element-wise |
+| MaxPool2D | [1, 64, 224, 224] | [1, 64, 112, 112] | 2×2 pooling |
+| Conv2D | [1, 64, 112, 112] | [1, 128, 112, 112] | 64→128 channels |
+| ReLU | [1, 128, 112, 112] | [1, 128, 112, 112] | Element-wise |
+| MaxPool2D | [1, 128, 112, 112] | [1, 128, 56, 56] | 2×2 pooling |
+| Flatten | [1, 128, 56, 56] | [1, 401408] | Collapse spatial |
+| Linear | [1, 401408] | [1, 512] | Dense projection |
+| ReLU | [1, 512] | [1, 512] | Element-wise |
+| Dropout | [1, 512] | [1, 512] | Random zeroing |
+| Linear | [1, 512] | [1, 10] | Classification |
+| Softmax | [1, 10] | [1, 10] | Probabilities |
+
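You can reproduce this table programmatically with forward hooks, a standard PyTorch technique. The helper below is a generic sketch (not part of VisionForge) that works on any `nn.Module`, demonstrated on a small stand-in network:

```python
import torch
import torch.nn as nn

def trace_shapes(model: nn.Module, x: torch.Tensor):
    """Record each submodule's output shape during one forward pass."""
    records, hooks = [], []

    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor):
                records.append((name, tuple(output.shape)))
        return hook

    for name, module in model.named_modules():
        if name:  # skip the root module itself
            hooks.append(module.register_forward_hook(make_hook(name)))
    model(x)
    for h in hooks:
        h.remove()
    return records

# Example on a small stand-in network
net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Flatten())
for name, shape in trace_shapes(net, torch.randn(1, 3, 8, 8)):
    print(name, shape)
```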
+## ✅ Validation Checklist
+
+Before exporting, verify:
+
+- [ ] All connections are green
+- [ ] Input shape is correctly specified
+- [ ] No red validation errors
+- [ ] Output matches task requirements (10 classes)
+- [ ] All required parameters are configured
+
+## 🚀 Export to PyTorch
+
+1. **Open Export Panel**
+ - Click the export button in the toolbar
+ - Select **PyTorch** as framework
+
+2. **Configure Export Options**
+ ```json
+ {
+ "class_name": "SimpleCNN",
+ "include_imports": true,
+ "include_forward": true
+ }
+ ```
+
+3. **Generated Code**
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ class SimpleCNN(nn.Module):
+     def __init__(self):
+         super(SimpleCNN, self).__init__()
+
+         # Convolutional layers
+         self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)
+         self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=1, padding=1)
+
+         # Pooling layer
+         self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
+
+         # Fully connected layers
+         self.fc1 = nn.Linear(128 * 56 * 56, 512)
+         self.fc2 = nn.Linear(512, 10)
+
+         # Dropout
+         self.dropout = nn.Dropout(p=0.5)
+
+     def forward(self, x):
+         # First conv block
+         x = self.pool(F.relu(self.conv1(x)))
+
+         # Second conv block
+         x = self.pool(F.relu(self.conv2(x)))
+
+         # Flatten and classify
+         x = x.view(x.size(0), -1)  # Flatten
+         x = F.relu(self.fc1(x))
+         x = self.dropout(x)
+         x = self.fc2(x)
+
+         return F.softmax(x, dim=1)
+ ```
+
+## 🎯 Usage Example
+
+```python
+# Create model instance
+model = SimpleCNN()
+
+# Test with sample input
+sample_input = torch.randn(1, 3, 224, 224)
+output = model(sample_input)
+
+print(f"Output shape: {output.shape}") # torch.Size([1, 10])
+print(f"Probabilities: {output}")
+```
+
+## 🔧 Customization Ideas
+
+### Different Architectures
+- **More layers**: Add additional conv blocks
+- **Different filters**: Try 32, 256, 512 channels
+- **Different kernel sizes**: 5×5, 7×7 convolutions
+- **BatchNorm**: Add BatchNorm2d after conv layers
+
+### Advanced Features
+- **Global Average Pooling**: Replace Flatten+Linear with GAP
+- **Residual connections**: Add skip connections
+- **Data augmentation**: Not in architecture, but important for training
+
+## 📚 Related Examples
+
+- [ResNet Architecture](resnet.md) - Skip connections
+- [LSTM Networks](lstm.md) - Sequence modeling
+- [Custom Group Blocks](group-blocks.md) - Reusable components
+
+## 🚀 Next Steps
+
+1. **Train the model** using your favorite framework
+2. **Experiment with different architectures**
+3. **Try transfer learning** with pretrained models
+4. **Deploy to production** using the exported code
+
+---
+
+**Ready for more?** Try the [ResNet example](resnet.md) for advanced architectures!
diff --git a/docs/getting-started/installation.md b/docs/getting-started/installation.md
new file mode 100644
index 0000000..4864745
--- /dev/null
+++ b/docs/getting-started/installation.md
@@ -0,0 +1,371 @@
+# Installation Guide
+
+Set up VisionForge on your system with this comprehensive installation guide.
+
+## 🎯 Overview
+
+VisionForge consists of two main components:
+- **Backend**: Django-based API server
+- **Frontend**: React-based web interface
+
+## 📋 Prerequisites
+
+Before installing VisionForge, ensure you have:
+
+### Required Software
+- **Python 3.8+** - Backend runtime
+- **Node.js 16+** - Frontend development
+- **npm** or **yarn** - Package manager
+
+### Optional but Recommended
+- **Git** - Version control
+- **VS Code** - Code editor with extensions
+- **Google Gemini API Key** - For AI assistant features
+
+## 🚀 Quick Installation
+
+### Option 1: Using Git (Recommended)
+
+1. **Clone the repository**
+ ```bash
+ git clone https://github.com/devgunnu/visionforge.git
+ cd visionforge
+ ```
+
+2. **Install backend dependencies**
+ ```bash
+ cd project
+ pip install -r requirements.txt
+ ```
+
+3. **Install frontend dependencies**
+ ```bash
+ cd frontend
+ npm install
+ ```
+
+### Option 2: Download ZIP
+
+1. Download and extract the ZIP file
+2. Follow steps 2-3 from Option 1
+
+## 🔧 Detailed Setup
+
+### Backend Setup
+
+1. **Navigate to project directory**
+ ```bash
+ cd visionforge/project
+ ```
+
+2. **Create virtual environment** (recommended)
+ ```bash
+ python -m venv venv
+
+ # On Windows
+ venv\Scripts\activate
+
+ # On macOS/Linux
+ source venv/bin/activate
+ ```
+
+3. **Install Python dependencies**
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+4. **Set up environment variables**
+ ```bash
+ cp .env.example .env
+ ```
+
+ Edit `.env` file and add your API keys:
+ ```env
+ GEMINI_API_KEY=your_gemini_api_key_here
+ ```
+
+5. **Initialize database**
+ ```bash
+ python manage.py migrate
+ ```
+
+6. **Create superuser** (optional)
+ ```bash
+ python manage.py createsuperuser
+ ```
+
+### Frontend Setup
+
+1. **Navigate to frontend directory**
+ ```bash
+ cd visionforge/project/frontend
+ ```
+
+2. **Install Node.js dependencies**
+ ```bash
+ npm install
+ # or
+ yarn install
+ ```
+
+3. **Set up environment variables**
+ ```bash
+ cp .env.example .env.local
+ ```
+
+## 🏃 Running the Application
+
+### Start Backend Server
+
+1. **Navigate to project directory**
+ ```bash
+ cd visionforge/project
+ ```
+
+2. **Start Django server**
+ ```bash
+ python manage.py runserver
+ ```
+
+3. **Verify backend is running**
+ - Open `http://localhost:8000` in browser
+ - You should see Django welcome page or API documentation
+
+### Start Frontend Development Server
+
+1. **Navigate to frontend directory**
+ ```bash
+ cd visionforge/project/frontend
+ ```
+
+2. **Start development server**
+ ```bash
+ npm run dev
+ # or
+ yarn dev
+ ```
+
+3. **Access VisionForge**
+ - Open `http://localhost:5173` in browser
+ - You should see the VisionForge interface
+
+## 🔍 Verification
+
+### Check Backend Health
+
+```bash
+# Test API endpoints
+curl http://localhost:8000/api/projects/
+```
+
+### Check Frontend Health
+
+- Open browser developer tools
+- Check for any console errors
+- Verify network requests to backend
+
+## 🛠️ Common Issues & Solutions
+
+### Port Already in Use
+
+**Problem**: `Port 8000 is already in use`
+
+**Solution**:
+```bash
+# Kill process on port 8000
+# On Windows
+netstat -ano | findstr :8000
+taskkill /PID <PID> /F
+
+# On macOS/Linux
+lsof -ti:8000 | xargs kill -9
+
+# Or use different port
+python manage.py runserver 8080
+```
+
+### Node.js Version Issues
+
+**Problem**: `Node.js version not supported`
+
+**Solution**:
+```bash
+# Check current version
+node --version
+
+# Upgrade Node.js using nvm
+nvm install 18
+nvm use 18
+```
+
+### Python Dependencies
+
+**Problem**: `ModuleNotFoundError`
+
+**Solution**:
+```bash
+# Reinstall dependencies
+pip install -r requirements.txt --force-reinstall
+
+# Or upgrade pip first
+pip install --upgrade pip
+```
+
+### CORS Issues
+
+**Problem**: `CORS policy: No 'Access-Control-Allow-Origin'`
+
+**Solution**:
+- Ensure both servers are running
+- Check backend CORS settings in `settings.py`
+- Verify frontend API URL configuration
+
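If the backend uses `django-cors-headers` (an assumption; check the installed packages), the relevant `settings.py` entries look roughly like this sketch:

```python
# settings.py: sketch assuming django-cors-headers is installed
INSTALLED_APPS = [
    # ...
    "corsheaders",
]

MIDDLEWARE = [
    "corsheaders.middleware.CorsMiddleware",  # place as high as possible
    # ... remaining middleware
]

# Allow the Vite dev server origins used by the frontend
CORS_ALLOWED_ORIGINS = [
    "http://localhost:5173",
    "http://127.0.0.1:5173",
]

print(CORS_ALLOWED_ORIGINS)
```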
+### Database Issues
+
+**Problem**: `django.db.utils.OperationalError`
+
+**Solution**:
+```bash
+# Delete and recreate database
+rm db.sqlite3
+python manage.py migrate
+```
+
+## 🐳 Docker Installation (Alternative)
+
+### Using Docker Compose
+
+1. **Create Dockerfile**
+ ```dockerfile
+ # Backend Dockerfile
+ FROM python:3.9
+ WORKDIR /app
+ COPY requirements.txt .
+ RUN pip install -r requirements.txt
+ COPY . .
+ CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
+ ```
+
+2. **Create docker-compose.yml**
+ ```yaml
+ version: '3.8'
+ services:
+   backend:
+     build: ./project
+     ports:
+       - "8000:8000"
+     environment:
+       - GEMINI_API_KEY=${GEMINI_API_KEY}
+
+   frontend:
+     build: ./project/frontend
+     ports:
+       - "5173:5173"
+     depends_on:
+       - backend
+ ```
+
+3. **Run with Docker**
+ ```bash
+ docker-compose up
+ ```
+
+## 📱 Development Environment Setup
+
+### VS Code Extensions
+
+Install these extensions for optimal development:
+
+```json
+{
+ "recommendations": [
+ "ms-python.python",
+ "ms-python.flake8",
+ "ms-python.black-formatter",
+ "bradlc.vscode-tailwindcss",
+ "esbenp.prettier-vscode",
+ "ms-vscode.vscode-typescript-next"
+ ]
+}
+```
+
+### Git Configuration
+
+```bash
+git config --global user.name "Your Name"
+git config --global user.email "your.email@example.com"
+```
+
+## 🚀 Production Deployment
+
+### Backend Production
+
+1. **Set environment variables**
+ ```bash
+ export DEBUG=False
+ export SECRET_KEY=your_production_secret_key
+ export ALLOWED_HOSTS=yourdomain.com
+ ```
+
+2. **Collect static files**
+ ```bash
+ python manage.py collectstatic
+ ```
+
+3. **Use production server**
+ ```bash
+ pip install gunicorn
+ gunicorn --bind 0.0.0.0:8000 backend.wsgi:application
+ ```
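+Django only sees these variables if `settings.py` reads them. A minimal sketch of the usual pattern (the names follow the variables above; the defaults are illustrative, and a real deployment should fail loudly on a missing `SECRET_KEY`):
+
+```python
+import os
+
+# Illustrative defaults for local development only
+DEBUG = os.environ.get("DEBUG", "False") == "True"
+SECRET_KEY = os.environ.get("SECRET_KEY", "dev-only-insecure-key")
+ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost").split(",")
+
+print(DEBUG, ALLOWED_HOSTS)
+```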
+
+### Frontend Production
+
+1. **Build for production**
+ ```bash
+ npm run build
+ ```
+
+2. **Serve static files**
+ ```bash
+ npm install -g serve
+ serve -s dist
+ ```
+
+## ✅ Installation Checklist
+
+Before proceeding:
+
+- [ ] Python 3.8+ installed
+- [ ] Node.js 16+ installed
+- [ ] Backend dependencies installed
+- [ ] Frontend dependencies installed
+- [ ] Database migrated
+- [ ] Environment variables configured
+- [ ] Backend server running on port 8000
+- [ ] Frontend server running on port 5173
+- [ ] No CORS errors in browser console
+- [ ] API endpoints accessible
+
+## 🆘 Getting Help
+
+If you encounter issues:
+
+1. **Check the logs**:
+ - Backend: Django console output
+ - Frontend: Browser developer tools
+
+2. **Verify requirements**:
+ - Python and Node.js versions
+ - All dependencies installed
+
+3. **Consult documentation**:
+ - [Troubleshooting Guide](../../troubleshooting/common-issues.md)
+ - [GitHub Issues](https://github.com/devgunnu/visionforge/issues)
+
+4. **Community support**:
+ - Join our Discord community
+ - Ask questions on GitHub Discussions
+
+---
+
+**Ready to start?** → [Quick Start Guide](quickstart.md)
diff --git a/docs/getting-started/quickstart.md b/docs/getting-started/quickstart.md
new file mode 100644
index 0000000..0c7589c
--- /dev/null
+++ b/docs/getting-started/quickstart.md
@@ -0,0 +1,263 @@
+# Quick Start Guide
+
+Build your first neural network in minutes with VisionForge's visual interface.
+
+## 🎯 What You'll Build
+
+In this guide, you'll create a simple image classification CNN:
+
+```mermaid
+graph LR
+ A[Input] --> B[Conv2D] --> C[ReLU] --> D[Linear] --> E[Output]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#e8f5e8,stroke:#4caf50
+ style E fill:#f3e5f5,stroke:#9c27b0
+```
+
+## 🚀 Step 1: Launch VisionForge
+
+1. **Start the backend server**
+ ```bash
+ cd visionforge/project
+ python manage.py runserver
+ ```
+
+2. **Start the frontend server**
+ ```bash
+ cd visionforge/project/frontend
+ npm run dev
+ ```
+
+3. **Open your browser**
+ Navigate to `http://localhost:5173`
+
+## 🖥️ Step 2: Explore the Interface
+
+You'll see four main areas:
+
+```mermaid
+graph TB
+ A[Block Palette<br/>Left Sidebar] --> B[Canvas<br/>Center]
+ C[Properties Panel<br/>Right Sidebar] --> B
+ D[Validation Panel<br/>Bottom] --> B
+
+ style A fill:#f3e5f5,stroke:#9c27b0
+ style B fill:#e8f5e8,stroke:#4caf50
+ style C fill:#fff3e0,stroke:#ff9800
+ style D fill:#ffebee,stroke:#f44336
+```
+
+## 🎨 Step 3: Create Your First Network
+
+### Add Input Layer
+
+1. **Open Block Palette** (left sidebar)
+2. **Click "Input"** category
+3. **Drag "Input"** to the canvas
+4. **Configure it**:
+ - Click the Input block
+ - In Properties panel, set:
+ ```json
+ {
+ "inputShape": {
+ "dims": [1, 3, 224, 224]
+ }
+ }
+ ```
+
+### Add Convolutional Layer
+
+1. **Click "Basic"** category in palette
+2. **Drag "Conv2D"** to the right of Input
+3. **Connect them**:
+ - Hover over Input's output port (right edge)
+ - Click and drag to Conv2D's input port (left edge)
+ - Release to create connection
+4. **Configure Conv2D**:
+ ```json
+ {
+ "out_channels": 32,
+ "kernel_size": 3,
+ "stride": 1,
+ "padding": 1
+ }
+ ```
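+With kernel size 3, stride 1, and padding 1, the spatial size is preserved. The standard Conv2D output-size formula, as a quick sketch you can reuse for any parameter combination:
+
+```python
+def conv2d_out(size, kernel, stride=1, padding=0):
+    # floor((size + 2*padding - kernel) / stride) + 1
+    return (size + 2 * padding - kernel) // stride + 1
+
+print(conv2d_out(224, kernel=3, stride=1, padding=1))  # 224: size preserved
+print(conv2d_out(224, kernel=3, stride=2, padding=1))  # 112: halved
+```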
+
+### Add Activation
+
+1. **Drag "ReLU"** from Basic category
+2. **Connect Conv2D → ReLU**
+3. **No configuration needed** for ReLU
+
+### Add Output Layer
+
+1. **Drag "Linear"** from Basic category
+2. **Connect ReLU → Linear**
+3. **Configure Linear**:
+ ```json
+ {
+ "out_features": 10
+ }
+ ```
+
+## ✅ Step 4: Validate Your Architecture
+
+### Check Connections
+
+- **Green lines** = Valid connections ✅
+- **Red lines** = Invalid connections ❌
+- **Yellow indicators** = Warnings ⚠️
+
+### Check Validation Panel
+
+Look at the bottom panel for:
+- ✅ **"Architecture is valid"** - You're ready to export!
+- ❌ **Error messages** - Fix any issues before proceeding
+
+## 🚀 Step 5: Export Your Model
+
+1. **Click Export Button** (top toolbar)
+2. **Select PyTorch** as framework
+3. **Configure Export**:
+ ```json
+ {
+ "class_name": "MyFirstModel",
+ "include_imports": true
+ }
+ ```
+4. **Click "Generate Code"**
+
+### Generated Code
+
+You'll see PyTorch code like this:
+
+```python
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+class MyFirstModel(nn.Module):
+ def __init__(self):
+ super(MyFirstModel, self).__init__()
+ self.conv1 = nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1)
+ self.fc1 = nn.Linear(32 * 224 * 224, 10)
+
+ def forward(self, x):
+ x = F.relu(self.conv1(x))
+ x = x.view(x.size(0), -1) # Flatten
+ x = self.fc1(x)
+ return x
+```
+
+## 🎯 Step 6: Test Your Model
+
+1. **Copy the generated code**
+2. **Save it as `model.py`**
+3. **Test it**:
+
+```python
+import torch
+from model import MyFirstModel
+
+# Create model
+model = MyFirstModel()
+
+# Test with sample input
+sample = torch.randn(1, 3, 224, 224)
+output = model(sample)
+
+print(f"Input shape: {sample.shape}")
+print(f"Output shape: {output.shape}")
+```
+
+## 🎉 Congratulations!
+
+You've just:
+- ✅ Created your first neural network visually
+- ✅ Learned to connect layers properly
+- ✅ Exported working PyTorch code
+- ✅ Validated your architecture
+
+## 🔧 What's Next?
+
+### Try More Complex Architectures
+
+1. **Add more layers**:
+ - Add pooling layers for downsampling
+ - Add multiple conv blocks
+ - Add dropout for regularization
+
+2. **Experiment with parameters**:
+ - Change filter counts (16, 64, 128)
+ - Try different kernel sizes (5×5, 7×7)
+ - Adjust stride and padding
+
+3. **Learn advanced features**:
+ - Skip connections (ResNet style)
+ - Merge operations (Add, Concat)
+ - Group blocks (reusable components)
+
+### Explore Examples
+
+- [Simple CNN Tutorial](../examples/simple-cnn.md) - Complete walkthrough
+- [ResNet Architecture](../examples/resnet.md) - Skip connections
+- [LSTM Networks](../examples/lstm.md) - Sequence modeling
+
+### Master the Interface
+
+- [Interface Overview](interface.md) - Detailed feature guide
+- [Connection Rules](../architecture/connection-rules.md) - Layer compatibility
+- [Shape Inference](../architecture/shape-inference.md) - Understanding dimensions
+
+## 🎯 Quick Tips
+
+### Keyboard Shortcuts
+
+| Shortcut | Action |
+|----------|--------|
+| `Ctrl+Z` | Undo |
+| `Ctrl+Y` | Redo |
+| `Delete` | Remove selected block |
+| `Ctrl+C` | Copy blocks |
+| `Ctrl+V` | Paste blocks |
+
+### Best Practices
+
+1. **Start simple** - Add layers incrementally
+2. **Validate often** - Check connections after each addition
+3. **Read tooltips** - Hover over elements for help
+4. **Use the validation panel** - Fix errors early
+
+### Common Patterns
+
+**CNN Pattern**:
+```
+Input → Conv2D → ReLU → MaxPool2D → Conv2D → ReLU → Flatten → Linear
+```
+
+**Classification Pattern**:
+```
+Features → Linear → ReLU → Dropout → Linear → Softmax
+```
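+A quick sanity check for the classification pattern is counting the parameters each Linear stage adds (a pure-Python sketch; the feature sizes are illustrative, not prescribed):
+
+```python
+def linear_params(in_features, out_features, bias=True):
+    # weight matrix plus optional bias vector
+    return in_features * out_features + (out_features if bias else 0)
+
+# Features(512) -> Linear(256) -> ReLU -> Dropout -> Linear(10) -> Softmax
+total = linear_params(512, 256) + linear_params(256, 10)
+print(total)  # 133898
+```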
+
+## 🆘 Need Help?
+
+**Stuck on something?**
+
+1. **Check validation messages** - Bottom panel shows specific errors
+2. **Hover over connections** - See shape information
+3. **Consult the docs** - [Troubleshooting Guide](../troubleshooting/common-issues.md)
+4. **Ask the AI assistant** - Built-in help in the interface
+
+**Common Issues:**
+
+- **Red connections**: Check [Layer Connection Rules](../architecture/connection-rules.md)
+- **Shape errors**: Review [Shape Inference](../architecture/shape-inference.md)
+- **Export issues**: Verify all parameters are configured
+
+---
+
+**Ready to dive deeper?** → [Architecture Design Guide](../architecture/creating-diagrams.md)
diff --git a/docs/index.md b/docs/index.md
new file mode 100644
index 0000000..053d6cd
--- /dev/null
+++ b/docs/index.md
@@ -0,0 +1,106 @@
+# VisionForge User Documentation
+
+
+**Build Neural Networks Visually — Export Production Code**
+
+VisionForge is a powerful visual neural network builder that lets you design complex deep learning architectures through an intuitive drag-and-drop interface. Perfect for researchers, students, and ML engineers who want to rapidly prototype models.
+
+## ✨ Key Features
+
+- 🎨 **Drag-and-drop interface** — Build CNNs, LSTMs, ResNets visually
+- ⚡ **Automatic shape inference** — No manual tensor dimension tracking
+- 🔄 **Multi-framework export** — PyTorch or TensorFlow with one click
+- 🤖 **AI-powered assistant** — Ask questions or modify your model with natural language
+- ✅ **Real-time validation** — Catch architecture errors before export
+- 🎯 **Group blocks** — Create reusable custom components
+
+## 🚀 Quick Start
+
+1. **Install VisionForge** following our [Installation Guide](getting-started/installation.md)
+2. **Launch the application** and open your browser to `http://localhost:5173`
+3. **Create your first model** using our [Quick Start Guide](getting-started/quickstart.md)
+4. **Learn architecture rules** in [Layer Connection Rules](architecture/connection-rules.md)
+
+## 📖 Documentation Structure
+
+### 🎯 For Beginners
+- [Installation Guide](getting-started/installation.md) - Set up VisionForge on your system
+- [Quick Start](getting-started/quickstart.md) - Build your first neural network
+- [Interface Overview](getting-started/interface.md) - Understand the workspace
+
+### 🏗️ Architecture Design
+- [Creating Architecture Diagrams](architecture/creating-diagrams.md) - Learn visual model building
+- [Layer Connection Rules](architecture/connection-rules.md) - Understand which layers connect
+- [Shape Inference](architecture/shape-inference.md) - How tensor dimensions are computed
+- [Validation System](architecture/validation.md) - Real-time error checking
+
+### 📚 Layer Reference
+- [Input Layers](layers/input.md) - Data input configurations
+- [Core Layers](layers/core.md) - Convolutional, Linear, and basic operations
+- [Activation Functions](layers/activation.md) - Non-linear transformations
+- [Pooling Layers](layers/pooling.md) - Dimensionality reduction
+- [Merge Operations](layers/merge.md) - Combining multiple paths
+- [Advanced Layers](layers/advanced.md) - Specialized operations
+
+### 💡 Examples & Tutorials
+- [Simple CNN](examples/simple-cnn.md) - Basic image classification
+- [ResNet Architecture](examples/resnet.md) - Skip connections
+- [LSTM Networks](examples/lstm.md) - Sequence modeling
+- [Custom Group Blocks](examples/group-blocks.md) - Reusable components
+
+### 🔧 Advanced Topics
+- [Group Blocks](advanced/group-blocks.md) - Create custom layer groups
+- [AI Assistant](advanced/ai-assistant.md) - Natural language help
+- [Project Sharing](advanced/sharing.md) - Collaborate with others
+
+## 🎯 How It Works
+
+```mermaid
+graph LR
+ A[Drag & Drop Blocks] --> B[Configure Parameters]
+ B --> C[Validate Architecture]
+ C --> D[Export Code]
+
+ style A fill:#e3f2fd,stroke:#2196f3
+ style B fill:#e3f2fd,stroke:#2196f3
+ style C fill:#e3f2fd,stroke:#2196f3
+ style D fill:#e3f2fd,stroke:#2196f3
+```
+
+1. **Add layers** from the sidebar palette
+2. **Connect blocks** to define data flow
+3. **Configure parameters** using the properties panel
+4. **Validate** your architecture with real-time checks
+5. **Export** production-ready code
+
+## 🛠️ Supported Frameworks
+
+| Framework | Status | Export Formats |
+|-----------|--------|----------------|
+| **PyTorch** | ✅ Full Support | `.py`, `.pt` |
+| **TensorFlow** | ✅ Full Support | `.py`, SavedModel |
+| **ONNX** | 🚧 Coming Soon | `.onnx` |
+
+## 🎨 Architecture Categories
+
+VisionForge supports various neural network architectures:
+
+- **Convolutional Neural Networks (CNNs)** - Image classification, object detection
+- **Recurrent Neural Networks (RNNs)** - Sequence modeling, time series
+- **Transformer Networks** - Attention mechanisms, NLP
+- **Custom Architectures** - Mix and match any layers
+- **Group Blocks** - Create reusable components
+
+## 🔗 External Resources
+
+- [VisionForge GitHub Repository](https://github.com/devgunnu/visionforge)
+- [PyTorch Documentation](https://pytorch.org/docs/)
+- [TensorFlow Documentation](https://www.tensorflow.org/api_docs)
+- [Deep Learning Book](https://www.deeplearningbook.org/)
+
+---
+
+**Ready to start building?** → [Quick Start Guide](getting-started/quickstart.md)
diff --git a/docs/troubleshooting/common-issues.md b/docs/troubleshooting/common-issues.md
new file mode 100644
index 0000000..e6efbb6
--- /dev/null
+++ b/docs/troubleshooting/common-issues.md
@@ -0,0 +1,362 @@
+# Common Issues & Solutions
+
+Find solutions to the most frequently encountered problems when using VisionForge.
+
+## 🚨 Quick Fixes
+
+### Backend Issues
+
+#### Django Server Won't Start
+**Problem**: Server fails to start with port errors
+
+**Solution**:
+```bash
+# Check if port is in use
+netstat -ano | findstr :8000
+
+# Kill the process (Windows), using the PID shown by netstat
+taskkill /PID <pid> /F
+
+# Kill the process (macOS/Linux)
+lsof -ti:8000 | xargs kill -9
+
+# Or use different port
+python manage.py runserver 8080
+```
+
+#### Database Migration Errors
+**Problem**: `django.db.migrations.exceptions.InconsistentMigrationHistory`
+
+**Solution**:
+```bash
+# Delete database and migrate fresh
+rm db.sqlite3
+python manage.py migrate
+```
+
+#### CORS Errors
+**Problem**: `Access-Control-Allow-Origin` header missing
+
+**Solution**:
+1. Ensure both backend and frontend are running
+2. Check `settings.py` CORS configuration:
+ ```python
+ CORS_ALLOWED_ORIGINS = [
+ "http://localhost:3000",
+ "http://localhost:5173",
+ ]
+ ```
+
+### Frontend Issues
+
+#### npm Install Fails
+**Problem**: `npm ERR! code ERESOLVE`
+
+**Solution**:
+```bash
+# Clear npm cache
+npm cache clean --force
+
+# Delete node_modules and package-lock.json
+rm -rf node_modules package-lock.json
+
+# Reinstall
+npm install
+```
+
+#### Development Server Errors
+**Problem**: Vite dev server shows compilation errors
+
+**Solution**:
+```bash
+# Check Node.js version
+node --version # Should be 16+
+
+# If the version is too old, upgrade Node.js (e.g. via nvm)
+nvm install 18
+nvm use 18
+
+# Then reinstall dependencies
+npm install
+```
+
+## 🔗 Connection & Architecture Issues
+
+### Invalid Connections
+
+#### Red Connection Lines
+**Problem**: Connections show as red (invalid)
+
+**Common Causes**:
+- Shape mismatch between layers
+- Incompatible layer types
+- Missing required parameters
+
+**Solutions**:
+1. **Check shape compatibility**:
+ - Hover over connection to see shapes
+ - Review [Layer Connection Rules](../architecture/connection-rules.md)
+
+2. **Verify layer configuration**:
+ - Click each layer to check parameters
+ - Ensure all required fields are filled
+
+3. **Fix common shape issues**:
+ ```
+ Conv2D → Linear (needs Flatten)
+ Input[1,3,224,224] → Conv2D → Flatten → Linear
+ ```
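+The `in_features` of that Linear layer is whatever Flatten produces. A quick sketch for the quickstart's example (a 1×3×224×224 input through a 32-channel, padding-1 conv, which preserves the 224×224 spatial size):
+
+```python
+def flatten_features(channels, height, width):
+    # feature count after flattening a (C, H, W) activation
+    return channels * height * width
+
+print(flatten_features(32, 224, 224))  # 1605632
+```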
+
+#### Orphaned Blocks
+**Problem**: Blocks not connected to the main graph
+
+**Solution**:
+- Connect orphaned blocks to the main flow
+- Or delete them if unnecessary
+- Check validation panel for warnings
+
+### Shape Inference Errors
+
+#### "Cannot determine output shape"
+**Problem**: Shape inference fails for a layer
+
+**Common Causes**:
+- Missing input shape configuration
+- Invalid parameter values
+- Unsupported layer combinations
+
+**Solutions**:
+1. **Set input shape**:
+ ```json
+ {
+ "inputShape": {
+ "dims": [1, 3, 224, 224]
+ }
+ }
+ ```
+
+2. **Check layer parameters**:
+ - Verify kernel sizes are positive
+ - Ensure stride values are reasonable
+ - Check padding values
+
+3. **Review layer sequence**:
+ - Some layers require specific input types
+ - Check [Shape Inference Guide](../architecture/shape-inference.md)
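+Shape problems are easiest to catch with a quick structural check on the dims themselves. An illustrative sketch of that kind of check (not VisionForge's actual validator):
+
+```python
+def valid_image_input(dims):
+    # Image inputs are typically [batch, channels, height, width], all positive
+    return len(dims) == 4 and all(isinstance(d, int) and d > 0 for d in dims)
+
+print(valid_image_input([1, 3, 224, 224]))  # True
+print(valid_image_input([3, 224]))          # False
+```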
+
+## 🚀 Export Issues
+
+### Code Generation Fails
+
+#### "Export failed: Invalid architecture"
+**Problem**: Export fails with validation errors
+
+**Solution**:
+1. **Fix all validation errors**:
+ - Check the validation panel at the bottom
+ - All red indicators must be resolved
+
+2. **Ensure complete architecture**:
+ - Input layer must be connected
+ - Output layer should be present
+ - No circular dependencies
+
+3. **Verify layer configuration**:
+ - All required parameters filled
+ - Parameter values in valid ranges
+
+#### Generated Code Has Errors
+
+**Problem**: Exported Python code doesn't run
+
+**Common Issues**:
+1. **Missing imports**:
+ ```python
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+ ```
+
+2. **Shape mismatches**:
+ - Check calculated vs actual tensor shapes
+ - Verify flatten operations
+
+3. **Framework-specific issues**:
+ - PyTorch vs TensorFlow syntax differences
+ - Parameter naming conventions
+
+**Debug Steps**:
+```python
+# Test your model step by step
+model = YourModel()
+x = torch.randn(1, 3, 224, 224)
+
+# Forward pass with error catching
+try:
+ output = model(x)
+ print(f"Success! Output shape: {output.shape}")
+except Exception as e:
+ print(f"Error: {e}")
+```
+
+## 🖥️ Interface Issues
+
+### Canvas Problems
+
+#### Blocks Not Responding
+**Problem**: Can't select or move blocks
+
+**Solutions**:
+1. **Refresh the page** - Simple browser refresh
+2. **Clear browser cache**:
+ - Ctrl+Shift+R (hard refresh)
+ - Or clear cache in browser settings
+
+#### Zoom/Pan Not Working
+**Problem**: Canvas navigation issues
+
+**Solutions**:
+1. **Check browser compatibility** - Use Chrome, Firefox, or Edge
+2. **Try keyboard shortcuts**:
+ - `Ctrl + Mouse wheel` for zoom
+ - `Click + drag` for pan
+
+#### Performance Issues
+**Problem**: Interface is slow or laggy
+
+**Solutions**:
+1. **Reduce canvas complexity**:
+ - Use group blocks for large architectures
+ - Delete unused blocks
+
+2. **Browser optimization**:
+ - Close unnecessary tabs
+ - Update browser to latest version
+
+## 🔧 Configuration Issues
+
+### Environment Variables
+
+#### API Key Not Working
+**Problem**: AI assistant features not available
+
+**Solution**:
+1. **Check .env file**:
+ ```env
+ GEMINI_API_KEY=your_actual_api_key_here
+ ```
+
+2. **Verify API key validity**:
+ - Check Google AI Studio dashboard
+ - Ensure key is active
+
+3. **Restart servers**:
+ ```bash
+ # Stop both servers and restart
+ python manage.py runserver
+ npm run dev
+ ```
+
+### Database Issues
+
+#### "Database is locked"
+**Problem**: SQLite database access errors
+
+**Solution**:
+```bash
+# Stop Django server
+# Delete database file
+rm db.sqlite3
+
+# Recreate database
+python manage.py migrate
+```
+
+#### Migration Conflicts
+**Problem**: Django migration errors
+
+**Solution**:
+```bash
+# Mark initial migrations as applied without re-running them
+python manage.py migrate --fake-initial
+
+# Then apply any remaining migrations
+python manage.py migrate
+```
+
+## 📱 Browser-Specific Issues
+
+### Chrome/Edge
+- **Works best** with latest versions
+- **Enable hardware acceleration** in settings
+
+### Firefox
+- **May need** to enable WebRTC features
+- **Check** about:config for media settings
+
+### Safari
+- **Limited support** - some features may not work
+- **Use Chrome** for full functionality
+
+## 🆘 Getting Help
+
+### Self-Service Resources
+
+1. **Check the validation panel** - Bottom of the interface
+2. **Hover over elements** - Tooltips provide helpful information
+3. **Review the documentation**:
+ - [Layer Connection Rules](../architecture/connection-rules.md)
+ - [Shape Inference](../architecture/shape-inference.md)
+ - [API Reference](../api/rest-api.md)
+
+### Community Support
+
+1. **GitHub Issues**:
+ - Search existing issues first
+ - Provide detailed error messages
+ - Include screenshots when helpful
+
+2. **Discord Community**:
+ - Real-time help from other users
+ - Share your architecture for feedback
+
+### Reporting Issues
+
+When reporting problems, include:
+
+1. **System Information**:
+ - Operating system
+ - Browser version
+ - Python and Node.js versions
+
+2. **Error Details**:
+ - Full error messages
+ - Steps to reproduce
+ - Screenshots if applicable
+
+3. **Architecture Details**:
+ - Export your architecture (JSON)
+ - Describe expected vs actual behavior
+
+## 📋 Prevention Checklist
+
+To avoid common issues:
+
+### Before Starting
+- [ ] Check system requirements
+- [ ] Update browsers and dependencies
+- [ ] Set up environment variables
+
+### During Development
+- [ ] Save work frequently
+- [ ] Validate after each major change
+- [ ] Check connection colors (green = valid)
+
+### Before Export
+- [ ] Fix all validation errors
+- [ ] Verify all parameters are set
+- [ ] Test with sample data
+
+### Regular Maintenance
+- [ ] Clear browser cache weekly
+- [ ] Update dependencies monthly
+- [ ] Backup important projects
+
+---
+
+**Still stuck?** → [Contact Support](mailto:support@visionforge.ai)
diff --git a/mkdocs.yml b/mkdocs.yml
new file mode 100644
index 0000000..c454a5b
--- /dev/null
+++ b/mkdocs.yml
@@ -0,0 +1,100 @@
+site_name: VisionForge User Documentation
+site_description: Complete guide for building neural networks with VisionForge
+site_author: VisionForge Team
+site_url: https://visionforge-docs.example.com
+
+repo_name: devgunnu/visionforge
+repo_url: https://github.com/devgunnu/visionforge
+
+nav:
+ - Home: index.md
+ - Getting Started:
+ - Installation: getting-started/installation.md
+ - Quick Start: getting-started/quickstart.md
+ - Interface Overview: getting-started/interface.md
+ - Architecture Design:
+ - Creating Architecture Diagrams: architecture/creating-diagrams.md
+ - Layer Connection Rules: architecture/connection-rules.md
+ - Shape Inference: architecture/shape-inference.md
+ - Validation System: architecture/validation.md
+ - Layer Reference:
+ - Input Layers: layers/input.md
+ - Core Layers: layers/core.md
+ - Activation Functions: layers/activation.md
+ - Pooling Layers: layers/pooling.md
+ - Merge Operations: layers/merge.md
+ - Advanced Layers: layers/advanced.md
+ - Examples:
+ - Simple CNN: examples/simple-cnn.md
+ - ResNet Architecture: examples/resnet.md
+ - LSTM Networks: examples/lstm.md
+ - Custom Group Blocks: examples/group-blocks.md
+ - Code Generation:
+ - PyTorch Export: codegen/pytorch.md
+ - TensorFlow Export: codegen/tensorflow.md
+ - Custom Templates: codegen/custom-templates.md
+ - API Reference:
+ - REST API: api/rest-api.md
+ - Node Definitions: api/node-definitions.md
+ - Advanced Topics:
+ - Group Blocks: advanced/group-blocks.md
+ - AI Assistant: advanced/ai-assistant.md
+ - Project Sharing: advanced/sharing.md
+ - Troubleshooting:
+ - Common Issues: troubleshooting/common-issues.md
+ - Validation Errors: troubleshooting/validation-errors.md
+ - Performance Tips: troubleshooting/performance.md
+
+theme:
+ name: material
+ palette:
+ - scheme: default
+ primary: blue
+ accent: blue
+ toggle:
+ icon: material/brightness-7
+ name: Switch to dark mode
+ - scheme: slate
+ primary: blue
+ accent: blue
+ toggle:
+ icon: material/brightness-4
+ name: Switch to light mode
+ features:
+ - navigation.tabs
+ - navigation.sections
+ - navigation.expand
+ - search.highlight
+ - search.share
+ font:
+ text: Roboto
+ code: Roboto Mono
+
+plugins:
+ - search
+ - mermaid2
+ - mkdocs-video
+ - glightbox
+
+markdown_extensions:
+ - codehilite
+ - admonition
+ - toc:
+ permalink: true
+ - tables
+ - fenced_code
+ - pymdownx.superfences:
+ custom_fences:
+ - name: mermaid
+ class: mermaid
+ format: !!python/name:pymdownx.superfences.fence_code_format
+ - pymdownx.inlinehilite
+ - pymdownx.snippets
+ - pymdownx.details
+
+extra:
+ social:
+ - icon: fontawesome/brands/github
+ link: https://github.com/devgunnu/visionforge
+ - icon: fontawesome/brands/twitter
+ link: https://twitter.com/visionforge