What is ControlNet?
ControlNet lets you guide AI image generation using a reference image. Instead of just describing what you want in text, you provide a structural guide — such as a body pose, an edge map, or a depth map.
This gives you precise control over composition while letting the AI handle style and details.
4 ControlNet Modes
1. Canny Edge — Detects edges in your image. Best for: architecture, objects, logos. The AI preserves the outline but changes everything else.
2. Depth — Creates a 3D depth map. Best for: scenes with foreground/background separation. Maintains spatial relationships.
3. OpenPose — Detects human body poses. Best for: character art, fashion, portraits. Upload a photo of someone posing, and the AI generates a new character in the same pose.
4. Scribble — Uses rough sketches as guides. Best for: quick concepts. Draw a rough sketch and let the AI fill in the details.
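Each mode starts by converting your reference image into a control map. To make that concrete, here is a simplified sketch of the Canny-style step: a Sobel gradient magnitude over a grayscale image. (Real Canny also applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; this toy version, and all names in it, are illustrative only.)

```python
import numpy as np

def edge_map(gray: np.ndarray) -> np.ndarray:
    """Simplified edge detector: Sobel gradient magnitude.

    This is only the first stage of a real Canny detector, shown to
    illustrate what an edge-based control map captures: outlines,
    with flat regions left empty for the AI to fill in.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = gray.shape
    out = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = gray[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[y, x] = np.hypot(gx, gy)
    return out

# Tiny test image: dark left half, bright right half (a vertical edge).
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = edge_map(img)
# The gradient magnitude peaks at the dark/bright boundary and is
# zero in the flat regions — exactly the structure ControlNet keeps.
```

The AI then preserves where those edge responses are strong while repainting the flat regions according to your prompt.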
Control Strength
The Control Strength slider (0.0 - 2.0) determines how strictly the AI follows your reference:
- 0.3 - 0.5: Loose guidance. AI takes creative liberty.
- 0.7 - 1.0: Balanced. Recommended starting point.
- 1.2 - 1.5: Strong adherence to the reference.
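Under the hood, ControlNet implementations typically scale the control signal before adding it to the base model's features at each step. A minimal sketch of that idea (function and variable names are hypothetical, not EGAKU AI internals):

```python
def apply_control(base_features, control_residual, strength):
    """Blend the control signal into the base model's features.

    strength corresponds to the Control Strength slider: 0.0 ignores
    the reference entirely, and larger values make the reference's
    structure dominate the result.
    """
    return [b + strength * c for b, c in zip(base_features, control_residual)]

# Toy feature vectors, purely illustrative.
base = [0.5, -0.2, 1.0]
ctrl = [1.0, 1.0, -1.0]

loose = apply_control(base, ctrl, 0.3)   # mostly the base model's idea
strict = apply_control(base, ctrl, 1.5)  # the reference dominates
```

This is why low values feel "loose" and high values feel rigid: the slider linearly scales how much of the reference's structure is injected.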
How to Use in EGAKU AI
1. Go to Generate → ControlNet tab.
2. Upload a reference image.
3. Choose a mode (Canny, Depth, OpenPose, or Scribble).
4. Write a prompt describing the style you want.
5. Adjust control strength (start at 0.8).
6. Generate!
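The steps above boil down to four inputs. The sketch below assembles them into a single request payload with basic validation; the schema and field names are hypothetical illustrations, not EGAKU AI's actual API.

```python
VALID_MODES = {"canny", "depth", "openpose", "scribble"}

def build_request(mode: str, reference_image: str, prompt: str,
                  control_strength: float = 0.8) -> dict:
    """Assemble a ControlNet generation request (hypothetical schema)."""
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    if not 0.0 <= control_strength <= 2.0:
        raise ValueError("control_strength must be between 0.0 and 2.0")
    return {
        "mode": mode,
        "reference_image": reference_image,
        "prompt": prompt,
        "control_strength": control_strength,
    }

# Example: reuse a pose from a photo, restyle everything else.
req = build_request("openpose", "pose_photo.png",
                    "anime character in a dynamic running pose")
```

Note the default of 0.8 mirrors the recommended starting point, and the 0.0–2.0 bound matches the slider's range.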