Image-to-Video Generation Using ComfyUI WAN 2.2 + RunPod Serverless
Complete Tutorial with Free Resources
Learn how to generate videos from images using ComfyUI WAN 2.2 and RunPod Serverless! All resources, code, and helper files are available for free.

Complete Setup Guide
Step-by-step walkthrough from image input to video generation
What You'll Learn
- ✓ ComfyUI WAN 2.2 setup on RunPod
- ✓ Image-to-video workflow configuration
- ✓ Model downloads and JupyterLab setup
- ✓ Serverless endpoint deployment
- ✓ Postman API testing workflows
- ✓ Custom web app for image-to-video generation
Download Resources
Everything you need for image-to-video generation setup
Image-to-Video Generation
- ✓ ComfyUI WAN 2.2 workflow files
- ✓ Complete setup guide with instructions
- ✓ Web app for image-to-video generation
- ✓ Serverless API configuration
- ✓ Model download scripts
Extra Resources
- ✓ Postman collection for API testing
- ✓ Docker configuration files
- ✓ Network storage setup guide
- ✓ JupyterLab configuration
- ✓ Production deployment tips
Tutorial Overview
ComfyUI WAN 2.2 Image-to-Video Setup Guide
This comprehensive tutorial covers the complete setup process for ComfyUI WAN 2.2 image-to-video generation on RunPod, including network storage, model downloads, JupyterLab configuration, serverless deployment, and web app testing.
1. Create Required Accounts
- RunPod – for GPU pods and the serverless endpoint used later in this tutorial.
  Referral link: https://runpod.io?ref=ckrxhc11
  Signing up and depositing $10 through this referral may grant a one-time credit bonus between $5 and $500.
- HuggingFace – required for accessing and downloading AI models: https://huggingface.co/
- Civitai – used to download community models: https://civitai.com/
2. Fund Your RunPod Account
Ensure you have at least $10 in your RunPod account to spin up GPU pods and receive the bonus credit (if using referral).
3. Create Network Storage
- Go to the "Network Volumes" section in RunPod.
- Create a new volume with at least 40 GB of space.
- This will allow persistent storage between pod sessions.
4. Deploy a Pod with GPU
- From the RunPod dashboard, click "Deploy Pod".
- Select a GPU (e.g., RTX A5000 for better performance).
- Under Templates, choose: ComfyUI Manager – Permanent Disk – torch 2.4
- Attach your network volume during setup.
5. Launch the Environment
- Wait for the pod to fully install (may take several minutes).
- Once the pod is ready, click "Connect" > "Open in Jupyter Notebook".
- In the Jupyter interface, open the terminal and run the template's start script:
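The script name below is the one referenced in this guide's summary checklist; the /workspace working directory is an assumption based on the template's default layout.

```bash
cd /workspace    # assumed location of the template's start script
./run_gpu.sh     # launches ComfyUI (script name from the summary checklist)
```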
6. From Templates, Choose WAN 2.2 Image to Video
Select the WAN 2.2 image-to-video template for your ComfyUI workflow.
7. Install Necessary Models
Download and install all required WAN 2.2 models for image-to-video generation. The six files are listed below; a terminal script covering the whole list is sketched after it.
1. wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
Location: /workspace/ComfyUI/models/diffusion_models/
URL: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_high_noise_14B_fp8_scaled.safetensors
2. wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
Location: /workspace/ComfyUI/models/diffusion_models/
URL: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/diffusion_models/wan2.2_t2v_low_noise_14B_fp8_scaled.safetensors
3. wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors
Location: /workspace/ComfyUI/models/loras/
URL: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise.safetensors
4. wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors
Location: /workspace/ComfyUI/models/loras/
URL: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/loras/wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise.safetensors
5. wan_2.1_vae.safetensors (VAE model)
Location: /workspace/ComfyUI/models/vae/
URL: https://huggingface.co/Comfy-Org/Wan_2.2_ComfyUI_Repackaged/resolve/main/split_files/vae/wan_2.1_vae.safetensors
6. umt5_xxl_fp8_e4m3fn_scaled.safetensors
Location: /workspace/ComfyUI/models/text_encoders/
URL: https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/resolve/main/split_files/text_encoders/umt5_xxl_fp8_e4m3fn_scaled.safetensors
8. Test the Workflow
Make sure the model names in the workflow match the files downloaded above.
- Open ComfyUI and load your image-to-video workflow
- Verify all model paths point to the correct downloaded files
- Test the workflow with a sample image to ensure it generates video correctly
9. Move the Models Folder
Move the models folder from the ComfyUI directory to the workspace root, as shown below.
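A single move in the Jupyter terminal does this (paths as used throughout this guide):

```bash
# Move the models folder to the workspace root so the serverless worker
# can reach it directly from network storage
mv /workspace/ComfyUI/models /workspace/models
```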
10. Clean Up Workspace
Remove the ComfyUI folder and any other unnecessary folders, keeping only the models folder in your workspace, ready for serverless deployment; see the commands below.
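A minimal cleanup from the terminal (double-check the path before running rm -rf):

```bash
# Delete the ComfyUI install; only /workspace/models should remain
rm -rf /workspace/ComfyUI
ls /workspace    # expected output: models
```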
11. Terminate the Pod
All files are stored in Network Storage and can be accessed by the serverless endpoint.
- Terminate the current pod to save costs
- All models and files remain accessible via Network Storage
12. Upload to Private GitHub Repository
Now we will start setting up the serverless deployment.
- Upload the provided WAN serverless folder to a private GitHub repository
- Create the private repo and push your ComfyUI files, Dockerfile, and snapshot
- Make sure large model files are not tracked unless necessary (a command sketch follows this list)
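A minimal sequence for this step, assuming the provided folder is named wan-serverless on your machine and that you already created an empty private repo on GitHub (the folder and repo names here are illustrative):

```bash
cd wan-serverless                          # the provided serverless folder (name illustrative)
git init
# Keep large model files out of the repo -- they live on network storage instead
printf '%s\n' '*.safetensors' '*.ckpt' 'models/' > .gitignore
git add .
git commit -m "WAN 2.2 image-to-video serverless worker"
git branch -M main
git remote add origin https://github.com/<your-username>/wan-serverless.git
git push -u origin main
```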
13. Deploy as Serverless Endpoint
- In RunPod, connect your GitHub account
- Choose the GitHub repo you just created
- Add your HuggingFace token for model access
14. Configure Endpoint Settings
After successful deployment, edit the endpoint and add the following:
- Attach the network storage volume where you downloaded all the models
- Add new environment variables:
COMFY_POLLING_INTERVAL_MS=500
15. Save and Wait
Save the configuration and wait for the deployment to finish.
16. Testing the Endpoint
Two methods are available for testing; a curl equivalent of the request is sketched after this list:
- Method 1: Postman
- Method 2: Custom Web App
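For a quick check without Postman, a RunPod serverless endpoint can also be called directly over HTTPS. This is a hedged sketch: the runsync route and Bearer authentication are the standard RunPod serverless API, but the input keys (workflow, images) follow a common ComfyUI worker convention and may differ in your worker, so mirror whatever your Postman collection uses.

```bash
# Synchronous test request to the deployed endpoint.
# <ENDPOINT_ID> comes from the RunPod console; RUNPOD_API_KEY is your API key.
curl -s -X POST "https://api.runpod.ai/v2/<ENDPOINT_ID>/runsync" \
  -H "Authorization: Bearer $RUNPOD_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "input": {
      "workflow": { "REPLACE": "exported ComfyUI workflow in API format" },
      "images": [
        { "name": "input.png", "image": "BASE64_ENCODED_SOURCE_IMAGE" }
      ]
    }
  }'
```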
Notes
- If you modified the ComfyUI setup, update your Postman collection and app code accordingly
- Match the API structure, input keys, output format, model names, etc.
Summary Checklist
| Task | Status |
|---|---|
| Created required accounts (RunPod, HuggingFace, Civitai) | ✅ |
| Funded RunPod account with $10+ | ✅ |
| Created network storage (40GB+) | ✅ |
| Deployed pod with GPU and ComfyUI template | ✅ |
| Launched environment and ran ./run_gpu.sh | ✅ |
| Selected WAN 2.2 image-to-video template | ✅ |
| Downloaded all 6 required models | ✅ |
| Tested workflow and verified model names | ✅ |
| Moved models folder to workspace root | ✅ |
| Cleaned up workspace (removed ComfyUI folder) | ✅ |
| Terminated pod to save costs | ✅ |
| Uploaded files to private GitHub repository | ✅ |
| Deployed as serverless endpoint | ✅ |
| Added network storage to endpoint | ✅ |
| Configured environment variables | ✅ |
| Tested endpoint with Postman or web app | ✅ |
This tutorial provides everything you need to create your own AI image-to-video generation app using ComfyUI WAN 2.2 on RunPod. All files, code, and templates are available for FREE!