# Fal.ai
**Model not listed? Add it in seconds.**

**Option 1 — GitHub issue (no local setup needed):** Open an "Add Fal model" issue; a bot picks it up, runs the skill, and opens a PR automatically.

**Option 2 — Claude Code skill (in your terminal):** First, install the skill into your project:
```bash
mkdir -p .claude/skills/add-fal-model
curl -o .claude/skills/add-fal-model/SKILL.md \
  https://raw.githubusercontent.com/vertexcover-io/tarash/master/.claude/skills/add-fal-model/SKILL.md
```
Or view the skill file on GitHub to download it manually. Then run `/add-fal-model` in Claude Code.
Both options fetch the model schema from fal.ai, generate field mappers, register the model, and write unit and e2e tests automatically.
Fal.ai is a serverless AI inference platform hosting multiple video and image generation models including Veo3, Sora, Minimax (Hailuo), Kling, and Flux.
## Installation
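This section does not pin an exact command; assuming the package is published on PyPI under the name `tarash` (an assumption based on the repository name, not confirmed here), installation would be the usual:

```bash
# Assumed PyPI package name; verify against the project's README.
pip install tarash
```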
## Quick Example
```python
from tarash.tarash_gateway import generate_video
from tarash.tarash_gateway.models import VideoGenerationConfig, VideoGenerationRequest

config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    api_key="YOUR_FAL_KEY",
)

request = VideoGenerationRequest(
    prompt="A hummingbird hovering over a flower, slow motion",
    duration_seconds=6,
    aspect_ratio="16:9",
    generate_audio=True,
    seed=42,
)

response = generate_video(config, request)
print(response.video)
```
## Configuration
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `provider` | `str` | ✅ | — | Must be `"fal"` |
| `model` | `str` | ✅ | — | Model ID, e.g. `fal-ai/veo3.1` |
| `api_key` | `str \| None` | ✅ | — | Fal API key |
| `timeout_seconds` | `int` | — | 300 | Per-request timeout in seconds |
| `max_poll_attempts` | `int` | — | 120 | Max polling iterations |
| `poll_interval` | `int` | — | 5 | Seconds between polls |
```python
from tarash.tarash_gateway.models import VideoGenerationConfig

config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    api_key="...",
    timeout_seconds=300,
    max_poll_attempts=120,
    poll_interval=5,
)
```
## Supported Models
| Family | Page | Models | Type |
|---|---|---|---|
| Veo | Veo | `fal-ai/veo3`, `fal-ai/veo3.1` | Video |
| Kling | Kling | `fal-ai/kling-video/v2.6`, `fal-ai/kling-video/o1`, `fal-ai/kling-video/v3`, `fal-ai/kling-video/o3/` | Video |
| Minimax | Minimax | `fal-ai/minimax` | Video |
| Wan | Wan | `wan/v2.6/`, `fal-ai/wan-25-preview/`, `fal-ai/wan/v2.2-a14b/` | Video |
| Sora | Sora | `fal-ai/sora-2` | Video |
| Seedance | Seedance | `fal-ai/bytedance/seedance`, `bytedance/seedance-2.0` | Video |
| Pixverse | Pixverse | `fal-ai/pixverse/v5`, `fal-ai/pixverse/v5.5`, `fal-ai/pixverse/lipsync` | Video |
| OmniHuman | OmniHuman | `fal-ai/bytedance/omnihuman`, `fal-ai/bytedance/omnihuman/v1.5` | Video |
| Sync Lipsync | Sync Lipsync | `fal-ai/sync-lipsync`, `fal-ai/sync-lipsync/v2`, `fal-ai/sync-lipsync/v2/pro` | Video |
| Reve | Reve | `fal-ai/reve/text-to-image`, `fal-ai/reve/edit`, `fal-ai/reve/remix` | Image |
| Grok Imagine | Grok Imagine | `xai/grok-imagine-image`, `xai/grok-imagine-image/edit` | Image |
| Seedream | Seedream | `fal-ai/bytedance/seedream/v5/lite/text-to-image`, `fal-ai/bytedance/seedream/v5/lite/edit` | Image |
| Nano Banana 2 | Nano Banana 2 | `fal-ai/nano-banana-2`, `fal-ai/nano-banana-2/edit` | Image |
| GPT Image 2 | GPT Image 2 | `openai/gpt-image-2/edit` | Image |
| MiniMax Speech | MiniMax Speech | `fal-ai/minimax/speech-2.8-hd` | Audio (TTS) |
| Qwen 3 TTS | Qwen 3 TTS | `fal-ai/qwen-3-tts` | Audio (TTS) |
Model lookup uses prefix matching: `fal-ai/veo3.1/fast` matches the `fal-ai/veo3.1` registry entry, so any sub-variant automatically inherits the right field mappers.
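The lookup described above amounts to a longest-prefix match over registry keys. A simplified sketch of the idea (illustrative only, not the library's actual code):

```python
# Hypothetical sketch of prefix-based model lookup (not Tarash's actual code).
REGISTRY = {
    "fal-ai/veo3.1": "veo3.1 mappers",
    "fal-ai/veo3": "veo3 mappers",
    "fal-ai/kling-video/v2.6": "kling v2.6 mappers",
}

def resolve_mappers(model_id: str):
    """Return the entry whose key is the longest prefix of model_id."""
    best = None
    for key, mappers in REGISTRY.items():
        # A key matches if the model ID equals it or extends it at a "/" boundary,
        # so "fal-ai/veo3" does NOT accidentally match "fal-ai/veo3.1/fast".
        if model_id == key or model_id.startswith(key + "/"):
            if best is None or len(key) > len(best[0]):
                best = (key, mappers)
    return best[1] if best else None

print(resolve_mappers("fal-ai/veo3.1/fast"))  # "veo3.1 mappers"
```

The path-boundary check matters: plain `startswith` would let `fal-ai/veo3` shadow `fal-ai/veo3.1` variants.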
Any Fal model not in this table gets generic mappers (prompt passthrough plus common fields). For full support with model-specific parameters, use `/add-fal-model` in Claude Code.
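As a rough illustration of what a generic fallback mapper might do (the function name and payload field names here are assumptions for the sketch, not the library's actual schema):

```python
# Hypothetical sketch of a generic request -> payload mapper for unknown Fal models.
def generic_map(request: dict) -> dict:
    """Pass the prompt through and forward a few common optional fields."""
    payload = {"prompt": request["prompt"]}
    # Common fields are copied only when the caller actually set them.
    for field in ("duration_seconds", "aspect_ratio", "seed"):
        if request.get(field) is not None:
            payload[field] = request[field]
    return payload

print(generic_map({"prompt": "a red fox", "seed": 7, "aspect_ratio": None}))
# {'prompt': 'a red fox', 'seed': 7}
```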
## Provider Notes
**Event-based polling:** Fal uses native event streaming (`iter_events()`), which delivers real-time progress updates without constant HTTP polling.
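The shape of event-stream consumption, as opposed to a sleep-and-refetch polling loop, can be illustrated with a toy generator (this is not the fal client API, just the pattern):

```python
# Toy illustration of consuming a progress event stream (NOT the fal client API).
def event_stream():
    """Simulated server-side event generator."""
    yield {"type": "progress", "percent": 30}
    yield {"type": "progress", "percent": 70}
    yield {"type": "completed", "video": "https://example.invalid/out.mp4"}

result = None
for event in event_stream():  # analogous to iter_events(): no sleep/re-fetch loop
    if event["type"] == "progress":
        print(f"progress: {event['percent']}%")
    elif event["type"] == "completed":
        result = event["video"]

print(result)
```

Each event arrives as the server emits it, so progress reporting has no polling-interval latency.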
**Extend-video constraints:** The `fal-ai/veo3.1/fast/extend-video` endpoint has stricter limits:
- Duration: 7s only
- Resolution: 720p only
- Requires both prompt and a video input
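The constraints above can be expressed as a small pre-flight check. A sketch under assumptions: the field names `resolution` and `video_url` are illustrative guesses at the request shape, not confirmed API.

```python
# Hypothetical pre-flight validation mirroring the extend-video limits listed above.
# "resolution" and "video_url" are assumed field names for illustration.
def validate_extend_video(request: dict) -> list[str]:
    errors = []
    if request.get("duration_seconds") != 7:
        errors.append("duration must be exactly 7 seconds")
    if request.get("resolution") != "720p":
        errors.append("resolution must be 720p")
    if not request.get("prompt") or not request.get("video_url"):
        errors.append("both a prompt and a video input are required")
    return errors

print(validate_extend_video({
    "duration_seconds": 7,
    "resolution": "720p",
    "prompt": "continue the scene",
    "video_url": "input.mp4",
}))  # [] -> request satisfies all three constraints
```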