# Tarash Gateway

One API. Every AI video and image provider.

```python
from tarash.tarash_gateway import generate_video
from tarash.tarash_gateway.models import VideoGenerationConfig, VideoGenerationRequest

config = VideoGenerationConfig(provider="fal", model="fal-ai/veo3.1/fast")
request = VideoGenerationRequest(prompt="A cat playing piano, cinematic lighting")

response = generate_video(config, request)
print(response.video)  # → URL to generated video
```
Switch providers by changing two words: the `provider` and `model` fields are all that differ between backends.
## Why Tarash Gateway
Sora, Veo3, Runway, Kling, Imagen — every provider ships a different API, different parameters, and different polling logic. Tarash Gateway handles the translation. You write one integration and it runs on all of them.
## Features

### Fallback chains

If a provider fails or rate-limits, Tarash Gateway automatically tries the next one in your chain:

```python
config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    fallback_configs=[
        VideoGenerationConfig(provider="replicate", model="google/veo-3.1"),
        VideoGenerationConfig(provider="openai", model="openai/sora-2"),
    ],
)

response = generate_video(config, request)
# Fal → Replicate → OpenAI — first success wins
```
### Mock provider

Test your full pipeline locally without hitting any API or spending credits:

```python
from tarash.tarash_gateway.mock import MockConfig

config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    mock=MockConfig(enabled=True),
)

response = generate_video(config, request)
print(response.is_mock)  # True — no real API call was made
```
### Progress callbacks

Track generation in real time with sync or async callbacks:

```python
def on_progress(update):
    print(f"[{update.status}] {update.progress_percent}%")

response = generate_video(config, request, on_progress=on_progress)
```
### Raw response access

Every response keeps the original provider JSON — useful for debugging or reading provider-specific fields not in the standard interface:

```python
response = generate_video(config, request)
print(response.raw_response)  # original provider JSON, unmodified
```
### Sync + async

Every call has both a sync and an async variant:

```python
response = generate_video(config, request)              # sync
response = await generate_video_async(config, request)  # async
```
## Providers

### Video Generation

| Model | Variants | Provider(s) |
|---|---|---|
| Veo 3 | `fal-ai/veo3`<br>`fal-ai/veo3.1/fast`<br>`fal-ai/veo3.1/fast/image-to-video`<br>`fal-ai/veo3.1/fast/first-last-frame-to-video`<br>`fal-ai/veo3.1/fast/extend-video`<br>`veo-3.0-generate-001` (Google)<br>`veo-3.0-generate-preview` (Google)<br>`google/veo-3` (Replicate)<br>`google/veo-3.1` (Replicate) | Fal.ai · Google · Replicate |
| Kling | `fal-ai/kling-video/v2.6`<br>`fal-ai/kling-video/v2.6/standard/motion-control`<br>`fal-ai/kling-video/v3/pro/text-to-video`<br>`fal-ai/kling-video/v3/pro/image-to-video`<br>`fal-ai/kling-video/v3/standard/text-to-video`<br>`fal-ai/kling-video/v3/standard/image-to-video`<br>`fal-ai/kling-video/o1/image-to-video`<br>`fal-ai/kling-video/o1/standard/reference-to-video`<br>`fal-ai/kling-video/o1/standard/video-to-video/edit`<br>`fal-ai/kling-video/o1/standard/video-to-video/reference`<br>`fal-ai/kling-video/o3/pro/text-to-video`<br>`fal-ai/kling-video/o3/pro/image-to-video`<br>`fal-ai/kling-video/o3/standard/reference-to-video`<br>`kwaivgi/kling-v2.1` (Replicate) | Fal.ai · Replicate |
| Sora | `sora-2`<br>`sora-2-pro`<br>`fal-ai/sora-2/text-to-video`<br>`fal-ai/sora-2/image-to-video` | OpenAI · Fal.ai |
| Minimax | `fal-ai/minimax/video-01`<br>`fal-ai/minimax/hailuo-02-fast/image-to-video`<br>`minimax/video-01` (Replicate) | Fal.ai · Replicate |
| WAN | `fal-ai/wan-25-preview/text-to-video`<br>`fal-ai/wan-25-preview/image-to-video`<br>`fal-ai/wan/v2.2-14b/animate/move`<br>`fal-ai/wan/v2.2-a14b/image-to-video`<br>`fal-ai/wan/v2.2-a14b/image-to-video/lora`<br>`fal-ai/wan/v2.2-a14b/text-to-video/lora`<br>`fal-ai/wan/v2.2-a14b/video-to-video` | Fal.ai · Replicate |
| Seedance | `fal-ai/bytedance/seedance/v1.5/pro/text-to-video`<br>`fal-ai/bytedance/seedance/v1/pro/image-to-video`<br>`fal-ai/bytedance/seedance/v1/lite/reference-to-video` | Fal.ai |
| Pixverse | `fal-ai/pixverse/v5.5/text-to-video`<br>`fal-ai/pixverse/v5.5/image-to-video`<br>`fal-ai/pixverse/v5/text-to-video`<br>`fal-ai/pixverse/lipsync` | Fal.ai |
| OmniHuman | `fal-ai/bytedance/omnihuman/v1.5`<br>`fal-ai/bytedance/omnihuman` | Fal.ai |
| Kling Lip Sync | `kwaivgi/kling-lip-sync` (Replicate) | Replicate |
| Sync Lipsync | `fal-ai/sync-lipsync/v2/pro`<br>`fal-ai/sync-lipsync/v2`<br>`fal-ai/sync-lipsync` | Fal.ai |
| Runway | `gen4_turbo`<br>`gen4_aleph`<br>`veo3.1`<br>`veo3.1_fast` | Runway |
| Grok Imagine | `grok-imagine-video` | xAI |
### Image Generation

| Model | Variants | Provider(s) |
|---|---|---|
| DALL-E | `dall-e-3`<br>`dall-e-2` | OpenAI |
| GPT Image | `gpt-image-1.5` | OpenAI |
| Imagen 3 | `imagen-3.0-generate-001`<br>`imagen-3.0-generate-002`<br>`imagen-3.0-fast-generate-001` | Google |
| Gemini Image | `gemini-2.5-flash-image-preview`<br>`gemini-3-pro-image-preview` | Google |
| Flux | `fal-ai/flux/dev`<br>`fal-ai/flux/schnell`<br>`fal-ai/flux-2`<br>`fal-ai/flux-2/pro`<br>`fal-ai/flux-2/dev`<br>`fal-ai/flux-2/flex` | Fal.ai |
| Stable Diffusion | `sd3.5-large`<br>`sd3.5-medium`<br>`sd3.5-large-turbo` | Stability AI |
| Stable Image | `stable-image-ultra`<br>`stable-image-core` | Stability AI |
| Recraft | `fal-ai/recraft-v3`<br>`fal-ai/recraft` | Fal.ai |
| Ideogram | `fal-ai/ideogram` | Fal.ai |
| Z-Image Turbo | `fal-ai/z-image/turbo` | Fal.ai |
| Reve | `fal-ai/reve/text-to-image`<br>`fal-ai/reve/edit`<br>`fal-ai/reve/remix` | Fal.ai |
| Grok Imagine | `xai/grok-imagine-image`<br>`xai/grok-imagine-image/edit` | Fal.ai |
| Grok Imagine (native) | `grok-imagine-image`<br>`grok-2-image` | xAI |
| Seedream | `fal-ai/bytedance/seedream/v5/lite/text-to-image`<br>`fal-ai/bytedance/seedream/v5/lite/edit` | Fal.ai |
| Nano Banana 2 | `fal-ai/nano-banana-2`<br>`fal-ai/nano-banana-2/edit` | Fal.ai |
### Audio Generation (TTS)

| Model | Variants | Provider(s) |
|---|---|---|
| MiniMax Speech | `fal-ai/minimax/speech-2.8-hd`<br>`fal-ai/minimax/speech-2.8-turbo`<br>`fal-ai/minimax/speech-2.6-hd`<br>`fal-ai/minimax/speech-2.6-turbo` | Fal.ai |
| ElevenLabs | `eleven_multilingual_v2`<br>`eleven_flash_v2_5`<br>`eleven_turbo_v2_5` | ElevenLabs |
| Qwen 3 TTS | `fal-ai/qwen-3-tts` | Fal.ai |
| Cartesia | `sonic-3`<br>`sonic-turbo` | Cartesia |
| Sarvam AI | `bulbul:v2`<br>`bulbul:v3` | Sarvam |
| Hume AI | `octave-2`<br>`octave-1` | Hume |