
Tarash Gateway

One API. Every AI video, image, and audio provider.

from tarash.tarash_gateway import generate_video
from tarash.tarash_gateway.models import VideoGenerationConfig, VideoGenerationRequest

config = VideoGenerationConfig(provider="fal", model="fal-ai/veo3.1/fast")
request = VideoGenerationRequest(prompt="A cat playing piano, cinematic lighting")
response = generate_video(config, request)
print(response.video)  # → URL to generated video

Switch providers by changing two words:

config = VideoGenerationConfig(provider="runway", model="gen4_turbo")

Install only the providers you need, or all of them (quoted so the extras survive shells like zsh):

pip install "tarash-gateway[fal]"     # single provider
pip install "tarash-gateway[all]"     # every provider



Why Tarash Gateway

Sora, Veo3, Runway, Kling, Imagen — every provider ships a different API, different parameters, and different polling logic. Tarash Gateway handles the translation. You write one integration and it runs on all of them.


Features

Fallback chains

If a provider fails or rate-limits, Tarash Gateway automatically tries the next one in your chain:

config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    fallback_configs=[
        VideoGenerationConfig(provider="replicate", model="google/veo-3.1"),
        VideoGenerationConfig(provider="openai", model="openai/sora-2"),
    ],
)
response = generate_video(config, request)
# Fal → Replicate → OpenAI — first success wins
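Conceptually (this is not Tarash Gateway's actual internals, just the behavior it promises), a fallback chain is a first-success-wins loop over the ordered configs:

```python
# Conceptual sketch of first-success-wins fallback over an ordered
# list of provider configs; errors are collected for the final report.
def try_in_order(configs, request, generate):
    errors = []
    for config in configs:
        try:
            return generate(config, request)      # first success wins
        except Exception as exc:                  # provider failed or rate-limited
            errors.append(f"{config}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))

# Stand-in "providers": the first two fail, the third succeeds.
calls = []
def fake_generate(config, request):
    calls.append(config)
    if config != "openai":
        raise RuntimeError("rate limited")
    return f"video from {config}"

result = try_in_order(["fal", "replicate", "openai"], "a cat", fake_generate)
print(result)   # → video from openai
print(calls)    # → ['fal', 'replicate', 'openai']
```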

Fallback & Routing guide →

Mock provider

Test your full pipeline locally without hitting any API or spending credits:

from tarash.tarash_gateway.mock import MockConfig

config = VideoGenerationConfig(
    provider="fal",
    model="fal-ai/veo3.1/fast",
    mock=MockConfig(enabled=True),
)
response = generate_video(config, request)
print(response.is_mock)   # True — no real API call was made

Mock Provider guide →

Progress callbacks

Track generation in real time with sync or async callbacks:

def on_progress(update):
    print(f"[{update.status}] {update.progress_percent}%")

response = generate_video(config, request, on_progress=on_progress)
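Under the hood, providers report status via polling, and each update is forwarded to your callback. A simplified, self-contained sketch of that pattern (the `ProgressUpdate` class here is a stand-in for the library's update object, not its real definition):

```python
from dataclasses import dataclass

@dataclass
class ProgressUpdate:              # stand-in for the library's update object
    status: str
    progress_percent: int

def poll_until_done(next_update, on_progress):
    """Simplified polling loop: forward every update, stop on a terminal status."""
    while True:
        update = next_update()
        on_progress(update)
        if update.status in ("completed", "failed"):
            return update

# Simulate a provider that reports three updates before finishing.
updates = iter([
    ProgressUpdate("queued", 0),
    ProgressUpdate("processing", 50),
    ProgressUpdate("completed", 100),
])
seen = []
final = poll_until_done(lambda: next(updates),
                        lambda u: seen.append((u.status, u.progress_percent)))
print(final.status)  # → completed
print(seen)          # → [('queued', 0), ('processing', 50), ('completed', 100)]
```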

Raw response access

Every response keeps the original provider JSON — useful for debugging or reading provider-specific fields not in the standard interface:

response = generate_video(config, request)
print(response.raw_response)       # original provider JSON, unmodified

Sync + async

Every call has both a sync and async variant:

response = generate_video(config, request)            # sync
response = await generate_video_async(config, request)  # async
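If you ever need to call a sync variant from inside async code without stalling the event loop, the standard asyncio bridge applies. A sketch with a stand-in blocking function (not the library's API):

```python
import asyncio
import time

def blocking_generate(prompt):            # stand-in for a blocking sync call
    time.sleep(0.01)                      # pretend this is a slow API round-trip
    return f"video for: {prompt}"

async def main():
    # asyncio.to_thread runs the blocking call in a worker thread,
    # so the event loop stays responsive (Python 3.9+).
    return await asyncio.to_thread(blocking_generate, "a cat playing piano")

result = asyncio.run(main())
print(result)  # → video for: a cat playing piano
```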

Providers

Video Generation

| Model | Variants | Provider(s) |
| --- | --- | --- |
| Veo 3 | fal-ai/veo3, fal-ai/veo3.1/fast, fal-ai/veo3.1/fast/image-to-video, fal-ai/veo3.1/fast/first-last-frame-to-video, fal-ai/veo3.1/fast/extend-video, veo-3.0-generate-001 (Google), veo-3.0-generate-preview (Google), google/veo-3 (Replicate), google/veo-3.1 (Replicate) | Fal.ai · Google · Replicate |
| Kling | fal-ai/kling-video/v2.6, fal-ai/kling-video/v2.6/standard/motion-control, fal-ai/kling-video/v3/pro/text-to-video, fal-ai/kling-video/v3/pro/image-to-video, fal-ai/kling-video/v3/standard/text-to-video, fal-ai/kling-video/v3/standard/image-to-video, fal-ai/kling-video/o1/image-to-video, fal-ai/kling-video/o1/standard/reference-to-video, fal-ai/kling-video/o1/standard/video-to-video/edit, fal-ai/kling-video/o1/standard/video-to-video/reference, fal-ai/kling-video/o3/pro/text-to-video, fal-ai/kling-video/o3/pro/image-to-video, fal-ai/kling-video/o3/standard/reference-to-video, kwaivgi/kling-v2.1 (Replicate) | Fal.ai · Replicate |
| Sora | sora-2, sora-2-pro, fal-ai/sora-2/text-to-video, fal-ai/sora-2/image-to-video | OpenAI · Fal.ai |
| Minimax | fal-ai/minimax/video-01, fal-ai/minimax/hailuo-02-fast/image-to-video, minimax/video-01 (Replicate) | Fal.ai · Replicate |
| WAN | fal-ai/wan-25-preview/text-to-video, fal-ai/wan-25-preview/image-to-video, fal-ai/wan/v2.2-14b/animate/move, fal-ai/wan/v2.2-a14b/image-to-video, fal-ai/wan/v2.2-a14b/image-to-video/lora, fal-ai/wan/v2.2-a14b/text-to-video/lora, fal-ai/wan/v2.2-a14b/video-to-video | Fal.ai · Replicate |
| Seedance | fal-ai/bytedance/seedance/v1.5/pro/text-to-video, fal-ai/bytedance/seedance/v1/pro/image-to-video, fal-ai/bytedance/seedance/v1/lite/reference-to-video | Fal.ai |
| Pixverse | fal-ai/pixverse/v5.5/text-to-video, fal-ai/pixverse/v5.5/image-to-video, fal-ai/pixverse/v5/text-to-video, fal-ai/pixverse/lipsync | Fal.ai |
| OmniHuman | fal-ai/bytedance/omnihuman/v1.5, fal-ai/bytedance/omnihuman | Fal.ai |
| Kling Lip Sync | kwaivgi/kling-lip-sync (Replicate) | Replicate |
| Sync Lipsync | fal-ai/sync-lipsync/v2/pro, fal-ai/sync-lipsync/v2, fal-ai/sync-lipsync | Fal.ai |
| Runway | gen4_turbo, gen4_aleph, veo3.1, veo3.1_fast | Runway |
| Grok Imagine | grok-imagine-video | xAI |

Image Generation

| Model | Variants | Provider(s) |
| --- | --- | --- |
| DALL-E | dall-e-3, dall-e-2 | OpenAI |
| GPT Image | gpt-image-1.5 | OpenAI |
| Imagen 3 | imagen-3.0-generate-001, imagen-3.0-generate-002, imagen-3.0-fast-generate-001 | Google |
| Gemini Image | gemini-2.5-flash-image-preview, gemini-3-pro-image-preview | Google |
| Flux | fal-ai/flux/dev, fal-ai/flux/schnell, fal-ai/flux-2, fal-ai/flux-2/pro, fal-ai/flux-2/dev, fal-ai/flux-2/flex | Fal.ai |
| Stable Diffusion | sd3.5-large, sd3.5-medium, sd3.5-large-turbo | Stability AI |
| Stable Image | stable-image-ultra, stable-image-core | Stability AI |
| Recraft | fal-ai/recraft-v3, fal-ai/recraft | Fal.ai |
| Ideogram | fal-ai/ideogram | Fal.ai |
| Z-Image Turbo | fal-ai/z-image/turbo | Fal.ai |
| Reve | fal-ai/reve/text-to-image, fal-ai/reve/edit, fal-ai/reve/remix | Fal.ai |
| Grok Imagine | xai/grok-imagine-image, xai/grok-imagine-image/edit | Fal.ai |
| Grok Imagine (native) | grok-imagine-image, grok-2-image | xAI |
| Seedream | fal-ai/bytedance/seedream/v5/lite/text-to-image, fal-ai/bytedance/seedream/v5/lite/edit | Fal.ai |
| Nano Banana 2 | fal-ai/nano-banana-2, fal-ai/nano-banana-2/edit | Fal.ai |

Audio Generation (TTS)

| Model | Variants | Provider(s) |
| --- | --- | --- |
| MiniMax Speech | fal-ai/minimax/speech-2.8-hd, fal-ai/minimax/speech-2.8-turbo, fal-ai/minimax/speech-2.6-hd, fal-ai/minimax/speech-2.6-turbo | Fal.ai |
| ElevenLabs | eleven_multilingual_v2, eleven_flash_v2_5, eleven_turbo_v2_5 | ElevenLabs |
| Qwen 3 TTS | fal-ai/qwen-3-tts | Fal.ai |
| Cartesia | sonic-3, sonic-turbo | Cartesia |
| Sarvam AI | bulbul:v2, bulbul:v3 | Sarvam |
| Hume AI | octave-2, octave-1 | Hume |