Haira is a compiled, type-safe programming language with first-class primitives for AI agents, tools, and workflows. Four keywords. One binary.
Replaces Python + LangChain, n8n / Make / Zapier, and CrewAI / AutoGen.
import "io"
import "http"
provider openai {
api_key: env("OPENAI_KEY")
model: "gpt-4o"
}
agent Assistant {
model: openai
system: "Be helpful and concise."
memory: conversation(max_turns: 10)
}
@post("/chat")
workflow Chat(msg: string) -> stream {
return Assistant.stream(msg)
}
fn main() {
http.Server([Chat]).listen(8080)
}
Not a framework. Not a library. Agents, tools, and workflows are part of the language.
Haira compiles to Go, then to a native binary. No interpreter, no VM, no runtime dependencies. Ship a single executable.
Providers, tools, agents, and workflows are language keywords — not afterthoughts bolted on with decorators and classes.
Static types, pattern matching, enums, structs, pipe operator, and error handling — catch bugs at compile time, not in production.
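As a taste of the type system, here is a hypothetical sketch extrapolated from the syntax on this page; the exact enum and match-expression forms are assumptions, not documented Haira syntax:

enum Status { Ok, RateLimited, Failed }

fn describe(s: Status) -> string {
  // exhaustive on enums: a missing arm is a compile-time error
  return match s {
    Ok => "done",
    RateLimited | Failed => "retry later"   // or-pattern
  }
}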
A complete AI agent with tools, memory, and an HTTP endpoint — first in Python + LangChain, then in Haira.
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor
from langchain.tools import tool
from langchain.memory import ConversationBufferWindowMemory
from fastapi import FastAPI
import uvicorn, requests

@tool
def get_weather(city: str) -> str:
    """Get weather for a city"""
    r = requests.get(f"https://wttr.in/{city}?format=j1")
    data = r.json()
    return f"{city}: {data[...]}"

llm = ChatOpenAI(model="gpt-4o", temperature=0.7)
memory = ConversationBufferWindowMemory(k=10)
agent = AgentExecutor.from_agent_and_tools(
    agent=..., tools=[get_weather],
    memory=memory, verbose=True
)

app = FastAPI()

@app.post("/api/chat")
async def chat(message: str, session_id: str):
    result = agent.invoke({"input": message})
    return {"reply": result["output"]}

uvicorn.run(app, port=8080)
import "io"
import "http"
provider openai {
api_key: env("OPENAI_API_KEY")
model: "gpt-4o"
}
tool get_weather(city: string) -> string {
"""Get weather for a city"""
resp, err = http.get("https://wttr.in/${city}?format=j1")
if err != nil { return "Failed." }
return "${city}: ${resp.json()["temp"]}"
}
agent Assistant {
model: openai
tools: [get_weather]
memory: conversation(max_turns: 10)
}
@post("/api/chat")
workflow Chat(msg: string, sid: string) {
reply, err = Assistant.ask(msg, session: sid)
return { reply: reply }
}
fn main() { http.Server([Chat]).listen(8080) }
Everything in Haira builds on four agentic keywords baked into the language: provider, tool, agent, and workflow.
provider openai {
  api_key: env("OPENAI_API_KEY")
  model: "gpt-4o"
}

tool search(q: string) -> string {
  """Search the knowledge base"""
  return http.get("...?q=${q}").body
}

agent Bot {
  model: openai
  tools: [search]
  memory: conversation(max_turns: 10)
}

@post("/api/chat")
workflow Chat(msg: string) {
  reply, err = Bot.ask(msg)
  return { reply: reply }
}
From language design to production deployment.
SSE streaming with -> stream. Built-in chat UI for every streaming workflow.
Every workflow gets a web form at /_ui/. File uploads, inputs — zero config.
Route between agents with handoffs: [A, B]. Front desk to billing to tech — automatic.
Built-in conversation and summary memory: conversation(max_turns: N) or summary(max_tokens: N).
Run tasks concurrently with spawn { } blocks. Fan-out across tools and agents.
Full match with or-patterns, range patterns, and guards. Exhaustive checking on enums.
Chain transformations with |>. Readable, composable data pipelines. (spawn and |> are sketched below.)
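A hypothetical sketch of spawn fan-out feeding a |> pipeline, extrapolated from the features above; the .wait() join, join(), and the search_web / search_docs / summarize helpers are assumptions, not documented Haira APIs:

workflow Research(topic: string) {
  // run two tools concurrently, then merge and summarize the results
  web = spawn { search_web(topic) }
  docs = spawn { search_docs(topic) }
  return [web.wait(), docs.wait()]
    |> join("\n\n")
    |> summarize
}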
OpenAI, Azure OpenAI, Anthropic, and more. Switch models with a single line change.
HTTP, JSON, Postgres, Excel, Slack, regex, time, env — batteries included.
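Switching providers is meant to be one new block plus one changed line in the agent. A sketch, assuming Anthropic configuration mirrors the OpenAI blocks above (the model string is illustrative):

provider anthropic {
  api_key: env("ANTHROPIC_API_KEY")
  model: "claude-3-5-sonnet-latest"
}

agent Assistant {
  model: anthropic   // the only line that changes
  system: "Be helpful and concise."
  memory: conversation(max_turns: 10)
}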
Haira replaces entire stacks with a single language.
Chatbots, assistants, and autonomous agents with memory and tools. Replaces LangChain.
Multi-step automations with HTTP triggers, scheduling, and webhooks. Replaces n8n / Zapier.
Agent handoffs, parallel execution, and orchestrated collaboration. Replaces CrewAI / AutoGen.
Compiled binaries with a built-in HTTP server. Deploy anywhere. Replaces FastAPI + Docker.
Every workflow automatically gets a web form. File uploads, inputs, results — no frontend code.
Example: upload a text file to /summarize and get an AI summary. The form is auto-generated at /_ui/ — zero frontend code.
Real patterns, not toy demos.
import "io"
import "http"
provider openai {
api_key: env("OPENAI_API_KEY")
model: "gpt-4o"
}
agent Writer {
model: openai
system: "You are a creative writer."
memory: conversation(max_turns: 10)
temperature: 0.9
}
@post("/chat")
workflow Chat(msg: string, sid: string) -> stream {
return Writer.stream(msg, session: sid)
}
fn main() {
http.Server([Chat]).listen(8080)
// SSE at POST /chat, Chat UI at /_ui/chat
}
import "io"
import "http"
provider openai {
api_key: env("OPENAI_API_KEY")
model: "gpt-4o"
}
agent BillingAgent {
model: openai
system: "Handle billing and payment questions."
memory: conversation(max_turns: 20)
}
agent TechAgent {
model: openai
system: "Handle technical support questions."
memory: conversation(max_turns: 20)
}
agent FrontDesk {
model: openai
system: "Greet users. Route billing to BillingAgent, tech to TechAgent."
handoffs: [BillingAgent, TechAgent]
memory: conversation(max_turns: 10)
}
@post("/support")
workflow Support(msg: string, sid: string) {
reply, err = FrontDesk.ask(msg, session: sid)
if err != nil { return { reply: "Something went wrong." } }
return { reply: reply }
}
fn main() { http.Server([Support]).listen(8080) }
import "io"
import "http"
provider openai {
api_key: env("OPENAI_API_KEY")
model: "gpt-4o"
}
agent Summarizer {
model: openai
system: "Summarize the given text clearly and concisely."
temperature: 0.3
}
@webui(title: "File Summarizer")
@post("/summarize")
workflow Summarize(document: file, context: string) {
content, err = io.read_file(document)
if err != nil { return { summary: "Failed to read file." } }
prompt = "Summarize this: ${context}\n\n${content}"
reply, ai_err = Summarizer.ask(prompt)
if ai_err != nil { return { summary: "AI error." } }
return { summary: reply }
}
fn main() {
http.Server([Summarize]).listen(8080)
// Upload UI at /_ui/summarize
}
From source to production in seconds.
Write your agent logic in .haira files. Four keywords, familiar syntax.
Run haira build app.haira. Compiles to Go, then to a native binary.
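The whole loop, sketched (the output binary name app is an assumption):

haira build app.haira   # compiles to Go, then to a native binary
./app                   # serves on :8080, no runtime dependencies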
Ship a single binary. No Python, no node_modules, no Docker required.
macOS and Linux. Requires Go 1.21+.
curl -sSL https://haira.dev/install.sh | bash
Or download from GitHub Releases.