Open source · Apache-2.0

Build agents, not boilerplate.

Haira is a compiled, type-safe programming language with first-class primitives for AI agents, tools, and workflows. Four keywords. One binary.

Replaces Python + LangChain, n8n / Make / Zapier, and CrewAI / AutoGen.

chatbot.haira
import "io"
import "http"

provider openai {
    api_key: env("OPENAI_KEY")
    model: "gpt-4o"
}

agent Assistant {
    model: openai
    system: "Be helpful and concise."
    memory: conversation(max_turns: 10)
}

@post("/chat")
workflow Chat(msg: string) -> stream {
    return Assistant.stream(msg)
}

fn main() {
    http.Server([Chat]).listen(8080)
}

Why Haira

Language-level AI primitives

Not a framework. Not a library. Agents, tools, and workflows are part of the language.

Compiled to native binary

Haira compiles to Go, then to a native binary. No interpreter, no VM, no runtime dependencies. Ship a single executable.

Agentic by design

Providers, tools, agents, and workflows are language keywords — not afterthoughts bolted on with decorators and classes.

Type-safe & expressive

Static types, pattern matching, enums, structs, pipe operator, and error handling — catch bugs at compile time, not in production.

Comparison

Less code. More agents.

A complete AI agent with tools, memory, and an HTTP endpoint.

Python + LangChain ~45 lines
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor
from langchain.tools import tool
from langchain.memory import ConversationBufferWindowMemory
from fastapi import FastAPI
import uvicorn, requests

@tool
def get_weather(city: str) -> str:
    """Get weather for a city"""
    r = requests.get(f"https://wttr.in/{city}?format=j1")
    data = r.json()
    return f"{city}: {data[...]}"

llm = ChatOpenAI(model="gpt-4o", temperature=0.7)
memory = ConversationBufferWindowMemory(k=10)
agent = AgentExecutor.from_agent_and_tools(
    agent=..., tools=[get_weather],
    memory=memory, verbose=True
)

app = FastAPI()

@app.post("/api/chat")
async def chat(message: str, session_id: str):
    result = agent.invoke({"input": message})
    return {"reply": result["output"]}

uvicorn.run(app, port=8080)

Haira ~28 lines
import "io"
import "http"

provider openai {
    api_key: env("OPENAI_API_KEY")
    model: "gpt-4o"
}

tool get_weather(city: string) -> string {
    """Get weather for a city"""
    resp, err = http.get("https://wttr.in/${city}?format=j1")
    if err != nil { return "Failed." }
    return "${city}: ${resp.json()["temp"]}"
}

agent Assistant {
    model: openai
    tools: [get_weather]
    memory: conversation(max_turns: 10)
}

@post("/api/chat")
workflow Chat(msg: string, sid: string) {
    reply, err = Assistant.ask(msg, session: sid)
    if err != nil { return { reply: "Something went wrong." } }
    return { reply: reply }
}

fn main() { http.Server([Chat]).listen(8080) }

Primitives

Four keywords. Infinite possibilities.

Everything in Haira builds on four agentic keywords baked into the language.

Provider

LLM backend configuration

provider openai {
    api_key: env("OPENAI_API_KEY")
    model: "gpt-4o"
}

Tool

Agent-callable function

tool search(q: string) -> string {
    """Search the knowledge base"""
    return http.get("...?q=${q}").body
}

Agent

LLM entity with memory

agent Bot {
    model: openai
    tools: [search]
    memory: conversation(max_turns: 10)
}

Workflow

HTTP endpoint & orchestration

@post("/api/chat")
workflow Chat(msg: string) {
    reply, err = Bot.ask(msg)
    if err != nil { return { reply: "Error." } }
    return { reply: reply }
}

Features

Everything you need

From language design to production deployment.

Streaming

SSE streaming with -> stream. Built-in chat UI for every streaming workflow.

Auto-generated UI

Every workflow gets a web form at /_ui/. File uploads, inputs — zero config.

Agent handoffs

Route between agents with handoffs: [A, B]. Front desk to billing to tech — automatic.

Agent memory

Built-in conversation and summary memory. conversation(max_turns: N) or summary(max_tokens: N).
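
For instance, the two styles side by side; the summary variant follows the summary(max_tokens: N) form above, and its exact compaction behavior is an assumption:

agent Concierge {
    model: openai
    memory: conversation(max_turns: 10)   // keeps the last 10 turns verbatim
}

agent Historian {
    model: openai
    memory: summary(max_tokens: 500)      // assumed: older turns folded into a rolling summary
}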

Parallel execution

Run tasks concurrently with spawn { } blocks. Fan-out across tools and agents.
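
A sketch of fan-out across two agents; the page names spawn { } blocks, but the join mechanism is not shown, so the handle variables, the .wait() calls, and the Researcher/Librarian agents are all assumptions:

workflow Research(topic: string) {
    web = spawn { Researcher.ask("Web results for: ${topic}") }
    docs = spawn { Librarian.ask("Internal notes on: ${topic}") }
    // .wait() is an assumed join primitive for spawn handles
    return { web: web.wait(), docs: docs.wait() }
}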

Pattern matching

Full match with or-patterns, range patterns, and guards. Exhaustive checking on enums.
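
A sketch of all three pattern forms on an HTTP status code; the concrete or-pattern (|), range (..), and guard (if) syntax is assumed from common match conventions:

fn describe(status: int) -> string {
    return match status {
        200 | 201 | 204 => "success",          // or-pattern
        300..399 => "redirect",                // range pattern
        s if s >= 500 => "server error ${s}",  // guard
        _ => "client error"
    }
}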

Pipe operator

Chain transformations with |>. Readable, composable data pipelines.
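
For example, a small text pipeline; trim, lower, and hyphenate are hypothetical unary helpers, shown only to illustrate the |> shape:

fn slug(title: string) -> string {
    // each stage receives the previous stage's output
    return title |> trim |> lower |> hyphenate
}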

Multi-provider

OpenAI, Azure OpenAI, Anthropic, and more. Switch models with a one-line change.
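
For instance, moving an agent from OpenAI to Anthropic; the anthropic block mirrors the openai blocks shown above, and identical field names are an assumption:

provider anthropic {
    api_key: env("ANTHROPIC_API_KEY")
    model: "claude-3-5-sonnet-latest"
}

agent Assistant {
    model: anthropic   // was: openai; nothing else changes
    memory: conversation(max_turns: 10)
}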

Rich standard library

HTTP, JSON, Postgres, Excel, Slack, regex, time, env — batteries included.
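
An illustrative mix of a few modules; beyond http.get and env(), the regex.find_all and time.now calls are assumptions about the stdlib surface:

import "http"
import "regex"
import "time"

fn health_report(url: string) -> string {
    resp, err = http.get(url)
    if err != nil { return "unreachable" }
    // find_all is an assumed regex API returning every match
    errors = regex.find_all("ERR-\\d+", resp.body)
    return "checked at ${time.now()}: ${errors}"
}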

Use cases

What you can build

Haira replaces entire stacks with a single language.

🤖

AI Agents

Chatbots, assistants, and autonomous agents with memory and tools.

Replaces LangChain

🔄

Workflows

Multi-step automations with HTTP triggers, scheduling, and webhooks.

Replaces n8n / Zapier

🤝

Multi-agent systems

Agent handoffs, parallel execution, and orchestrated collaboration.

Replaces CrewAI / AutoGen

🚀

Production APIs

Compiled binaries with a built-in HTTP server. Deploy anywhere.

Replaces FastAPI + Docker

Zero-config UI

Write code. Get a UI.

Every workflow automatically gets a web form. File uploads, inputs, results — no frontend code.

[Side-by-side demo: the summarizer.haira source on the left; on the right, the auto-generated page at localhost:8080/_ui/summarize, a "File Summarizer" form (POST /summarize) with the prompt "Upload a text file and get an AI summary", a file-upload field, and a live Response panel.]

Auto-generated at /_ui/ — zero frontend code

Examples

See it in action

Real patterns, not toy demos.

import "io"
import "http"

provider openai {
    api_key: env("OPENAI_API_KEY")
    model: "gpt-4o"
}

agent Writer {
    model: openai
    system: "You are a creative writer."
    memory: conversation(max_turns: 10)
    temperature: 0.9
}

@post("/chat")
workflow Chat(msg: string, sid: string) -> stream {
    return Writer.stream(msg, session: sid)
}

fn main() {
    http.Server([Chat]).listen(8080)
    // SSE at POST /chat, Chat UI at /_ui/chat
}
import "io"
import "http"

provider openai {
    api_key: env("OPENAI_API_KEY")
    model: "gpt-4o"
}

agent BillingAgent {
    model: openai
    system: "Handle billing and payment questions."
    memory: conversation(max_turns: 20)
}

agent TechAgent {
    model: openai
    system: "Handle technical support questions."
    memory: conversation(max_turns: 20)
}

agent FrontDesk {
    model: openai
    system: "Greet users. Route billing to BillingAgent, tech to TechAgent."
    handoffs: [BillingAgent, TechAgent]
    memory: conversation(max_turns: 10)
}

@post("/support")
workflow Support(msg: string, sid: string) {
    reply, err = FrontDesk.ask(msg, session: sid)
    if err != nil { return { reply: "Something went wrong." } }
    return { reply: reply }
}

fn main() { http.Server([Support]).listen(8080) }
import "io"
import "http"

provider openai {
    api_key: env("OPENAI_API_KEY")
    model: "gpt-4o"
}

agent Summarizer {
    model: openai
    system: "Summarize the given text clearly and concisely."
    temperature: 0.3
}

@webui(title: "File Summarizer")
@post("/summarize")
workflow Summarize(document: file, context: string) {
    content, err = io.read_file(document)
    if err != nil { return { summary: "Failed to read file." } }

    prompt = "Summarize this: ${context}\n\n${content}"
    reply, ai_err = Summarizer.ask(prompt)
    if ai_err != nil { return { summary: "AI error." } }
    return { summary: reply }
}

fn main() {
    http.Server([Summarize]).listen(8080)
    // Upload UI at /_ui/summarize
}

Getting started

How it works

From source to production in seconds.

1

Write

Write your agent logic in .haira files. Four keywords, familiar syntax.

2

Compile

Run haira build app.haira. Compiles to Go, then to a native binary.

3

Deploy

Ship a single binary. No Python, no node_modules, no Docker required.
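
In practice (the output name ./app is an assumption about what haira build emits):

haira build app.haira    # compiles to Go, then links a native binary
./app                    # run the single self-contained executable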

Install

One command

macOS and Linux. Requires Go 1.21+.

curl -sSL https://haira.dev/install.sh | bash

Or download from GitHub Releases