
What You’ll Build

A question-answering agent that:
  • Leverages Agentuity for easy deployment and monitoring
  • Retrieves live data from a SQL database through Snow Leopard
  • Requires no MCP setup, no ETL or data pipelines, and no RAG setup for data retrieval

Prerequisites

To follow along you’ll need the Agentuity CLI and Bun installed, an OpenAI API key, and a Snow Leopard API key plus a datafile ID (you’ll add all three to your .env in step 3).

Don’t have data? Use our sample Northwind dataset to get started, or choose from our other sample datasets.

1. Create an Agentuity project

agentuity create
Follow the interactive prompts; the CLI creates a directory for your Agentuity agent. This quickstart doesn’t need auth, database access, or any other optional features, so feel free to decline any Agentuity features you don’t plan on using.

2. Install dependencies

Once you have an Agentuity project and are in its working directory, install the dependencies needed for live data retrieval:
bun add @snowleopard-ai/client ai zod @ai-sdk/openai

3. Configure environment variables

Add your API keys and datafile ID to your .env file:
.env
OPENAI_API_KEY=<your_openai_api_key>
SNOWLEOPARD_API_KEY=<your_snowleopard_api_key>
SNOWLEOPARD_DATAFILE_ID=<your_snowleopard_datafile_id>

4. Create the Snow Leopard tool

Create a Vercel AI SDK tool that calls Snow Leopard to retrieve data:
src/agent/getData.ts
import { tool } from "ai";
import { z } from "zod";
import { SnowLeopardClient } from "@snowleopard-ai/client";

const snowy = new SnowLeopardClient({
  apiKey: process.env.SNOWLEOPARD_API_KEY!
});

export const getData = tool({
  description:
    'Retrieve data from the database. ' +
    'Describe your data here - this becomes part of the tool description.',
  inputSchema: z.object({
    userQuestion: z.string().describe('the natural language query to answer'),
  }),
  execute: async ({ userQuestion }) => {
    return await snowy.retrieve({
      userQuery: userQuestion,
      datafileId: process.env.SNOWLEOPARD_DATAFILE_ID!
    });
  },
});
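
Optionally, you can sanity-check your Snow Leopard credentials and datafile before wiring up the agent. The sketch below uses only the retrieve call shown above; the script path and the sample question are illustrative, and the exact shape of the returned data depends on your datafile.
scripts/check-retrieve.ts
import { SnowLeopardClient } from "@snowleopard-ai/client";

// Same client setup as the tool; Bun loads .env automatically.
const snowy = new SnowLeopardClient({
  apiKey: process.env.SNOWLEOPARD_API_KEY!
});

// Ask a throwaway question against your datafile and print whatever comes back.
const result = await snowy.retrieve({
  userQuery: "How many customers do we have?",
  datafileId: process.env.SNOWLEOPARD_DATAFILE_ID!
});

console.log(result);

Run it with bun scripts/check-retrieve.ts.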

5. Create the agent

Build an Agentuity agent that uses the Snow Leopard tool:
src/agent/agent.ts
import { createAgent } from '@agentuity/runtime';
import { s } from '@agentuity/schema';
import { generateText, type ModelMessage } from 'ai';
import { openai } from '@ai-sdk/openai';
import { getData } from './getData';

const agent = createAgent('chat', {
  description: 'A chat agent with data retrieval',
  handler: async (ctx, { message }) => {
    const messages: ModelMessage[] = [
      { role: 'system', content: 'You are a helpful assistant that answers questions using your data tools.' },
      { role: 'user', content: message }
    ];

    // Tool-calling loop: let the model call getData until it produces a final answer
    // (or we hit the step limit).
    for (let step = 0; step < 10; step++) {
      const result = await generateText({
        model: openai('gpt-5-mini'),
        messages: messages,
        tools: { getData }
      });
      // Append this step's assistant and tool messages to the conversation.
      messages.push(...result.response.messages);
      if (result.finishReason !== 'tool-calls') {
        return { response: result.text };
      }
    }
    throw new Error('Agent exceeded maximum number of steps');
  },
  schema: {
    input: s.object({ message: s.string() }),
    output: s.object({ response: s.string() }),
  },
});

export default agent;
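
The handler runs a simple tool-calling loop: each generateText call can invoke getData, the resulting assistant and tool messages are appended to the conversation, and the loop returns the model’s final text once it stops requesting tools (capped at 10 steps as a safety limit).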

6. Expose the agent via HTTP

Add an API route to handle chat requests:
src/api/index.ts
import { createRouter } from '@agentuity/runtime';
import chat from '../agent/agent';

const api = createRouter();
/*
Existing api definitions...
 */

api.post('/chat', chat.validator(), async (c) => {
  const data = c.req.valid('json');
  return await chat.run(data);
});

export default api;
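
In this project the API router is mounted under /api, so this handler responds at POST /api/chat, the path used in the next step.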

7. Try it out!

Start your development server:
agentuity dev
Query your data:
curl -X POST http://localhost:3500/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "How many customers do we have?"}'
{
  "response": "You have 91 customers (counted as non-null customer_id entries in the customers table).  \n\nWould you like a breakdown by segment, region, sign-up date, or any other criteria?"
}
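
If you’d rather call the endpoint from a script than from curl, the same request looks like this in TypeScript (assuming the dev server is running on the default port shown above):

// Same payload and endpoint as the curl request above.
const res = await fetch("http://localhost:3500/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "How many customers do we have?" }),
});

const { response } = await res.json();
console.log(response);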

Next steps