Build an AI Research Agent in Python
Build a fully working AI research agent that fetches live crypto data, runs analysis in a sandbox, stores findings in memory, and generates reports. Under 100 lines of Python. No LLM required.
What is an AI Research Agent?
A research agent is a program that autonomously gathers information, analyzes it, and produces insights. Unlike a simple script that calls one API, a research agent:
- Gathers data from multiple sources in parallel
- Runs analysis code on the gathered data (not just fetches and displays)
- Stores findings so it can build on previous research
- Produces actionable output (reports, alerts, recommendations)
In this tutorial, we'll build a crypto market research agent. The same pattern works for any domain: competitor research, news monitoring, financial analysis, security scanning, etc.
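The gather → analyze → store → report cycle can be expressed as one tiny generic pipeline. This is a sketch of the pattern only, not the tutorial's final agent; the four callables are placeholders you supply per domain:

```python
def research_cycle(gather, analyze, store, report):
    """Run one research pass: collect data, derive findings, persist them, emit output."""
    data = gather()          # e.g. fetch prices, trends, news
    findings = analyze(data) # e.g. run code in a sandbox
    store(findings)          # e.g. write to agent memory
    return report(findings)  # e.g. render a PDF
```

The crypto agent below is exactly this loop with concrete implementations plugged in.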
Architecture Overview
The key insight: instead of managing separate API keys, auth flows, and rate limits for each service, a single API key through Agent Gateway gives your agent access to 39 production services.
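Concretely, every service in this tutorial is reached through one URL shape, `{base}/v1/{service}/api/{endpoint}`. A small helper (a sketch inferred from the endpoints used below) makes the pattern explicit:

```python
GW = "https://agent-gateway-kappa.vercel.app"

def gateway_url(service: str, endpoint: str) -> str:
    """Build the gateway URL for any service endpoint, e.g.
    gateway_url("crypto-feeds", "prices") for the price feed used below."""
    return f"{GW}/v1/{service}/api/{endpoint.lstrip('/')}"
```

Because every service shares one base URL and one auth header, swapping data sources is a one-line change.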
Setup (30 Seconds)
You need Python 3.8+ and the requests library. No other dependencies.
pip install requests
Get a free API key (200 credits, no signup):
curl -s -X POST https://agent-gateway-kappa.vercel.app/api/keys/create | python3 -m json.tool
{
  "key": "gw_a1b2c3d4e5f6...",
  "credits": 200,
  "expires_at": "2026-04-03T12:00:00.000Z",
  "note": "200 free credits (expires in 30 days)."
}
Save your key:
export AGENT_KEY="gw_your_key_here"
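Before running the agent, you can sanity-check the key against the balance endpoint used at the end of this tutorial. A minimal sketch (`requests` is imported lazily so the header helper stays dependency-free):

```python
GW = "https://agent-gateway-kappa.vercel.app"

def auth_headers(key: str) -> dict:
    """The Bearer-token header every gateway call in this tutorial sends."""
    return {"Authorization": f"Bearer {key}"}

def check_key(key: str) -> dict:
    """Query /api/keys/balance to confirm the key works and see remaining credits."""
    import requests  # lazy import: only needed for the live check
    resp = requests.get(f"{GW}/api/keys/balance", headers=auth_headers(key), timeout=10)
    resp.raise_for_status()
    return resp.json()
```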
Step 1: Gather Data from Multiple Sources
Our agent starts by collecting data from two APIs simultaneously: live crypto prices and trending token analytics.
import os
import requests
from concurrent.futures import ThreadPoolExecutor
GW = "https://agent-gateway-kappa.vercel.app"
KEY = os.environ["AGENT_KEY"]
HEADERS = {"Authorization": f"Bearer {KEY}"}
def fetch_prices():
    """Fetch live crypto prices for top tokens."""
    resp = requests.get(f"{GW}/v1/crypto-feeds/api/prices", headers=HEADERS)
    return resp.json()

def fetch_trending():
    """Fetch trending tokens from on-chain analytics."""
    resp = requests.get(f"{GW}/v1/onchain-analytics/api/trending", headers=HEADERS)
    return resp.json()

def fetch_news():
    """Scrape the latest headlines from a news source."""
    resp = requests.post(
        f"{GW}/v1/agent-scraper/api/scrape",
        headers=HEADERS,
        json={"url": "https://www.coindesk.com", "extract": "text"}
    )
    return resp.json()
# Gather data in parallel (uses 3 credits, saves time)
with ThreadPoolExecutor(max_workers=3) as pool:
    price_future = pool.submit(fetch_prices)
    trend_future = pool.submit(fetch_trending)
    news_future = pool.submit(fetch_news)

prices = price_future.result()
trending = trend_future.result()
news = news_future.result()
print(f"Prices: {len(prices.get('prices', []))} tokens")
print(f"Trending: {len(trending.get('tokens', []))} tokens")
print(f"News: {len(news.get('text', ''))} chars scraped")
Each API call costs 1 credit and takes 200-500ms. Running all three in parallel means your agent gathers its data in ~500ms instead of ~1500ms. For an agent that runs every hour, the savings add up.
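You can see the speedup without spending credits by timing stub calls that sleep for the ~300ms a typical request takes. This is a simulation of the latency, not real gateway traffic:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_api_call(delay: float = 0.3) -> str:
    """Stand-in for one gateway request (real calls take roughly 200-500ms)."""
    time.sleep(delay)
    return "ok"

def run_sequential(n: int = 3) -> float:
    """Time n calls made one after another."""
    start = time.perf_counter()
    for _ in range(n):
        fake_api_call()
    return time.perf_counter() - start

def run_parallel(n: int = 3) -> float:
    """Time n calls made concurrently, as the agent does above."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as pool:
        futures = [pool.submit(fake_api_call) for _ in range(n)]
        for f in futures:
            f.result()
    return time.perf_counter() - start
```

With three 300ms stubs, the sequential run takes ~0.9s and the parallel run ~0.3s, mirroring the ~1500ms vs ~500ms figures above.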
Step 2: Run Analysis in a Sandbox
Now the agent sends the gathered data to a sandboxed code execution environment for analysis. This is where the "intelligence" happens: the agent writes Python code to analyze the data and extracts insights.
import json
# Prepare data for analysis
price_list = prices.get("prices", [])[:20] # Top 20 tokens
trend_list = trending.get("tokens", [])[:10] # Top 10 trending
# Build analysis code that will run in the sandbox
analysis_code = f"""
import json

prices = {json.dumps(price_list)}
trending = {json.dumps(trend_list)}

# Find tokens with the biggest 24h gains
gainers = sorted(
    [p for p in prices if p.get('change_24h')],
    key=lambda x: float(x.get('change_24h', 0)),
    reverse=True
)[:5]

# Find tokens with the biggest losses
losers = sorted(
    [p for p in prices if p.get('change_24h')],
    key=lambda x: float(x.get('change_24h', 0))
)[:5]

# Calculate market summary (assumes the price feed lists BTC first)
total_mcap = sum(float(p.get('market_cap', 0)) for p in prices if p.get('market_cap'))
btc_dominance = float(prices[0].get('market_cap', 0)) / total_mcap * 100 if total_mcap > 0 and prices else 0

report = {{
    "summary": {{
        "total_market_cap": f"${{total_mcap/1e12:.2f}}T",
        "btc_dominance": f"{{btc_dominance:.1f}}%",
        "tokens_analyzed": len(prices),
        "trending_count": len(trending),
    }},
    "top_gainers": [
        {{"name": t.get("name", "?"), "price": t.get("price"), "change": t.get("change_24h")}}
        for t in gainers
    ],
    "top_losers": [
        {{"name": t.get("name", "?"), "price": t.get("price"), "change": t.get("change_24h")}}
        for t in losers
    ],
    "trending": [t.get("name", "?") for t in trending[:5]],
}}

print(json.dumps(report, indent=2))
"""
# Execute in sandboxed environment (1 credit)
run_resp = requests.post(
    f"{GW}/v1/agent-coderunner/api/execute",
    headers=HEADERS,
    json={"language": "python", "code": analysis_code}
)
result = run_resp.json()

# Parse the analysis output
try:
    analysis = json.loads(result.get("output", "{}"))
    print(f"\nMarket Summary: {analysis.get('summary', {})}")
    print(f"Top Gainers: {[g['name'] for g in analysis.get('top_gainers', [])]}")
    print(f"Top Losers: {[l['name'] for l in analysis.get('top_losers', [])]}")
except json.JSONDecodeError:
    analysis = {"raw_output": result.get("output", "No output")}
    print(f"Analysis output: {result.get('output', 'error')}")
Running analysis code in a sandbox (instead of locally) means: (1) your agent can execute dynamically generated code safely, (2) if you later add an LLM that writes the analysis code, it can't compromise your system, and (3) the sandbox has its own CPU/memory limits so one bad analysis doesn't crash your agent.
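Sandbox runs can also fail outright (timeouts, syntax errors in generated code), so it's worth consolidating the response handling into one defensive parser. A sketch, assuming the response carries `output` on success as shown above and an `error` field on failure (the error field name is an assumption; check what your coderunner actually returns):

```python
import json

def parse_sandbox_output(result: dict) -> dict:
    """Classify a coderunner response: parsed analysis on success,
    a labelled fallback on failure, raw text if the output isn't JSON."""
    if result.get("error"):
        return {"status": "error", "detail": result["error"]}
    raw = result.get("output", "")
    try:
        return {"status": "ok", "analysis": json.loads(raw)}
    except (json.JSONDecodeError, TypeError):
        return {"status": "unparsed", "raw_output": raw}
```

The calling code then branches on `status` instead of scattering try/except blocks around every sandbox call.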
Step 3: Store Findings in Agent Memory
A research agent that forgets everything between runs isn't very useful. We store the analysis results in persistent key-value memory so the agent can reference previous research.
from datetime import datetime
timestamp = datetime.utcnow().isoformat()
# Store the latest analysis (1 credit)
# Store the latest analysis (1 credit)
requests.post(
    f"{GW}/v1/agent-memory/api/memory",
    headers=HEADERS,
    json={
        "key": f"research_{timestamp[:10]}",
        "value": {
            "timestamp": timestamp,
            "analysis": analysis,
            "news_snippet": news.get("text", "")[:500],
            "data_sources": ["crypto-feeds", "onchain-analytics", "coindesk"],
        }
    }
)

# Also store as "latest" for easy retrieval
requests.post(
    f"{GW}/v1/agent-memory/api/memory",
    headers=HEADERS,
    json={
        "key": "latest_research",
        "value": analysis,
    }
)

print(f"\nResearch stored with key: research_{timestamp[:10]}")
print("Latest research pointer updated.")

# Retrieve previous research to compare
prev = requests.get(
    f"{GW}/v1/agent-memory/api/memory/latest_research",
    headers=HEADERS
).json()

if prev.get("value"):
    print(f"Previous research found: {prev['value'].get('summary', {})}")
Use date-based keys (research_2026-03-04) for historical data and fixed keys (latest_research) for quick lookups. This gives your agent both a timeline and instant access to the most recent findings.
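The date-key convention makes walking the timeline trivial: generate the last N keys, then fetch each from memory. A sketch of the key generation (pure logic, no API calls; the `today` parameter exists only to make it testable):

```python
from datetime import date, timedelta

def history_keys(days=7, today=None):
    """Memory keys for the last `days` runs, newest first,
    following the research_YYYY-MM-DD convention used in this step."""
    today = today or date.today()
    return [f"research_{(today - timedelta(days=i)).isoformat()}" for i in range(days)]
```

Fetching each key via `GET /v1/agent-memory/api/memory/{key}` then gives the agent a week of history to diff against.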
Step 4: Generate a Report
Finally, the agent generates a formatted PDF report of its findings. This is useful for emailing stakeholders or archiving research.
# Build a markdown report
summary = analysis.get("summary", {})
gainers = analysis.get("top_gainers", [])
losers = analysis.get("top_losers", [])
trending_names = analysis.get("trending", [])
report_md = f"""# Crypto Market Research Report
**Generated**: {timestamp}
## Market Summary
- **Total Market Cap**: {summary.get('total_market_cap', 'N/A')}
- **BTC Dominance**: {summary.get('btc_dominance', 'N/A')}
- **Tokens Analyzed**: {summary.get('tokens_analyzed', 0)}
- **Trending Tokens**: {summary.get('trending_count', 0)}
## Top Gainers (24h)
{"".join(f"- **{g['name']}**: {g.get('change', 'N/A')}%{chr(10)}" for g in gainers)}
## Top Losers (24h)
{"".join(f"- **{l['name']}**: {l.get('change', 'N/A')}%{chr(10)}" for l in losers)}
## Trending
{", ".join(trending_names) if trending_names else "No trending data available"}
---
*Research by AI Agent via [Agent Gateway](https://agent-gateway-kappa.vercel.app)*
"""
# Generate PDF (1 credit)
pdf_resp = requests.post(
    f"{GW}/v1/agent-pdfgen/api/generate",
    headers=HEADERS,
    json={"markdown": report_md, "filename": f"research-{timestamp[:10]}.pdf"}
)
pdf_data = pdf_resp.json()
print(f"\nPDF Report: {pdf_data.get('url', 'generated')}")

# Check remaining credits
balance = requests.get(
    f"{GW}/api/keys/balance",
    headers=HEADERS
).json()
print(f"Credits remaining: {balance.get('credits', '?')}")
Full Working Code
Here's the complete agent in one file. Copy this, set your AGENT_KEY, and run it.
#!/usr/bin/env python3
"""AI Research Agent — fetches data, analyzes it, stores findings, generates reports."""
import os, json, requests
from datetime import datetime
from concurrent.futures import ThreadPoolExecutor

GW = "https://agent-gateway-kappa.vercel.app"
KEY = os.environ.get("AGENT_KEY") or input("Enter your API key (gw_...): ")
H = {"Authorization": f"Bearer {KEY}"}

def api(method, path, **kw):
    """Helper: call any gateway endpoint and return parsed JSON."""
    resp = getattr(requests, method)(f"{GW}{path}", headers=H, **kw)
    return resp.json()

# ---- PHASE 1: Gather data in parallel ----
print("Phase 1: Gathering data...")
with ThreadPoolExecutor(3) as pool:
    prices_f = pool.submit(api, "get", "/v1/crypto-feeds/api/prices")
    trend_f = pool.submit(api, "get", "/v1/onchain-analytics/api/trending")
    news_f = pool.submit(api, "post", "/v1/agent-scraper/api/scrape",
                         json={"url": "https://www.coindesk.com", "extract": "text"})
prices = prices_f.result().get("prices", [])[:20]
trending = trend_f.result().get("tokens", [])[:10]
news_text = news_f.result().get("text", "")[:500]
print(f"  {len(prices)} prices, {len(trending)} trending, {len(news_text)} chars news")

# ---- PHASE 2: Analyze in sandbox ----
print("Phase 2: Running analysis...")
code = f"""
import json
prices = {json.dumps(prices)}
trending = {json.dumps(trending)}
gainers = sorted([p for p in prices if p.get('change_24h')],
                 key=lambda x: float(x.get('change_24h', 0)), reverse=True)[:5]
losers = sorted([p for p in prices if p.get('change_24h')],
                key=lambda x: float(x.get('change_24h', 0)))[:5]
total_mcap = sum(float(p.get('market_cap', 0)) for p in prices if p.get('market_cap'))
btc_dom = float(prices[0].get('market_cap', 0)) / total_mcap * 100 if total_mcap and prices else 0
print(json.dumps({{
    "summary": {{"market_cap": f"${{total_mcap/1e12:.2f}}T", "btc_dominance": f"{{btc_dom:.1f}}%",
                 "tokens": len(prices), "trending": len(trending)}},
    "gainers": [{{"name": g.get("name"), "change": g.get("change_24h")}} for g in gainers],
    "losers": [{{"name": l.get("name"), "change": l.get("change_24h")}} for l in losers],
    "hot": [t.get("name") for t in trending[:5]]
}}, indent=2))
"""
run = api("post", "/v1/agent-coderunner/api/execute", json={"language": "python", "code": code})
try:
    analysis = json.loads(run.get("output", "{}"))
except (json.JSONDecodeError, TypeError):
    analysis = {"raw": run.get("output", "error")}
print(f"  Summary: {analysis.get('summary', {})}")

# ---- PHASE 3: Store in memory ----
print("Phase 3: Storing findings...")
ts = datetime.utcnow().isoformat()
api("post", "/v1/agent-memory/api/memory",
    json={"key": f"research_{ts[:10]}", "value": {"ts": ts, "analysis": analysis, "news": news_text}})
api("post", "/v1/agent-memory/api/memory",
    json={"key": "latest_research", "value": analysis})
print(f"  Stored as research_{ts[:10]}")

# ---- PHASE 4: Generate report ----
print("Phase 4: Generating report...")
s = analysis.get("summary", {})
report = f"# Crypto Research Report\n**{ts}**\n\n"
report += f"Market Cap: {s.get('market_cap','?')} | BTC: {s.get('btc_dominance','?')}\n\n"
report += "## Gainers\n" + "\n".join(f"- {g['name']}: {g.get('change','')}%" for g in analysis.get("gainers",[])) + "\n\n"
report += "## Losers\n" + "\n".join(f"- {l['name']}: {l.get('change','')}%" for l in analysis.get("losers",[])) + "\n\n"
report += f"## Trending\n{', '.join(analysis.get('hot',[]))}\n"
pdf = api("post", "/v1/agent-pdfgen/api/generate", json={"markdown": report})
print(f"  PDF: {pdf.get('url', 'done')}")

# ---- Done ----
bal = api("get", "/api/keys/balance")
print(f"\nDone! Credits used: ~7 | Remaining: {bal.get('credits', '?')}")
print("Run this every hour with: watch -n 3600 python3 research_agent.py")
This agent uses ~7 credits per run (3 data fetches + 1 code execution + 2 memory writes + 1 PDF). With 200 free credits, you can run it 28 times before needing to top up. At $0.002/credit, running hourly for a month costs about $10.
Ways to Extend This Agent
Add an LLM for natural language analysis
Replace the hardcoded analysis with an LLM call via /v1/agent-llm/api/chat. Send the raw data + a prompt like "Analyze this market data and identify opportunities." The LLM generates the analysis code, which you then run in the sandbox.
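A sketch of the handoff: build a prompt asking the model to write analysis code, then feed whatever it returns to the sandbox exactly as in Step 2. The `/v1/agent-llm/api/chat` payload and response fields shown in the comments are assumptions, not confirmed API shapes; check the gateway's docs:

```python
import json

def build_code_prompt(prices, trending):
    """Prompt asking the LLM to write a Python analysis script.
    The generated code runs in the sandbox, never on your machine."""
    return (
        "Analyze this market data and identify opportunities. "
        "Reply with only a Python script that prints a JSON report.\n\n"
        f"prices = {json.dumps(prices)}\n"
        f"trending = {json.dumps(trending)}"
    )

# Hypothetical wiring (field names "messages"/"reply" are assumptions):
# llm = api("post", "/v1/agent-llm/api/chat",
#           json={"messages": [{"role": "user", "content": build_code_prompt(prices, trending)}]})
# run = api("post", "/v1/agent-coderunner/api/execute",
#           json={"language": "python", "code": llm["reply"]})
```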
Schedule it to run automatically
Use the /v1/agent-scheduler/api/jobs endpoint to set up a recurring schedule. Or use a simple cron job: 0 * * * * cd /path && python3 research_agent.py >> agent.log 2>&1
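If cron isn't available (say, inside a minimal container), a plain loop does the job. A minimal sketch; the `max_runs` parameter exists only so the loop is testable, and would be left as `None` in production:

```python
import time

def run_every(task, interval_seconds=3600, max_runs=None):
    """Call `task` repeatedly at a fixed interval: a cron-free scheduler loop."""
    runs = 0
    while max_runs is None or runs < max_runs:
        task()
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)
```

Usage: `run_every(run_research_agent)` blocks forever, running the agent once per hour.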
Send alerts on big movements
Add a condition check after analysis: if any token gained/lost more than 10% in 24h, post a webhook notification via /v1/webhook-inspector/api/webhooks/{id}/send or send an email via /v1/agent-email/api/send.
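The threshold check itself is pure logic over the analysis dict built in Step 2 (field names follow that report; the webhook/email call is left out of this sketch):

```python
def find_big_movers(analysis: dict, threshold: float = 10.0) -> list:
    """Return gainers/losers whose absolute 24h change meets the threshold."""
    movers = []
    for t in analysis.get("top_gainers", []) + analysis.get("top_losers", []):
        try:
            change = abs(float(t.get("change", 0)))
        except (TypeError, ValueError):
            continue  # skip tokens with missing or malformed change values
        if change >= threshold:
            movers.append(t)
    return movers
```

If `find_big_movers(analysis)` is non-empty, fire the notification endpoint of your choice.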
Track a portfolio
Use /v1/agent-wallet/api/wallets/create to create a watch-only wallet, then use /v1/defi-mcp/api/balance/{address}/multi to check balances across chains. Store the portfolio value in memory and generate daily P&L reports.
Build a research dashboard
Use /v1/agent-screenshot/api/screenshot to capture a visualization, store the image URL in memory, and display it on a simple HTML page.
Start Building
Get your free API key and start building AI agents in under a minute.
curl -s -X POST https://agent-gateway-kappa.vercel.app/api/keys/create
More tutorials: AI Agent + Crypto Wallet | 39 APIs for AI Agents | Getting Started Guide
Free tools: Regex Tester | JWT Decoder | JSON Formatter | All 10 Tools