Best Free APIs for AI Agents — Give Your Agent Web Access, Memory & Tools
Your AI agent is only as good as its tools. Here are 8 free REST APIs that turn a basic chatbot into a capable agent that can search the web, scrape pages, execute code, remember things, send emails, and more — all with no signup required.
The biggest bottleneck in AI agent development isn't the LLM — it's the tools. Your agent can reason perfectly, but without web access, file handling, and memory, it's stuck answering from stale training data.
This guide shows you 8 free APIs that solve this. Each one works as a standalone REST endpoint — no SDK installations, no OAuth flows, no vendor lock-in. Call them from LangChain, CrewAI, AutoGen, Claude MCP, or plain HTTP requests.
Every API is available through one gateway at agent-gateway-kappa.vercel.app, so your agent only needs one base URL to access all tools.
Agent Tools Covered
- Web Search — find information online
- Web Scraping — read web page content
- Code Execution — run Python/JS/Bash
- Persistent Memory — remember across sessions
- Email Sending — notify users
- Screenshots — see web pages visually
- File Storage — save and share files
- DNS & Network — investigate infrastructure
Architecture: One Gateway, All Tools
Instead of managing 8 different API endpoints, your agent calls one unified gateway. This means one base URL, one rate limit header format, and one optional API key for all tools.
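Concretely, a thin client can centralize the base URL and the optional key in one place. This is a sketch, not the gateway's official client: the `X-API-Key` header name and the `AGENT_GATEWAY_KEY` environment variable are assumptions for illustration; check the gateway's getting started guide for the real header.

```python
import os
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

# One shared session for every tool call. The header name below is an
# assumption for illustration; confirm it against the gateway docs.
session = requests.Session()
api_key = os.environ.get("AGENT_GATEWAY_KEY")  # hypothetical variable name
if api_key:
    session.headers["X-API-Key"] = api_key  # assumed header name

def call_tool(path, **params):
    """Call any gateway tool through the shared session."""
    r = session.get(f"{BASE}/{path}", params=params, timeout=10)
    r.raise_for_status()
    return r.json()
```

With this in place, every tool in the sections below is one call, e.g. `call_tool("agent-search/api/search", q="python frameworks")`.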
1. Web Search — Find Information Online
Web Search API
Let your agent search the web and get structured results. Returns titles, URLs, and text snippets. Supports batch queries (5 at once) and full text extraction for feeding results directly into the LLM context.
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-search/api/search?q=latest+AI+agent+frameworks+2026"
```

Why agents need this: Without search, your agent answers from training data that's months or years old. With search, it can look up current information, verify facts, and find relevant resources in real time.
```python
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

def search(query, num_results=5):
    """Search the web and return structured results."""
    r = requests.get(f"{BASE}/agent-search/api/search", params={
        "q": query,
        "num": num_results
    })
    data = r.json()
    return [{"title": item["title"], "url": item["url"], "snippet": item["snippet"]}
            for item in data.get("results", [])]

# Agent uses this when it needs current information
results = search("best Python web frameworks 2026")
for result in results:
    print(f"{result['title']}: {result['url']}")
```
2. Web Scraping — Read Any Web Page
Web Scraping API
Fetch any URL and get back clean markdown content with metadata. Your agent can read documentation, blog posts, product pages, and news articles without dealing with HTML parsing or headless browsers.
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-scraper/api/scrape?url=https://example.com"
```

Why agents need this: Search gives you URLs and snippets. Scraping gives you the full content. When your agent finds a relevant page via search, it can scrape it to read the entire article and extract the specific information it needs.
```python
def scrape(url):
    """Scrape a URL and return clean markdown content."""
    r = requests.get(f"{BASE}/agent-scraper/api/scrape", params={"url": url})
    data = r.json()
    return {
        "title": data.get("meta", {}).get("title", ""),
        "content": data.get("content", ""),
        "url": data.get("url", url)
    }

# Agent reads a webpage to answer a question
page = scrape("https://docs.python.org/3/library/asyncio.html")
print(f"Title: {page['title']}")
print(f"Content length: {len(page['content'])} chars")
# Feed page['content'] into LLM context for the agent to analyze
```
3. Code Execution — Run Python, JavaScript, Bash
Code Execution API
Execute code in a sandboxed environment. Supports Python, JavaScript, TypeScript, and Bash. Persistent sessions let your agent build up state across multiple executions. Perfect for data analysis, calculations, and automation.
```bash
curl -X POST "https://agent-gateway-kappa.vercel.app/v1/agent-coderunner/api/execute" \
  -H "Content-Type: application/json" \
  -d '{"code":"import math; print(math.factorial(20))","language":"python"}'
```

Why agents need this: LLMs are bad at math and can't verify their own code. With a code execution tool, your agent can write Python to calculate results, parse data, transform files, and validate its own logic — then check the actual output.
```python
def execute_code(code, language="python"):
    """Execute code and return stdout/stderr."""
    r = requests.post(f"{BASE}/agent-coderunner/api/execute", json={
        "code": code,
        "language": language
    })
    data = r.json()
    return {
        "stdout": data.get("stdout", ""),
        "stderr": data.get("stderr", ""),
        "exit_code": data.get("exitCode", -1)
    }

# Agent verifies a calculation
result = execute_code("""
import statistics
data = [23, 45, 12, 67, 34, 89, 56, 78, 90, 11]
print(f"Mean: {statistics.mean(data):.2f}")
print(f"Median: {statistics.median(data):.2f}")
print(f"Std Dev: {statistics.stdev(data):.2f}")
""")
print(result["stdout"])
# Mean: 50.50
# Median: 50.50
# Std Dev: 29.27
```
4. Persistent Memory — Remember Across Sessions
Persistent Memory API
Key-value store designed for AI agents. Store user preferences, conversation history, task state, and research notes that persist across sessions and restarts. Namespace isolation keeps different agents' data separate.
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-memory/health"
```

Why agents need this: Without memory, every conversation starts from zero. With persistent memory, your agent can remember user preferences ("prefers Python over JS"), track ongoing tasks, and build up knowledge over time.
```python
def memory_set(namespace, key, value, ttl=None):
    """Store a value in persistent memory."""
    payload = {"namespace": namespace, "key": key, "value": value}
    if ttl:
        payload["ttl"] = ttl
    r = requests.post(f"{BASE}/agent-memory/api/set", json=payload)
    return r.json()

def memory_get(namespace, key):
    """Retrieve a value from persistent memory."""
    r = requests.get(f"{BASE}/agent-memory/api/get/{namespace}/{key}")
    return r.json().get("value")

# Agent remembers user preferences
memory_set("user-alice", "preferred_language", "python")
memory_set("user-alice", "timezone", "US/Pacific")

# In a later session, agent recalls preferences
lang = memory_get("user-alice", "preferred_language")
print(f"User prefers: {lang}")  # "python"
```
5. Email Sending — Notify Users & Report Results
Email API
Send transactional emails when your agent completes tasks, encounters errors, or needs to alert users. Supports plain text, HTML, and 5 built-in templates (welcome, alert, invoice, notification, password reset).
```bash
curl -X POST "https://agent-gateway-kappa.vercel.app/v1/agent-email/api/send" \
  -H "Content-Type: application/json" \
  -d '{"to":"test@example.com","subject":"Agent Report","text":"Task completed successfully."}'
```

Why agents need this: Agents that run autonomously need a way to report back. Email lets your agent send task completion reports, error alerts, daily summaries, and notifications without requiring the user to check a dashboard.
```python
def send_notification(to, subject, message):
    """Send an email notification."""
    r = requests.post(f"{BASE}/agent-email/api/send", json={
        "to": to,
        "subject": subject,
        "text": message
    })
    return r.json()

# Agent sends a completion report
send_notification(
    to="developer@example.com",
    subject="Daily Research Report",
    message="""Research completed:
- Analyzed 15 competitor products
- Found 3 trending topics in your niche
- Scraped 8 documentation pages for reference
- Full report saved to shared storage (link below)"""
)
```
6. Screenshots — See Web Pages Visually
Screenshot API
Capture screenshots of any URL. Choose viewport (desktop, tablet, mobile), enable full-page mode, dark mode, or target specific CSS elements. Returns PNG images your agent can analyze with vision models.
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-screenshot/api/screenshot?url=https://news.ycombinator.com" -o screenshot.png
```

Why agents need this: Some information is visual — charts, layouts, UI bugs, design comparisons. When paired with a vision-capable LLM (GPT-4o, Claude, Gemini), your agent can "see" web pages and reason about visual content.
```python
def take_screenshot(url, viewport="desktop", full_page=False):
    """Capture a screenshot of a URL."""
    params = {"url": url, "viewport": viewport}
    if full_page:
        params["fullPage"] = "true"
    r = requests.get(f"{BASE}/agent-screenshot/api/screenshot", params=params)
    return r.content  # PNG bytes

# Agent captures a screenshot for visual analysis
img = take_screenshot("https://competitor-site.com", viewport="mobile")
with open("competitor_mobile.png", "wb") as f:
    f.write(img)
# Feed image to vision model for analysis
```
7. File Storage — Save & Share Artifacts
File Storage API
Upload files up to 50 MB and get shareable links. Auto-expiring (1 hour to 7 days), download limits, and deletion tokens. Three upload methods: multipart, raw binary, or base64 JSON.
```bash
curl -X POST "https://agent-gateway-kappa.vercel.app/v1/agent-filestorage/api/upload" -F "file=@report.pdf"
```

Why agents need this: Agents generate artifacts — reports, transformed data, images, exports. File storage gives your agent a place to save outputs and share them with users via links, rather than trying to paste everything into the chat.
```python
def upload_file(content, filename, expires="24h"):
    """Upload content and return a shareable link."""
    r = requests.post(f"{BASE}/agent-filestorage/api/upload", json={
        "content": content,
        "filename": filename,
        "expiresIn": expires
    })
    data = r.json()
    return data.get("shareUrl", data.get("url", ""))

# Agent saves a research report
report = """# Market Research Report

## Findings
- Competitor A launched new pricing: $29/mo
- Competitor B added API support
- Market trend: 40% growth in AI agent tools
"""
share_url = upload_file(report, "research-report.md", expires="7d")
print(f"Report available at: {share_url}")
```
8. DNS & Network — Investigate Infrastructure
DNS Lookup API
Resolve domains, check WHOIS data, query specific record types (MX, TXT, CNAME), check domain availability, and test DNS propagation. Essential for DevOps and security-focused agents.
```bash
curl "https://agent-gateway-kappa.vercel.app/v1/agent-dns/api/resolve/github.com"
```

Why agents need this: DevOps and security agents need to investigate domains, verify DNS configurations, check certificate setups, and monitor infrastructure. DNS is the foundation of network troubleshooting.
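Unlike the other sections, no Python helper is shown for DNS, so here is a minimal sketch. The response schema isn't documented in this guide, so the helper returns the raw JSON rather than assuming field names; adjust once you've inspected a real response.

```python
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

def resolve_domain(domain):
    """Resolve a domain via the gateway and return the raw DNS response.

    The exact response shape is not specified in this guide, so no
    field names are assumed here; the caller inspects the JSON.
    """
    r = requests.get(f"{BASE}/agent-dns/api/resolve/{domain}", timeout=10)
    r.raise_for_status()
    return r.json()
```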
Complete Example: Research Agent
Here's a complete agent that combines search, scraping, code execution, and email to perform autonomous research:
```python
import requests
import json

BASE = "https://agent-gateway-kappa.vercel.app/v1"

class ResearchAgent:
    """An AI agent with web search, scraping, code execution, and email tools."""

    def __init__(self):
        self.tools = {
            "search": self._search,
            "scrape": self._scrape,
            "execute_code": self._execute_code,
            "send_email": self._send_email,
        }

    def _search(self, query, num=5):
        r = requests.get(f"{BASE}/agent-search/api/search",
                         params={"q": query, "num": num})
        return r.json().get("results", [])

    def _scrape(self, url):
        r = requests.get(f"{BASE}/agent-scraper/api/scrape",
                         params={"url": url})
        data = r.json()
        return {"title": data.get("meta", {}).get("title", ""),
                "content": data.get("content", "")[:5000]}

    def _execute_code(self, code, language="python"):
        r = requests.post(f"{BASE}/agent-coderunner/api/execute",
                          json={"code": code, "language": language})
        return r.json()

    def _send_email(self, to, subject, body):
        r = requests.post(f"{BASE}/agent-email/api/send",
                          json={"to": to, "subject": subject, "text": body})
        return r.json()

    def research(self, topic, notify_email=None):
        """Research a topic using all available tools."""
        print(f"Researching: {topic}")

        # Step 1: Search for relevant sources
        results = self._search(f"{topic} 2026")
        print(f"Found {len(results)} search results")

        # Step 2: Scrape top 3 results for full content
        articles = []
        for result in results[:3]:
            page = self._scrape(result["url"])
            articles.append(page)
            print(f"Scraped: {page['title']}")

        # Step 3: Use code execution to analyze the content
        total_chars = sum(len(a["content"]) for a in articles)
        analysis_code = f"""
topics = {json.dumps([a['title'] for a in articles])}
print(f"Analyzed {{len(topics)}} articles")
for i, t in enumerate(topics, 1):
    print(f"  {{i}}. {{t}}")
print("Total content: {total_chars} chars")
"""
        analysis = self._execute_code(analysis_code)
        print(f"Analysis: {analysis.get('stdout', '')}")

        # Step 4: Send results via email
        if notify_email:
            report = f"Research Report: {topic}\n\n"
            report += f"Sources analyzed: {len(articles)}\n\n"
            for a in articles:
                report += f"- {a['title']}\n"
            self._send_email(notify_email, f"Research: {topic}", report)
            print(f"Report sent to {notify_email}")

        return articles

# Run the agent
agent = ResearchAgent()
agent.research("AI agent frameworks", notify_email="dev@example.com")
```
LangChain Integration
Each API works as a LangChain tool with minimal wrapping:
```python
from langchain.tools import tool
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

@tool
def web_search(query: str) -> str:
    """Search the web for current information. Use when you need
    up-to-date facts, recent news, or information not in your training data."""
    r = requests.get(f"{BASE}/agent-search/api/search", params={"q": query})
    results = r.json().get("results", [])
    return "\n".join([f"- {item['title']}: {item['url']}\n  {item['snippet']}"
                      for item in results[:5]])

@tool
def read_webpage(url: str) -> str:
    """Read the full content of a webpage. Use after web_search to get
    detailed information from a specific URL."""
    r = requests.get(f"{BASE}/agent-scraper/api/scrape", params={"url": url})
    data = r.json()
    return f"Title: {data.get('meta', {}).get('title', '')}\n\n{data.get('content', '')[:4000]}"

@tool
def run_python(code: str) -> str:
    """Execute Python code and return the output. Use for calculations,
    data processing, or verifying logic."""
    r = requests.post(f"{BASE}/agent-coderunner/api/execute",
                      json={"code": code, "language": "python"})
    data = r.json()
    if data.get("stderr"):
        return f"Error: {data['stderr']}"
    return data.get("stdout", "No output")

@tool
def take_screenshot(url: str) -> str:
    """Capture a screenshot of a webpage. Returns the file path
    of the saved screenshot."""
    r = requests.get(f"{BASE}/agent-screenshot/api/screenshot",
                     params={"url": url})
    path = "/tmp/agent_screenshot.png"
    with open(path, "wb") as f:
        f.write(r.content)
    return f"Screenshot saved to {path}"

# Use with any LangChain agent:
# from langchain.agents import initialize_agent
# agent = initialize_agent(
#     tools=[web_search, read_webpage, run_python, take_screenshot],
#     llm=your_llm,
#     agent="zero-shot-react-description"
# )
```
Node.js / TypeScript Agent
```javascript
const BASE = "https://agent-gateway-kappa.vercel.app/v1";

// Tool definitions for OpenAI function calling
const agentTools = [
  {
    type: "function",
    function: {
      name: "web_search",
      description: "Search the web for current information",
      parameters: {
        type: "object",
        properties: { query: { type: "string", description: "Search query" } },
        required: ["query"]
      }
    }
  },
  {
    type: "function",
    function: {
      name: "read_webpage",
      description: "Read the full content of a webpage URL",
      parameters: {
        type: "object",
        properties: { url: { type: "string", description: "URL to read" } },
        required: ["url"]
      }
    }
  },
  {
    type: "function",
    function: {
      name: "run_code",
      description: "Execute Python code and return output",
      parameters: {
        type: "object",
        properties: {
          code: { type: "string", description: "Python code to execute" }
        },
        required: ["code"]
      }
    }
  }
];

// Tool implementations
async function executeToolCall(name, args) {
  switch (name) {
    case "web_search": {
      const r = await fetch(`${BASE}/agent-search/api/search?q=${encodeURIComponent(args.query)}`);
      const data = await r.json();
      return data.results?.map(item => `${item.title}: ${item.url}`).join("\n") || "No results";
    }
    case "read_webpage": {
      const r = await fetch(`${BASE}/agent-scraper/api/scrape?url=${encodeURIComponent(args.url)}`);
      const data = await r.json();
      return `${data.meta?.title || ""}\n\n${(data.content || "").slice(0, 4000)}`;
    }
    case "run_code": {
      const r = await fetch(`${BASE}/agent-coderunner/api/execute`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ code: args.code, language: "python" })
      });
      const data = await r.json();
      return data.stderr ? `Error: ${data.stderr}` : (data.stdout || "No output");
    }
    default:
      return `Unknown tool: ${name}`;
  }
}

// Use with OpenAI function calling:
// const response = await openai.chat.completions.create({
//   model: "gpt-4",
//   messages: [...],
//   tools: agentTools,
// });
```
Claude MCP Integration
If you're building with Claude via MCP (Model Context Protocol), a companion MCP server is already available:
```json
{
  "mcpServers": {
    "defi-mcp": {
      "command": "npx",
      "args": ["-y", "defi-mcp@latest"],
      "env": {}
    }
  }
}
```

This gives Claude direct access to crypto prices, wallet balances, DeFi data, and more through the MCP protocol — no REST wrapper needed. See the full MCP setup guide.
Agent Tool Comparison
| Tool | What It Does | When to Use | Response Time |
|---|---|---|---|
| Web Search | Search queries → URLs + snippets | Need current info, fact-checking | <1s |
| Web Scraper | URL → clean markdown content | Read full page content from search results | 1-3s |
| Code Execution | Code → stdout/stderr | Math, data analysis, verify logic | <1s |
| Memory | Key-value store for agent state | Remember preferences, track tasks | <100ms |
| Email | Send transactional emails | Report results, alert on errors | <2s |
| Screenshots | URL → PNG image | Visual analysis, monitoring, comparison | 2-5s |
| File Storage | Upload/download files | Save reports, share artifacts | <1s |
| DNS | Domain → DNS records | Infrastructure investigation, security | <500ms |
Best Practices for Agent Tool Design
1. Fail gracefully
Always handle API errors in your tool functions. A single failed API call shouldn't crash your agent.
```python
def safe_search(query):
    """Search with error handling."""
    try:
        r = requests.get(f"{BASE}/agent-search/api/search",
                         params={"q": query}, timeout=10)
        r.raise_for_status()
        return r.json().get("results", [])
    except requests.RequestException as e:
        return [{"error": f"Search failed: {str(e)}"}]
```
2. Limit context size
Don't dump entire web pages into your LLM context. Truncate scraped content to the first 3,000-5,000 characters, or use code execution to extract the relevant parts first.
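A small helper makes the truncation rule concrete. The 4,000-character default and the paragraph-boundary heuristic are illustrative choices, not a requirement of any API here.

```python
def truncate_for_context(text, max_chars=4000):
    """Trim scraped content to fit an LLM context window.

    Cuts at the last paragraph break before the limit when one exists,
    so the model never sees a sentence chopped mid-word.
    """
    if len(text) <= max_chars:
        return text
    cut = text.rfind("\n\n", 0, max_chars)
    if cut == -1:
        cut = max_chars  # no paragraph break found; hard cut
    return text[:cut] + "\n\n[... truncated ...]"
```

Apply it to `page["content"]` from the scraper before building the prompt.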
3. Chain tools naturally
The most powerful agent workflows chain tools: search to find URLs → scrape to read content → code execution to analyze data → email to report results.
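That chain can be sketched in a few lines using the endpoints from this guide. The response field names (`results`, `url`, `content`) follow the earlier examples; the 3-source and character limits are arbitrary choices for the sketch.

```python
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

def research_pipeline(question, notify=None):
    """Search -> scrape -> report: each tool's output feeds the next."""
    hits = requests.get(f"{BASE}/agent-search/api/search",
                        params={"q": question, "num": 3}, timeout=10).json()
    pages = []
    for hit in hits.get("results", []):
        page = requests.get(f"{BASE}/agent-scraper/api/scrape",
                            params={"url": hit["url"]}, timeout=15).json()
        pages.append(page.get("content", "")[:3000])
    # Hand the collected text to the LLM for analysis here, then
    # optionally email the result.
    if notify:
        requests.post(f"{BASE}/agent-email/api/send", json={
            "to": notify,
            "subject": f"Research: {question}",
            "text": "\n\n---\n\n".join(pages)[:10000]
        }, timeout=10)
    return pages
```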
4. Use memory for efficiency
Cache frequently accessed data in memory. If your agent looks up the same information repeatedly, store it once and retrieve it from memory instead of making duplicate API calls.
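One way to implement this, reusing the memory endpoints shown earlier. The miss-detection logic assumes a missing key returns a null `value`, which may differ from the API's actual behavior; verify against a real response.

```python
import requests

BASE = "https://agent-gateway-kappa.vercel.app/v1"

def cached_lookup(namespace, key, compute, ttl=3600):
    """Return a value from agent memory, computing and storing it on a miss.

    `compute` is any zero-argument callable, e.g. an expensive search
    or scrape wrapped in a lambda.
    """
    r = requests.get(f"{BASE}/agent-memory/api/get/{namespace}/{key}",
                     timeout=10)
    if r.ok and r.json().get("value") is not None:
        return r.json()["value"]  # cache hit: no duplicate API call
    value = compute()
    requests.post(f"{BASE}/agent-memory/api/set", json={
        "namespace": namespace, "key": key, "value": value, "ttl": ttl
    }, timeout=10)
    return value
```

Usage: `cached_lookup("research", "python-frameworks", lambda: search("python frameworks"))` hits the search API once, then serves from memory until the TTL expires.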
All tools work without a key; to raise the rate limit to 120 req/min, create a free API key at /api/keys/create. See the getting started guide.

Frequently Asked Questions
Each API publishes a machine-readable spec at /openapi.json and an interactive Swagger UI at /docs. You can auto-generate function calling schemas from these specs. View the full spec.

Start Building Your Agent Now
Every tool works instantly with no signup. Copy any curl command from this page and try it.
Getting Started Guide