OpenAI Integration Agent Skill
GPT API patterns, function calling, structured outputs with Zod, streaming, and token management.
---
description: GPT API patterns, function calling, structured outputs with Zod, streaming, and token management.
homepage: https://yepapi.com/skills/openai-integration
metadata:
  tags: [openai, gpt, function-calling, ai]
---
# OpenAI Integration
## Rules
- Use the official `openai` package — `new OpenAI({ apiKey: process.env.OPENAI_API_KEY })`
- Streaming for chat: `stream: true` with async iterator — never block on full completion for user-facing responses
- Function calling: define functions with JSON Schema, handle `tool_calls` in response, return results as tool messages
- Structured outputs: use `response_format: { type: "json_schema", json_schema: {...} }` or Zod with AI SDK
- Token management: estimate tokens before sending (roughly 4 chars per token), trim context to stay within limits
- Model selection: `gpt-4o` for complex reasoning, `gpt-4o-mini` for simple tasks — make it configurable
- System prompt as first message — set tone, constraints, and output format
- Temperature: 0 for deterministic outputs (data extraction), 0.7-1.0 for creative tasks
- Rate limiting: respect `x-ratelimit-*` headers, implement exponential backoff on 429 errors
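The token-estimation and backoff rules above can be sketched as two small helpers. `estimateTokens` and `backoffDelayMs` are hypothetical names for illustration, not part of the `openai` package; the 4-chars-per-token ratio is a rough heuristic, not an exact count.

```typescript
// Rough token estimate using the ~4 characters-per-token heuristic.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Exponential backoff delay with jitter for retrying after 429 responses.
// Returns between 50% and 100% of the capped exponential delay.
function backoffDelayMs(attempt: number, baseMs = 500, maxMs = 30_000): number {
  const delay = Math.min(baseMs * 2 ** attempt, maxMs);
  return delay / 2 + Math.random() * (delay / 2);
}
```

For production use, prefer a real tokenizer (e.g. a tiktoken port) over the character heuristic, and read the `x-ratelimit-*` headers to decide when to back off.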
## Patterns
```ts
// Streaming completion: write tokens as they arrive instead of
// blocking on the full response.
import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const messages = [{ role: "user" as const, content: "Summarize this file" }];

const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  stream: true,
});
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content || "";
  process.stdout.write(content);
}
```
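The structured-outputs rule above can be sketched as a request body. The `contact` schema here is an illustrative example, not part of the skill; only the `response_format` shape is the pattern being shown.

```typescript
// Structured outputs: constrain the model's reply to a JSON Schema.
// The `contact` schema below is illustrative.
const extractRequest = {
  model: "gpt-4o",
  messages: [
    { role: "user" as const, content: "Extract: Ada Lovelace <ada@example.com>" },
  ],
  response_format: {
    type: "json_schema" as const,
    json_schema: {
      name: "contact",
      strict: true,
      schema: {
        type: "object",
        properties: { name: { type: "string" }, email: { type: "string" } },
        required: ["name", "email"],
        additionalProperties: false,
      },
    },
  },
};
// const completion = await openai.chat.completions.create(extractRequest);
// const contact = JSON.parse(completion.choices[0].message.content ?? "{}");
```

With `strict: true`, the model's output is constrained to match the schema, so `JSON.parse` on the message content is safe for temperature-0 extraction tasks.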
```ts
// Function calling: declare a tool with a JSON Schema for its parameters.
const schema = {
  type: "object",
  properties: { location: { type: "string" } },
  required: ["location"],
};
const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages,
  tools: [{ type: "function", function: { name: "get_weather", parameters: schema } }],
});
// The model may reply with `response.choices[0].message.tool_calls` instead of text.
```
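When the model returns `tool_calls`, each call must be executed and its result sent back as a `tool` message. A minimal sketch, assuming a hypothetical `runTool` dispatcher you implement per function name:

```typescript
// Shape of a tool call as found in choices[0].message.tool_calls.
type ToolCall = { id: string; function: { name: string; arguments: string } };

// Hypothetical dispatcher: route each named function to your implementation.
async function runTool(name: string, _args: unknown): Promise<unknown> {
  if (name === "get_weather") return { tempC: 21 }; // stubbed result
  throw new Error(`unknown tool: ${name}`);
}

// Turn the model's tool calls into `tool` role messages for the next request.
async function toolMessages(toolCalls: ToolCall[]) {
  return Promise.all(
    toolCalls.map(async (tc) => ({
      role: "tool" as const,
      tool_call_id: tc.id,
      content: JSON.stringify(
        await runTool(tc.function.name, JSON.parse(tc.function.arguments)),
      ),
    })),
  );
}
```

Append these messages after the assistant message that contained the tool calls, then call `create` again to get the model's final answer.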
## Avoid
- Hardcoding API keys — always use environment variables
- Ignoring token limits — count tokens and trim messages proactively
- Using `gpt-4o` for trivial tasks — use `gpt-4o-mini` to save cost
- Missing error handling on API calls — handle 429, 500, and timeout errors gracefully
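The last point above can be sketched as a small retry wrapper. `withRetries` is a hypothetical helper name, not part of the `openai` package; the `status` property matches how the SDK's `APIError` exposes HTTP status codes.

```typescript
// Retry wrapper for 429 and 5xx errors with exponential backoff.
async function withRetries<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseMs = 500,
): Promise<T> {
  for (let i = 0; ; i++) {
    try {
      return await fn();
    } catch (err) {
      const status = (err as { status?: number }).status;
      const retriable = status === 429 || (status ?? 0) >= 500;
      if (!retriable || i + 1 >= attempts) throw err;
      // Exponential backoff before the next attempt.
      await new Promise((r) => setTimeout(r, baseMs * 2 ** i));
    }
  }
}
```

Usage: `await withRetries(() => openai.chat.completions.create({ model, messages }))`. Non-retriable errors (400, 401) are rethrown immediately rather than wasting attempts.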
## Why Use the OpenAI Integration Skill?

Without this skill, your AI guesses at OpenAI integration patterns. It might hallucinate deprecated APIs, use outdated conventions, or miss best practices entirely. With it, your AI follows a proven ruleset — every suggestion aligns with current standards.

Drop this skill into your project and your AI instantly knows the rules. Better code suggestions, fewer errors, faster shipping.
## Try These Prompts

These prompts work better with the OpenAI Integration skill installed. Your AI knows the context and writes code that fits.

- "Build a GPT-powered feature with function calling, structured outputs, and streaming"
- "Create an AI assistant that uses tool calls to query your API and format responses"
- "Set up OpenAI integration with token management, rate limiting, and response caching"
## OpenAI Integration Skill — FAQ

**What does this skill provide?** GPT API patterns with function calling, structured outputs via Zod, streaming, and token management. Your AI integrates OpenAI correctly with proper error handling and cost control.

**How do I install it?** Run `npx skills add YepAPI/skills --skill openai-integration` in your project root. This copies the skill file into your repo where your AI coding tool can read it automatically.

**Does it cover function calling and structured outputs?** Yes. The skill covers OpenAI function calling with Zod schemas for structured outputs, tool-use patterns, and streaming. It includes token counting and rate limiting to control costs.