OpenAI LLM Call Patterns
This notebook demonstrates patterns for working with OpenAI’s API, including basic chat completions, structured outputs using Pydantic, and function calling.
What You’ll Learn
- Basic OpenAI API Integration: Set up and make your first API calls
- Structured Outputs: Use Pydantic models to get structured responses
- Function Calling: Implement tools and function calling patterns
- Error Handling: Best practices for handling API errors
- Environment Management: Secure API key management with python-dotenv
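Error handling is listed above but not demonstrated in the cells below. A minimal retry sketch, generic over the exception types you consider transient; with the v1 SDK you would pass its `openai.RateLimitError` and `openai.APIConnectionError`, and the backoff numbers here are illustrative:

```python
import time

def with_retries(fn, retryable=(Exception,), attempts=3, base_delay=1.0):
    """Call fn(), retrying on retryable exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except retryable:
            if attempt == attempts - 1:
                raise  # out of retries: re-raise the last error
            time.sleep(base_delay * (2 ** attempt))

# With the OpenAI SDK you would wrap the call like this (sketch):
# result = with_retries(
#     lambda: client.chat.completions.create(model="gpt-4o-mini", messages=msgs),
#     retryable=(openai.RateLimitError, openai.APIConnectionError),
# )
```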
Key Concepts
1. Basic Chat Completions
The foundation of working with OpenAI’s API starts with simple chat completions. You’ll learn how to:
- Set up the OpenAI client
- Make basic chat completion requests
- Handle responses and extract content
2. Structured Outputs with Pydantic
Pydantic integration allows you to:
- Define response schemas using Python classes
- Automatically validate and parse API responses
- Ensure type safety in your applications
- Handle complex data structures
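To see what Pydantic’s parse step buys you, here is a stdlib-only sketch of the manual validation it replaces (the field names follow the Feedback model used later in the notebook):

```python
import json

# The fields and types we expect back from the model
SCHEMA = {"sentiment": str, "completed_courses": int, "would_recommend": bool}

def validate_feedback(raw: str) -> dict:
    """Parse a JSON string and check every expected field has the right type."""
    data = json.loads(raw)
    for field, expected_type in SCHEMA.items():
        if field not in data:
            raise ValueError(f"missing field: {field}")
        # bool is a subclass of int in Python, so reject it explicitly
        if expected_type is int and isinstance(data[field], bool):
            raise ValueError(f"{field}: expected int, got bool")
        if not isinstance(data[field], expected_type):
            raise ValueError(f"{field}: expected {expected_type.__name__}")
    return data

ok = validate_feedback(
    '{"sentiment": "Positive", "completed_courses": 0, "would_recommend": true}'
)
```

Pydantic does all of this (and more, such as coercion and nested models) from a single class definition.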
3. Function Calling
Advanced patterns include:
- Defining function schemas for the API
- Implementing tool registries
- Handling function call responses
- Creating conversational flows with tools
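A tool registry is, at its core, a mapping from the function names declared to the API to the Python callables that implement them. A minimal dispatch sketch, independent of the API (the tool names here are illustrative):

```python
import json

# Map tool names (as declared in the API schema) to local implementations
TOOLS = {
    "add": lambda a, b: a + b,
    "upper": lambda text: text.upper(),
}

def dispatch(name: str, arguments: str):
    """Run the tool the model asked for; arguments arrive as a JSON string."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**json.loads(arguments))

result = dispatch("add", '{"a": 2, "b": 3}')  # → 5
```

Cell 7 below builds the same idea with a real weather tool and the API’s tool-call schema.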
Prerequisites
- Python 3.8+
- OpenAI API key
- Basic understanding of Python classes
- Familiarity with REST APIs
Environment Setup
Before running the notebook, make sure you have:
- OpenAI API Key: Sign up at OpenAI and get your API key
- Environment Variables: Create a .env file with your API key
- Required Packages: Install the dependencies shown in the notebook
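python-dotenv (installed in Cell 2) handles the .env loading for you; as a rough idea of what it does, here is a stdlib-only sketch, assuming simple KEY=VALUE lines with no quoting:

```python
import os

def load_env_file(path: str = ".env") -> None:
    """Read KEY=VALUE lines into os.environ (simplified: no quotes, no export)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Usage: put OPENAI_API_KEY=sk-... in .env, then:
# load_env_file()
# api_key = os.getenv("OPENAI_API_KEY")
```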
Use Cases
This pattern is particularly useful for:
- Chatbots and Conversational AI: Building intelligent chat interfaces
- Data Extraction: Extracting structured information from unstructured text
- Content Generation: Creating formatted content with specific schemas
- API Integration: Integrating LLM capabilities into existing applications
- Automation: Building AI-powered automation workflows
Notebook Information
Kernel: Python 3
Language: python
Cells: 10
Format:
v4.5
Code
[3]
Cell 1 Input:
!pip install openai Output:
Collecting openai
Downloading openai-1.101.0-py3-none-any.whl.metadata (29 kB)
Collecting anyio<5,>=3.5.0 (from openai)
Using cached anyio-4.10.0-py3-none-any.whl.metadata (4.0 kB)
Collecting distro<2,>=1.7.0 (from openai)
Using cached distro-1.9.0-py3-none-any.whl.metadata (6.8 kB)
Collecting httpx<1,>=0.23.0 (from openai)
Using cached httpx-0.28.1-py3-none-any.whl.metadata (7.1 kB)
Collecting jiter<1,>=0.4.0 (from openai)
Downloading jiter-0.10.0-cp311-cp311-macosx_11_0_arm64.whl.metadata (5.2 kB)
Collecting pydantic<3,>=1.9.0 (from openai)
Using cached pydantic-2.11.7-py3-none-any.whl.metadata (67 kB)
Collecting sniffio (from openai)
Using cached sniffio-1.3.1-py3-none-any.whl.metadata (3.9 kB)
Collecting tqdm>4 (from openai)
Using cached tqdm-4.67.1-py3-none-any.whl.metadata (57 kB)
Requirement already satisfied: typing-extensions<5,>=4.11 in /Users/vkrana/Documents/GL_Knowledge-Graph/.conda/lib/python3.11/site-packages (from openai) (4.14.1)
Collecting idna>=2.8 (from anyio<5,>=3.5.0->openai)
Using cached idna-3.10-py3-none-any.whl.metadata (10 kB)
Collecting certifi (from httpx<1,>=0.23.0->openai)
Using cached certifi-2025.8.3-py3-none-any.whl.metadata (2.4 kB)
Collecting httpcore==1.* (from httpx<1,>=0.23.0->openai)
Using cached httpcore-1.0.9-py3-none-any.whl.metadata (21 kB)
Collecting h11>=0.16 (from httpcore==1.*->httpx<1,>=0.23.0->openai)
Using cached h11-0.16.0-py3-none-any.whl.metadata (8.3 kB)
Collecting annotated-types>=0.6.0 (from pydantic<3,>=1.9.0->openai)
Using cached annotated_types-0.7.0-py3-none-any.whl.metadata (15 kB)
Collecting pydantic-core==2.33.2 (from pydantic<3,>=1.9.0->openai)
Downloading pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl.metadata (6.8 kB)
Collecting typing-inspection>=0.4.0 (from pydantic<3,>=1.9.0->openai)
Using cached typing_inspection-0.4.1-py3-none-any.whl.metadata (2.6 kB)
Downloading openai-1.101.0-py3-none-any.whl (810 kB)
Using cached anyio-4.10.0-py3-none-any.whl (107 kB)
Using cached distro-1.9.0-py3-none-any.whl (20 kB)
Using cached httpx-0.28.1-py3-none-any.whl (73 kB)
Using cached httpcore-1.0.9-py3-none-any.whl (78 kB)
Downloading jiter-0.10.0-cp311-cp311-macosx_11_0_arm64.whl (321 kB)
Using cached pydantic-2.11.7-py3-none-any.whl (444 kB)
Downloading pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl (1.9 MB)
Using cached annotated_types-0.7.0-py3-none-any.whl (13 kB)
Using cached h11-0.16.0-py3-none-any.whl (37 kB)
Using cached idna-3.10-py3-none-any.whl (70 kB)
Using cached sniffio-1.3.1-py3-none-any.whl (10 kB)
Using cached tqdm-4.67.1-py3-none-any.whl (78 kB)
Using cached typing_inspection-0.4.1-py3-none-any.whl (14 kB)
Using cached certifi-2025.8.3-py3-none-any.whl (161 kB)
Installing collected packages: typing-inspection, tqdm, sniffio, pydantic-core, jiter, idna, h11, distro, certifi, annotated-types, pydantic, httpcore, anyio, httpx, openai
Successfully installed annotated-types-0.7.0 anyio-4.10.0 certifi-2025.8.3 distro-1.9.0 h11-0.16.0 httpcore-1.0.9 httpx-0.28.1 idna-3.10 jiter-0.10.0 openai-1.101.0 pydantic-2.11.7 pydantic-core-2.33.2 sniffio-1.3.1 tqdm-4.67.1 typing-inspection-0.4.1
Code
[13]
Cell 2 Input:
!pip install python-dotenv Output:
Collecting python-dotenv
Using cached python_dotenv-1.1.1-py3-none-any.whl.metadata (24 kB)
Using cached python_dotenv-1.1.1-py3-none-any.whl (20 kB)
Installing collected packages: python-dotenv
Successfully installed python-dotenv-1.1.1
Code
Cell 3 Input:
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int
    email: str

# Create a valid user
user = User(name="Vijendra", age=30, email="vijendrav@gmail.com")
print(user.model_dump_json()) Output:
{"name":"Vijendra","age":30,"email":"vijendrav@gmail.com"}
Code
[2]
Cell 4 Input:
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv(override=True)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, how are you?"}]
)
print(response.choices[0].message.content) Output:
Hello! I'm just a computer program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?
Code
[3]
Cell 5 Input:
## Pydantic use cases
from pydantic import BaseModel
from openai import OpenAI
import os
from dotenv import load_dotenv

load_dotenv(override=True)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Step 1: define the response schema
class Feedback(BaseModel):
    sentiment: str
    completed_courses: int
    would_recommend: bool

# Step 2: parse the completion into the schema
completion = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I'm a student who has read all your articles, series, and other content. I would definitely recommend this platform to my friends."}],
    response_format=Feedback
)
print(completion.choices[0].message.parsed)
Output:
sentiment='Positive' completed_courses=0 would_recommend=True
Code
[19]
Cell 6 Input:
!pip install requests Output:
Collecting requests
Downloading requests-2.32.5-py3-none-any.whl.metadata (4.9 kB)
Collecting charset_normalizer<4,>=2 (from requests)
Downloading charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl.metadata (36 kB)
Requirement already satisfied: idna<4,>=2.5 in /Users/vkrana/Documents/GL_Knowledge-Graph/.conda/lib/python3.11/site-packages (from requests) (3.10)
Collecting urllib3<3,>=1.21.1 (from requests)
Using cached urllib3-2.5.0-py3-none-any.whl.metadata (6.5 kB)
Requirement already satisfied: certifi>=2017.4.17 in /Users/vkrana/Documents/GL_Knowledge-Graph/.conda/lib/python3.11/site-packages (from requests) (2025.8.3)
Downloading requests-2.32.5-py3-none-any.whl (64 kB)
Downloading charset_normalizer-3.4.3-cp311-cp311-macosx_10_9_universal2.whl (204 kB)
Using cached urllib3-2.5.0-py3-none-any.whl (129 kB)
Installing collected packages: urllib3, charset_normalizer, requests
Successfully installed charset_normalizer-3.4.3 requests-2.32.5 urllib3-2.5.0
Code
[4]
Cell 7 Input:
# Tools use
from openai import OpenAI
import os
from dotenv import load_dotenv
import requests
import json

load_dotenv(override=True)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Tool for function calling: fetch the current temperature from Open-Meteo
def get_temperature(lat, lon):
    response = requests.get(
        f"https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}"
        f"&current=temperature_2m"
    )
    data = response.json()
    return data["current"]["temperature_2m"]

tool_registry = [
    {
        "type": "function",
        "function": {
            "name": "get_temperature",
            "description": "Get current temperature for provided coordinates in celsius.",
            "parameters": {
                "type": "object",
                "properties": {
                    "latitude": {"type": "number"},
                    "longitude": {"type": "number"},
                },
                "required": ["latitude", "longitude"],
                "additionalProperties": False,
            },
            "strict": True,
        },
    }
]

conversation = [{"role": "user", "content": "Can you check how hot it is in Bangalore right now?"}]

first_response = client.chat.completions.create(
    model="gpt-4.1",
    messages=conversation,
    tools=tool_registry,
)
Code
[5]
Cell 8 Input:
tool_suggestion = first_response.choices[0].message.tool_calls[0]
tool_args = json.loads(tool_suggestion.function.arguments)
print("Tool arguments:", tool_args)  # Debug: see what's actually in the arguments
print("Available keys:", tool_args.keys())  # Debug: see what keys exist

temp_result = get_temperature(tool_args["latitude"], tool_args["longitude"])
print(temp_result) Output:
Tool arguments: {'latitude': 12.9716, 'longitude': 77.5946}
Available keys: dict_keys(['latitude', 'longitude'])
27.8
Code
[6]
Cell 9 Input:
conversation.append({"role": "assistant", "tool_calls": [tool_suggestion]})
conversation.append({
    "role": "tool",
    "tool_call_id": tool_suggestion.id,
    "content": json.dumps(temp_result)
})
Code
[8]
Cell 10 Input:
## structured output
from pydantic import BaseModel, Field
from typing import Literal

class TempReply(BaseModel):
    temperature: float = Field(
        description="Temperature in Celsius at the requested location."
    )
    # 👇 Literal forces an exact match
    message: Literal[
        "Thanks for sticking with my snippet content! "
        "According to Pydantic validation, here’s your weather update."
    ] = Field(
        description="Canonical thank-you line that must appear verbatim."
    )

completion_final = client.beta.chat.completions.parse(
    model="gpt-4o-mini",
    messages=conversation,
    tools=tool_registry,
    response_format=TempReply
)

final = completion_final.choices[0].message.parsed
print(final.temperature)
print(final.message) Output:
27.8
Thanks for sticking with my snippet content! According to Pydantic validation, here’s your weather update.