OpenAIProvider also accepts a custom AsyncOpenAI client via the openai_client parameter, so you can customise the organization, project, base_url, etc., as defined in the OpenAI API docs.
You can also pass an AsyncAzureOpenAI client to use the Azure OpenAI API.
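As a minimal sketch of this, you might configure the client yourself and hand it to the provider (the max_retries value here is purely illustrative):

```python
from openai import AsyncOpenAI

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# Configure the underlying OpenAI client directly, then pass it to the provider.
client = AsyncOpenAI(max_retries=3)
model = OpenAIModel('gpt-4o', provider=OpenAIProvider(openai_client=client))
agent = Agent(model)
```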
The Responses API has built-in tools that you can use instead of building your own:
Web search: allow models to search the web for the latest information before generating a response.
File search: allow models to search your files for relevant information before generating a response.
Computer use: allow models to use a computer to perform tasks on your behalf.
You can use the OpenAIResponsesModelSettings class to make use of those built-in tools:
```python
from openai.types.responses import WebSearchToolParam

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel, OpenAIResponsesModelSettings

model_settings = OpenAIResponsesModelSettings(
    openai_builtin_tools=[WebSearchToolParam(type='web_search_preview')],
)
model = OpenAIResponsesModel('gpt-4o')
agent = Agent(model=model, model_settings=model_settings)

result = agent.run_sync('What is the weather in Tokyo?')
print(result.output)
"""
As of 7:48 AM on Wednesday, April 2, 2025, in Tokyo, Japan, the weather is cloudy
with a temperature of 53°F (12°C).
"""
```
You can learn more about the differences between the Responses API and Chat Completions API in the OpenAI API docs.
OpenAI-compatible Models
Many models are compatible with the OpenAI API, and can be used with OpenAIModel in PydanticAI.
Before getting started, check the installation and configuration instructions above.
To use another OpenAI-compatible API, you can make use of the base_url and api_key arguments from OpenAIProvider:
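For example, a sketch in which the endpoint URL, API key, and model name are all placeholders you would replace with your own values:

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# Point the provider at any OpenAI-compatible endpoint.
model = OpenAIModel(
    'model_name',
    provider=OpenAIProvider(
        base_url='https://<openai-compatible-api-endpoint>.com/v1',
        api_key='your-api-key',
    ),
)
agent = Agent(model)
```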
To use Ollama, you must first download the Ollama client, and then download a model using the Ollama model library.
You must also ensure the Ollama server is running when trying to make requests to it. For more information, please see the Ollama documentation.
Example local usage
With ollama installed, you can run the server with the model you want to use:
```bash
ollama run llama3.2
```
(this will pull the llama3.2 model if you don't already have it downloaded)
Then run your code, here's a minimal example:
```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider


class CityLocation(BaseModel):
    city: str
    country: str


ollama_model = OpenAIModel(
    model_name='llama3.2', provider=OpenAIProvider(base_url='http://localhost:11434/v1')
)
agent = Agent(ollama_model, output_type=CityLocation)

result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
"""
Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65, details=None)
"""
```
Example using a remote server
```python
from pydantic import BaseModel

from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

ollama_model = OpenAIModel(
    model_name='qwen2.5-coder:7b',
    provider=OpenAIProvider(base_url='http://192.168.1.74:11434/v1'),
)


class CityLocation(BaseModel):
    city: str
    country: str


agent = Agent(model=ollama_model, output_type=CityLocation)

result = agent.run_sync('Where were the olympics held in 2012?')
print(result.output)
#> city='London' country='United Kingdom'
print(result.usage())
"""
Usage(requests=1, request_tokens=57, response_tokens=8, total_tokens=65, details=None)
"""
```
Azure AI Foundry
If you want to use Azure AI Foundry as your provider, you can do so by using the AzureProvider class.
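A minimal sketch of this, assuming the AzureProvider constructor takes the endpoint, API version, and key shown here (the values are placeholders for your own deployment details):

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.azure import AzureProvider

# AzureProvider wraps an AsyncAzureOpenAI client under the hood.
model = OpenAIModel(
    'gpt-4o',
    provider=AzureProvider(
        azure_endpoint='your-azure-endpoint',
        api_version='your-api-version',
        api_key='your-api-key',
    ),
)
agent = Agent(model)
```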
Fireworks AI
Go to Fireworks.AI and create an API key in your account settings.
Once you have the API key, you can use it with the OpenAIProvider:
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    'accounts/fireworks/models/qwq-32b',  # model library available at https://fireworks.ai/models
    provider=OpenAIProvider(
        base_url='https://api.fireworks.ai/inference/v1',
        api_key='your-fireworks-api-key',
    ),
)
agent = Agent(model)
...
```
Together AI
Go to Together.ai and create an API key in your account settings.
Once you have the API key, you can use it with the OpenAIProvider:
```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

model = OpenAIModel(
    'meta-llama/Llama-3.3-70B-Instruct-Turbo-Free',  # model library available at https://www.together.ai/models
    provider=OpenAIProvider(
        base_url='https://api.together.xyz/v1',
        api_key='your-together-api-key',
    ),
)
agent = Agent(model)
...
```