LLM Mocks
Library overview

Welcome to the llm-mocks library! This library is designed to provide comprehensive mock responses for Large Language Models (LLMs) from popular providers. Whether you are developing, testing, or integrating LLM-based solutions, it offers a reliable and efficient way to simulate LLM responses without needing direct access to the actual services.

Key Features

  • Mock Responses: Predefined responses for various queries to simulate interactions with LLMs from providers like Anthropic and OpenAI.

  • Seamless Integration: Easy to integrate with your existing development and testing workflows such as Pytest.

  • Extensibility: Ability to customize mock responses to better match your specific use cases.

  • Performance: No network requests are made, ensuring fast and reliable development and testing.

Getting Started

To get started with llm-mocks, follow the installation instructions and explore the examples to see how you can simulate LLM interactions in your projects. We provide detailed guides to help you quickly integrate the library and start benefiting from its features.

Installation

You can install llm-mocks using pip:

pip install llm-mocks

For more information on installation and setup, please refer to our Installation Guide.

Supported providers

  • OpenAI (Chat Completions, Images Generations, Embeddings)
  • Anthropic (Messages)
  • Cohere (Chat, Embed, Rerank)
  • Groq (specs)
  • Azure (specs)
  • AWS Bedrock (specs)
  • Gemini (specs)
  • VertexAI (specs)

Usage

The library returns a mock response when the provider and API are supported; otherwise it falls back to vcrpy-based network mocking (record and replay).

With Pytest Recording

The pytest-recording plugin already implements vcrpy. Simply initialise LLMMock to mock responses.

from llm_mocks import LLMMock
 
...
 
# Initialise LLMMock to return mock responses for supported providers
LLMMock()
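
Below is a fuller, hedged sketch of a test using pytest-recording. The test body, the model name, and the OpenAI client call are illustrative assumptions, not part of the llm-mocks API:

import pytest
from openai import OpenAI

from llm_mocks import LLMMock

LLMMock()  # initialise once, e.g. at the top of conftest.py

@pytest.mark.vcr  # pytest-recording marker; unsupported calls fall back to vcrpy
def test_chat_completion():
    client = OpenAI(api_key="test-key")  # placeholder key; no real credentials needed
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )
    assert response.choices[0].message.content  # predefined mock content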

Normal mock

Follow the vcrpy usage documentation to set up VCR, then initialise the LLMMock class for response mocking.
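
A minimal, hedged sketch with plain vcrpy (the cassette path and the OpenAI client call are illustrative assumptions):

import vcr
from openai import OpenAI

from llm_mocks import LLMMock

LLMMock()  # return predefined mock responses for supported providers/APIs

with vcr.use_cassette("fixtures/openai_chat.yaml"):  # hypothetical cassette path
    # Supported calls are answered with mocks; anything else is
    # recorded and replayed by vcrpy as usual.
    client = OpenAI(api_key="test-key")  # placeholder key; no real credentials needed
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello"}],
    )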