From the beginning, Checkly has bet on Monitoring as Code, which lets you create and manage your entire monitoring infrastructure in code. Checkly constructs express all your monitoring properties as code by default.
api.check.ts

```ts
import { ApiCheck, AssertionBuilder } from "checkly/constructs"

new ApiCheck("api-health-check", {
  name: "API Health Check",
  request: {
    url: "https://danube-web.shop/api/books",
    method: "GET",
    assertions: [
      AssertionBuilder.statusCode().equals(200),
    ],
  },
})
```
All your monitoring resources can be updated, tested, and deployed via the Checkly CLI.

```bash
# test your monitoring configuration
npx checkly test

# deploy and update your monitoring setup
npx checkly deploy
```
The Monitoring as Code workflow is AI-native by default: LLMs are excellent at writing and editing Checkly construct code, and modern AI agents can execute CLI commands with ease. Provide the necessary Checkly context and let your AI agent of choice do the rest.

Create new checks, alert channels or other constructs

“Can you create a new BrowserCheck monitoring example.com?”

Gather information about the current monitoring setup

“What are the currently used monitoring locations?”

Bulk-update your monitoring resources

“Can you change all checks to run every 5 minutes instead of every 10 minutes?”
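For the first prompt above, an AI agent with the right context could generate a construct file along these lines. This is a hedged sketch: the construct ID, check name, frequency, and spec file path are illustrative assumptions, not output from a real agent run.

```typescript
// browser.check.ts — hypothetical construct an agent might generate
import { BrowserCheck, Frequency } from "checkly/constructs"

new BrowserCheck("example-com-browser-check", {
  name: "example.com Browser Check",
  // Frequency also covers the bulk-update prompt: an agent can
  // rewrite EVERY_10M to EVERY_5M across all construct files.
  frequency: Frequency.EVERY_5M,
  code: {
    // Playwright test file assumed to live next to this construct
    entrypoint: "./example-com.spec.ts",
  },
})
```

Running `npx checkly test` validates the generated construct before `npx checkly deploy` puts it live.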

Add Checkly context to your AI agent conversation

Install Checkly Skills or add the Checkly Rules to your AI conversation to give your AI agent enough context to perform Checkly-related tasks.

Skills vs Rules

Use Skills when your AI agent supports the Agent Skills standard. Skills load context on demand, keeping your agent’s context window lean until Checkly-related tasks arise. This is the recommended approach for compatible agents.

Use Rules when your agent doesn’t support Skills or when you want the Checkly context always available. Rules files are loaded at session start and provide consistent context throughout your conversation.

Why is there no Checkly MCP server (yet)?

The Model Context Protocol (MCP) is commonly used to let LLMs interact with external systems. An MCP server acts as a bridge between the AI model and the target system, translating natural language commands into actionable API calls or code snippets. With Monitoring as Code, Checkly already provides a native way to control your monitoring infrastructure via code and the command line. Whether you need to create new resources or update existing ones, AI can write and update the necessary construct files and execute the Checkly CLI commands autonomously.
We are researching additional AI-native workflows. Let us know in the public roadmap if you are interested in more agent-friendly integrations.