Build a Code-Executing AI Chatbot with Mistral & Matplotlib

Tutorial
Beginner
⏱ 45 min read
© Gate of AI 2026-05-15

Learn to build an AI chatbot using the Mistral Codestral API to generate and execute code for tweaking Matplotlib plots efficiently.

Prerequisites

  • Python 3.10 or above
  • Mistral API key
  • Basic understanding of Python and Asyncio

What We’re Building

In this tutorial, we will build an AI chatbot capable of generating and executing Python code to tweak Matplotlib plots. The chatbot is built on the Mistral API, using a model specialized for code generation. By the end of this project, the chatbot will be able to understand natural language requests, generate the corresponding Python code, and execute it to produce the desired plot modifications in real time.

This project is ideal for developers looking to get hands-on experience with AI-driven code execution and interactive Python web applications. The chatbot uses Codestral, a model optimized specifically for code generation, which tends to produce more reliable code for data visualization tasks.

Setup and Installation

To get started, we need to install the latest versions of the necessary Python packages. We will use `panel` for the UI and the unified `mistralai` client for API communications.

pip install panel mistralai python-dotenv

Next, configure your environment variables in a `.env` file to securely manage your API credentials.

# .env file
MISTRAL_API_KEY="your_api_key_here"

Step 1: Initialize the Project Environment

First, we set up the environment and ensure the Panel extension is initialized. We use the unified Mistral client, which supports both synchronous and asynchronous operations natively.

import os
import panel as pn
from mistralai import Mistral
from dotenv import load_dotenv

load_dotenv()
pn.extension("codeeditor", sizing_mode="stretch_width")

# Initialize unified Mistral client
api_key = os.getenv("MISTRAL_API_KEY")
client = Mistral(api_key=api_key)

The code initializes the UI framework and sets up the API client using the key from your environment. This ensures your application can securely talk to Mistral’s latest code-generation models.
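A missing key only surfaces later as an authentication error, so it can help to fail fast at startup. A small defensive sketch of our own (the `require_key` helper is not part of the Mistral SDK):

```python
import os

def require_key(name: str = "MISTRAL_API_KEY") -> str:
    """Return the named environment variable, or fail with a clear message."""
    key = os.getenv(name)
    if not key:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return key
```

You could then initialize the client with `client = Mistral(api_key=require_key())`, so a misconfigured environment is reported immediately instead of on the first chat request.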

Step 2: Define Chatbot’s Default Behavior

In this step, we select a model suited to coding and provide the system instructions. We use the `codestral-latest` alias, which points to the most recent Codestral release, a model tuned for code generation and typically more accurate than general-purpose chat models on technical tasks.

LLM_MODEL = "codestral-latest"
default_prompt = """You are a data visualization expert.
Your goal is to provide Python code using matplotlib based on user requests.
Provide ONLY the code inside triple backticks (```python). 
Ensure the final plot object is named `fig` and is the last line of the script."""

Using Codestral reduces the risk of syntax errors in the generated code. The prompt ensures the output is formatted so our execution step can extract the code and capture the figure object.

Step 3: Implement Chatbot Functionality

Now, we implement the logic to process input and update the plot. Note the use of `async` to prevent the UI from freezing during API calls, and `pn.bind` for a reactive experience.

from panel.io.mime_render import exec_with_return

async def get_plot(prompt):
    if not prompt:
        return "Please enter a request."
        
    # Request code from Mistral
    messages = [{"role": "system", "content": default_prompt},
                {"role": "user", "content": prompt}]
    
    response = await client.chat.complete_async(model=LLM_MODEL, messages=messages)
    raw_code = response.choices[0].message.content
    
    # Extract code and execute
    code = raw_code.replace("```python", "").replace("```", "").strip()
    try:
        return exec_with_return(code)
    except Exception as e:
        return f"Error executing code: {e}"

# Panel UI setup using reactive binding
input_box = pn.widgets.TextInput(name='Update Plot (e.g., "Make it a red line chart")')
interactive_plot = pn.bind(get_plot, input_box)

layout = pn.Column(input_box, pn.panel(interactive_plot))  # pn.panel picks the right pane: Matplotlib for figures, Markdown for error strings
layout.servable()

The `get_plot` function handles the entire pipeline: sending the request, parsing the markdown response, and executing the code with Panel’s `exec_with_return` utility. The UI stays responsive because the API call uses the asynchronous client.
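The string-replace extraction in `get_plot` works for well-behaved responses, but it breaks if the model wraps the code in prose containing stray backticks. A regex that targets the first fenced block is more robust; a minimal sketch (the `extract_code` helper is our own, not part of Panel or the Mistral SDK):

```python
import re

def extract_code(raw: str) -> str:
    """Return the body of the first ```python fenced block, or the raw text stripped."""
    match = re.search(r"```(?:python)?\s*\n(.*?)```", raw, re.DOTALL)
    return match.group(1).strip() if match else raw.strip()

sample = "Sure, here you go:\n```python\nprint('hi')\n```\nLet me know!"
print(extract_code(sample))  # → print('hi')
```

Swapping `code = raw_code.replace(...)` for `code = extract_code(raw_code)` in `get_plot` keeps any explanatory prose the model adds from reaching `exec_with_return`.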

⚠️ Security Note: Executing LLM-generated code via `exec()` should only be done in trusted local environments. For production apps, always use a sandboxed execution environment like Docker or Pyodide.

Testing Your Implementation

Serve the app with `panel serve app.py --show` (assuming you saved the script as `app.py`), then enter a natural language command into the input box. The chatbot will translate your words into Python, execute the code, and render the resulting Matplotlib figure immediately.

# Example prompt: "Draw a sine wave and set the color to emerald green."
# Expected output: A high-quality Matplotlib chart appearing in the output pane.
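For reference, the extracted code for that prompt might look something like the following. This is an illustrative sketch, not captured model output, and the hex value is our guess at "emerald green":

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), color="#2ecc71")  # emerald-ish green
ax.set_title("Sine wave")
fig  # last expression, so exec_with_return hands the figure back to Panel
```

Note how the script follows the contract from the system prompt: the figure is named `fig` and is the final expression.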

What to Build Next

  • Switch to the `mistral-large-latest` model for complex multi-step reasoning.
  • Add a `CodeEditor` widget to allow manual overrides of the AI-generated code.
  • Implement a chat history (Memory) so the AI remembers previous plot changes.
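The memory idea can be sketched by threading the running conversation through each call. Here `build_messages` and `history` are our own names, not part of the Mistral SDK:

```python
def build_messages(history, system_prompt, user_prompt):
    """Assemble the message list for one chat call: system prompt,
    every prior turn, then the new user request."""
    return ([{"role": "system", "content": system_prompt}]
            + list(history)
            + [{"role": "user", "content": user_prompt}])

history = []
msgs = build_messages(history, "You are a data visualization expert.", "Draw a sine wave")

# After the model replies, record both turns so the next request sees them:
history += [{"role": "user", "content": "Draw a sine wave"},
            {"role": "assistant", "content": "<generated code>"}]
follow_up = build_messages(history, "You are a data visualization expert.", "Now make it red")
```

In `get_plot`, you would pass the assembled list to `client.chat.complete_async` and append the assistant reply to `history` after each turn, so requests like "now make it red" resolve against the previous plot.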
