How to Set Up and Use Your AI Chatbot with OpenAI API (Step-by-Step Guide 2025)

Build a Production-Ready Chatbot with OpenAI API, Flask, Redis, and React

Learn how to set up and use your AI chatbot built with the OpenAI API. Step-by-step installation, environment setup, running with Docker or manually, and deploying to production with security best practices.

In this detailed guide, we’ll design and implement a production-grade chatbot that integrates OpenAI’s GPT models with a Flask backend, Redis memory, PostgreSQL storage, and a React frontend. We’ll also include deployment, scaling, and security considerations for running this in production.

Overview: Flask handles API requests, Redis manages short-term memory, PostgreSQL stores conversations, and React provides the UI. Docker Compose orchestrates everything.

📐 Architecture

  • Backend (Flask): Handles user requests, moderation, rate-limiting, and OpenAI calls.
  • PostgreSQL: Stores user accounts, chat logs, and analytics.
  • Redis: Keeps recent chat history for fast context.
  • React Frontend: Chat UI with live updates.
  • Docker & Nginx: Deployment with TLS and scaling.

"The key to scaling AI chatbots is separating memory, storage, and compute into independent services."

— Engineering Best Practice

🛠️ Step-by-Step Implementation

  1. Setup environment variables and dependencies.
  2. Implement Flask backend with OpenAI API calls.
  3. Use Redis for short-term memory (last N messages).
  4. Store all chat history in PostgreSQL.
  5. Implement moderation & rate-limiting.
  6. Build a React chat UI that communicates with backend.
  7. Deploy using Docker Compose + Nginx reverse proxy.

📂 Complete Project Code

Below is the full set of code files. Copy them into your project structure and adapt as needed.


# --- File: backend/.env.example
OPENAI_API_KEY=your_openai_api_key
DATABASE_URL=postgresql://postgres:postgres@db:5432/chatdb
REDIS_URL=redis://redis:6379/0
SECRET_KEY=super-secret
MODEL_NAME=gpt-4.1
TEMPERATURE=0.3
MAX_TOKENS=600

# --- File: backend/app.py
import os
import json
import logging
from datetime import datetime

import redis
from dotenv import load_dotenv
from flask import Flask, request, jsonify
from flask_cors import CORS
from flask_sqlalchemy import SQLAlchemy
from openai import OpenAI

load_dotenv()
app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = os.getenv("DATABASE_URL")
app.config["SECRET_KEY"] = os.getenv("SECRET_KEY")
db = SQLAlchemy(app)
CORS(app)
r = redis.from_url(os.getenv("REDIS_URL"), decode_responses=True)
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

class Message(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.String(50))
    role = db.Column(db.String(10))
    content = db.Column(db.Text)
    created_at = db.Column(db.DateTime, default=datetime.utcnow)

# Flask-SQLAlchemy 3.x requires an application context for create_all()
with app.app_context():
    db.create_all()

def push_memory(uid, role, content, limit=12):
    """Append a message to the user's Redis list, keeping only the last `limit`."""
    key = f"chat:{uid}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.ltrim(key, -limit, -1)

def get_memory(uid):
    """Return the user's recent messages from Redis, oldest first."""
    items = r.lrange(f"chat:{uid}", 0, -1)
    return [json.loads(x) for x in items]

@app.route("/api/v1/chat", methods=["POST"])
def chat():
    data = request.get_json(silent=True) or {}
    uid = str(data.get("user_id", "guest"))
    msg = (data.get("message") or "").strip()
    if not msg:
        return jsonify({"error": "empty message"}), 400

    # Build the prompt: system message + recent Redis memory + new user message
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    messages.extend(get_memory(uid))
    messages.append({"role": "user", "content": msg})

    try:
        resp = client.chat.completions.create(
            model=os.getenv("MODEL_NAME", "gpt-4.1"),
            messages=messages,
            temperature=float(os.getenv("TEMPERATURE", "0.3")),
            max_tokens=int(os.getenv("MAX_TOKENS", "600")),
        )
        reply = resp.choices[0].message.content
        # Persist both sides of the exchange in PostgreSQL...
        db.session.add(Message(user_id=uid, role="user", content=msg))
        db.session.add(Message(user_id=uid, role="assistant", content=reply))
        db.session.commit()
        # ...and in Redis for short-term context
        push_memory(uid, "user", msg)
        push_memory(uid, "assistant", reply)
        return jsonify({"reply": reply})
    except Exception as e:
        logging.exception("OpenAI error")
        return jsonify({"error": "OpenAI request failed", "detail": str(e)}), 500

@app.route("/api/v1/history/<uid>")
def history(uid):
    return jsonify({"messages": get_memory(uid)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)

# --- File: backend/requirements.txt
Flask
flask-cors
Flask-SQLAlchemy
psycopg2-binary
redis
openai>=1.0
python-dotenv

# --- File: docker-compose.yml
version: "3.8"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: chatdb
    volumes: ["db_data:/var/lib/postgresql/data"]
  redis:
    image: redis:7
  backend:
    build: ./backend
    ports: ["5000:5000"]
    env_file: ./backend/.env
    depends_on: [db, redis]
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
    depends_on: [backend]
volumes:
  db_data:
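The docker-compose.yml above builds images from ./backend and ./frontend, but the guide doesn't list the Dockerfiles themselves. Here are minimal sketches you could start from; the base images, and the choice to serve the frontend with its dev server rather than a production build, are my assumptions. Note also that `depends_on` only orders container startup and does not wait for PostgreSQL to accept connections, so you may want `restart: on-failure` on the backend service.

```dockerfile
# --- File: backend/Dockerfile (hypothetical minimal image)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
CMD ["python", "app.py"]

# --- File: frontend/Dockerfile (hypothetical; runs the dev server for simplicity)
# For production, build static assets and serve them via Nginx instead.
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "run", "dev", "--", "--host"]
```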

# --- File: frontend/src/App.jsx
import React, {useState,useEffect,useRef} from "react";

const API="/api/v1/chat";

function Msg({role,content}){
  const me=role==="user";
  return (
    <div style={{display:"flex",justifyContent:me?"flex-end":"flex-start"}}>
      <div style={{
        background:me?"#3498db":"#ecf0f1",
        color:me?"#fff":"#333",
        padding:"8px 12px",borderRadius:8,margin:4,maxWidth:"70%"
      }}>{content}</div>
    </div>
  );
}

export default function App(){
  const [msgs,setMsgs]=useState([]);
  const [text,setText]=useState("");
  const ref=useRef();

  useEffect(()=>{ref.current?.scrollIntoView({behavior:"smooth"});},[msgs]);

  const send=async()=>{
    const content=text.trim();
    if(!content) return;
    setMsgs(m=>[...m,{role:"user",content}]);
    setText("");
    try{
      // user_id is a string to match the backend's String(50) column
      const res=await fetch(API,{method:"POST",headers:{"Content-Type":"application/json"},body:JSON.stringify({user_id:"guest",message:content})});
      const d=await res.json();
      setMsgs(m=>[...m,{role:"assistant",content:d.reply||d.error||"Error"}]);
    }catch(e){
      setMsgs(m=>[...m,{role:"assistant",content:"Network error"}]);
    }
  };

  return (
    <div style={{maxWidth:600,margin:"auto",padding:16}}>
      <h2>AI Chatbot</h2>
      <div style={{border:"1px solid #ccc",height:400,overflowY:"auto",padding:8}}>
        {msgs.map((m,i)=><Msg key={i} {...m}/>)}
        <div ref={ref}/>
      </div>
      <div style={{display:"flex",marginTop:8}}>
        <input value={text} onChange={e=>setText(e.target.value)} style={{flex:1,padding:8}} placeholder="Type..."/>
        <button onClick={send} style={{marginLeft:8,padding:"8px 16px"}}>Send</button>
      </div>
    </div>
  );
}

🔐 Security & Scaling

  • Never expose API keys in frontend — always proxy via backend.
  • Use monitoring — track token usage and set billing alerts.
  • Horizontal scaling — run multiple backend containers behind Nginx.
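The rate-limiting mentioned above isn't implemented in the backend code. As one possible sketch, here is a sliding-window limiter in plain Python; in the multi-container setup you would back it with Redis (e.g. sorted sets) so all backend replicas share counts, but the core logic is the same. The class name and defaults are illustrative, not part of the project code.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most `limit` requests per `window` seconds per key."""

    def __init__(self, limit=20, window=60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have aged out of the window
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit; reject without recording
        q.append(now)
        return True
```

In the Flask app this could be called from a `before_request` hook keyed on `user_id` or client IP, returning HTTP 429 when `allow()` is False.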

⚙️ How to Set Up the Chatbot (Step by Step)

Now that you have the full code, let’s configure and run it locally (or in the cloud). Follow these steps carefully to ensure everything works as expected.

  1. Clone or Download the project files into your local machine.
  2. Install Docker & Docker Compose (if not already installed).
  3. Create a new .env file inside the backend folder and copy contents from .env.example.
  4. Add your OpenAI API Key in .env.
  5. Run docker compose up --build (or docker-compose up --build on older installs) to start all services.
  6. Open http://localhost:3000 to access the chatbot UI.

Info:
By default, the PostgreSQL and Redis containers start automatically and Flask connects to them. The React app calls the backend through the relative path /api/v1/chat, so requests to /api must be proxied to Flask on port 5000 — by the frontend dev server during development, or by Nginx in production.
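Since the frontend fetches the relative path /api/v1/chat, the dev server needs a proxy rule for /api. Assuming a Vite setup (suggested by npm run dev — with Create React App you would use the "proxy" field in package.json instead), a configuration could look like this:

```javascript
// vite.config.js — hypothetical dev-server proxy so fetch("/api/...") reaches Flask
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    port: 3000,
    proxy: {
      // Forward all /api requests to the Flask backend
      "/api": "http://localhost:5000",
    },
  },
});
```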

📥 Installing Prerequisites (Optional, No Docker)

If you don’t want to use Docker, you can manually install everything:


# Backend setup
cd backend
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Without Docker, point the backend at local services instead of the
# Docker hostnames (db, redis) used in .env.example:
export DATABASE_URL=postgresql://postgres:postgres@localhost:5432/chatdb
export REDIS_URL=redis://localhost:6379/0

# PostgreSQL & Redis (via system install)
sudo service postgresql start
sudo service redis-server start
createdb -U postgres chatdb   # create the database once

# Run Flask server
python app.py

# Frontend setup (in a second terminal)
cd frontend
npm install
npm run dev

💻 How to Use the Chatbot

Once everything is running, you can start chatting with your AI assistant. Here’s how it works:

  1. Open your browser and go to http://localhost:3000.
  2. Type a message in the input box and press Send.
  3. The Flask backend forwards your query to the OpenAI API.
  4. Redis stores your last 12 messages for short-term memory.
  5. PostgreSQL stores the full conversation history permanently.
  6. The assistant replies in real time, and the UI updates instantly.

🔍 Example Conversation


You: Hello, chatbot!
Bot: Hi there 👋 I'm your AI assistant. How can I help you today?

You: Write me a short Python script that reverses a string.
Bot: Sure! Here's an example:

def reverse_string(s):
    return s[::-1]

print(reverse_string("Maxoncodes"))

Tip:
Since all chats are stored in PostgreSQL, you can later add features like analytics, search, or exporting conversations.
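As a taste of the analytics the tip above mentions, here is an illustrative query counting messages per user per day. It assumes the default table name message that Flask-SQLAlchemy derives from the Message model; adjust if you set __tablename__ explicitly.

```sql
-- Hypothetical analytics query: messages per user per day
SELECT user_id,
       date_trunc('day', created_at) AS day,
       count(*) AS messages
FROM message
GROUP BY user_id, day
ORDER BY day DESC;
```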

🌐 Deploying to Production

To make your chatbot available online:

  • Use Nginx reverse proxy to route traffic to Flask (backend) and React (frontend).
  • Enable HTTPS (TLS/SSL) with Let’s Encrypt.
  • Host on VPS (DigitalOcean, AWS, GCP, etc.).
  • Scale by running multiple backend containers behind a load balancer.
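The bullets above can be tied together with an Nginx server block. This is a sketch, not a complete config: the domain, certificate paths, and the upstream hostnames (which assume Nginx runs inside the same Docker Compose network as the backend and frontend services) are all placeholders to adapt.

```nginx
# Hypothetical reverse-proxy config; replace example.com and cert paths
server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # API traffic goes to the Flask backend container
    location /api/ {
        proxy_pass http://backend:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Everything else goes to the React frontend
    location / {
        proxy_pass http://frontend:3000;
    }
}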

“Deploying a chatbot is not just about writing code—it’s about ensuring security, scalability, and reliability for real users.”

— Maxoncodes

📊 Monitoring & Best Practices

  • Logging & Error Handling: Capture exceptions and store logs.
  • Analytics: Track user sessions, number of messages, and response times.
  • Cost Control: Limit max tokens and temperature to avoid high bills.
  • Security: Use API authentication and rate limiting.
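For the cost-control point, note that the OpenAI v1 SDK response exposes a usage object with prompt and completion token counts, which you can log per request. A small helper like the one below turns those counts into a rough dollar estimate; the per-1k-token prices are placeholder arguments, not real rates — always check the current OpenAI pricing page for your model.

```python
def estimate_cost(prompt_tokens, completion_tokens, price_in_per_1k, price_out_per_1k):
    """Rough USD cost of one request, given per-1k-token prices (placeholders)."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k
```

In the chat endpoint you could call this with `resp.usage.prompt_tokens` and `resp.usage.completion_tokens` and log the result alongside each message for later analytics.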

✅ Conclusion

You now have a working chatbot with Flask + Redis + PostgreSQL + React. It supports real memory, persistent storage, and can be deployed anywhere with Docker. This setup is ready for production and can be extended with more features like voice input, multi-language support, or even fine-tuned models.

🚀 Go ahead and deploy your AI chatbot today!

📌 This guide was originally published by MaxonCodes on www.maxoncodes.com.
