🚀 Your First Step with the Gemini API
🔑 Step 1: Getting Your API Key
Before writing any code, you need authentication credentials to access the Gemini API.
Quick Setup Process:
- Navigate to Google AI Studio
- Sign in with your Google account
- Generate a new API key in the “Get API key” section
⚠️ Security First!
Treat your API key like a password. Never hardcode it directly into your scripts. We’ll use environment variables for secure storage.
⚙️ Step 2: Setting Up Your Development Environment
Required Libraries
Install these Python packages to get started:
pip install google-generativeai python-dotenv
What each library does:
- google-generativeai → Official Python SDK for the Gemini API
- python-dotenv → Secure environment variable management
Environment Configuration
Create a .env file in your project root:
GOOGLE_API_KEY="your_api_key_here"
📝 Note: Replace “your_api_key_here” with your actual API key from Step 1.
Initial Connection Code
import os
import google.generativeai as genai
from dotenv import load_dotenv
# Load environment variables from the .env file
load_dotenv()
# Get the API key
api_key = os.getenv("GOOGLE_API_KEY")
# Configure the SDK with your API key
genai.configure(api_key=api_key)
✅ You’re now authenticated and ready to build!
🎯 Step 3: Making Your First API Request
Let’s create a function that sends prompts to the Gemini model:
def ai_chat(prompt):
    try:
        # Instantiate the model
        model = genai.GenerativeModel('gemini-1.5-flash')
        # Send the prompt and get a response
        response = model.generate_content(prompt)
        # Return the generated text
        return response.text
    except Exception as e:
        # Handle potential errors gracefully
        print(f"An error occurred: {e}")
        return "Sorry, I couldn't generate a response."
🚀 Key Improvements to Consider:
Efficiency Issue: Creating a new GenerativeModel instance for every call is wasteful. We’ll fix this in the next step.
Error Handling: The try…except block catches API failures, network issues, and safety policy violations.
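A broad except Exception is fine for a tutorial, but transient failures (rate limits, network blips) are often worth retrying before giving up. Here is a minimal, SDK-free sketch of a generic retry-with-backoff wrapper; the function name is illustrative, not part of the Gemini SDK, and in real code you would catch only the transient exception types your client actually raises:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Hypothetical usage with the model from above:
# with_retries(lambda: model.generate_content(prompt).text)
```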
🤖 Step 4: Building a Smart Chatbot with Personality
The real power comes from giving your AI a specific persona using system prompts.
Complete Implementation:
import os
import google.generativeai as genai
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
api_key = os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=api_key)
# Create a single model instance for reuse (more efficient!)
model = genai.GenerativeModel('gemini-1.5-flash')
def ai_chat(prompt):
    try:
        response = model.generate_content(prompt)
        return response.text
    except Exception as e:
        print(f"An error occurred: {e}")
        return "Sorry, I couldn't generate a response."

def get_answer(user_text, persona_prompt):
    # Combine the persona prompt with the user's input
    full_prompt = f"{persona_prompt}\n\n{user_text}"
    output = ai_chat(full_prompt)
    return output
if __name__ == '__main__':
    # Define our chatbot's persona
    system_prompt = "You are an expert software engineer that prefers functional programming."
    # Get user input
    user_input = input("Enter your message: ")
    # Get the AI's response with the specified persona
    print(get_answer(user_input, system_prompt))
✨ What We’ve Accomplished
Our final implementation includes:
- 🔒 Secure API key management using environment variables
- ⚡ Optimized performance with a reusable model instance
- 🛡️ Robust error handling for production reliability
- 🎭 Customizable AI personality through system prompts
🎯 Next Steps
This journey, from a blank page to a functional, persona-driven chatbot, shows just how accessible powerful AI tools have become. What will you create with the Gemini API?
💬 Building True Conversations: Adding Memory to Your Gemini Chatbot
🎯 Level Up Your AI! Transform your single-turn chatbot into a conversational AI that remembers previous interactions and maintains context throughout the conversation.
🧠 The Core Concept: Multi-turn Conversations
Understanding API Memory
⚠️ Important to Know
The Gemini API doesn’t automatically remember past interactions. Each API call is stateless — you must send the entire conversation history with every request.
The Good News: The SDK makes this surprisingly simple with the ChatSession object! 🎉
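To see what "stateless" means in practice, here is a hedged sketch of the bookkeeping you would otherwise do by hand: keep a list of turns in the Gemini-style {'role', 'parts'} format and resend all of it with every request. The append_turn helper is illustrative, not part of the SDK:

```python
def append_turn(history, role, text):
    """Append one conversation turn in the Gemini-style history format."""
    history.append({'role': role, 'parts': [text]})
    return history

# Manually maintained conversation state
history = []
append_turn(history, 'user', 'What is a pure function?')
append_turn(history, 'model', 'A function with no side effects...')
append_turn(history, 'user', 'Give me an example.')

# Every request must carry the whole history -- the API itself remembers nothing.
# In real code, roughly: response = model.generate_content(history)
print(len(history))  # 3 turns sent with the next call
```

The ChatSession object does exactly this bookkeeping for you, which is why the code below stays so short.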
🚀 Step 1: Initialize the Chat Session
Replace direct model.generate_content calls with a proper chat session:
# Create a single model instance for reuse
model = genai.GenerativeModel('gemini-1.5-flash')
# Start a new chat session
chat = model.start_chat(history=[])
📝 Understanding the History Parameter
The history parameter accepts a list of Content objects. You can pre-populate it to:
- Set a system prompt from the beginning
- Continue from a previous conversation
- Provide initial context
💡 Step 2: Sending Messages with Automatic History Management
The ChatSession automatically handles conversation history for you!
Updated Chat Function
def ai_chat(prompt, chat_session):
    try:
        response = chat_session.send_message(prompt)
        return response.text
    except Exception as e:
        print(f"An error occurred: {e}")
        return "Sorry, I couldn't generate a response."
✨ What Happens Automatically:
- User message gets added to chat.history
- Model response gets added to chat.history
- Full context is maintained across turns
- No manual history management required
🔧 Step 3: Complete Implementation
Here’s your full conversational chatbot with persistent memory:
import os
import google.generativeai as genai
from dotenv import load_dotenv
# --- Secure API Key Management ---
load_dotenv()
api_key = os.getenv("GOOGLE_API_KEY")
genai.configure(api_key=api_key)
# --- Define a reusable model instance ---
model = genai.GenerativeModel('gemini-1.5-flash')
# --- Start a chat session with system prompt in history ---
initial_history = [
    {
        'role': 'user',
        'parts': ["You are an expert software engineer that prefers functional programming."]
    },
    {
        'role': 'model',
        'parts': ["Understood. I will respond to all queries with a functional programming perspective."]
    }
]
chat = model.start_chat(history=initial_history)
def ai_chat(prompt, chat_session):
    """Sends a message to the chat session and returns the response."""
    try:
        response = chat_session.send_message(prompt)
        return response.text
    except Exception as e:
        print(f"An error occurred: {e}")
        return "Sorry, I couldn't generate a response."
if __name__ == '__main__':
    print("🤖 Welcome to the functional programming expert chatbot!")
    print("💡 Type 'quit' to exit, 'history' to see conversation history.\n")
    while True:
        user_input = input("You: ")
        if user_input.lower() == 'quit':
            print("👋 Goodbye!")
            break
        if user_input.lower() == 'history':
            print("\n--- 📜 Full Chat History ---")
            for message in chat.history:
                role_emoji = "🧑" if message.role == "user" else "🤖"
                print(f"{role_emoji} {message.role.title()}: {message.parts[0].text}")
            print("------------------------\n")
            continue
        # Send the user's message and get a response
        bot_response = ai_chat(user_input, chat)
        print(f"🤖 Bot: {bot_response}\n")
🔍 Key Differences from Single-Turn Approach
| Old Approach | New Approach | Why It Matters |
| --- | --- | --- |
| model.generate_content() | model.start_chat() | Creates persistent conversation context |
| Manual prompt management | chat.send_message() | Automatic history tracking |
| No memory between calls | chat.history attribute | Full conversation context maintained |
| Stateless interactions | Stateful chat sessions | Enables natural conversations |
💎 Advanced Features You Can Add
🎭 Dynamic Persona Changes
def change_persona(chat_session, new_persona):
    """Inject a new system message to change the AI's behavior."""
    response = chat_session.send_message(f"From now on: {new_persona}")
    return response.text
💾 Save and Load Conversations
import json

def save_conversation(chat_session, filename):
    """Save chat history to a file."""
    history_data = []
    for message in chat_session.history:
        history_data.append({
            'role': message.role,
            # Part objects aren't JSON-serializable, so store their text
            'parts': [part.text for part in message.parts]
        })
    with open(filename, 'w') as f:
        json.dump(history_data, f, indent=2)

def load_conversation(filename):
    """Load chat history from a file."""
    with open(filename, 'r') as f:
        history_data = json.load(f)
    return model.start_chat(history=history_data)
🧹 Conversation Cleanup
def summarize_and_truncate(chat_session, max_turns=20):
    """Keep conversations manageable by summarizing old messages."""
    if len(chat_session.history) > max_turns:
        # Get the early part of the conversation to condense
        early_history = chat_session.history[:10]
        summary_prompt = "Summarize this conversation in 2-3 sentences:"
        # A full implementation would send summary_prompt plus early_history,
        # then start a new session seeded with the summary
    return chat_session
✅ What You’ve Achieved
Your chatbot now has:
- 🧠 Persistent memory across conversation turns
- 🎯 Context awareness for more relevant responses
- 🔄 Natural conversation flow with follow-up questions
- 📊 Full history tracking for debugging and analysis
- 🛠️ Extensible architecture for advanced features
🚀 Next Level Enhancements
💡 Pro Tips for Production
Consider implementing these features for a production chatbot:
Performance Optimizations:
- Conversation summarization for long chats
- Token counting to manage API costs
- Response caching for common queries
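Response caching, at its simplest, is memoization keyed by the prompt. A hedged, SDK-free sketch using the standard library (call_model is a stand-in for the real API call; in production you would also decide whether identical prompts should truly share one answer):

```python
from functools import lru_cache

# Stand-in for the real call; replace with model.generate_content(prompt).text
def call_model(prompt):
    return f"response to: {prompt}"

@lru_cache(maxsize=128)
def cached_chat(prompt):
    """Return a cached answer for repeated prompts, calling the model only once."""
    return call_model(prompt)

cached_chat("What is recursion?")  # calls the model
cached_chat("What is recursion?")  # served from cache, no API cost
print(cached_chat.cache_info().hits)  # 1
```

Caching only pays off for repeated, deterministic queries; for free-form chat, conversation summarization usually saves more tokens.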
User Experience:
- Typing indicators during response generation
- Message threading for complex topics
- Export/import conversation functionality
Advanced AI Features:
- Multi-modal inputs (text + images)
- Function calling for tool integration
- Custom safety settings for your use case
🎉 You Did It: Ready for Real Conversations!
Your AI can now engage in meaningful, context-aware conversations. The foundation is set for building sophisticated conversational applications!
From stateless single responses to contextual conversations — you’ve just unlocked the true potential of AI chatbots! 🎊
This article was originally published on an external platform:
Read Full Article on medium.com/@pawan.pk980/your-first-steps-with-the-gemini-api-b9cd89630563