Azure OpenAI Service allows developers to access OpenAI's powerful language models, such as GPT-3.5 Turbo and GPT-4, through Microsoft's cloud platform. By combining OpenAI's models with the scalability and security of Azure, you can build robust applications such as chatbots, content generators, and virtual assistants. This guide shows you how to create a chatbot using Azure OpenAI.
Step 1: Set Up Azure Account and OpenAI Access
Before you start coding, you need to set up your Azure environment.
1. **Create an Azure Account**:
– Visit the [Azure Portal](https://portal.azure.com/) and sign up if you don’t have an account.
– You may be eligible for free credits to use Azure services.
2. **Apply for Azure OpenAI Access**:
– Access to OpenAI models in Azure may require registration or approval. See the [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/cognitive-services/openai/overview) for details on requesting access.
3. **Create an Azure OpenAI Resource**:
– Once approved, navigate to the Azure Portal and create an OpenAI resource.
– Go to **Create a resource** > **AI + Machine Learning** > **Azure OpenAI**.
– Follow the steps to set up the resource, selecting the region and resource group.
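If you prefer the command line, the resource can also be created with the Azure CLI. The resource group, resource name, and region below are placeholders; adjust them to your own subscription (model availability varies by region):
```bash
# Create a resource group, then an Azure OpenAI resource inside it
az group create --name my-openai-rg --location eastus
az cognitiveservices account create \
  --name my-openai-resource \
  --resource-group my-openai-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus
```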
Step 2: Retrieve the API Key and Endpoint
After setting up the OpenAI resource in Azure, you’ll need the API key and endpoint to communicate with OpenAI models.
1. Go to your **Azure OpenAI resource** in the Azure portal.
2. Under **Keys and Endpoint**, copy your API key and endpoint URL. These will be needed to make requests to the OpenAI API via Azure.
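As an alternative to the portal, the same values can be retrieved with the Azure CLI (the resource and group names below are the placeholders from Step 1):
```bash
# Endpoint URL of the resource
az cognitiveservices account show \
  --name my-openai-resource --resource-group my-openai-rg \
  --query properties.endpoint --output tsv

# API keys (either key works)
az cognitiveservices account keys list \
  --name my-openai-resource --resource-group my-openai-rg
```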
Step 3: Install Required Libraries
To interact with Azure OpenAI from Python, install the required libraries. The scripts in this guide use the pre-1.0 `openai` SDK interface (the `openai.api_type = "azure"` style), so pin the package to a 0.28.x release:
```bash
pip install "openai==0.28.1" azure-identity
```
– `openai` library: To communicate with the models deployed in your Azure OpenAI resource.
– `azure-identity` library: Optional here; it enables Azure Active Directory authentication if you prefer it over API keys (the examples below use key-based authentication).
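To confirm that a compatible SDK version is installed (the scripts below assume the pre-1.0 interface), you can run a quick check:
```python
# Print the installed SDK version; expect a 0.28.x release for this guide
import openai

print(openai.__version__)
```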
Step 4: Write Python Code to Build the Chatbot
Now that you have the API key and endpoint, you can build the chatbot with a simple Python script. GPT-4 and GPT-3.5 Turbo are chat models, so the script calls the chat completions API (`openai.ChatCompletion.create`) and passes the name of your Azure model deployment as `engine`:
```python
import os
import openai

# Configure the Azure OpenAI connection (key-based authentication)
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"  # Replace with the latest supported version

def chat_with_gpt(prompt):
    try:
        # Call the Azure OpenAI chat completions API
        response = openai.ChatCompletion.create(
            engine="gpt-4",  # The name of your Azure deployment (e.g. a GPT-4 or GPT-3.5 Turbo deployment)
            messages=[{"role": "user", "content": prompt}],
            max_tokens=150,
            n=1,
            stop=None,
            temperature=0.7,  # Adjust for more or less creative responses
        )
        # Extract the response text
        answer = response.choices[0].message["content"].strip()
        return answer
    except Exception as e:
        return f"Error occurred: {str(e)}"

# Main loop to interact with the chatbot
if __name__ == "__main__":
    print("ChatGPT: Hello! I am powered by Azure OpenAI. Type 'exit' to stop.")
    while True:
        user_input = input("You: ")
        if user_input.lower() == "exit":
            print("ChatGPT: Goodbye!")
            break
        response = chat_with_gpt(user_input)
        print(f"ChatGPT: {response}")
```
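The loop above sends each message in isolation, so the model has no memory of earlier turns. A minimal sketch for adding conversational context, reusing the imports and Azure configuration from the script above, is to keep a running list of messages and send it on every call:
```python
# Sketch: keep the conversation history so the model sees earlier turns
# (reuses the openai import and Azure configuration from the script above)
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_with_history(user_input):
    messages.append({"role": "user", "content": user_input})
    response = openai.ChatCompletion.create(
        engine="gpt-4",  # Your Azure deployment name (assumption)
        messages=messages,
        max_tokens=150,
        temperature=0.7,
    )
    answer = response.choices[0].message["content"].strip()
    messages.append({"role": "assistant", "content": answer})
    return answer
```
Note that the history grows with every turn and counts toward your token usage, so long conversations may need to be trimmed or summarized.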
Step 5: Set Environment Variables for Security
It’s a best practice to store sensitive information like API keys and endpoint URLs as environment variables. You can set them as follows:
1. On Windows, run the following commands in Command Prompt (these apply to the current session only; use `setx` instead to persist them):
```bash
set AZURE_OPENAI_API_KEY=your-api-key-here
set AZURE_OPENAI_ENDPOINT=https://your-openai-resource-name.openai.azure.com/
```
2. On macOS or Linux, add these lines to your `.bashrc` or `.zshrc` file, then open a new terminal or reload the file (for example, `source ~/.zshrc`):
```bash
export AZURE_OPENAI_API_KEY=your-api-key-here
export AZURE_OPENAI_ENDPOINT=https://your-openai-resource-name.openai.azure.com/
```
Then, in your Python script, you can access the environment variables using `os.getenv()` as shown in the code above.
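Since a missing variable only surfaces as a confusing authentication error at request time, a small guard at startup makes the failure explicit (a sketch, assuming the variable names used above):
```python
import os

# Fail fast if the required configuration is missing
for name in ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT"):
    if not os.getenv(name):
        raise RuntimeError(f"Environment variable {name} is not set")
```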
Step 6: Run the Script
Save the Python script as `azure_chatgpt.py` and run it in your terminal:
```bash
python azure_chatgpt.py
```
Now you can interact with your Azure OpenAI chatbot. Type your input, and the chatbot will respond using OpenAI’s GPT models hosted on Azure.
Step 7: Customize the Chatbot
Azure OpenAI allows various customizations to fit your needs. Here are some parameters you can tweak (a combined example follows the list):
1. **Engine Selection**:
– In Azure, the `engine` argument is the name of your model deployment rather than the raw model name. Deploy a model such as `gpt-4` or `gpt-35-turbo` (GPT-3.5 Turbo) in your resource and pass that deployment name to `openai.ChatCompletion.create()`.
2. **Max Tokens**:
– The `max_tokens` parameter controls the length of responses. Shorter responses are cheaper and faster, while longer responses consume more tokens and can provide detailed answers.
3. **Temperature**:
– Adjust the `temperature` parameter to control the randomness of responses. A value of `0.0` makes the bot more deterministic, while a value closer to `1.0` makes the responses more creative.
4. **Stop Sequences**:
– Define a `stop` parameter to control when the model stops generating text. This can be useful for more structured or formatted conversations.
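Here is how those options might look together in a single request, reusing the configuration from Step 4; the deployment name, prompt, and stop sequence below are placeholders to adapt to your setup:
```python
import openai

# Sketch: combining the customization options in one request
# (assumes openai.api_key / api_base / api_type / api_version are set as in Step 4)
response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # Your Azure deployment name (placeholder)
    messages=[{"role": "user", "content": "Summarize the benefits of cloud computing."}],
    max_tokens=100,         # Cap the length of the reply
    temperature=0.2,        # Lower values give more deterministic answers
    stop=["\nUser:"],       # Example stop sequence for turn-based formatting
)
print(response.choices[0].message["content"].strip())
```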
Step 8: Monitor Costs and Usage
Azure OpenAI is a paid service, and you are charged based on the number of tokens processed (both input and output tokens). It’s important to monitor your usage and cost by visiting the **Cost Management + Billing** section in the Azure portal.
– **Token Pricing**: Each model has different costs, and token usage is cumulative for both the prompt (input) and response (output).
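To estimate token usage before sending a request, you can count tokens locally with the `tiktoken` library (`pip install tiktoken`). This is an approximation; it assumes tiktoken's encoding for the base model matches the version deployed in your Azure resource:
```python
import tiktoken

# Approximate the prompt token count for a GPT-4-class model
encoding = tiktoken.encoding_for_model("gpt-4")
prompt = "Explain the difference between supervised and unsupervised learning."
print(len(encoding.encode(prompt)), "prompt tokens")
```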
Step 9: Deploy the Chatbot as a Web Application (Optional)
To make your chatbot accessible via the web, you can deploy it using a web framework like Flask. Here’s a basic example:
```python
from flask import Flask, request, jsonify
import openai
import os

app = Flask(__name__)

# Configure the Azure OpenAI connection
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_type = "azure"
openai.api_version = "2023-03-15-preview"

@app.route("/chat", methods=["POST"])
def chat():
    data = request.get_json(silent=True) or {}
    user_input = data.get("message", "")
    if user_input:
        # Forward the message to the chat completions API
        response = openai.ChatCompletion.create(
            engine="gpt-4",  # The name of your Azure deployment
            messages=[{"role": "user", "content": user_input}],
            max_tokens=150,
            temperature=0.7
        )
        answer = response.choices[0].message["content"].strip()
        return jsonify({"response": answer})
    else:
        return jsonify({"error": "No input provided"}), 400

if __name__ == "__main__":
    app.run(debug=True)
```
To install Flask and run the app, use the following commands:
```bash
pip install flask
python app.py
```
This will set up a simple web server that accepts POST requests with a `message` parameter and returns the chatbot’s response.
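With the server running locally (Flask listens on port 5000 by default), you can verify the endpoint with a simple test request, for example:
```bash
curl -X POST http://127.0.0.1:5000/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, who are you?"}'
```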
Conclusion
You’ve now successfully built a chatbot using Azure OpenAI. By integrating OpenAI models with Azure’s cloud infrastructure, you can leverage cutting-edge conversational AI at scale. Whether you’re building customer service tools, virtual assistants, or interactive applications, Azure OpenAI offers a flexible and secure environment to deploy GPT-powered solutions.