
Let's Combine LangGraph with Azure OpenAI #ai #langgraph #azure #openai #llm #python

Introduction

LangGraph is a library for building stateful agents and workflows powered by LLMs (Large Language Models).

The previous post, "LangGraphをLLMなしでちょっと触ってみよう" (trying out LangGraph without an LLM), wrote no LLM-related code at all; it contained only code for observing how LangGraph itself behaves. This post builds on that one, so if you haven't read it yet, please start there.

Now that the LangGraph code makes sense, it's time to wire in the LLM-related code. This post uses Azure OpenAI.

Recap: LangGraph-only code

First, let's look again at the LangGraph-only code, with no LLM-related code included.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

# State passed between nodes
class State(TypedDict):
    question: str
    answer: str

# Declare a node
def node(state: State):
    print("Q: " + state["question"])
    return {"answer": "answer"}

# Build the graph
workflow = StateGraph(State)

# Add the node to the graph
workflow.add_node("node", node)

# Add edges (expressing which node hands control to which node)
workflow.add_edge(START, "node")
workflow.add_edge("node", END)

# Compile the graph
app = workflow.compile()

# Visualize the graph
app.get_graph().print_ascii()

# Run the graph
res = app.invoke({"question": "question"}, debug=True)
print("A: " + res["answer"])

It takes a question as input and outputs an answer. Running it produces the following (debug output is enabled).

+-----------+
| __start__ |
+-----------+
      *
      *
      *
  +------+
  | node |
  +------+
      *
      *
      *
 +---------+
 | __end__ |
 +---------+
[-1:checkpoint] State at the end of step -1:
{}
[0:tasks] Starting step 0 with 1 task:
- __start__ -> {'question': 'question'}
[0:writes] Finished step 0 with writes to 1 channel:
- question -> 'question'
[0:checkpoint] State at the end of step 0:
{'question': 'question'}
[1:tasks] Starting step 1 with 1 task:
- node -> {'question': 'question'}
Q: question
[1:writes] Finished step 1 with writes to 1 channel:
- answer -> 'answer'
[1:checkpoint] State at the end of step 1:
{'answer': 'answer', 'question': 'question'}
A: answer

For details, see "LangGraphをLLMなしでちょっと触ってみよう".
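One detail worth noticing in the debug log above: the node returns only the answer key, yet the final state at step 1 contains both question and answer. LangGraph merges each node's partial return value into the accumulated state. Conceptually it behaves like this plain-Python sketch (my illustration, not LangGraph internals):

```python
# Plain-Python sketch of how LangGraph accumulates state:
# each node returns a partial update, which is merged into the state.
state = {"question": "question"}   # initial input to app.invoke()
update = {"answer": "answer"}      # what node() returned
state = {**state, **update}        # merged, as seen in step 1's checkpoint
print(state)  # {'question': 'question', 'answer': 'answer'}
```

This is why node functions can return just the keys they change rather than the whole state.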

Azure OpenAI-only code

Next, a Python script that simply asks Azure OpenAI "What is Docker?" and prints the answer it returns. It uses the OpenAI Python API library to access Azure OpenAI, and python-dotenv to load the API key and other connection settings.

import os
from openai import AzureOpenAI
from dotenv import load_dotenv
load_dotenv()

def send_prompt(prompt):
    response = None
    try:
        client = AzureOpenAI(
            api_key = os.getenv("AZURE_OPENAI_API_KEY"),
            azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
            api_version = os.getenv("AZURE_OPENAI_API_VERSION")
        )
        response = client.chat.completions.create(
            model = os.getenv("AZURE_OPENAI_MODEL_NAME"),
            messages = [{"role": "user", "content": prompt}],
            max_tokens = 1024,
            temperature = 0.95
        )
    except Exception as e:
        print(f"error: {e}")
        response = None
    return response

res = send_prompt("What is Docker?")
if res:
    model_dump = res.model_dump()
    print({"response": model_dump['choices'][0]['message']['content']})

Running this produces a result like the following.

{'response': 'Docker is an open-source platform designed to make it easier to create, deploy, and run applications by using containers. Containers allow developers to package an application with all of the parts it needs, such as libraries and other dependencies, and ship it all out as one package. This means that the application will run on any other Linux machine regardless of any customized settings that machine might have that could differ from the machine used for writing and testing the code.\n\nSome key features and concepts associated with Docker include:\n\n1. **Containers**: The central concept in Docker. A container is a lightweight, standalone, executable package that includes everything needed to run a piece of software, including the code, runtime, system tools, system libraries, and settings.\n\n2. **Images**: Docker containers are created from Docker images. An image is an immutable file that is essentially a snapshot of a container. Images serve as the starting point for running a container.\n\n3. **Dockerfile**: This is a text file that contains all the commands a user could call on the command line to assemble an image. Using `docker build` users can create an automated build that executes several command-line instructions in succession.\n\n4. **Docker Hub**: A service provided by Docker for finding and sharing container images with your team. It is a public repository for Docker images.\n\n5. **Docker Engine**: This is the core part of Docker. It is a client-server application with:\n    - A server which is a type of long-running program called a daemon process (the `dockerd` command).\n    - A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.\n    - A command-line interface (CLI) client (the `docker` command).\n\n6. **Docker Compose**: A tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services.\n\n7. 
**Volumes and Bind Mounts**: Docker provides options for managing persistent data used by Docker containers. Volumes are managed by Docker and are isolated from the core functionality of the host machine, whereas bind mounts may be anywhere on the host system.\n\n8. **Networking**: Docker has a powerful networking interface that allows containers to communicate with the world outside, as well as with other containers.\n\n9. **Orchestration**: For managing and scaling container deployments, Docker works well with container orchestration tools like Kubernetes.\n\nDocker has become a critical tool for developers, sysadmins, and DevOps professionals for its ability to package and run applications in a way that is portable and consistent across various environments. This helps alleviate the "it works on my machine" headache by providing a consistent environment from development to production.'}
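For reference, the script expects its settings in a `.env` file next to it, loaded by python-dotenv. A sketch of its shape (the variable names match the os.getenv calls above; the values are placeholders, and the API version shown is just an example; use your own resource's values):

```shell
# .env -- placeholder values; substitute your own Azure OpenAI settings
AZURE_OPENAI_API_KEY=<your-api-key>
AZURE_OPENAI_ENDPOINT=https://<your-resource>.openai.azure.com/
AZURE_OPENAI_API_VERSION=2024-02-01
AZURE_OPENAI_MODEL_NAME=<your-deployment-name>
```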

Embedding Azure OpenAI into LangGraph

So far we have looked at two separate pieces of code: LangGraph-only and Azure OpenAI-only. Let's combine them. The key part is the node function, in which the LangGraph-only code hardcoded the "answer to the question":

def node(state: State):
    print("Q: " + state["question"])
    return {"answer": "answer"}

We rewrite it to call the send_prompt function, which asks Azure OpenAI for the answer. Here is the complete code.

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END

import os
from openai import AzureOpenAI
from dotenv import load_dotenv
load_dotenv()

# Ask Azure OpenAI and return the response
def send_prompt(prompt):
    response = None
    try:
        client = AzureOpenAI(
            api_key = os.getenv("AZURE_OPENAI_API_KEY"),
            azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
            api_version = os.getenv("AZURE_OPENAI_API_VERSION")
        )
        response = client.chat.completions.create(
            model = os.getenv("AZURE_OPENAI_MODEL_NAME"),
            messages = [{"role": "user", "content": prompt}],
            max_tokens = 1024,
            temperature = 0.95
        )
    except Exception as e:
        print(f"error: {e}")
        response = None
    return response

# State passed between nodes
class State(TypedDict):
    question: str
    answer: str

# Declare a node
def node(state: State):
    res = send_prompt(state["question"])
    if res is None:
        return {"answer": ""}
    else:
        model_dump = res.model_dump()
        return {"answer": model_dump['choices'][0]['message']['content']}

# Build the graph
workflow = StateGraph(State)

# Add the node to the graph
workflow.add_node("node", node)

# Add edges (expressing which node hands control to which node)
workflow.add_edge(START, "node")
workflow.add_edge("node", END)

# Compile the graph
app = workflow.compile()

# Visualize the graph
app.get_graph().print_ascii()

# Run the graph
print(app.invoke({"question": "What is Docker?"}))  # pass debug=True for step-by-step logs

Running this produces the following.

+-----------+
| __start__ |
+-----------+
      *
      *
      *
  +------+
  | node |
  +------+
      *
      *
      *
 +---------+
 | __end__ |
 +---------+
{'question': 'What is Docker?', 'answer': "Docker is an open-source platform that automates the deployment, scaling, and management of applications within containers. First released in 2013, Docker popularized containerization and has become an integral part of many developers' workflow for creating, deploying, and running applications by using containers.\n\nContainers allow developers to package an application with all its dependencies—libraries, binaries, and configuration files—into a single package. This ensures that the application runs quickly and reliably in different computing environments, whether on-premises, in the cloud, or on a developer’s personal machine.\n\nKey concepts and components of Docker include:\n\n- **Images**: These are read-only templates with instructions for creating a Docker container. An image includes the application code, libraries, tools, dependencies, and other files needed to run the application.\n\n- **Containers**: A running instance of a Docker image. Containers isolate the application from the underlying system, ensuring consistency regardless of where they are deployed.\n\n- **Dockerfile**: A text document that contains all the commands a user could call on the command line to assemble an image. Docker can build images automatically by reading the instructions from a Dockerfile.\n\n- **Docker Hub**: A service provided by Docker for finding and sharing container images with your team. It is the world's largest library and community for container images.\n\n- **Docker Engine**: The core of Docker, it’s a client-server application with a server side that is a long-running program called a daemon process (the `dockerd` command).\n\n- **Docker Compose**: A tool that defines and runs multi-container Docker applications. With Compose, you use a YAML file to configure your application's services.\n\nDocker's simple and straightforward syntax allows for easier scripting, automation, and deployment of services. 
It has had a profound impact on software development, testing, and deployment practices, enabling microservices architectures and facilitating DevOps and continuous integration/continuous deployment (CI/CD) workflows.\n\nAs of my knowledge cutoff date in March 2023, Docker continues to be widely used, though it faces competition from other containerization technologies and container orchestration platforms like Kubernetes."}

It works nicely.

Summary

In this post we embedded an equally simple Azure OpenAI snippet into a simple LangGraph program. Honestly, with a graph as trivial as "start → process → end", LangGraph offers little benefit, but that should change once we start combining multiple processing steps, branching, loops, and parallel execution. Also, starting from LangGraph-only code, rather than writing LangGraph-plus-Azure-OpenAI code from the outset, made things easier to understand, I think. In future posts I plan to keep adding processing to this code.

Author

In addition to technical areas such as Chef, Docker, and Mirantis products, I also work on business improvements such as how to run meetings and how to write documents. I co-authored "Chef活用ガイド" and am a Debian Official Developer.

Articles by Daisuke Higuchi
