
PromptTemplate in Action with LangChain

 
Shubham_Patil
HPE Pro

In the previous blog, we covered the basics of GenAI and LangChain, learned how to run an open-source LLM model locally, and saw how to use ChatOpenAI from LangChain to generate a response. In this blog, we are going to cover how to use PromptTemplate from LangChain, and how to switch between models with the help of LangChain.

Pre-requisite

  1. Have an open-source LLM model available, or credentials created for a hosted model.
  2. Set up the IDE and install the required dependencies:
    1. langchain (0.3.26)
    2. langchain-ollama (0.3.4) (required for open-source LLM models)

There are different types of prompt templates available in the langchain library; they can be imported from langchain.prompts. For the example below, we are going to use the simplest one, PromptTemplate. In this example, an AzureChatOpenAI LLM model is used as the client, imported from a small helper module, as sketched below.
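The original post does not show the helper module, so here is a minimal sketch of what a model.py could look like. The deployment name, API version, and environment variable names are placeholders, not values from the original post; adjust them to your Azure OpenAI resource.

# model.py - a minimal sketch, not the original module
# Requires: pip install langchain langchain-openai
# Deployment name, API version, and env var names below are placeholders.
import os

from langchain_openai import AzureChatOpenAI

client = AzureChatOpenAI(
    azure_deployment="gpt-4o",  # your Azure deployment name
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",
)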

 

from model import client  # AzureChatOpenAI instance (see the sketch above)
from langchain.prompts import PromptTemplate

# A reusable prompt with a {country} placeholder
prompt_temp = PromptTemplate(
    template="""
        You are an intelligent bot,
        who will provide the capital of {country}
    """,
    input_variables=["country"]
)

# Fill in the placeholder, then send the finished prompt to the model
prompt = prompt_temp.format(country="USA")
print(client.invoke(prompt).content)

 Output:

The capital of the United States is **Washington, D.C.**

 

Let's try to understand the code line by line:

from langchain.prompts import PromptTemplate

PromptTemplate is used to define a reusable text prompt structure where some parts can be dynamically filled in with variables (like "USA" for a {country} placeholder).

prompt_temp = PromptTemplate(
    template="""
        You are an intelligent bot,
        who will provide the capital of {country}
    """,
    input_variables=["country"]
)

Here we are creating a prompt template using LangChain. template is the text of the prompt; it contains a placeholder {country} to be replaced with an actual input. The input_variables parameter tells the template which variables it expects to be filled in; in this case, only "country".
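The same pattern extends to templates with more than one placeholder. A quick illustrative sketch (the template below is made up for this example):

from langchain.prompts import PromptTemplate

# Two placeholders; both names must be listed in input_variables.
trip_prompt = PromptTemplate(
    template="Suggest {count} places to visit in {country}.",
    input_variables=["count", "country"]
)

print(trip_prompt.format(count=3, country="India"))
# Suggest 3 places to visit in India.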

prompt = prompt_temp.format(country="USA")

This replaces the {country} placeholder with "USA", resulting in the final prompt:

        You are an intelligent bot,
        who will provide the capital of USA
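Note that format() returns a plain string. Since a PromptTemplate is itself a Runnable in LangChain 0.3, the same prompt can also be produced with invoke(), which returns a prompt value object instead:

# Alternative to format(): invoke() takes a dict of variables and
# returns a StringPromptValue rather than a plain string.
prompt_value = prompt_temp.invoke({"country": "USA"})
print(prompt_value.to_string())  # same text as prompt_temp.format(country="USA")
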
client.invoke(prompt)

In this step the prompt is sent to the client (an AzureChatOpenAI instance).
client.invoke(prompt) sends the prompt to the model and returns the model's response.

The response (an AIMessage object) will look like:

{
  "content": "The capital of the United States is **Washington, D.C.**",
  "additional_kwargs": {
    "refusal": null
  },
  "response_metadata=": {
    "token_usage": {
      "completion_tokens": 15,
      "prompt_tokens": 24,
      "total_tokens": 39,
      "completion_tokens_details": {
        "accepted_prediction_tokens": 0,
        "audio_tokens": 0,
        "reasoning_tokens": 0,
        "rejected_prediction_tokens": 0
      },
      "prompt_tokens_details": {
        "audio_tokens": 0,
        "cached_tokens": 0
      }
    },
    "model_name": "gpt-4o-2024-11-20",
    "system_fingerprint": "fp_ee1d74bde0",
    "id": "chatcmpl-ByceB3Fe1viehKfNNuzcPlh1xsrtN",
    "service_tier": null,
    "prompt_filter_results": [
      {
        "prompt_index": 0,
        "content_filter_results": {
          "hate": {
            "filtered": false,
            "severity": "safe"
          },
          "jailbreak": {
            "filtered": false,
            "detected": false
          },
          "self_harm": {
            "filtered": false,
            "severity": "safe"
          },
          "violence": {
            "filtered": false,
            "severity": "safe"
          }
        }
      }
    ],
    "finish_reason": "stop",
    "logprobs": null,
    "content_filter_results": {
      "hate": {
        "filtered": false,
        "severity": "safe"
      },
      "protected_material_code": {
        "filtered": false,
        "detected": false
      },
      "protected_material_text": {
        "filtered": false,
        "detected": false
      },
      "self_harm": {
        "filtered": false,
        "severity": "safe"
      },
      "violence": {
        "filtered": false,
        "severity": "safe"
      }
    }
  },
  "id=": "run--408c8ea7-5bfc-4eaf-9a23-855f25a4de4b-0",
  "usage_metadata=": {
    "input_tokens": 24,
    "output_tokens": 15,
    "total_tokens": 39,
    "input_token_details": {
      "audio": 0,
      "cache_read": 0
    },
    "output_token_details": {
      "audio": 0,
      "reasoning": 0
    }
  }
}

In short, the response from the invoke method contains the model's answer: "The capital of the United States is Washington, D.C." It shows that the model used a total of 39 tokens: 24 for the prompt and 15 for the response. The reply was generated by the GPT-4o model, specifically the version from November 2024. The content passed all safety filters, meaning nothing harmful or inappropriate was detected. The model completed its response normally ("finish_reason": "stop"), and a unique ID was assigned to this interaction for tracking.

client.invoke(prompt).content

.content extracts just the text of the response (the message body) from the full object shown above.
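
As mentioned in the introduction, LangChain also makes it easy to switch between models: every chat model exposes the same invoke interface, so only the client construction changes while the prompt template code stays the same. A minimal sketch, assuming a local Ollama server with a model named llama3 already pulled (the model name is a placeholder):

from langchain_ollama import ChatOllama

# Swap the AzureChatOpenAI client for a local open-source model;
# the PromptTemplate code works unchanged.
client = ChatOllama(model="llama3")  # placeholder model name

prompt = prompt_temp.format(country="USA")
print(client.invoke(prompt).content)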
Here we saw a simple use case of PromptTemplate. There are different prompt templates available for different use cases; some of those are listed below (see the ChatPromptTemplate sketch after the list):

AIMessagePromptTemplate, BaseChatPromptTemplate, BasePromptTemplate, ChatMessagePromptTemplate, ChatPromptTemplate, FewShotChatMessagePromptTemplate, FewShotPromptTemplate, FewShotPromptWithTemplates, HumanMessagePromptTemplate, StringPromptTemplate, SystemMessagePromptTemplate
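
For example, ChatPromptTemplate builds a list of role-tagged messages (system, human, and so on) instead of a single string. A brief sketch of how it could be used with the same client (the question text is made up for this example):

from langchain.prompts import ChatPromptTemplate

# Each tuple is (role, template); placeholders work as before.
chat_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an intelligent bot who answers geography questions."),
    ("human", "What is the capital of {country}?")
])

messages = chat_prompt.format_messages(country="USA")
print(client.invoke(messages).content)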

 

This was a basic example of LangChain's prompt template. In the next blog, we will look at LCEL (LangChain Expression Language) and how to leverage it in an application.

 

Thanks & Regards,

Shubham Patil & Vivek Chaudhari

Hewlett Packard Enterprise (PSD-GCC)

 


