Writing a Simple Dynamic Prompt App with Streamlit using LangChain

Hello readers! In the previous article, we discussed writing a simple static prompt app with Streamlit using LangChain. Here, I am going to explain how to write a dynamic prompt app with Streamlit using LangChain.

A dynamic prompt in Large Language Models (LLMs) is a prompt that varies with user input, context, or external information. In contrast to static prompts, which stay the same regardless of context, dynamic prompts adapt to give more context-specific and relevant responses.

Key Features of Dynamic Prompts

  1. User-Driven Input – The prompt is altered depending on user input.

  2. Context Awareness – It takes into account earlier interactions or outside information.

  3. Variable Substitution – Uses placeholders or templates that are filled in dynamically (see the sketch after this list).

  4. Programmatic Control – Can be modulated through scripts, APIs, or logic.
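
Here is a minimal sketch of variable substitution in action, using the same PromptTemplate class the app below relies on (the template text and input values are purely illustrative):

from langchain_core.prompts import PromptTemplate

# A static prompt is fixed text; a dynamic prompt fills placeholders at runtime.
dynamic_template = PromptTemplate(
    template="Write a {algorithm} implementation in {language} for a {level} reader.",
    input_variables=["algorithm", "language", "level"]
)

# The same template yields a different prompt for each set of user selections.
print(dynamic_template.format(algorithm="merge sort", language="Python", level="beginner"))
print(dynamic_template.format(algorithm="quick sort", language="C++", level="advanced"))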

e.g., AI-Powered Coding Assistants

Example: GitHub Copilot, ChatGPT for coding
✅ Advantage of Dynamic Prompts:

  • Can generate code snippets based on the user’s preferred language, coding style, and framework.

  • Example: A programmer asks for a sorting algorithm, and the AI generates it in Python if the user previously selected Python.

❌ Static Prompt Limitation:

  • Would generate the same response regardless of the user's selections, requiring manual edits.

With that context, here is the complete app:
import os
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
from dotenv import load_dotenv
import streamlit as st
from langchain_core.prompts import PromptTemplate

# Load Hugging Face API Key
load_dotenv()
hf_token = os.getenv("HUGGINGFACEHUB_ACCESS_TOKEN")

# Define the Hugging Face endpoint LLM
llm = HuggingFaceEndpoint(
    repo_id="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    task="text-generation",
    huggingfacehub_api_token=hf_token 
)

# Initialize LangChain Model
model = ChatHuggingFace(llm=llm)

# Streamlit UI
st.header('🔍 Research Summarizer Tool')

# User Inputs
paper_input = st.selectbox("Select Research Paper", [
    "Attention Is All You Need",
    "BERT: Pre-training of Deep Bidirectional Transformers",
    "GPT-3: Language Models are Few-Shot Learners",
    "Diffusion Models Beat GANs on Image Synthesis"
])

style_input = st.selectbox("Select Explanation Style", [
    "Beginner-Friendly", "Technical", "Code-Oriented", "Mathematical"
]) 

length_input = st.selectbox("Select Explanation Length", [
    "Short (1-2 paragraphs)", "Medium (3-5 paragraphs)", "Long (detailed explanation)"
])

# Define Prompt Template
template = PromptTemplate(
    template="""
    Please summarize the research paper titled "{paper_input}" with the following specifications:

    Explanation Style: {style_input}
    Explanation Length: {length_input}

    1. Mathematical Details:
       - Include relevant mathematical equations if present in the paper.
       - Explain the mathematical concepts using simple, intuitive code snippets where applicable.

    2. Analogies:
       - Use relatable analogies to simplify complex ideas.

    If certain information is not available in the paper, respond with: "Insufficient information available" instead of guessing.

    Ensure the summary is clear, accurate, and aligned with the provided style and length.
    """,
    input_variables=["paper_input", "style_input", "length_input"]
)

# Generate Prompt
prompt = template.format(
    paper_input=paper_input,
    style_input=style_input,
    length_input=length_input
)

# Summarization Button
if st.button('Summarize'):
    result = model.invoke(prompt)
    st.write(result.content)

Understanding the Structure of the Prompt:

This prompt is structured to generate a customized summary of a research paper based on user-selected inputs.

  1. Defining a Template with Placeholders

    • The PromptTemplate class is used to create a structured prompt.

    • It contains placeholders ({paper_input}, {style_input}, {length_input}) that get replaced with the user's selections.

  2. The prompt is fully customizable: you can modify the wording, structure, or even the entire format to fit different use cases.

    e.g., change the task (instead of summarization, ask for a critique or key takeaways):

     template=""" 
     Critique the research paper titled "{paper_input}" based on the following aspects:
    
     1. Strengths and contributions
     2. Limitations and weaknesses
     3. Potential future improvements
    
     Ensure the analysis is in a {style_input} manner and {length_input} format.
     """
    
  3. Customization via User Inputs

    • Explanation Style: {style_input} defines how the summary should be written (Beginner-Friendly, Technical, etc.).

    • Explanation Length: {length_input} determines the depth of the response (Short, Medium, or Long).

These parameters personalize the generated summary.
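
To see this personalization concretely, the same template produces different prompts for different dropdown selections (the values below are sample picks, shown outside the Streamlit UI for clarity):

# Same template, two different user selections -> two different prompts.
prompt_a = template.format(
    paper_input="Attention Is All You Need",
    style_input="Beginner-Friendly",
    length_input="Short (1-2 paragraphs)"
)
prompt_b = template.format(
    paper_input="GPT-3: Language Models are Few-Shot Learners",
    style_input="Mathematical",
    length_input="Long (detailed explanation)"
)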

  4. Mathematical Details Section

    • If the research paper includes equations, they should be mentioned.

    • The model is instructed to explain math concepts intuitively, potentially using code snippets where relevant.

This ensures that technical concepts are made clearer to the reader.

  5. Analogies for Simplicity

    • The model is required to use analogies to describe sophisticated concepts. This helps bridge knowledge gaps for those unfamiliar with certain concepts.

Analogies make abstract ideas more relatable and easier to grasp.

  6. How the Prompt Template Uses These Inputs

    • The PromptTemplate dynamically fills {paper_input}, {style_input}, and {length_input} with user-selected values.

The final prompt is customized before being sent to the AI model.
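
If you want to inspect the final prompt before it is sent, you can preview it in the UI (this st.code line is an optional addition, not part of the original app):

# Optional: display the fully formatted prompt for debugging.
st.code(prompt)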

  7. Final Execution

    • The template.format(...) function replaces placeholders ({paper_input}, {style_input}, {length_input}) with user selections.

    • The formatted prompt is stored in the prompt variable.

    • st.button('Summarize') creates a clickable button in Streamlit.

    • When the button is clicked, model.invoke(prompt) sends the formatted prompt to TinyLlama.

    • TinyLlama processes the prompt and generates a response.

    • st.write(result.content) displays the generated summary in the Streamlit app.
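
To run the app, save the code as app.py, put your Hugging Face token in a .env file, and launch it with streamlit run app.py.

As a side note, LangChain can also compose the template and model into a single chain, so you never call template.format yourself. A minimal sketch using the objects defined above:

# Alternative: pipe the template into the model (LangChain Expression Language).
# invoke() then takes the raw input variables and formats the prompt internally.
chain = template | model
if st.button('Summarize'):
    result = chain.invoke({
        "paper_input": paper_input,
        "style_input": style_input,
        "length_input": length_input
    })
    st.write(result.content)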