Semantic Kernel: Working with File-based Prompt Functions
by: jamie_maguire


In the previous blog post, we saw how to create and run prompt functions inline.

These were effectively hardcoded.

In most situations, you will likely want more flexibility, or the ability to share prompt function definitions across multiple projects.

In this blog post you will see how prompt definitions can be loaded from the file system and used within Semantic Kernel.

What is a File-Based Prompt Function?

Loading prompt functions from the file system lets you specify the definition of a prompt function in a flat file.

At the time of writing, two flavours of file-based function prompts exist:

  • a .txt file paired with a .json file
  • a single YAML file

 

The prompt function definition from either option is loaded and deserialised by the Semantic Kernel runtime, then used as part of the interaction between the human and the LLM.

It is important to note that the YAML definition is supported by Azure AI Studio. I think that, over time, this will end up becoming the standard; it’s early days, however, and we will see.

~

Why Load Prompt Functions from the File System?

Prompt functions stored in the file system are like any other code asset in your solution.

Storing prompt functions in the file system has several advantages, which include, but are not limited to:

Modularity, Reusability, and Organisation

  • Separation of Concerns: Keeping prompts in separate files allows you to manage and modify them independently from the rest of your codebase.
  • Reusable Components: Prompts stored in files can be reused across multiple projects, reducing redundancy and ensuring consistency.

 

Ease of Maintenance

  • Simplified Updates: Updating a prompt in one file automatically applies the changes wherever that file is imported. This makes it easier to maintain and update prompts without needing to search through and edit multiple code files.
  • Version Control: Prompts in files can be version-controlled using tools like Git, allowing you to track changes, revert to previous versions, and collaborate with others more effectively.

 

Collaboration

  • Shared Resources: Storing prompts in files makes it easier to share them with team members. They can be stored in shared repositories, facilitating collaboration and ensuring everyone uses the same versions.
  • Standardization: Using a common set of prompts can help standardize responses and behavior across different parts of an application or across multiple applications.

 

Loading prompt functions from the file system helps you improve the modularity and maintainability of your solution.

~

Anatomy of File-based Prompt Functions

Storing prompt functions in the file system requires the creation of either a pair of .txt / .json files, or a single YAML file.

Regardless of the file format you select (the .txt and .json pair, or the single .yaml file), both let you define prompt configuration settings, the prompt name and description, and the input variables and their descriptions.

These let the AI model and Semantic Kernel Planner know when and how to use the specific prompt function.

skprompt.txt

Opting to use the .txt format involves the creation of two files (the .txt and its paired .json file). The first is the skprompt.txt file.

This contains the natural language prompt that will be sent to the AI model you have connected to the kernel in Semantic Kernel.  In this file, you can tell the AI model how to handle the request and detail any input parameters.

For additional context: in a previous blog post, one of the intents the AI model was prompted to detect was File a Complaint:

Console.Write("What is your request > ");

string request = Console.ReadLine()!;

string prompt = @$"Instructions: What is the intent of this request?

          Choices: GetNextBookingDate, MakeBooking, FileAComplaint.

          User Input: Can you tell me how to find the next booking date?
          Intent: GetNextBookingDate

          User Input: Can you tell me how to make a booking?
          Intent: MakeBooking

          User Input: {request}
          Intent: ";

Console.WriteLine(await kernel.InvokePromptAsync(prompt));

 

A specific skprompt.txt definition for the FileAComplaint option could be the following:

BE FRIENDLY. BE POLITE. BE PROFESSIONAL.
APOLOGIES FOR THE INCONVENIENCE.

ENSURE THAT THE PERSON FEELS HEARD ABOUT THEIR EXPERIENCE AND THAT YOU ARE TAKING THEIR FEEDBACK SERIOUSLY.

WE ARE SORRY TO HEAR ABOUT {{$request}}.
WE WILL DO OUR BEST TO ENSURE THAT THIS DOES NOT HAPPEN AGAIN.

THANK YOU FOR BRINGING THIS TO OUR ATTENTION.

 

In the above, we give the AI a persona with the first three sentences.

Directly after that, we add the following, which references the user’s input:

WE ARE SORRY TO HEAR ABOUT {{$request}}.
WE WILL DO OUR BEST TO ENSURE THAT THIS DOES NOT HAPPEN AGAIN.
THANK YOU FOR BRINGING THIS TO OUR ATTENTION.

 

The {{$request}} token defines a variable called request, letting the AI model know where the user’s input will be injected.

config.json

When working with the .txt option, the paired config.json file defines the configuration of the prompt function.  Typical settings include:

  • type – The type of prompt. In this case, we’re using the completion type.
  • description – A description of what the prompt does. This is used by the planner to automatically orchestrate plans with the function.
  • execution_settings – The settings for completion models. For OpenAI models, this includes max_tokens and temperature.
  • input_variables – Defines the variables that are used inside the prompt (e.g. request).

 

For the File a Complaint example, our .json file can be:

{
  "schema": 1,
  "type": "completion",
  "description": "Handle the customer's complaint.",
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0
    },
    "gpt-4": {
      "model_id": "gpt-4-1106-preview",
      "max_tokens": 8000,
      "temperature": 0.3
    }
  },
  "input_variables": [
    {
      "name": "request",
      "description": "The user's complaint.",
      "required": true
    }
  ]
}

~

YAML (Used by Azure AI Studio)

The second option you have is to use a single YAML file.  Taking the earlier example into account, our single YAML file would resemble the following:

name: FileAComplaint
description: Handles the customer's complaint.
template: |
  BE FRIENDLY. BE POLITE. BE PROFESSIONAL.
  APOLOGIES FOR THE INCONVENIENCE.
  ENSURE THAT THE PERSON FEELS HEARD ABOUT THEIR EXPERIENCE AND THAT YOU ARE TAKING THEIR FEEDBACK SERIOUSLY.

  WE ARE SORRY TO HEAR ABOUT {{request}}.
  WE WILL DO OUR BEST TO ENSURE THAT THIS DOES NOT HAPPEN AGAIN.
  THANK YOU FOR BRINGING THIS TO OUR ATTENTION.
template_format: handlebars
input_variables:
  - name: request
    description: The user's request
    is_required: true
execution_settings:
  default:
    max_tokens: 10
    temperature: 0
  gpt-4:
    model_id: gpt-4-1106-preview
    max_tokens: 10
    temperature: 0.2
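
To load a YAML definition like this at runtime, Semantic Kernel provides a CreateFunctionFromPromptYaml extension (from the Microsoft.SemanticKernel.Yaml package). Here is a minimal sketch, assuming the YAML above is saved as ./Prompts/FileAComplaint.yaml (an illustrative path, not a requirement):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.PromptTemplates.Handlebars;

// Read the YAML prompt definition from disk
string yaml = File.ReadAllText("./Prompts/FileAComplaint.yaml");

// Create a kernel function from the YAML definition.
// The Handlebars factory is needed because template_format is handlebars.
var complaintFunction = kernel.CreateFunctionFromPromptYaml(
    yaml,
    promptTemplateFactory: new HandlebarsPromptTemplateFactory());

// Invoke it like any other kernel function
var result = await kernel.InvokeAsync(complaintFunction,
    new() { ["request"] = "The room was cold." });

Console.WriteLine(result);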

My personal hunch is that the .txt and .json pairing may end up less popular than the YAML offering, due to the Azure AI Studio portability.  Maybe there will be an opportunity for people to create migration tools.

~

An Example: File-Based Prompt Function to Handle a Complaint

So, bringing the above all together, we can create a file-based prompt function to handle a customer complaint.

We’ll extend it a little by asking the human for their name and including that in the prompt function we send to the OpenAI model.

The first thing to do is to create the folder for the file-based prompt function (FileAComplaint) to live in.

This can be /Prompts/Complaint/, as sketched below:
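
A hypothetical layout for this example (the folder names are illustrative; Semantic Kernel treats each subfolder of the plugin directory as one prompt function, named after the folder):

Prompts/
└── Complaint/
    ├── skprompt.txt     (the natural language prompt)
    └── config.json      (settings, description, and input variables)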

SKPrompt Definition

We set the following Semantic Kernel prompt definition:

BE FRIENDLY. BE POLITE. BE PROFESSIONAL.
APOLOGIES FOR THE INCONVENIENCE.

ENSURE THAT THE PERSON FEELS HEARD ABOUT THEIR EXPERIENCE AND THAT YOU ARE TAKING THEIR FEEDBACK SERIOUSLY.

DEAR {{$customerName}},
+++
WE ARE SORRY TO HEAR ABOUT {{$request}}.
WE WILL DO OUR BEST TO ENSURE THAT THIS DOES NOT HAPPEN AGAIN.
THANK YOU FOR BRINGING THIS TO OUR ATTENTION.
+++

Config.json Definition

We set the following JSON configuration definition.  This does a few things: it lets the Semantic Kernel planner know how this prompt should be used (to handle a complaint), and it lets the AI model know that it should expect two input variables:

{
  "schema": 1,
  "type": "completion",
  "description": "Handle the customer's complaint.",
  "execution_settings": {
    "default": {
      "max_tokens": 1000,
      "temperature": 0
    },
    "gpt-4": {
      "model_id": "gpt-4-1106-preview",
      "max_tokens": 8000,
      "temperature": 0.3
    }
  },
  "input_variables": [
    {
      "name": "customerName",
      "description": "The user's name.",
      "required": true
    },
    {
      "name": "request",
      "description": "The user's complaint.",
      "required": true
    }
  ]
}

 

Next, we set up the kernel and ask the human for their name and request (complaint):

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId, apiKey);

Kernel kernel = builder.Build();

// Load prompts: each subfolder of /Prompts (e.g. Complaint) becomes
// a function named after the folder
var prompts = kernel.CreatePluginFromPromptDirectory("./../../../Prompts");

// Get chat completion service
var chatCompletionService = kernel.GetRequiredService<IChatCompletionService>();

Console.Write("What is your name > ");
string userName = Console.ReadLine()!;

Console.Write("What is your request > ");
string userInput = Console.ReadLine()!;
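
As an aside, the "./../../../Prompts" relative path climbs from the build output folder back up to the project folder. A sketch of an alternative, assuming the Prompts folder is set to copy to the output directory, resolves the path from the application's base directory instead:

// Resolve the prompt directory relative to the application's base directory
// (assumes the Prompts folder is copied to the build output)
var promptDir = Path.Combine(AppContext.BaseDirectory, "Prompts");
var prompts = kernel.CreatePluginFromPromptDirectory(promptDir);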

 

We invoke the prompt function with the user input:

// at this point, we could have inferred the intent from the user input
// and selected the appropriate prompt to use, for the purpose of
// example we will use the /Complaint prompt, regardless of the user
// input

var chatResult = kernel.InvokeStreamingAsync<StreamingChatMessageContent>(
  prompts["Complaint"],
  new()
  {
   { "customerName", userName },
   { "request", userInput }
  }
);

 

Then stream the response to the human:

// Stream the response to the human
string message = "";

await foreach (var chunk in chatResult)
{
    if (chunk.Role.HasValue)
    {
        Console.Write(chunk.Role + " > ");
    }

    message += chunk;
    Console.Write(chunk);
}

Console.WriteLine();
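
If you don’t need to stream tokens as they arrive, a simpler non-streaming sketch of the same call looks like this:

// Non-streaming alternative: wait for the full completion, then print it
var answer = await kernel.InvokeAsync(
    prompts["Complaint"],
    new()
    {
        { "customerName", userName },
        { "request", userInput }
    });

Console.WriteLine(answer);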

~

Demo

We can see this interaction take place here. Running the application prompts us for our name.

After hitting return, we are asked to supply some details about the complaint.

At this point, both variables are injected into the prompt function and sent to the AI model. The AI model then uses this data as part of the response it returns to us.

The user inputs are shown in red; the AI model’s use of them can be seen in green.

We can see the entire interaction here:

Nice.  There are many use cases for AI model text generation with OpenAI and Semantic Kernel.

~

Video Demo

The following 8-minute YouTube video shows File-based Prompt Functions in action:

~

Summary

In this blog post you have learned how to load prompt functions from the file system.

You have also seen how to configure prompt settings and pass input parameters to the OpenAI GPT-4 model.

You saw how the AI model processed user input and returned a friendly response as defined by the persona.

In the next blog post in the series, you will see how we can further augment OpenAI model capabilities by creating Native Functions within Semantic Kernel.

Stay tuned.

~

Further Reading and Resources

You can learn more about Semantic Kernel and Prompt Engineering here:

 

Enjoyed what you’ve read, have questions about this content, or would like to see another topic covered? Drop me a note below.

You can also schedule a call using my Calendly link to discuss consulting and development services.

 

