Functions in Semantic Kernel — Declarative Prompt

📝 Declarative prompt functions

Prompt functions are text templates executed by a model. They shine at summarization, extraction, transformation, translation, and reasoning. They are declarative: you describe the desired outcome and constraints instead of prescribing step‑by‑step procedures.

When to choose prompt functions

  • You want fast iteration and natural‑language behavior without writing lots of code.
  • You need semantic generalization (not strict rules) and can tolerate non‑determinism.
  • You can supply grounding context and enforce output shapes with post‑processing.

Pros and cons

  • Pros: Rapid prototyping, expressive, composable, great for content tasks.
  • Cons: Non‑deterministic, sensitive to prompt/inputs, latency and model cost, requires evaluation and guardrails.

Basic usage (C#)

const string prompt = """
You are a concise assistant. Summarize the following text in one sentence:
{{$input}}
""";

var summarize = kernel.CreateFunctionFromPrompt(prompt, functionName: "summarize");
var result = await kernel.InvokeAsync(summarize, new() { ["input"] = text });

Tip: use named variables in the template (e.g., {{$input}}) and supply their values through KernelArguments.
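
For example, a template can take more than one named variable. A minimal sketch (the tone and input variable names are arbitrary choices for illustration):

const string rewrite = """
Rewrite the following text in a {{$tone}} tone, in at most 100 words:
{{$input}}
""";

var rewriteFn = kernel.CreateFunctionFromPrompt(rewrite, functionName: "rewrite");
var rewritten = await kernel.InvokeAsync(rewriteFn, new()
{
    ["tone"] = "friendly",
    ["input"] = text
});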

Structured outputs

Ask the model to return strict JSON and validate it:

const string extract = """
Extract the following product info as JSON with fields { name, price, currency }.
Text:
{{$text}}
Return only JSON.
""";

// Letters, digits, and underscores keep the function name safe for function calling.
var fn = kernel.CreateFunctionFromPrompt(extract, functionName: "extract_product");
var raw = await kernel.InvokeAsync(fn, new() { ["text"] = input });
var json = JsonDocument.Parse(raw.ToString()); // requires using System.Text.Json;

Hardening: validate JSON schema; reject if missing fields; clamp lengths; sanitize HTML.
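
A minimal sketch of that post‑processing, assuming the three fields from the prompt above; the TryReadProduct helper and the 200‑character cap are illustrative, not part of Semantic Kernel:

using System.Text.Json;

static bool TryReadProduct(JsonDocument doc, out (string Name, decimal Price, string Currency) product)
{
    product = default;
    var root = doc.RootElement;

    // Reject the payload outright if any expected field is missing or mistyped.
    if (!root.TryGetProperty("name", out var name) || name.ValueKind != JsonValueKind.String ||
        !root.TryGetProperty("price", out var price) || price.ValueKind != JsonValueKind.Number ||
        !root.TryGetProperty("currency", out var currency) || currency.ValueKind != JsonValueKind.String)
    {
        return false;
    }

    // Clamp lengths so an over-long completion cannot flood downstream storage.
    var nameText = name.GetString()!;
    if (nameText.Length > 200) nameText = nameText[..200];

    product = (nameText, price.GetDecimal(), currency.GetString()!);
    return true;
}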

Grounding context (RAG)

  • Retrieve relevant context (search/vector DB) and inject into the prompt.
  • Keep context succinct; prefer bullet points or Q&A snippets.
  • Cite sources in the output for traceability.

var context = await kb.SearchAsync(query, top: 5);

// Pass the retrieved context as a template variable rather than via string
// interpolation, so the {{$...}} tokens stay intact.
const string groundedPrompt = """
Answer using only the context below. If unknown, say so.
Context:
{{$context}}

Question: {{$q}}
""";

var grounded = kernel.CreateFunctionFromPrompt(groundedPrompt, functionName: "grounded_answer");
var answer = await kernel.InvokeAsync(grounded, new()
{
    ["context"] = "- " + string.Join("\n- ", context),
    ["q"] = query
});

Safety and evaluation

  • Input guardrails: block PII, large payloads, or unsafe instructions.
  • Output guardrails: content filters, JSON schema validation, profanity checks.
  • Evaluation: create small golden sets and measure precision/recall/consistency (see the sketch after this list).
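
A rough sketch of such a golden‑set check, reusing the summarize function from the basic usage example. The goldenSet pairs are made‑up placeholders, and the word‑overlap check stands in for a real metric such as embedding similarity or an LLM judge:

var goldenSet = new (string Input, string Expected)[]
{
    ("Semantic Kernel lets you mix prompt functions with native code.", "Semantic Kernel combines prompt and native functions."),
    // ...more curated examples
};

int passed = 0;
foreach (var (input, expected) in goldenSet)
{
    var output = (await kernel.InvokeAsync(summarize, new() { ["input"] = input })).ToString();

    // Placeholder check: count shared words; real evaluations score semantic similarity instead.
    var words = expected.Split(' ');
    var overlap = words.Count(w => output.Contains(w, StringComparison.OrdinalIgnoreCase));
    if (overlap >= words.Length / 2)
        passed++;
}

Console.WriteLine($"Consistency: {passed}/{goldenSet.Length} golden cases passed.");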

Prompt hygiene checklist

  • Provide role, task, and constraints (tone, length, style, format).
  • Give 1–3 examples (few‑shot) if needed.
  • Specify output format (JSON, Markdown table) and error‑handling instructions (see the sketch after this checklist).
  • Keep variables explicit; document defaults.
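
As an illustration only, a template that combines a role, constraints, an output format, and one few‑shot example; the categories and the label field are invented for this sketch:

const string classify = """
You are a support triage assistant. Classify the ticket as one of: billing, bug, feature_request.
Return only JSON in the form { "label": "<category>" }. If unsure, use "bug".

Example
Ticket: "I was charged twice this month."
{ "label": "billing" }

Ticket: {{$ticket}}
""";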

Next in the series

  • 03_functions_04_openapi — from spec to function, auth, error handling
  • 03_functions_05_mcp — connecting Copilot plugins and services
  • 03_functions_06_plugins — packaging, manifests, governance