Generative AI · 1 min read

Stop Prompt Engineering, Start Prompt Optimisation

Hand-crafting prompts like spells is so 2023. We explore DSPy and the shift towards programmatic prompt optimisation.

We have all been there. “You are an expert lawyer. Think step by step. Use British English. Don’t apologise.” You change one word and performance drops 5%. You spend days tweaking adjectives.

This is fragile. And reliable engineering cannot be built on fragile foundations.

The DSPy Revolution

DSPy (Stanford’s Declarative Self-improving Language Programs) changes the paradigm. Instead of writing the prompt string, you define a Signature (Input -> Output).

  • Input: Question, Context
  • Output: Answer

Then, you act like a machine-learning engineer. You provide a dataset of examples. DSPy’s Teleprompter (Optimiser) runs thousands of experiments, trying different prompts, different few-shot examples, and even chain-of-thought patterns to find the combination that maximises your metric (e.g. accuracy).

The Result

You get a compiled prompt that might look weird to a human but works perfectly for the model. And if you switch from GPT-4 to Llama-3, you don’t rewrite the prompt; you just re-compile.

Stop guessing words. Build robust, self-optimising AI pipelines. Learn about DSPy.
