Example-Driven Development
I’ve been meaning to write this up for a while, but with the recent ability to upload files to Large Language Models (via ChatGPT Code Interpreter, Claude 2, and Bard), this seems like an especially relevant time to discuss it!
A popular technique for interacting with Large Language Models (LLMs) such as ChatGPT is Few-Shot Prompting: including a few examples of the desired task in the prompt, effectively giving the model a guide for how to structure its responses.
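To make that concrete, here is a minimal sketch of a few-shot prompt using the pre-1.0 OpenAI Python client; the sentiment-labeling task and the example messages are purely illustrative, not the prompts I use in practice:

```python
# A minimal sketch of few-shot prompting with the pre-1.0 openai client.
# The labeling task and all example messages are illustrative assumptions.
import openai

few_shot_messages = [
    {"role": "system", "content": "Label each review as Positive or Negative."},
    # Worked examples that show the model the desired input/output structure.
    {"role": "user", "content": "Review: The battery lasts all day."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "Review: The screen cracked within a week."},
    {"role": "assistant", "content": "Negative"},
    # The actual query, which the model answers in the same format.
    {"role": "user", "content": "Review: Setup was quick and painless."},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=few_shot_messages,
)
print(response.choices[0].message["content"])  # expected: "Positive"
```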
Recently, I’ve taken this framework of “providing some examples” to ChatGPT to a much larger scale (many files, often spanning multiple directories) and used it as a baseline to rapidly iterate on new ideas. This has worked surprisingly well across a range of domains, from writing technical reports to creating new codebases.
Having much more context than a few examples has several interesting effects:
I’ve found that I can skip much of the back-and-forth with ChatGPT to clarify details, for both technical and document-based workflows.
When generating text for reports, drawing on a wider range of examples leads to more natural-sounding output (and less of the typical “ChatGPT tone”), which is an expected but welcome outcome!
Although this can be done by uploading files to ChatGPT Code Interpreter or Claude, I’ve been using a simple local plugin I built for ChatGPT (available on GitHub). It gives the model read access to local files within specific directories and sub-directories, making it very easy to hook a chat session up to a codebase or a collection of documents from a research project.
This works through a simple API that allows ChatGPT to 1) list all files within a directory and 2) read the contents of any text, PDF, or Word file.
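For a sense of how little is needed, here is a minimal sketch of this kind of API using Flask; the endpoint names, the ALLOWED_ROOT path, and the plain-text-only reading are my own simplifications for illustration rather than the plugin’s actual interface:

```python
# A minimal sketch of a local file-access API for a ChatGPT plugin.
# Endpoint names, parameters, and ALLOWED_ROOT are illustrative assumptions.
from pathlib import Path

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ALLOWED_ROOT = Path("~/projects/my-research-project").expanduser().resolve()


def safe_path(relative: str) -> Path:
    """Resolve a relative path and ensure it stays inside the allowed root."""
    path = (ALLOWED_ROOT / relative).resolve()
    if path != ALLOWED_ROOT and ALLOWED_ROOT not in path.parents:
        abort(403, "Path outside the allowed directory")
    return path


@app.route("/files")
def list_files():
    """Return every file under the allowed root (including sub-directories)."""
    files = [str(p.relative_to(ALLOWED_ROOT))
             for p in ALLOWED_ROOT.rglob("*") if p.is_file()]
    return jsonify(files)


@app.route("/file")
def read_file():
    """Return the contents of a single file, looked up by relative path."""
    path = safe_path(request.args.get("path", ""))
    if not path.is_file():
        abort(404, "No such file")
    # The real plugin also handles PDF and Word files; plain text keeps this short.
    return jsonify({"path": str(path.relative_to(ALLOWED_ROOT)),
                    "content": path.read_text(errors="replace")})


if __name__ == "__main__":
    # ChatGPT plugins talk to a local server like this one.
    app.run(port=5003)
```

Keeping everything scoped to a single allowed root is what makes it comfortable to point the model at a real project directory.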
Adding just this functionality to the model’s existing capabilities unlocks a surprisingly robust and deep understanding of the context, whether it is code or ideas expressed through documents, which is easy to build on.
For example:
Software projects — I dogfooded the Local Files plugin by pointing it at its own codebase and using that as a prototype while developing a new ChatGPT plugin. The model was able to parse the project’s overall file list and structure, as well as understand the semantics of each component, and I was able to generate another plugin (a Spotify playlist generator) very quickly without a long back-and-forth to clarify requirements.
Writing reports — A project I’ve been collaborating on currently consists of a variety of documents and presentations: high-level outlines for the project, requirements documents, user interviews, business analyses, and so forth. Providing this range of documents to ChatGPT has uncovered some valuable insights that draw connections between all of them.
“Example-driven” development is not a new concept (e.g. GitHub Template Repositories), but LLMs’ ability to identify which parts of the context are relevant, to modulate output to mimic the characteristics of existing input, and to draw connections between files of distinct modalities (whether code, documents, or presentations) makes it an especially useful paradigm when collaborating with them.