GitHub Copilot studies
This is a practical developer guide on prompt engineering for GitHub Copilot, focused on developer productivity rather than AI safety; the current tags (causal-model, corrigibility, shutdown-problem) appear to be incorrectly assigned, and this resource has minimal relevance to AI safety topics.
Metadata
Importance: 12/100 | blog post | educational
Summary
A practical guide from GitHub developer advocates on prompt engineering for GitHub Copilot, explaining how to communicate more effectively with AI coding assistants to get better code suggestions. The article covers what prompts are, best practices for writing them, and concrete examples demonstrating how specificity and context improve AI-generated outputs.
Key Points
- Vague prompts often produce irrelevant or no suggestions; specificity and context dramatically improve GitHub Copilot's output quality.
- Prompt engineering for developers focuses on practical communication techniques rather than the ML research definition of the term.
- Breaking down complex tasks into smaller, detailed sub-prompts yields more accurate and useful code generation.
- Understanding how Copilot processes context (surrounding code, comments, file structure) helps developers frame better requests.
- Iterative refinement of prompts is recommended, treating AI assistance as a collaborative, conversational process.
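The comment-as-prompt pattern these points describe can be sketched as follows. This is an illustrative example, not taken from the article: the vague and specific comments show the contrast in prompt quality, and the function body stands in for the kind of completion a specific prompt tends to elicit (the function name and data shape are hypothetical).

```javascript
// Vague prompt -- often yields an irrelevant suggestion, or none at all:
// process the orders

// Specific prompt with context -- names the input shape, the filter
// condition, and the expected output, guiding the assistant toward
// usable code:
// Given an array of order objects, each with a numeric `total` and a
// string `status`, return the sum of totals for orders whose status
// is "completed", rounded to two decimal places.
function sumCompletedOrderTotals(orders) {
  const sum = orders
    .filter((order) => order.status === "completed")
    .reduce((acc, order) => acc + order.total, 0);
  // Round to two decimal places to match the stated requirement.
  return Math.round(sum * 100) / 100;
}
```

Breaking a larger task into sub-prompts works the same way: each comment describes one small, verifiable step rather than the whole feature at once.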
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Corrigibility Failure Pathways | Analysis | 62.0 |
Cached Content Preview
HTTP 200 | Fetched Mar 20, 2026 | 21 KB
[Rizel Scarlett](https://github.blog/author/blackgirlbytes/ "Posts by Rizel Scarlett") & [Michelle Duke](https://github.blog/author/mishmanners/ "Posts by Michelle Duke")
June 20, 2023 | Updated February 26, 2025 | 25 minutes
Generative AI coding tools are transforming the way developers approach daily coding tasks. From documenting our codebases to generating unit tests, these tools are helping to accelerate our workflows. However, just like with any emerging tech, there’s always a learning curve. As a result, developers—beginners and experienced alike—sometimes feel frustrated when AI-powered coding assistants don’t generate the output they want. (Feel familiar?)
For example, when asking GitHub Copilot to draw an ice cream cone 🍦 using p5.js, a JavaScript library for creative coding, we kept receiving irrelevant suggestions—or sometimes no suggestions at all. But when we learned more about the way that GitHub Copilot processes information, we realized that we had to adjust the way we communicated with it.
Here’s an example of GitHub Copilot generating an irrelevant solution:

When we adjusted our prompt, we were able to generate more accurate results:

We’re both developers and AI enthusiasts ourselves. I, [Rizel](https://github.com/blackgirlbytes), have used
... (truncated, 21 KB total)
Resource ID: 3da94a1dccb522fc | Stable ID: ODUzMzM5OD