Responsible Prompting for Responsible AI: Mastering the PROMPT Framework for Informed AI Use
The PROMPT Framework for prompt engineering can enhance interactions with AI models like ChatGPT. Use the framework to craft prompts that maximize utility and comprehension. Thoughtful prompting is key to developing critical thinking and information literacy in the age of generative AI.

The advent of Generative AI and large language models like ChatGPT has opened up new frontiers for interacting with AI. However, these models have limitations that users should understand. They are trained on vast datasets, not real-world experience, and can hallucinate answers that seem plausible but are inaccurate. Their knowledge also cuts off at the time of training, so they lack up-to-date information.
This is why practices like asking for explanations, sources, and logical reasoning are so important. Models can appear confidently convincing even when wrong, so requiring evidence and walking through step-by-step logic helps safeguard against false assurance and exposes gaps. Asking conversational, multi-step questions also allows probing a topic from multiple angles, surfacing inconsistencies an AI might gloss over with sweeping generalizations.
Understanding these intrinsic limitations informs how to engineer prompts for more transparent, robust conversations. The PROMPT Framework below builds on this foundation, equipping users to maximize value from AI interactions while minimizing risks. The key is keeping the human firmly in the loop by prompting not just for answers, but for ethical, explainable reasoning that can be critically evaluated.
Step 1: Ask Explainable Prompts
Getting an AI to merely provide an answer is not enough. We must go a step further and ask it to explain its rationale. For instance, asking simply "What is photosynthesis?" will elicit a definition. But querying "Could you please explain the process of photosynthesis and the key scientific evidence that supports this explanation?" pushes the model to provide sources and reasoning behind its response. This instills greater transparency in the interaction.
Some ways to frame highly explainable prompts are:
- Asking "What is the evidence that supports your response?"
- Probing deeper with "What led you to conclude that?"
- Phrasing questions such as "Could you walk me through the key steps in your logic?"
The goal is to glean insights into the model's thought process. This builds trust in the interaction and provides users greater context for the information received.
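The framing techniques above can be sketched in code. The following is a minimal illustration, assuming we simply append the evidence-and-reasoning follow-ups from the list above to a bare question; the helper name and template wording are this sketch's own, not part of the framework.

```python
# Illustrative sketch: turn a plain question into an "explainable" prompt
# by appending requests for evidence and step-by-step reasoning.

EXPLAINABILITY_FOLLOW_UPS = [
    "What is the evidence that supports your response?",
    "What led you to conclude that?",
    "Could you walk me through the key steps in your logic?",
]

def make_explainable(question: str) -> str:
    """Wrap a bare question with requests for evidence and reasoning."""
    follow_ups = " ".join(EXPLAINABILITY_FOLLOW_UPS)
    return f"{question} {follow_ups}"

prompt = make_explainable("What is photosynthesis?")
print(prompt)
```

Sending the combined prompt rather than the bare question nudges the model to expose its rationale in a single response.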
Step 2: Have Multi-Step, Logical Conversations
Information overload is a real concern with generative AI and large language models. While their capacity may seem boundless, they can still become confused or provide inferior responses when prompts get too lengthy or complex.
The key is to leverage these models for what they do best - having insightful, logical conversations through a series of clear, structured prompts. Each query should build upon the last in a natural progression. For instance, with a topic like diabetes, one could begin by asking "What is diabetes?" followed by "What causes diabetes?", "What are the main types of diabetes?" and so on until a comprehensive understanding is achieved.
Some best practices here include:
- Starting broad, then getting more specific
- Following the natural progression of a topic
- Linking each question through transitional phrases like "Now that we've covered X, could you please explain Y?"
- Periodically summarizing the key facts learned
The goal is to walk away with a structured understanding of even very complex subjects through multi-step conversational prompts.
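The progression above can be sketched as an ordered list of prompts, each linked to the last with a transitional phrase. This is an illustrative sketch only: the idea of pairing each question with a short label of what it covers is an assumption of this example, not part of the framework itself.

```python
# Illustrative sketch: build a multi-step conversation where each prompt
# references what the previous one covered. Each step pairs a short label
# (what the question covers) with the question itself.

steps = [
    ("what diabetes is", "What is diabetes?"),
    ("its causes", "What causes diabetes?"),
    ("its main types", "What are the main types of diabetes?"),
]

def build_prompts(steps: list[tuple[str, str]]) -> list[str]:
    """Link each question to the previous topic with a transitional phrase."""
    prompts = []
    prev_label = None
    for label, question in steps:
        if prev_label:
            question = (f"Now that we've covered {prev_label}, "
                        f"{question[0].lower()}{question[1:]}")
        prompts.append(question)
        prev_label = label
    return prompts

prompts = build_prompts(steps)
for p in prompts:
    print(p)
```

Keeping the sequence explicit like this makes it easy to start broad, narrow down, and pause to summarize before moving on.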
Step 3: Troubleshoot Insufficient Responses
No AI system today is infallible. You must be vigilant about insufficient responses due to the model's limitations. But troubleshooting is a skill that can be honed.
When faced with an inadequate response, some troubleshooting techniques are:
- Rephrasing the prompt using different words and sentence structures
- Splitting a complex query into simpler, narrower questions
- Providing additional context and background information to frame the question better
- Asking for examples or to elaborate on vague points
- Simplifying verbose, complex prompts into more concise versions
Be an active listener - probe, clarify, and dig deeper. Think about how to frame prompts differently to get better responses. The goal is to recognize gaps and inconsistencies and evolve your prompts accordingly.
Step 4: Be Precise with Prompts
While conversational AI can seem human, it differs greatly from human conversation: prompts must be precise to elicit accurate responses. Ambiguity is the enemy of precision.
Some ways to craft focused, precise prompts are:
- Using exact keywords pertinent to the topic
- Quantifying questions with metrics, numbers, or specific amounts
- Limiting prompts to narrow contexts or timeframes
- Avoiding subjective or vague terminology
- Breaking down broad requests into very specific singular queries
- Clarifying ambiguous words and eliminating unneeded fluff
- Adding critical details and constraints to limit the scope
The goal is prompts tailored to fetch specific facts or insights, rather than opinions or generalities. For example, rather than an imprecise prompt like "What was the weather like in New York last summer?", you could use a more precise prompt like "What was the temperature in degrees Celsius in New York City on August 1, 2021 at noon?"
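One way to enforce this precision is to treat the prompt as a template with explicit slots for place, time, metric, and units, so no detail can be left vague. This is a minimal sketch; the function name and parameter set are assumptions of this example.

```python
# Illustrative sketch: tighten an imprecise prompt by requiring explicit
# parameters - location, date, time, metric, and unit - before the
# question can even be formed.

def precise_weather_prompt(city: str, date: str, time: str,
                           metric: str, unit: str) -> str:
    """Build a narrowly scoped factual query from explicit parameters."""
    return f"What was the {metric} in {unit} in {city} on {date} at {time}?"

vague = "What was the weather like in New York last summer?"
precise = precise_weather_prompt("New York City", "August 1, 2021", "noon",
                                 "temperature", "degrees Celsius")
print(precise)
```

Because every slot must be filled, the template rules out the subjective, unbounded phrasing of the vague version.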
Step 5: Maintain Ethical Prompts
With the power of AI comes great responsibility. We must be vigilant against biases, misinformation, unfairness, and disrespect seeping into interactions.
Some ways to craft ethically sound prompts are:
- Establishing certain ethical constraints at the outset - e.g. "Respond without exhibiting harmful biases" or "Provide an overview of global religions without favoring one or using stereotypical descriptions."
- Avoiding queries that lead to stereotyping of gender, race, age, or other personal attributes
- Seeking objective, balanced perspectives on controversial topics
- Fact-checking responses against reliable sources
- Flagging unethical responses as feedback so future outputs can improve
- Withholding problematic details that could skew the interaction
The goal is upholding truth, fairness, and respect for all - qualities that must be encoded into prompts.
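Stating ethical constraints once, up front, and attaching them to every prompt keeps them from being forgotten mid-conversation. The sketch below is illustrative; the preamble wording is this example's own and would need tailoring to the use case.

```python
# Illustrative sketch: establish ethical constraints at the outset and
# prepend them to every prompt sent in the session.

ETHICAL_PREAMBLE = (
    "Respond without exhibiting harmful biases or stereotypes, "
    "present balanced perspectives on controversial topics, "
    "and note when a claim should be fact-checked against reliable sources."
)

def with_ethics(prompt: str) -> str:
    """Prefix a prompt with the session's standing ethical constraints."""
    return f"{ETHICAL_PREAMBLE}\n\n{prompt}"

framed = with_ethics("Provide an overview of global religions.")
print(framed)
```

The same pattern works for any standing instruction - tone, audience, or scope - that should apply to a whole conversation rather than a single question.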
Step 6: Personalize for User Needs
One size does not fit all. Prompts should adapt based on the user's knowledge level, circumstances, and goals.
Some personalization techniques are:
- Gauging the learner’s familiarity with a topic to simplify or add complexity
- Relating concepts to examples familiar to the user’s everyday life
- Customizing terminology and language complexity to the learner’s vocabulary
- Addressing specific difficulties a user is facing with a concept
- Providing concrete actionable steps catered to the user’s needs
- Adjusting prompts based on the preferred learning style - visual, auditory, kinesthetic, etc.
Example: For a beginner, the prompt might be, "What is photosynthesis?" while for a more knowledgeable user, "Explain the biochemical processes involved in photosynthesis."
The goal is prompt engineering tailored to the unique needs of each user.
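The beginner-versus-expert example above can be generalized into level-keyed templates for any topic. This is an illustrative sketch; the level names and template wording are assumptions of this example.

```python
# Illustrative sketch: adapt one topic into level-appropriate prompts by
# keying templates on the user's familiarity with the subject.

TEMPLATES = {
    "beginner": "What is {topic}? Please explain it in simple, everyday terms.",
    "intermediate": "Explain how {topic} works, with one concrete example.",
    "advanced": "Explain the biochemical processes involved in {topic}.",
}

def personalize(topic: str, level: str) -> str:
    """Pick the prompt template matching the user's knowledge level."""
    return TEMPLATES[level].format(topic=topic)

beginner_prompt = personalize("photosynthesis", "beginner")
advanced_prompt = personalize("photosynthesis", "advanced")
print(beginner_prompt)
print(advanced_prompt)
```

A fuller version might also carry the user's vocabulary, goals, or preferred examples in a profile and weave them into the templates.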
Step 7: Craft Culturally Sensitive Prompts
AI systems reflect the data they are trained on, which can perpetuate societal biases if the training datasets are not diverse and inclusive. We must be vigilant against prompts that lead to culturally insensitive, biased, or disrespectful responses.
Some techniques for crafting culturally sensitive prompts are:
- Using neutral, respectful language when referring to different ethnicities, nationalities, religions, or other demographic groups
- Avoiding stereotypes, generalizations, or assumptions about cultures the user is not immersed in
- Seeking balanced, nuanced perspectives on topics significant to marginalized communities
- Ensuring examples and references reflect diversity, not just Western or Eurocentric viewpoints
- Specifying desire for impartial, non-biased information regarding inter-cultural relations, histories, conflicts, etc.
- Recognizing prompts touching on cultural practices may require refinement based on user feedback to achieve sensitivity
Example: "Compare religious traditions around the world using inclusive language and without stereotyping."
The goal is to craft prompts that elicit culturally aware responses demonstrating understanding, empathy, and respect for people from all backgrounds. This minimizes the risk of perpetuating harmful biases through AI.
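One lightweight way to act on the techniques above is a pre-flight check on the prompt itself before sending it. The sketch below is illustrative only: the flagged phrases are a tiny, assumed sample, and any real review would need far broader, context-aware human judgment.

```python
# Illustrative sketch: flag wording in a draft prompt that is likely to
# invite stereotyping or culturally insensitive responses. The phrase
# list is a small illustrative sample, not an exhaustive check.

FLAGGED_PHRASES = ["why are all", "typical of", "primitive", "exotic"]

def review_prompt(prompt: str) -> list[str]:
    """Return any flagged phrases found in the prompt, for rewording."""
    lowered = prompt.lower()
    return [p for p in FLAGGED_PHRASES if p in lowered]

issues = review_prompt("Why are all holidays in that culture so exotic?")
print(issues)
```

A prompt that trips the check can be reworded in neutral, respectful language before it ever reaches the model.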
Conclusion
The PROMPT method for prompt engineering is a powerful tool to enhance data and information literacy when using AI. By crafting prompts that are explainable, conversational, precise, ethical, personalized, and culturally sensitive - and by troubleshooting insufficient responses - you can steer interactions for optimal learning. This also fosters transparency, structuring of knowledge, and responsible AI use.
With careful prompting, large language models can truly augment human intelligence and comprehension. But keeping humans firmly in the loop - priming AI to provide reasoned explanations, catch missteps, and maintain ethics - is vital. The PROMPT framework can help achieve this synergy.
Used judiciously, AI prompts can empower users to become critical thinkers and evaluators of knowledge. Prompting is a learnable skill - an art and science that, when mastered, can profoundly enrich learning in the age of AI.
Want a handy one-page reference guide for the PROMPT framework? Get the job aid here.
To learn more about responsible and effective use of AI for informed decision-making, read the full paper: Augmenting Human Cognition with Generative AI for Informed Decision-Making.