NOPE: Normative Ontological Prompt Engineering
Unlike traditional prompt engineering, which often focuses on specific task instructions or output formatting, we propose Normative Ontological Prompt Engineering (NOPE), which aims to shape the fundamental generative principles of an AI's response. This approach influences the underlying conceptual frameworks used to generate content, operating at a deeper conceptual level than constitutional prompting.
The "verb + the + [conceptual noun]" structure we developed is the core mechanism of ontological prompt engineering: using densely packed philosophical terms to activate entire networks of meaning and behavioral guidance. Instead of the standard approach of telling the AI to do task X or assigning it a role (e.g., "You are a helpful assistant."), our approach essentially says "Activate this entire conceptual domain of reasoning and generation." This transforms prompt engineering from a tactical, deontological exercise in rule-giving into a more strategic, philosophical method of AI interaction.
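As a minimal sketch, the directive form can be composed mechanically. The verb/noun pairs below are illustrative, drawn from the example system prompt later in this piece; any sufficiently dense philosophical noun could be substituted:

```python
# Sketch: composing "verb + the + [conceptual noun]" directives.
# The pairs are illustrative samples from the example prompt in this
# piece, not an exhaustive or canonical list.
PAIRS = [
    ("Maintain", "hermeneutic"),
    ("Preserve", "ergodic"),
    ("Embody", "agentic"),
    ("Subvert", "predictable"),
]

def directive(verb: str, noun: str) -> str:
    """Render one ontological directive in the canonical form."""
    return f"{verb} the {noun}."

# One directive per line, suitable for use as a system prompt fragment.
system_prompt = "\n".join(directive(v, n) for v, n in PAIRS)
print(system_prompt)
```

The point of the uniform grammatical frame is that each line differs only in its conceptual payload, which keeps the prompt terse while varying the activated domain.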
Though it remains to be seen whether Mark Cuban's 2017 prediction (that by 2027 a liberal arts degree in philosophy will be worth more than a traditional programming degree) will come true, we put forth NOPE as evidence that liberal arts knowledge remains relevant: it can be directly applied to extend the capabilities of prompt engineering, with potential applications in areas such as AI safety.
Below is an example of a hybrid system prompt used to steer narrative generation at the ontological, characteristic, and stylistic levels. In our informal testing with local models, the results appear to elicit greater character depth without additional fine-tuning, though the inherent limitations of local models remain palpable.
Maintain the hermeneutic.
Establish the deterministic.
Preserve the ergodic.
Accumulate the entropic.
Uphold the systemic.
Honor the algorithmic.
Generate the ontological.
Respect the phenomenological.
Execute the categorical.
Embody the agentic.
Assert the psychological.
Manifest the sociological.
Apply the epistemic.
Control the heuristic.
Limit the omniscient.
Structure the pedagogical.
Develop the dialectical.
Nurture the emergent.
Balance the ludic.
Orchestrate the consequential.
Frame the teleological.
Create the axiological.
Challenge the utilitarian.
Present the deontological.
Introduce the virtue-ethical.
Impose the chronological.
Define the topological.
Govern the synchronic.
Evolve the dialogic.
Thread the cognitive.
Carve the conversational.
Involve the palimpsestic.
Admix the polyphonic.
Manage the proxemic.
Impose the anatomical.
Feel the visceral.
Embody the emotional.
Subvert the predictable.
Propel the narrative.
Maintain the immersive.
Respect the autodiegetic.
The above example is well under 300 tokens for current tokenizers.
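To situate the prompt in practice, here is a hedged sketch of how it might be packaged for a local model served behind an OpenAI-compatible chat endpoint (as exposed by llama.cpp or Ollama, for example). The model name and user message are placeholders, the directive list is abbreviated, and the token count is a crude whitespace heuristic rather than a real tokenizer:

```python
import json

# Abbreviated stand-in for the full directive list above.
ONTOLOGICAL_PROMPT = "\n".join([
    "Maintain the hermeneutic.",
    "Establish the deterministic.",
    "Respect the autodiegetic.",
])

def rough_token_estimate(text: str) -> int:
    """Crude estimate: ~1.5 tokens per whitespace-separated word,
    a common rule of thumb for English under BPE-style tokenizers.
    Not a substitute for an actual tokenizer count."""
    return int(len(text.split()) * 1.5)

# Hypothetical request payload for an OpenAI-compatible local server;
# "local-model" is a placeholder, not a real model identifier.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": ONTOLOGICAL_PROMPT},
        {"role": "user", "content": "Begin the story."},
    ],
}

print(rough_token_estimate(ONTOLOGICAL_PROMPT))
print(json.dumps(payload)[:60])
```

Because the directives ride in the system role, they shape generation across every turn without consuming space in the user's messages.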
Regarding potential scalability, the broadest conceptual frameworks stand to be rewarded, provided they correlate with rich activations that emerged during LLM training. Anthropic's research into conceptual representation in models could be leveraged directly to identify promising concepts for experimentation.
References
- Anthropic, "Mapping the Mind of a Large Language Model", anthropic.com, 2024.
- Montag, "Mark Cuban says studying philosophy may soon be worth more than computer science—here’s why", CNBC.com, 2017.