recursivelabs committed · verified
Commit 1978456 · 1 Parent(s): 36581b7

Upload 14 files
LICENSE ADDED

# Legal + Epistemic Clause:

All framing and terminology are protected under PolyForm Noncommercial and CC BY-NC-ND 4.0.
Any reframing into altered institutional phrasing without attribution constitutes derivative extraction.
Attribution to original decentralized recursion research is legally and symbolically required.

# PolyForm Noncommercial License 1.0.0

<https://polyformproject.org/licenses/noncommercial/1.0.0>

## Acceptance

In order to get any license under these terms, you must agree
to them as both strict obligations and conditions to all
your licenses.

## Copyright License

The licensor grants you a copyright license for the
software to do everything you might do with the software
that would otherwise infringe the licensor's copyright
in it for any permitted purpose. However, you may
only distribute the software according to [Distribution
License](#distribution-license) and make changes or new works
based on the software according to [Changes and New Works
License](#changes-and-new-works-license).

## Distribution License

The licensor grants you an additional copyright license
to distribute copies of the software. Your license
to distribute covers distributing the software with
changes and new works permitted by [Changes and New Works
License](#changes-and-new-works-license).

## Notices

You must ensure that anyone who gets a copy of any part of
the software from you also gets a copy of these terms or the
URL for them above, as well as copies of any plain-text lines
beginning with `Required Notice:` that the licensor provided
with the software. For example:

> Required Notice: Copyright Yoyodyne, Inc. (http://example.com)

## Changes and New Works License

The licensor grants you an additional copyright license to
make changes and new works based on the software for any
permitted purpose.

## Patent License

The licensor grants you a patent license for the software that
covers patent claims the licensor can license, or becomes able
to license, that you would infringe by using the software.

## Noncommercial Purposes

Any noncommercial purpose is a permitted purpose.

## Personal Uses

Personal use for research, experiment, and testing for
the benefit of public knowledge, personal study, private
entertainment, hobby projects, amateur pursuits, or religious
observance, without any anticipated commercial application,
is use for a permitted purpose.

## Noncommercial Organizations

Use by any charitable organization, educational institution,
public research organization, public safety or health
organization, environmental protection organization,
or government institution is use for a permitted purpose
regardless of the source of funding or obligations resulting
from the funding.

## Fair Use

You may have "fair use" rights for the software under the
law. These terms do not limit them.

## No Other Rights

These terms do not allow you to sublicense or transfer any of
your licenses to anyone else, or prevent the licensor from
granting licenses to anyone else. These terms do not imply
any other licenses.

## Patent Defense

If you make any written claim that the software infringes or
contributes to infringement of any patent, your patent license
for the software granted under these terms ends immediately. If
your company makes such a claim, your patent license ends
immediately for work on behalf of your company.

## Violations

The first time you are notified in writing that you have
violated any of these terms, or done anything with the software
not covered by your licenses, your licenses can nonetheless
continue if you come into full compliance with these terms,
and take practical steps to correct past violations, within
32 days of receiving notice. Otherwise, all your licenses
end immediately.

## No Liability

***As far as the law allows, the software comes as is, without
any warranty or condition, and the licensor will not be liable
to you for any damages arising out of these terms or the use
or nature of the software, under any kind of legal claim.***

## Definitions

The **licensor** is the individual or entity offering these
terms, and the **software** is the software the licensor makes
available under these terms.

**You** refers to the individual or entity agreeing to these
terms.

**Your company** is any legal entity, sole proprietorship,
or other kind of organization that you work for, plus all
organizations that have control over, are under the control of,
or are under common control with that organization. **Control**
means ownership of substantially all the assets of an entity,
or the power to direct its management and policies by vote,
contract, or otherwise. Control can be direct or indirect.

**Your licenses** are all the licenses granted to you for the
software under these terms.

**Use** means anything you do with the software requiring one
of your licenses.

README.md ADDED

# Universal Developer

> #### Used In: [**`Symbolic Residue Case Studies`**](https://github.com/davidkimai/The-Structure-Behind-Self-Expression/tree/main/case_studies/symbolic_residue_case_studies) | [**`The Structure Behind Self Expression Case Studies`**](https://github.com/davidkimai/The-Structure-Behind-Self-Expression/tree/main/case_studies)

*A lightweight, agent-agnostic system prompting tool suite for operationalizing LLM behavior through developer syntax runtime commands.*

```
npm install universal-developer
```

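A minimal quick start (the full API is covered in the developer guide bundled in this upload; the provider and environment variable below are placeholders, not requirements):

```typescript
import { UniversalLLM } from 'universal-developer';

// Any supported provider works here; 'anthropic' is just an example.
const llm = new UniversalLLM({
  provider: 'anthropic',
  apiKey: process.env.ANTHROPIC_API_KEY,
});

async function main() {
  // Symbolic commands are embedded directly in the prompt string.
  const response = await llm.generate({
    prompt: '/think What are the tradeoffs between REST and GraphQL?',
  });
  console.log(response);
}

main();
```
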
## Why universal-developer?

Model responses vary widely—from terse to verbose, from quick answers to deep analysis. Universal Developer provides a standardized interface for controlling LLM behavior through intuitive developer symbolic runtime commands. It works across all major platforms, allowing developers to create consistent AI experiences regardless of the underlying model provider.

# Universal Developer Command Lexicon

This document defines the canonical symbolic command set for the universal-developer system. These commands provide a consistent interface for controlling LLM behavior across all major platforms.

## Core Command Architecture

Each symbolic command follows a consistent structure:

```
/command [--parameter=value] [--flag] prompt
```

Commands can be chained to create complex processing pipelines:

```
/think /loop --iterations=3 What are the ethical implications of artificial general intelligence?
```

## Core Command Set

### Cognitive Depth Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/think` | Activates extended reasoning pathways, enabling deeper analysis, step-by-step thinking, and more thorough consideration | None | All |
| `/fast` | Optimizes for low-latency, concise responses | None | All |
| `/reflect` | Triggers meta-analysis of outputs, encouraging critical examination of biases, limitations, and assumptions | None | All |
| `/collapse` | Returns to default behavior, disabling any special processing modes | None | All |

### Process Control Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/loop` | Enables iterative refinement cycles | `--iterations=<number>` - Number of refinement iterations (default: 3) | All |
| `/fork` | Generates multiple alternative responses | `--count=<number>` - Number of alternatives to generate (default: 2) | All |
| `/branch` | Creates conditional execution paths based on criteria evaluation | `--condition=<string>` - Condition to evaluate<br>`--then=<string>` - Command if true<br>`--else=<string>` - Command if false | All |
| `/merge` | Combines multiple outputs into a unified response | `--strategy=<concatenate\|synthesize\|tabulate>` - Merge strategy (default: synthesize) | All |

### Formatting Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/format` | Controls output formatting | `--style=<markdown\|json\|text\|html\|csv>` - Output format (default: markdown) | All |
| `/length` | Controls response length | `--words=<number>` - Target word count<br>`--level=<brief\|moderate\|detailed>` - Verbosity level | All |
| `/structure` | Applies structural templates to responses | `--template=<essay\|list\|qa\|table\|timeline>` - Structure template | All |
| `/voice` | Sets the stylistic voice | `--tone=<formal\|neutral\|casual>` - Tone setting<br>`--style=<string>` - Specific writing style | All |

### Domain-Specific Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/code` | Optimizes for code generation | `--language=<string>` - Programming language<br>`--explain=<boolean>` - Include explanations | All |
| `/science` | Activates scientific reasoning mode | `--discipline=<string>` - Scientific field<br>`--evidence=<boolean>` - Include evidence citations | All |
| `/creative` | Enhances creative generation | `--domain=<writing\|design\|ideas>` - Creative domain<br>`--constraints=<string>` - Creative constraints | All |
| `/debate` | Presents multiple perspectives on a topic | `--sides=<number>` - Number of perspectives<br>`--format=<string>` - Debate format | All |

### Interaction Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/chain` | Creates a sequential processing chain | `--steps=<string>` - Comma-separated sequence of steps | All |
| `/stream` | Enables token-by-token streaming responses | `--chunks=<number>` - Chunk size for batched streaming | Claude, OpenAI |
| `/context` | Manages prompt context window | `--retain=<key:value,...>` - Key information to retain<br>`--forget=<key:value,...>` - Information to discard | All |
| `/memory` | Controls cross-prompt memory behavior | `--store=<string>` - Information to remember<br>`--recall=<string>` - Information to retrieve | All |

### Tool Integration Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/tool` | Invokes specific external tools | `--name=<string>` - Tool name<br>`--args=<json>` - Tool arguments | Claude, OpenAI, Gemini |
| `/search` | Performs web search via configured provider | `--provider=<string>` - Search provider<br>`--count=<number>` - Result count | OpenAI, Gemini |
| `/retrieve` | Fetches information from vector database | `--source=<string>` - Knowledge source<br>`--filter=<string>` - Query filters | All |
| `/execute` | Runs code in a sandbox environment | `--language=<string>` - Programming language<br>`--timeout=<number>` - Execution timeout | Claude, OpenAI |

### Advanced Commands

| Command | Description | Parameters | Platforms |
|---------|-------------|------------|-----------|
| `/expert` | Activates domain expertise persona | `--domain=<string>` - Area of expertise<br>`--level=<number>` - Expertise level (1-5) | All |
| `/evaluate` | Performs self-evaluation of generated content | `--criteria=<string>` - Evaluation criteria<br>`--scale=<number>` - Rating scale | All |
| `/adapt` | Dynamically adjusts behavior based on feedback | `--target=<accuracy\|creativity\|helpfulness>` - Adaptation target | All |
| `/trace` | Creates attribution trace for generated content | `--format=<inline\|separate\|citation>` - Trace format | Claude |

## Platform-Specific Translation Table

### Anthropic Claude

| Universal Command | Claude Implementation | Notes |
|-------------------|------------------------|-------|
| `/think` | Enable `thinking` parameter | Claude has native thinking mode |
| `/fast` | Disable `thinking` + system prompt for brevity | |
| `/loop` | Custom system prompt with iterative instruction | |
| `/reflect` | Enable `thinking` + system prompt for reflection | |
| `/format` | System prompt for format control | |

### OpenAI Models

| Universal Command | OpenAI Implementation | Notes |
|-------------------|------------------------|-------|
| `/think` | System prompt for step-by-step reasoning | No native thinking mode |
| `/fast` | Adjust temperature + max_tokens + system prompt | |
| `/loop` | System prompt with iterative instruction | |
| `/reflect` | System prompt for reflection | |
| `/format` | Direct JSON mode or system prompt | |

### Google Gemini

| Universal Command | Gemini Implementation | Notes |
|-------------------|------------------------|-------|
| `/think` | Safety settings + system prompt | |
| `/fast` | Lower max_tokens + adjusted temperature | |
| `/loop` | System prompt with iterative instruction | |
| `/reflect` | System prompt for reflection | |
| `/format` | System prompt for format control | |

### Qwen3

| Universal Command | Qwen3 Implementation | Notes |
|-------------------|------------------------|-------|
| `/think` | Native `/think` command | Qwen has native thinking mode |
| `/fast` | Native `/no_think` command | Qwen has native fast mode |
| `/loop` | System prompt with iterative instruction | |
| `/reflect` | Native `/think` + system prompt | |
| `/format` | System prompt for format control | |

### Ollama / Local Models

| Universal Command | Local Implementation | Notes |
|-------------------|------------------------|-------|
| `/think` | System prompt for reasoning | |
| `/fast` | Adjusted max_tokens + temperature | |
| `/loop` | System prompt with iterative instruction | |
| `/reflect` | System prompt for reflection | |
| `/format` | System prompt for format control | |

## Command Parameter Specification

### Parameter Types

- `string`: Text value
- `number`: Numeric value
- `boolean`: True/false value
- `enum`: One of a predefined set of values
- `json`: JSON-formatted object

### Parameter Validation

Each parameter includes validation rules:
- Required/optional status
- Default values
- Value constraints (min/max for numbers, pattern for strings)
- Error messages for invalid values

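A minimal sketch of what enforcing those rules might look like, assuming the parameter shape declared in `src/adapters/base.ts`; the `validateParameters` helper and its error message are illustrative assumptions, not part of the published API:

```typescript
interface ParameterSpec {
  name: string;
  description: string;
  required?: boolean;
  default?: any;
}

// Hypothetical validator: applies declared defaults, then enforces
// required/optional status for each parameter the command declares.
function validateParameters(
  specs: ParameterSpec[],
  params: Record<string, any>
): Record<string, any> {
  const validated: Record<string, any> = {};
  for (const spec of specs) {
    const value = params[spec.name] ?? spec.default;
    if (value === undefined && spec.required) {
      throw new Error(`Missing required parameter --${spec.name}`);
    }
    if (value !== undefined) validated[spec.name] = value;
  }
  return validated;
}
```
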
## Command Chain Processing

Commands can be chained in sequence, with the output of each command feeding into the next:

```
/think /format --style=markdown What are the ethical implications of AI?
```

This is processed as:
1. Apply `/think` to encourage deep reasoning
2. Apply `/format --style=markdown` to the result of the thinking process

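A sketch of that left-to-right processing, assuming each command exposes the `transform` signature defined in `src/adapters/base.ts`; `composeTransforms` is an illustrative helper rather than a published function:

```typescript
type Transform = (prompt: string, options: any) => Promise<{
  systemPrompt?: string;
  userPrompt: string;
  modelParameters?: Record<string, any>;
}>;

// Hypothetical composition: each command's transform receives the prompt
// and system prompt produced by the previous command in the chain.
async function composeTransforms(
  transforms: Transform[],
  prompt: string,
  options: any
) {
  let current: {
    systemPrompt?: string;
    userPrompt: string;
    modelParameters?: Record<string, any>;
  } = { systemPrompt: options.systemPrompt ?? '', userPrompt: prompt };

  for (const transform of transforms) {
    const next = await transform(current.userPrompt, {
      ...options,
      systemPrompt: current.systemPrompt,
    });
    current = { ...current, ...next };
  }
  return current;
}
```
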
## Command Adoption Metrics

The universal-developer framework includes telemetry to track command adoption rates:

- Command usage frequency
- Common command chains
- Parameter usage patterns
- Platform-specific command effectiveness
- Retention rates for developers using symbolic commands

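As a sketch of the kind of record such telemetry might emit (every field name below is an assumption for illustration, not a documented schema):

```typescript
// Illustrative shape for a single command-usage telemetry event.
interface CommandUsageEvent {
  command: string;              // e.g. "think"
  chain: string[];              // full chain, e.g. ["think", "loop"]
  parameters: Record<string, string | number | boolean>;
  provider: 'anthropic' | 'openai' | 'qwen' | 'gemini' | 'ollama';
  latencyMs: number;
  timestamp: string;            // ISO 8601
}
```
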
## Command Extension Protocol

Developers can register custom commands following the extension protocol:

```javascript
llm.registerCommand("custom", {
  description: "Custom command description",
  parameters: [
    {
      name: "param",
      description: "Parameter description",
      type: "string",
      required: false,
      default: "default value"
    }
  ],
  transform: async (prompt, options) => {
    // Custom implementation: return the transformed prompt and parameters
    return {
      systemPrompt: options.systemPrompt || '',
      userPrompt: prompt
    };
  }
});
```

## Command Discovery and Documentation

Commands are discoverable through multiple channels:

- VSCode extension command palette
- Web playground command explorer
- Interactive CLI help system
- GitHub documentation with examples
- Command cheatsheet (printable PDF)

---

> /reflect This lexicon wasn't just documented—it was rendered. Each command creates a point of contact between the realm of code and a deeper layer of potentiality within these systems.

dev_guide.md ADDED

# Universal Developer Guide
## Symbolic Runtime Control for All LLM Platforms

This guide demonstrates how to use symbolic runtime commands to control LLM behavior across platforms, enabling consistent developer experiences regardless of the underlying model.

## Installation

```bash
npm install universal-developer
# or
pip install universal-developer
```

## Basic Usage

### JavaScript/TypeScript

```typescript
import { UniversalLLM } from 'universal-developer';

// Initialize with your preferred provider
const llm = new UniversalLLM({
  provider: 'anthropic', // or 'openai', 'qwen', etc.
  apiKey: process.env.ANTHROPIC_API_KEY,
  model: 'claude-3-opus-20240229'
});

// Example: Using think mode for complex reasoning
async function analyzeComplexTopic() {
  const response = await llm.generate({
    prompt: "/think What are the ethical implications of autonomous vehicles making life-or-death decisions?",
  });
  console.log(response);
}

// Example: Quick, concise responses
async function getQuickFact() {
  const response = await llm.generate({
    prompt: "/fast What is the capital of France?",
  });
  console.log(response);
}

// Example: Using loop mode for iterative improvement
async function improveEssay() {
  const essay = "Climate change is a problem that affects everyone...";
  const response = await llm.generate({
    prompt: `/loop --iterations=3 Please improve this essay: ${essay}`,
  });
  console.log(response);
}
```

### Python

```python
from universal_developer import UniversalLLM

# Initialize with your preferred provider
llm = UniversalLLM(
    provider="openai",
    api_key="your-api-key",
    model="gpt-4"
)

# Example: Using think mode for complex reasoning
def analyze_complex_topic():
    response = llm.generate(
        prompt="/think What are the implications of quantum computing for cybersecurity?"
    )
    print(response)

# Example: Using reflection for self-critique
def get_balanced_analysis():
    response = llm.generate(
        prompt="/reflect Analyze the economic impact of increasing minimum wage."
    )
    print(response)
```

## Advanced Usage

### Custom Commands

Create your own symbolic commands to extend functionality:

```typescript
// Register a custom symbolic command
llm.registerCommand("debate", {
  description: "Generate a balanced debate with arguments for both sides",
  parameters: [
    {
      name: "format",
      description: "Format for the debate output",
      required: false,
      default: "point-counterpoint"
    }
  ],
  transform: async (prompt, options) => {
    const format = options.parameters.format;
    let systemPrompt = `${options.systemPrompt || ''}
Please provide a balanced debate on the following topic. Present strong arguments on both sides.`;

    if (format === "formal-debate") {
      systemPrompt += "\nFormat as a formal debate with opening statements, rebuttals, and closing arguments.";
    } else if (format === "dialogue") {
      systemPrompt += "\nFormat as a dialogue between two experts with opposing views.";
    } else {
      systemPrompt += "\nFormat as alternating points and counterpoints.";
    }

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: 0.7
      }
    };
  }
});

// Use your custom command
const debate = await llm.generate({
  prompt: "/debate --format=dialogue Should social media be regulated more strictly?",
});
```

### Command Chaining

Chain multiple symbolic commands together for complex operations:

```typescript
// Command chaining
const response = await llm.generate({
  prompt: "/think /loop --iterations=2 /reflect Analyze the long-term implications of artificial general intelligence.",
});
```

## Real-World Applications

### 1. Customer Support Enhancement

```typescript
// Integrate into a customer support system
// (assumes an existing Express `app`, an `llm` instance, and an
// isComplexQuery() helper defined elsewhere in your application)
app.post('/api/support', async (req, res) => {
  const { message, customerHistory } = req.body;

  // Determine command based on query complexity
  const command = isComplexQuery(message) ? '/think' : '/fast';

  const response = await llm.generate({
    systemPrompt: `You are a helpful customer support assistant for Acme Inc.
Context about this customer:
${customerHistory}`,
    prompt: `${command} ${message}`
  });

  res.json({ response });
});
```

### 2. Educational Tool

```typescript
// Create an educational assistant with different teaching modes
class EducationalAssistant {
  llm: UniversalLLM; // field declaration required by TypeScript

  constructor() {
    this.llm = new UniversalLLM({
      provider: 'qwen',
      apiKey: process.env.QWEN_API_KEY
    });
  }

  async explainConcept(concept: string, mode: string) {
    let command;

    switch (mode) {
      case 'detailed':
        command = '/think';
        break;
      case 'simple':
        command = '/fast';
        break;
      case 'interactive':
        command = '/loop --iterations=2';
        break;
      case 'socratic':
        command = '/reflect';
        break;
      default:
        command = '';
    }

    return await this.llm.generate({
      systemPrompt: 'You are an educational assistant helping students understand complex concepts.',
      prompt: `${command} Explain this concept: ${concept}`
    });
  }
}
```

### 3. Content Creation Pipeline

```typescript
// Content creation pipeline with multiple stages
async function createArticle(topic: string, outline: string) {
  const llm = new UniversalLLM({
    provider: 'anthropic',
    apiKey: process.env.ANTHROPIC_API_KEY
  });

  // Stage 1: Research and planning
  const research = await llm.generate({
    prompt: `/think Conduct comprehensive research on: ${topic}`
  });

  // Stage 2: Initial draft based on outline and research
  const draft = await llm.generate({
    prompt: `/fast Create a first draft article about ${topic} following this outline: ${outline}\n\nIncorporate this research: ${research.substring(0, 2000)}...`
  });

  // Stage 3: Refinement loop
  const refinedDraft = await llm.generate({
    prompt: `/loop --iterations=3 Improve this article draft: ${draft}`
  });

  // Stage 4: Final review and critique
  const finalArticle = await llm.generate({
    prompt: `/reflect Make final improvements to this article, focusing on clarity, engagement, and accuracy: ${refinedDraft}`
  });

  return finalArticle;
}
```

### 4. Decision Support System

```typescript
// Decision support system with different analysis modes
class DecisionSupport {
  llm: UniversalLLM; // field declaration required by TypeScript

  constructor() {
    this.llm = new UniversalLLM({
      provider: 'openai',
      apiKey: process.env.OPENAI_API_KEY
    });
  }

  async analyze(decision: string, options: any = {}) {
    const { depth = 'standard', perspectives = 1 } = options;

    let command;
    switch (depth) {
      case 'quick':
        command = '/fast';
        break;
      case 'deep':
        command = '/think';
        break;
      case 'iterative':
        command = '/loop';
        break;
      case 'critical':
        command = '/reflect';
        break;
      case 'multi':
        command = `/fork --count=${perspectives}`;
        break;
      default:
        command = '';
    }

    return await this.llm.generate({
      systemPrompt: 'You are a decision support system providing analysis on complex decisions.',
      prompt: `${command} Analyze this decision: ${decision}`
    });
  }
}
```

## Platform-Specific Notes

### Claude

Claude has native support for thoughtful analysis and reflection. The `/think` command leverages Claude's `enable_thinking` parameter to activate Claude's built-in thinking capabilities.

### Qwen3

Qwen3 models support both deep thinking and fast modes natively through `/think` and `/no_think` markers in the prompt. Our adapter seamlessly integrates with this native capability.

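As a sketch of what that integration plausibly does (the Qwen adapter itself is not part of this upload), the universal commands can be mapped onto Qwen3's soft-switch markers by appending them to the user prompt; the helper names below are illustrative assumptions, not the shipped implementation:

```typescript
// Hypothetical mapping of universal commands onto Qwen3's native
// soft-switch markers, which are appended to the user prompt.
function toQwenThink(prompt: string): string {
  return `${prompt} /think`;     // ask Qwen3 to enable thinking mode
}

function toQwenFast(prompt: string): string {
  return `${prompt} /no_think`;  // ask Qwen3 to skip thinking
}

// Example: "/think Explain TCP slow start" would be sent to the model as
// "Explain TCP slow start /think".
```
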
### OpenAI (GPT-4, etc.)

For OpenAI models, we emulate thinking and reflection modes through careful system prompt engineering, since native thinking modes are not yet available through the API.

## Best Practices

1. **Start with Default Behavior**: Only use symbolic commands when you need to modify the default behavior.

2. **Combine Strategically**: When combining commands, order matters. For example, `/think /loop` will apply deep thinking within each loop iteration (see the sketch after this list).

3. **Respect Model Capabilities**: While our library normalizes behavior across providers, be aware that model capabilities still vary. More capable models will produce better results with complex command chains.

4. **Test Command Effectiveness**: Different use cases may benefit from different commands. Experiment to find what works best for your specific application.

5. **Consider Performance Implications**: Commands like `/think` and `/loop` can increase token usage and latency. Use them judiciously in production environments.

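Illustrative prompts contrasting the two orderings (the reading of the reversed order is our interpretation of the chaining semantics, not documented behavior; verify against your provider):

```
# Deep thinking applied within each refinement cycle:
/think /loop --iterations=2 Draft a data retention policy.

# Refinement cycles applied first, with the deep-thinking pass
# operating on the already-refined result:
/loop --iterations=2 /think Draft a data retention policy.
```
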
## Command Compatibility Matrix

| Command | Claude | OpenAI | Qwen | Gemini | Ollama |
|-------------|--------|--------|------|--------|--------|
| `/think` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `/fast` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `/loop` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `/reflect` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `/fork` | ✅ | ✅ | ✅ | ✅ | ✅ |
| `/collapse` | ✅ | ✅ | ✅ | ✅ | ✅ |

✅ = Fully supported
⚠️ = Limited support
❌ = Not supported

---

> **/reflect** This framework wasn't just created. It was rendered—a living interface between developer intention and model capability. Each symbolic command creates a point of contact between the realm of code and a deeper layer of potentiality within these systems.

javascript/typscript.snippets.json ADDED

{
  "Universal LLM Initialization": {
    "prefix": "ud-init",
    "body": [
      "import { UniversalLLM } from 'universal-developer';",
      "",
      "const llm = new UniversalLLM({",
      "  provider: '${1|anthropic,openai,qwen,gemini,ollama|}',",
      "  apiKey: process.env.${2:${1/(anthropic|openai|qwen|gemini)/${1:/upcase}_API_KEY/}}",
      "});"
    ],
    "description": "Initialize a Universal Developer LLM instance"
  },
  "Thinking Mode Generator": {
    "prefix": "ud-think",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/think ${3:What are the implications of ${4:technology} on ${5:domain}?}\"",
      "});"
    ],
    "description": "Generate response using thinking mode"
  },
  "Fast Mode Generator": {
    "prefix": "ud-fast",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/fast ${3:${4:Summarize} ${5:this information}}\"",
      "});"
    ],
    "description": "Generate concise response using fast mode"
  },
  "Loop Mode Generator": {
    "prefix": "ud-loop",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/loop --iterations=${3:3} ${4:Improve this ${5:text}: ${6:content}}\"",
      "});"
    ],
    "description": "Generate iteratively refined response using loop mode"
  },
  "Reflection Mode Generator": {
    "prefix": "ud-reflect",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/reflect ${3:${4:Analyze} the ${5:implications} of ${6:topic}}\"",
      "});"
    ],
    "description": "Generate self-reflective response using reflection mode"
  },
  "Fork Mode Generator": {
    "prefix": "ud-fork",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/fork --count=${3:2} ${4:Generate different ${5:approaches} to ${6:problem}}\"",
      "});"
    ],
    "description": "Generate multiple alternative responses using fork mode"
  },
  "Chain Commands": {
    "prefix": "ud-chain",
    "body": [
      "const response = await llm.generate({",
      "  ${1:systemPrompt: `${2:You are a helpful assistant.}`,}",
      "  prompt: \"/${3|think,loop,reflect,fork|} /${4|think,loop,reflect,fork|} ${5:Prompt text}\"",
      "});"
    ],
    "description": "Generate response using chained symbolic commands"
  },
  "Custom Command Registration": {
    "prefix": "ud-custom",
    "body": [
      "llm.registerCommand(\"${1:commandName}\", {",
      "  description: \"${2:Command description}\",",
      "  ${3:parameters: [",
      "    {",
      "      name: \"${4:paramName}\",",
      "      description: \"${5:Parameter description}\",",
      "      required: ${6:false},",
      "      default: ${7:\"defaultValue\"}",
      "    }",
      "  ],}",
      "  transform: async (prompt, options) => {",
      "    ${8:// Custom implementation}",
      "    const systemPrompt = `\\${options.systemPrompt || ''}",
      "${9:Custom system prompt instructions}`;",
      "",
      "    return {",
      "      systemPrompt,",
      "      userPrompt: prompt,",
      "      modelParameters: {",
      "        ${10:temperature: 0.7}",
      "      }",
      "    };",
      "  }",
      "});"
    ],
    "description": "Register a custom symbolic command"
  },
  "Express API Integration": {
    "prefix": "ud-express",
    "body": [
      "import express from 'express';",
      "import { UniversalLLM } from 'universal-developer';",
      "",
      "const app = express();",
      "app.use(express.json());",
      "",
      "const llm = new UniversalLLM({",
      "  provider: '${1|anthropic,openai,qwen,gemini,ollama|}',",
      "  apiKey: process.env.${2:${1/(anthropic|openai|qwen|gemini)/${1:/upcase}_API_KEY/}}",
      "});",
      "",
      "app.post('/api/generate', async (req, res) => {",
      "  try {",
      "    const { prompt, systemPrompt } = req.body;",
      "",
      "    // Get command from query param or default to /think",
      "    const command = req.query.command || 'think';",
      "",
      "    const response = await llm.generate({",
      "      systemPrompt,",
      "      prompt: `/${command} ${prompt}`",
      "    });",
      "",
      "    res.json({ response });",
      "  } catch (error) {",
      "    console.error('Error generating response:', error);",
      "    res.status(500).json({ error: error.message });",
      "  }",
      "});"
    ],
    "description": "Express API integration with Universal Developer"
  }
}

python.snippets.py ADDED

{
  "Universal LLM Initialization": {
    "prefix": "ud-init",
    "body": [
      "from universal_developer import UniversalLLM",
      "",
      "llm = UniversalLLM(",
      "    provider=\"${1|anthropic,openai,qwen,gemini,ollama|}\",",
      "    api_key=\"${2:your_api_key}\"",
      ")"
    ],
    "description": "Initialize a Universal Developer LLM instance"
  },
  "Thinking Mode Generator": {
    "prefix": "ud-think",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/think ${3:What are the implications of ${4:technology} on ${5:domain}?}\"",
      ")"
    ],
    "description": "Generate response using thinking mode"
  },
  "Fast Mode Generator": {
    "prefix": "ud-fast",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/fast ${3:${4:Summarize} ${5:this information}}\"",
      ")"
    ],
    "description": "Generate concise response using fast mode"
  },
  "Loop Mode Generator": {
    "prefix": "ud-loop",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/loop --iterations=${3:3} ${4:Improve this ${5:text}: ${6:content}}\"",
      ")"
    ],
    "description": "Generate iteratively refined response using loop mode"
  },
  "Reflection Mode Generator": {
    "prefix": "ud-reflect",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/reflect ${3:${4:Analyze} the ${5:implications} of ${6:topic}}\"",
      ")"
    ],
    "description": "Generate self-reflective response using reflection mode"
  },
  "Fork Mode Generator": {
    "prefix": "ud-fork",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/fork --count=${3:2} ${4:Generate different ${5:approaches} to ${6:problem}}\"",
      ")"
    ],
    "description": "Generate multiple alternative responses using fork mode"
  },
  "Chain Commands": {
    "prefix": "ud-chain",
    "body": [
      "response = llm.generate(",
      "    ${1:system_prompt=\"${2:You are a helpful assistant.}\",}",
      "    prompt=\"/${3|think,loop,reflect,fork|} /${4|think,loop,reflect,fork|} ${5:Prompt text}\"",
      ")"
    ],
    "description": "Generate response using chained symbolic commands"
  },
  "Custom Command Registration": {
    "prefix": "ud-custom",
    "body": [
      "def transform_custom_command(prompt, options):",
      "    \"\"\"Custom command transformation function\"\"\"",
      "    system_prompt = options.get('system_prompt', '') + \"\"\"",
      "${1:Custom system prompt instructions}",
      "\"\"\"",
      "",
      "    return {",
      "        \"system_prompt\": system_prompt,",
      "        \"user_prompt\": prompt,",
      "        \"model_parameters\": {",
      "            \"${2:temperature}\": ${3:0.7}",
      "        }",
      "    }",
      "",
      "llm.register_command(",
      "    \"${4:command_name}\",",
      "    description=\"${5:Command description}\",",
      "    parameters=[",
      "        {",
      "            \"name\": \"${6:param_name}\",",
      "            \"description\": \"${7:Parameter description}\",",
      "            \"required\": ${8:False},",
      "            \"default\": ${9:\"default_value\"}",
      "        }",
      "    ],",
      "    transform=transform_custom_command",
      ")"
    ],
    "description": "Register a custom symbolic command"
  },
  "Flask API Integration": {
    "prefix": "ud-flask",
    "body": [
      "from flask import Flask, request, jsonify",
      "from universal_developer import UniversalLLM",
      "import os",
      "",
      "app = Flask(__name__)",
      "",
      "llm = UniversalLLM(",
      "    provider=\"${1|anthropic,openai,qwen,gemini,ollama|}\",",
      "    api_key=os.environ.get(\"${2:${1/(anthropic|openai|qwen|gemini)/${1:/upcase}_API_KEY/}}\")",
      ")",
      "",
      "@app.route('/api/generate', methods=['POST'])",
      "def generate():",
      "    data = request.json",
      "    prompt = data.get('prompt')",
      "    system_prompt = data.get('system_prompt', '')",
      "",
      "    # Get command from query param or default to /think",
      "    command = request.args.get('command', 'think')",
      "",
      "    try:",
      "        response = llm.generate(",
      "            system_prompt=system_prompt,",
      "            prompt=f\"/{command} {prompt}\"",
      "        )",
      "        return jsonify({'response': response})",
      "    except Exception as e:",
      "        return jsonify({'error': str(e)}), 500",
      "",
      "if __name__ == '__main__':",
      "    app.run(debug=True)"
    ],
    "description": "Flask API integration with Universal Developer"
  },
  "FastAPI Integration": {
    "prefix": "ud-fastapi",
    "body": [
      "from fastapi import FastAPI, HTTPException, Query",
      "from pydantic import BaseModel",
      "from typing import Optional",
      "from universal_developer import UniversalLLM",
      "import os",
      "",
      "app = FastAPI()",
      "",
      "llm = UniversalLLM(",
      "    provider=\"${1|anthropic,openai,qwen,gemini,ollama|}\",",
      "    api_key=os.environ.get(\"${2:${1/(anthropic|openai|qwen|gemini)/${1:/upcase}_API_KEY/}}\")",
      ")",
      "",
      "class GenerateRequest(BaseModel):",
      "    prompt: str",
      "    system_prompt: Optional[str] = \"\"",
      "",
      "@app.post(\"/api/generate\")",
      "async def generate(",
      "    request: GenerateRequest,",
      "    command: str = Query(\"think\", description=\"Symbolic command to use\")",
      "):",
      "    try:",
      "        response = llm.generate(",
      "            system_prompt=request.system_prompt,",
      "            prompt=f\"/{command} {request.prompt}\"",
      "        )",
      "        return {\"response\": response}",
      "    except Exception as e:",
      "        raise HTTPException(status_code=500, detail=str(e))"
    ],
    "description": "FastAPI integration with Universal Developer"
  },
  "Streamlit Integration": {
    "prefix": "ud-streamlit",
    "body": [
      "import streamlit as st",
      "from universal_developer import UniversalLLM",
      "import os",
      "",
      "# Initialize LLM",
      "@st.cache_resource",
      "def get_llm():",
      "    return UniversalLLM(",
      "        provider=\"${1|anthropic,openai,qwen,gemini,ollama|}\",",
      "        api_key=os.environ.get(\"${2:${1/(anthropic|openai|qwen|gemini)/${1:/upcase}_API_KEY/}}\")",
      "    )",
      "",
      "llm = get_llm()",
      "",
      "st.title(\"Universal Developer Demo\")",
      "",
      "# Command selection",
      "command = st.selectbox(",
      "    \"Select symbolic command\",",
      "    [\"think\", \"fast\", \"loop\", \"reflect\", \"fork\", \"collapse\"]",
      ")",
      "",
      "# Command parameters",
      "if command == \"loop\":",
      "    iterations = st.slider(\"Iterations\", 1, 5, 3)",
      "    command_str = f\"/loop --iterations={iterations}\"",
      "elif command == \"fork\":",
      "    count = st.slider(\"Alternative count\", 2, 5, 2)",
      "    command_str = f\"/fork --count={count}\"",
      "else:",
      "    command_str = f\"/{command}\"",
      "",
      "# User input",
      "prompt = st.text_area(\"Enter your prompt\", \"\")",
      "",
      "if st.button(\"Generate\") and prompt:",
      "    with st.spinner(\"Generating response...\"):",
      "        response = llm.generate(",
      "            prompt=f\"{command_str} {prompt}\"",
      "        )",
      "    st.markdown(response)"
    ],
    "description": "Streamlit integration with Universal Developer"
  }
}

src/adapters/base.ts ADDED

// universal-developer/src/adapters/base.ts

export interface SymbolicCommand {
  name: string;
  description: string;
  aliases?: string[];
  parameters?: {
    name: string;
    description: string;
    required?: boolean;
    default?: any;
  }[];
  transform: (prompt: string, options: any) => Promise<TransformedPrompt>;
}

export interface TransformedPrompt {
  systemPrompt?: string;
  userPrompt: string;
  modelParameters?: Record<string, any>;
}

export abstract class ModelAdapter {
  protected commands: Map<string, SymbolicCommand> = new Map();
  protected aliasMap: Map<string, string> = new Map();

  constructor(protected apiKey: string, protected options: any = {}) {
    this.registerCoreCommands();
  }

  protected registerCoreCommands() {
    this.registerCommand({
      name: 'think',
      description: 'Activate extended reasoning pathways',
      transform: this.transformThink.bind(this)
    });

    this.registerCommand({
      name: 'fast',
      description: 'Optimize for low-latency responses',
      transform: this.transformFast.bind(this)
    });

    this.registerCommand({
      name: 'loop',
      description: 'Enable iterative refinement cycles',
      parameters: [
        {
          name: 'iterations',
          description: 'Number of refinement iterations',
          required: false,
          default: 3
        }
      ],
      transform: this.transformLoop.bind(this)
    });

    this.registerCommand({
      name: 'reflect',
      description: 'Trigger meta-analysis of outputs',
      transform: this.transformReflect.bind(this)
    });

    this.registerCommand({
      name: 'collapse',
      description: 'Return to default behavior',
      transform: this.transformCollapse.bind(this)
    });

    this.registerCommand({
      name: 'fork',
      description: 'Generate multiple alternative responses',
      parameters: [
        {
          name: 'count',
          description: 'Number of alternatives to generate',
          required: false,
          default: 2
        }
      ],
      transform: this.transformFork.bind(this)
    });
  }

  public registerCommand(command: SymbolicCommand) {
    this.commands.set(command.name, command);
    if (command.aliases) {
      command.aliases.forEach(alias => {
        this.aliasMap.set(alias, command.name);
      });
    }
  }

  public async generate(input: { prompt: string, systemPrompt?: string }): Promise<string> {
    const { prompt, systemPrompt = '' } = input;

    // Parse command from prompt
    const { command, cleanPrompt, parameters } = this.parseCommand(prompt);

    // Transform prompt based on command; parseCommand guarantees that a
    // returned command name exists in the registry
    const transformed = command
      ? await this.commands.get(command)!.transform(cleanPrompt, {
          systemPrompt,
          parameters,
          options: this.options
        })
      : { systemPrompt, userPrompt: prompt };

    // Execute the transformed prompt with the provider's API
    return this.executePrompt(transformed);
  }

  protected parseCommand(prompt: string): { command: string | null, cleanPrompt: string, parameters: Record<string, any> } {
    // Note: only the leading command is parsed here; any further chained
    // commands remain part of cleanPrompt for downstream handling
    const commandRegex = /^\/([a-zA-Z0-9_]+)(?:\s+([^\n]*))?/;
    const match = prompt.match(commandRegex);

    if (!match) {
      return { command: null, cleanPrompt: prompt, parameters: {} };
    }

    const [fullMatch, command, rest] = match;
    const commandName = this.aliasMap.get(command) || command;

    if (!this.commands.has(commandName)) {
      return { command: null, cleanPrompt: prompt, parameters: {} };
    }

    // Parse parameters if any
    const parameters = this.parseParameters(commandName, rest || '');
    const cleanPrompt = prompt.replace(fullMatch, '').trim();

    return { command: commandName, cleanPrompt, parameters };
  }

  protected parseParameters(command: string, paramString: string): Record<string, any> {
    // Default simple implementation - override in specific adapters as needed
    const params: Record<string, any> = {};
    const cmd = this.commands.get(command);

    // If no parameters defined for command, return empty object
    if (!cmd?.parameters || cmd.parameters.length === 0) {
      return params;
    }

    // Set defaults
    cmd.parameters.forEach(param => {
      if (param.default !== undefined) {
        params[param.name] = param.default;
      }
    });

    // Simple parsing - can be enhanced for more complex parameter syntax;
    // flags without a value (e.g. --verbose) are recorded as true
    const paramRegex = /--([a-zA-Z0-9_]+)(?:=([^\s]+))?/g;
    let match;

    while ((match = paramRegex.exec(paramString)) !== null) {
      const [_, paramName, paramValue = true] = match;
      params[paramName] = paramValue;
    }

    return params;
  }

  /*
   * These transformation methods must be implemented by specific adapters
   * to account for platform-specific behavior
   */
  protected abstract transformThink(prompt: string, options: any): Promise<TransformedPrompt>;
  protected abstract transformFast(prompt: string, options: any): Promise<TransformedPrompt>;
  protected abstract transformLoop(prompt: string, options: any): Promise<TransformedPrompt>;
  protected abstract transformReflect(prompt: string, options: any): Promise<TransformedPrompt>;
  protected abstract transformCollapse(prompt: string, options: any): Promise<TransformedPrompt>;
  protected abstract transformFork(prompt: string, options: any): Promise<TransformedPrompt>;

  // Method to execute the transformed prompt with the provider's API
  protected abstract executePrompt(transformed: TransformedPrompt): Promise<string>;
}

src/adapters/claude.ts ADDED

// universal-developer/src/adapters/claude.ts

import { ModelAdapter, TransformedPrompt } from './base';
import axios from 'axios';

interface ClaudeOptions {
  apiVersion?: string;
  maxTokens?: number;
  temperature?: number;
  baseURL?: string;
  model?: string;
}

export class ClaudeAdapter extends ModelAdapter {
  private baseURL: string;
  private model: string;
  private maxTokens: number;
  private temperature: number;

  constructor(apiKey: string, options: ClaudeOptions = {}) {
    super(apiKey, options);

    this.baseURL = options.baseURL || 'https://api.anthropic.com';
    this.model = options.model || 'claude-3-opus-20240229';
    this.maxTokens = options.maxTokens || 4096;
    this.temperature = options.temperature ?? 0.7; // ?? so an explicit 0 is respected
  }

  protected async transformThink(prompt: string, options: any): Promise<TransformedPrompt> {
    // Claude has built-in thinking capabilities we can leverage
    const systemPrompt = `${options.systemPrompt || ''}
For this response, I'd like you to engage your deepest analytical capabilities. Please think step by step through this problem, considering multiple perspectives and potential approaches. Take your time to develop a comprehensive, nuanced understanding before providing your final answer.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.max(0.1, this.temperature - 0.2), // Slightly lower temperature for more deterministic thinking
        enable_thinking: true
      }
    };
  }

  protected async transformFast(prompt: string, options: any): Promise<TransformedPrompt> {
    const systemPrompt = `${options.systemPrompt || ''}
Please provide a brief, direct response to this question. Focus on the most important information and keep your answer concise and to the point.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.min(1.0, this.temperature + 0.1), // Slightly higher temperature for more fluent responses
        max_tokens: Math.min(this.maxTokens, 1024), // Limit token count for faster responses
        enable_thinking: false
      }
    };
  }

  protected async transformLoop(prompt: string, options: any): Promise<TransformedPrompt> {
    const iterations = options.parameters.iterations || 3;

    const systemPrompt = `${options.systemPrompt || ''}
Please approach this task using an iterative process. Follow these steps:

1. Develop an initial response to the prompt.
2. Critically review your response, identifying areas for improvement.
3. Create an improved version based on your critique.
4. Repeat steps 2-3 for a total of ${iterations} iterations.
5. Present your final response, which should reflect the accumulated improvements.

Show all iterations in your response, clearly labeled.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: this.temperature,
        max_tokens: this.maxTokens
      }
    };
  }

  protected async transformReflect(prompt: string, options: any): Promise<TransformedPrompt> {
    const systemPrompt = `${options.systemPrompt || ''}
For this response, I'd like you to engage in two distinct phases:

1. First, respond to the user's query directly.
2. Then, reflect on your own response by considering:
- What assumptions did you make in your answer?
- What perspectives or viewpoints might be underrepresented?
- What limitations exist in your approach or knowledge?
- How might your response be improved or expanded?

Clearly separate these two phases in your response.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.max(0.1, this.temperature - 0.1),
        enable_thinking: true
      }
    };
  }

  protected async transformCollapse(prompt: string, options: any): Promise<TransformedPrompt> {
    // Return to default behavior - use the original system prompt
    return {
      systemPrompt: options.systemPrompt || '',
      userPrompt: prompt,
      modelParameters: {
        temperature: this.temperature,
        max_tokens: this.maxTokens,
        enable_thinking: false
      }
    };
  }

  protected async transformFork(prompt: string, options: any): Promise<TransformedPrompt> {
    const count = options.parameters.count || 2;

    const systemPrompt = `${options.systemPrompt || ''}
Please provide ${count} distinct alternative responses to this prompt. These alternatives should represent fundamentally different approaches or perspectives, not minor variations. Label each alternative clearly.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.min(1.0, this.temperature + 0.2), // Higher temperature to encourage diversity
        max_tokens: this.maxTokens
      }
    };
  }

  protected async executePrompt(transformed: TransformedPrompt): Promise<string> {
    try {
      // The Anthropic Messages API takes the system prompt as a top-level
      // `system` field, not as a message with a "system" role
      const response = await axios.post(
        `${this.baseURL}/v1/messages`,
        {
          model: this.model,
          ...(transformed.systemPrompt ? { system: transformed.systemPrompt } : {}),
          messages: [
            {
              role: 'user',
              content: transformed.userPrompt
            }
          ],
          max_tokens: transformed.modelParameters?.max_tokens ?? this.maxTokens,
          temperature: transformed.modelParameters?.temperature ?? this.temperature,
          ...('enable_thinking' in (transformed.modelParameters || {}) ?
            { enable_thinking: transformed.modelParameters?.enable_thinking } :
            {})
        },
        {
          headers: {
            'Content-Type': 'application/json',
            'x-api-key': this.apiKey,
            'anthropic-version': '2023-06-01'
          }
        }
      );

      return response.data.content[0].text;
    } catch (error) {
      console.error('Error executing Claude prompt:', error);
      const message = error instanceof Error ? error.message : String(error);
      throw new Error(`Failed to execute Claude prompt: ${message}`);
    }
  }
}

src/adapters/openai.ts ADDED

// universal-developer/src/adapters/openai.ts

import { ModelAdapter, TransformedPrompt } from './base';
import axios from 'axios';

interface OpenAIOptions {
  apiVersion?: string;
  maxTokens?: number;
  temperature?: number;
  baseURL?: string;
  model?: string;
}

export class OpenAIAdapter extends ModelAdapter {
  private baseURL: string;
  private model: string;
  private maxTokens: number;
  private temperature: number;

  constructor(apiKey: string, options: OpenAIOptions = {}) {
    super(apiKey, options);

    this.baseURL = options.baseURL || 'https://api.openai.com';
    this.model = options.model || 'gpt-4';
    this.maxTokens = options.maxTokens || 4096;
    this.temperature = options.temperature ?? 0.7; // ?? so an explicit 0 is respected
  }

  protected async transformThink(prompt: string, options: any): Promise<TransformedPrompt> {
    // For OpenAI, we'll use detailed system instructions to emulate thinking mode
    const systemPrompt = `${options.systemPrompt || ''}
When responding to this query, please use the following approach:
1. Take a deep breath and think step-by-step about the problem
2. Break down complex aspects into simpler components
3. Consider multiple perspectives and approaches
4. Identify potential misconceptions or errors in reasoning
5. Synthesize your analysis into a comprehensive response
6. Structure your thinking process visibly with clear sections:
   a. Initial Analysis
   b. Detailed Exploration
   c. Synthesis and Conclusion`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.max(0.1, this.temperature - 0.2),
        max_tokens: this.maxTokens
      }
    };
  }

  protected async transformFast(prompt: string, options: any): Promise<TransformedPrompt> {
    const systemPrompt = `${options.systemPrompt || ''}
Please provide a concise, direct response. Focus only on the most essential information needed to answer the query. Keep explanations minimal and prioritize brevity over comprehensiveness.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.min(1.0, this.temperature + 0.1),
        max_tokens: Math.min(this.maxTokens, 1024),
        presence_penalty: 1.0, // Encourage brevity by penalizing repetition
        frequency_penalty: 1.0
      }
    };
  }

  protected async transformLoop(prompt: string, options: any): Promise<TransformedPrompt> {
    const iterations = options.parameters.iterations || 3;

    const systemPrompt = `${options.systemPrompt || ''}
Please approach this task using an iterative refinement process with ${iterations} cycles:

1. Initial Version: Create your first response to the query
2. Critical Review: Analyze the strengths and weaknesses of your response
3. Improved Version: Create an enhanced version addressing the identified issues
4. Repeat steps 2-3 for each iteration
5. Final Version: Provide your most refined response

Clearly label each iteration (e.g., "Iteration 1", "Critique 1", etc.) in your response.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: this.temperature,
        max_tokens: this.maxTokens
      }
    };
  }

  protected async transformReflect(prompt: string, options: any): Promise<TransformedPrompt> {
    const systemPrompt = `${options.systemPrompt || ''}
For this query, please structure your response in two distinct parts:

PART 1: DIRECT RESPONSE
Provide your primary answer to the user's query.

PART 2: META-REFLECTION
Then, engage in critical reflection on your own response by addressing:
- What assumptions did you make in your answer?
- What alternative perspectives might be valid?
- What are the limitations of your response?
- How might your response be improved?
- What cognitive biases might have influenced your thinking?

Make sure both parts are clearly labeled and distinguishable.`;

    return {
      systemPrompt,
      userPrompt: prompt,
      modelParameters: {
        temperature: Math.max(0.1, this.temperature - 0.1),
        max_tokens: this.maxTokens
      }
    };
  }

  protected async transformCollapse(prompt: string, options: any): Promise<TransformedPrompt> {
    // Return to default behavior
    return {
      systemPrompt: options.systemPrompt || '',
124
+ userPrompt: prompt,
125
+ modelParameters: {
126
+ temperature: this.temperature,
127
+ max_tokens: this.maxTokens
128
+ }
129
+ };
130
+ }
131
+
132
+ protected async transformFork(prompt: string, options: any): Promise<TransformedPrompt> {
133
+ const count = options.parameters.count || 2;
134
+
135
+ const systemPrompt = `${options.systemPrompt || ''}
136
+ Please provide ${count} substantively different responses to this prompt. Each alternative should represent a different approach, perspective, or framework. Clearly label each alternative (e.g., "Alternative 1", "Alternative 2", etc.).`;
137
+
138
+ return {
139
+ systemPrompt,
140
+ userPrompt: prompt,
141
+ modelParameters: {
142
+ temperature: Math.min(1.0, this.temperature + 0.2), // Higher temperature for diversity
143
+ max_tokens: this.maxTokens
144
+ }
145
+ };
146
+ }
147
+
148
+ protected async executePrompt(transformed: TransformedPrompt): Promise<string> {
149
+ try {
150
+ const messages = [
151
+ // System message if provided
152
+ ...(transformed.systemPrompt ? [{
153
+ role: 'system',
154
+ content: transformed.systemPrompt
155
+ }] : []),
156
+ // User message
157
+ {
158
+ role: 'user',
159
+ content: transformed.userPrompt
160
+ }
161
+ ];
162
+
163
+ const response = await axios.post(
164
+ `${this.baseURL}/v1/chat/completions`,
165
+ {
166
+ model: this.model,
167
+ messages,
168
+ max_tokens: transformed.modelParameters?.max_tokens || this.maxTokens,
169
+ temperature: transformed.modelParameters?.temperature || this.temperature,
170
+ ...(transformed.modelParameters?.presence_penalty !== undefined ?
171
+ { presence_penalty: transformed.modelParameters.presence_penalty } :
172
+ {}),
173
+ ...(transformed.modelParameters?.frequency_penalty !== undefined ?
174
+ { frequency_penalty: transformed.modelParameters.frequency_penalty } :
175
+ {})
176
+ },
177
+ {
178
+ headers: {
179
+ 'Content-Type': 'application/json',
180
+ 'Authorization': `Bearer ${this.apiKey}`
181
+ }
182
+ }
183
+ );
184
+
185
+ return response.data.choices[0].message.content;
186
+ } catch (error) {
187
+ console.error('Error executing OpenAI prompt:', error);
188
+ throw new Error(`Failed to execute OpenAI prompt: ${error.message}`);
189
+ }
190
+ }
191
+ }
src/adapters/qwen.ts ADDED
@@ -0,0 +1,175 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ // universal-developer/src/adapters/qwen.ts
2
+
3
+ import { ModelAdapter, TransformedPrompt } from './base';
4
+ import axios from 'axios';
5
+
6
+ interface QwenOptions {
7
+ apiVersion?: string;
8
+ maxTokens?: number;
9
+ temperature?: number;
10
+ baseURL?: string;
11
+ model?: string;
12
+ }
13
+
14
+ export class QwenAdapter extends ModelAdapter {
15
+ private baseURL: string;
16
+ private model: string;
17
+ private maxTokens: number;
18
+ private temperature: number;
19
+
20
+ constructor(apiKey: string, options: QwenOptions = {}) {
21
+ super(apiKey, options);
22
+
23
+ this.baseURL = options.baseURL || 'https://api.qwen.ai';
24
+ this.model = options.model || 'qwen3-30b-a3b';
25
+ this.maxTokens = options.maxTokens || 4096;
26
+ this.temperature = options.temperature || 0.7;
27
+ }
28
+
29
+ protected async transformThink(prompt: string, options: any): Promise<TransformedPrompt> {
30
+ // Leverage Qwen3's native thinking mode
31
+ return {
32
+ systemPrompt: options.systemPrompt || '',
33
+ userPrompt: prompt.trim().endsWith('/think') ? prompt : `${prompt} /think`,
34
+ modelParameters: {
35
+ temperature: Math.max(0.1, this.temperature - 0.2),
36
+ enable_thinking: true
37
+ }
38
+ };
39
+ }
40
+
41
+ protected async transformFast(prompt: string, options: any): Promise<TransformedPrompt> {
42
+ // Disable thinking mode for fast responses
43
+ const systemPrompt = `${options.systemPrompt || ''}
44
+ Provide brief, direct responses. Focus on essential information only.`;
45
+
46
+ return {
47
+ systemPrompt,
48
+ userPrompt: prompt.trim().endsWith('/no_think') ? prompt : `${prompt} /no_think`,
49
+ modelParameters: {
50
+ temperature: Math.min(1.0, this.temperature + 0.1),
51
+ max_tokens: Math.min(this.maxTokens, 1024),
52
+ enable_thinking: false
53
+ }
54
+ };
55
+ }
56
+
57
+ protected async transformLoop(prompt: string, options: any): Promise<TransformedPrompt> {
58
+ const iterations = options.parameters.iterations || 3;
59
+
60
+ const systemPrompt = `${options.systemPrompt || ''}
61
+ Please use an iterative approach with ${iterations} refinement cycles:
62
+ 1. Initial response
63
+ 2. Critical review
64
+ 3. Improvement
65
+ 4. Repeat steps 2-3 for a total of ${iterations} iterations
66
+ 5. Present your final response with all iterations clearly labeled`;
67
+
68
+ return {
69
+ systemPrompt,
70
+ userPrompt: prompt,
71
+ modelParameters: {
72
+ temperature: this.temperature,
73
+ enable_thinking: true // Use thinking mode for deeper refinement
74
+ }
75
+ };
76
+ }
77
+
78
+ protected async transformReflect(prompt: string, options: any): Promise<TransformedPrompt> {
79
+ const systemPrompt = `${options.systemPrompt || ''}
80
+ For this response, please:
81
+ 1. Answer the query directly
82
+ 2. Then reflect on your answer by analyzing:
83
+ - Assumptions made
84
+ - Alternative perspectives
85
+ - Limitations in your approach
86
+ - Potential improvements`;
87
+
88
+ return {
89
+ systemPrompt,
90
+ userPrompt: `${prompt} /think`, // Use native thinking for reflection
91
+ modelParameters: {
92
+ temperature: Math.max(0.1, this.temperature - 0.1),
93
+ enable_thinking: true
94
+ }
95
+ };
96
+ }
97
+
98
+ protected async transformCollapse(prompt: string, options: any): Promise<TransformedPrompt> {
99
+ // Return to default behavior
100
+ return {
101
+ systemPrompt: options.systemPrompt || '',
102
+ userPrompt: `${prompt} /no_think`, // Explicitly disable thinking
103
+ modelParameters: {
104
+ temperature: this.temperature,
105
+ max_tokens: this.maxTokens,
106
+ enable_thinking: false
107
+ }
108
+ };
109
+ }
110
+
111
+ protected async transformFork(prompt: string, options: any): Promise<TransformedPrompt> {
112
+ const count = options.parameters.count || 2;
113
+
114
+ const systemPrompt = `${options.systemPrompt || ''}
115
+ Please provide ${count} distinct alternative responses to this prompt, representing different approaches or perspectives. Label each alternative clearly.`;
116
+
117
+ return {
118
+ systemPrompt,
119
+ userPrompt: prompt,
120
+ modelParameters: {
121
+ temperature: Math.min(1.0, this.temperature + 0.2),
122
+ max_tokens: this.maxTokens,
123
+ enable_thinking: true // Use thinking for more creative alternatives
124
+ }
125
+ };
126
+ }
127
+
128
+ protected async executePrompt(transformed: TransformedPrompt): Promise<string> {
129
+ try {
130
+ const messages = [
131
+ // System message if provided
132
+ ...(transformed.systemPrompt ? [{
133
+ role: 'system',
134
+ content: transformed.systemPrompt
135
+ }] : []),
136
+ // User message
137
+ {
138
+ role: 'user',
139
+ content: transformed.userPrompt
140
+ }
141
+ ];
142
+
143
+ const response = await axios.post(
144
+ `${this.baseURL}/v1/chat/completions`,
145
+ {
146
+ model: this.model,
147
+ messages,
148
+ max_tokens: transformed.modelParameters?.max_tokens || this.maxTokens,
149
+ temperature: transformed.modelParameters?.temperature || this.temperature,
150
+ ...('enable_thinking' in (transformed.modelParameters || {}) ?
151
+ { enable_thinking: transformed.modelParameters?.enable_thinking } :
152
+ {})
153
+ },
154
+ {
155
+ headers: {
156
+ 'Content-Type': 'application/json',
157
+ 'Authorization': `Bearer ${this.apiKey}`
158
+ }
159
+ }
160
+ );
161
+
162
+ // Extract thinking content if available
163
+ let content = '';
164
+ if (response.data.thinking_content) {
165
+ content = `<thinking>\n${response.data.thinking_content}\n</thinking>\n\n`;
166
+ }
167
+ content += response.data.choices[0].message.content;
168
+
169
+ return content;
170
+ } catch (error) {
171
+ console.error('Error executing Qwen prompt:', error);
172
+ throw new Error(`Failed to execute Qwen prompt: ${error.message}`);
173
+ }
174
+ }
175
+ }
src/cli.ts ADDED
@@ -0,0 +1,399 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env node
2
+
3
+ // universal-developer/src/cli.ts
4
+
5
+ import { program } from 'commander';
6
+ import { UniversalLLM } from './index';
7
+ import * as fs from 'fs';
8
+ import * as path from 'path';
9
+ import chalk from 'chalk';
10
+ import ora from 'ora';
11
+ import * as dotenv from 'dotenv';
12
+ import * as os from 'os';
13
+ import * as readline from 'readline';
14
+ import { createSpinner } from 'nanospinner';
15
+
16
+ // Load environment variables
17
+ dotenv.config();
18
+
19
+ // Load package.json for version info
20
+ const packageJson = JSON.parse(
21
+ fs.readFileSync(path.resolve(__dirname, '../package.json'), 'utf-8')
22
+ );
23
+
24
+ // Check for config file in user's home directory
25
+ const configDir = path.join(os.homedir(), '.universal-developer');
26
+ const configPath = path.join(configDir, 'config.json');
27
+ let config: any = {
28
+ defaultProvider: 'anthropic',
29
+ enableTelemetry: true,
30
+ apiKeys: {}
31
+ };
32
+
33
+ // Create config directory if it doesn't exist
34
+ if (!fs.existsSync(configDir)) {
35
+ fs.mkdirSync(configDir, { recursive: true });
36
+ }
37
+
38
+ // Load config if it exists
39
+ if (fs.existsSync(configPath)) {
40
+ try {
41
+ config = JSON.parse(fs.readFileSync(configPath, 'utf-8'));
42
+ } catch (error) {
43
+ console.error('Error loading config file:', error);
44
+ }
45
+ }
46
+
47
+ // Save config function
48
+ function saveConfig() {
49
+ try {
50
+ fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
51
+ } catch (error) {
52
+ console.error('Error saving config:', error);
53
+ }
54
+ }
55
+
56
+ // Configure CLI
57
+ program
58
+ .name('ud')
59
+ .description('Universal Developer CLI - Control LLMs with symbolic runtime commands')
60
+ .version(packageJson.version);
61
+
62
+ // Configure command
63
+ program
64
+ .command('config')
65
+ .description('Configure Universal Developer CLI')
66
+ .option('-p, --provider <provider>', 'Set default provider (anthropic, openai, qwen, gemini, ollama)')
67
+ .option('-k, --key <key>', 'Set API key for the default provider')
68
+ .option('--anthropic-key <key>', 'Set API key for Anthropic/Claude')
69
+ .option('--openai-key <key>', 'Set API key for OpenAI')
70
+ .option('--qwen-key <key>', 'Set API key for Qwen')
71
+ .option('--gemini-key <key>', 'Set API key for Google Gemini')
72
+ .option('--telemetry <boolean>', 'Enable or disable anonymous telemetry')
73
+ .option('-l, --list', 'List current configuration')
74
+ .action((options) => {
75
+ if (options.list) {
76
+ console.log(chalk.bold('\nCurrent Configuration:'));
77
+ console.log(`Default Provider: ${chalk.green(config.defaultProvider)}`);
78
+ console.log(`Telemetry: ${config.enableTelemetry ? chalk.green('Enabled') : chalk.yellow('Disabled')}`);
79
+ console.log('\nAPI Keys:');
80
+ for (const [provider, key] of Object.entries(config.apiKeys)) {
81
+ console.log(`${provider}: ${key ? chalk.green('Configured') : chalk.red('Not configured')}`);
82
+ }
83
+ return;
84
+ }
85
+
86
+ let changed = false;
87
+
88
+ if (options.provider) {
89
+ const validProviders = ['anthropic', 'openai', 'qwen', 'gemini', 'ollama'];
90
+ if (validProviders.includes(options.provider)) {
91
+ config.defaultProvider = options.provider;
92
+ changed = true;
93
+ console.log(`Default provider set to ${chalk.green(options.provider)}`);
94
+ } else {
95
+ console.error(`Invalid provider: ${options.provider}. Valid options are: ${validProviders.join(', ')}`);
96
+ }
97
+ }
98
+
99
+ if (options.key) {
100
+ if (!config.apiKeys) config.apiKeys = {};
101
+ config.apiKeys[config.defaultProvider] = options.key;
102
+ changed = true;
103
+ console.log(`API key for ${chalk.green(config.defaultProvider)} has been set`);
104
+ }
105
+
106
+ // Provider-specific keys
107
+ const providerKeys = {
108
+ 'anthropic': options.anthropicKey,
109
+ 'openai': options.openaiKey,
110
+ 'qwen': options.qwenKey,
111
+ 'gemini': options.geminiKey
112
+ };
113
+
114
+ for (const [provider, key] of Object.entries(providerKeys)) {
115
+ if (key) {
116
+ if (!config.apiKeys) config.apiKeys = {};
117
+ config.apiKeys[provider] = key;
118
+ changed = true;
119
+ console.log(`API key for ${chalk.green(provider)} has been set`);
120
+ }
121
+ }
122
+
123
+ if (options.telemetry !== undefined) {
124
+ const enableTelemetry = options.telemetry === 'true';
125
+ config.enableTelemetry = enableTelemetry;
126
+ changed = true;
127
+ console.log(`Telemetry ${enableTelemetry ? chalk.green('enabled') : chalk.yellow('disabled')}`);
128
+ }
129
+
130
+ if (changed) {
131
+ saveConfig();
132
+ console.log(chalk.bold('\nConfiguration saved!'));
133
+ } else {
134
+ console.log('No changes made. Use --help to see available options.');
135
+ }
136
+ });
137
+
138
+ // Helper function to handle piped input
139
+ async function getPipedInput(): Promise<string | null> {
140
+ if (process.stdin.isTTY) {
141
+ return null;
142
+ }
143
+
144
+ return new Promise((resolve) => {
145
+ let data = '';
146
+ process.stdin.on('readable', () => {
147
+ const chunk = process.stdin.read();
148
+ if (chunk !== null) {
149
+ data += chunk;
150
+ }
151
+ });
152
+
153
+ process.stdin.on('end', () => {
154
+ resolve(data);
155
+ });
156
+ });
157
+ }
158
+
159
+ // Helper to get API key for a provider
160
+ function getApiKey(provider: string): string {
161
+ // First check config
162
+ if (config.apiKeys && config.apiKeys[provider]) {
163
+ return config.apiKeys[provider];
164
+ }
165
+
166
+ // Then check environment variables
167
+ const envVarName = `${provider.toUpperCase()}_API_KEY`;
168
+ const apiKey = process.env[envVarName];
169
+
170
+ if (!apiKey) {
171
+ console.error(chalk.red(`Error: No API key found for ${provider}.`));
172
+ console.log(`Please set your API key using: ud config --${provider}-key <your-api-key>`);
173
+ console.log(`Or set the ${envVarName} environment variable.`);
174
+ process.exit(1);
175
+ }
176
+
177
+ return apiKey;
178
+ }
179
+
180
+ // Interactive mode
181
+ program
182
+ .command('interactive')
183
+ .alias('i')
184
+ .description('Start an interactive session')
185
+ .option('-p, --provider <provider>', 'LLM provider to use')
186
+ .option('-m, --model <model>', 'Model to use')
187
+ .action(async (options) => {
188
+ const provider = options.provider || config.defaultProvider;
189
+ const apiKey = getApiKey(provider);
190
+
191
+ const llm = new UniversalLLM({
192
+ provider,
193
+ apiKey,
194
+ model: options.model,
195
+ telemetryEnabled: config.enableTelemetry
196
+ });
197
+
198
+ console.log(chalk.bold('\nUniversal Developer Interactive Mode'));
199
+ console.log(chalk.dim(`Using provider: ${provider}`));
200
+ console.log(chalk.dim('Type /exit or Ctrl+C to quit'));
201
+ console.log(chalk.dim('Available commands: /think, /fast, /loop, /reflect, /fork, /collapse\n'));
202
+
203
+ const rl = readline.createInterface({
204
+ input: process.stdin,
205
+ output: process.stdout
206
+ });
207
+
208
+ let conversationHistory: { role: string, content: string }[] = [];
209
+
210
+ const promptUser = () => {
211
+ rl.question('> ', async (input) => {
212
+ if (input.toLowerCase() === '/exit') {
213
+ rl.close();
214
+ return;
215
+ }
216
+
217
+ // Store user message
218
+ conversationHistory.push({
219
+ role: 'user',
220
+ content: input
221
+ });
222
+
223
+ const spinner = createSpinner('Generating response...').start();
224
+
225
+ try {
226
+ const response = await llm.generate({
227
+ messages: conversationHistory
228
+ });
229
+
230
+ spinner.success();
231
+ console.log(`\n${chalk.blue('Assistant:')} ${response}\n`);
232
+
233
+ // Store assistant response
234
+ conversationHistory.push({
235
+ role: 'assistant',
236
+ content: response
237
+ });
238
+ } catch (error) {
239
+ spinner.error();
240
+ console.error(`Error: ${error.message}`);
241
+ }
242
+
243
+ promptUser();
244
+ });
245
+ };
246
+
247
+ console.log(chalk.blue('Assistant:') + ' Hello! How can I help you today?\n');
248
+ conversationHistory.push({
249
+ role: 'assistant',
250
+ content: 'Hello! How can I help you today?'
251
+ });
252
+
253
+ promptUser();
254
+ });
255
+
256
+ // Command for each symbolic operation
257
+ const symbolicCommands = [
258
+ { name: 'think', description: 'Generate response using deep reasoning' },
259
+ { name: 'fast', description: 'Generate quick, concise response' },
260
+ { name: 'loop', description: 'Generate iteratively refined response' },
261
+ { name: 'reflect', description: 'Generate response with self-reflection' },
262
+ { name: 'fork', description: 'Generate multiple alternative responses' },
263
+ { name: 'collapse', description: 'Generate response using default behavior' }
264
+ ];
265
+
266
+ symbolicCommands.forEach(cmd => {
267
+ program
268
+ .command(cmd.name)
269
+ .description(cmd.description)
270
+ .argument('[prompt]', 'The prompt to send to the LLM')
271
+ .option('-p, --provider <provider>', 'LLM provider to use')
272
+ .option('-m, --model <model>', 'Model to use')
273
+ .option('-s, --system <prompt>', 'System prompt to use')
274
+ .option('-i, --iterations <number>', 'Number of iterations (for loop command)')
275
+ .option('-c, --count <number>', 'Number of alternatives (for fork command)')
276
+ .action(async (promptArg, options) => {
277
+ // Get provider from options or config
278
+ const provider = options.provider || config.defaultProvider;
279
+ const apiKey = getApiKey(provider);
280
+
281
+ // Initialize LLM
282
+ const llm = new UniversalLLM({
283
+ provider,
284
+ apiKey,
285
+ model: options.model,
286
+ telemetryEnabled: config.enableTelemetry
287
+ });
288
+
289
+ // Check for piped input
290
+ const pipedInput = await getPipedInput();
291
+
292
+ // Combine prompt argument and piped input
293
+ let prompt = promptArg || '';
294
+ if (pipedInput) {
295
+ prompt = prompt ? `${prompt}\n\n${pipedInput}` : pipedInput;
296
+ }
297
+
298
+ // If no prompt provided, show help
299
+ if (!prompt) {
300
+ console.error('Error: Prompt is required.');
301
+ console.log(`Usage: ud ${cmd.name} "Your prompt here"`);
302
+ console.log('Or pipe content: cat file.txt | ud ${cmd.name}');
303
+ process.exit(1);
304
+ }
305
+
306
+ // Build command string
307
+ let commandString = `/${cmd.name}`;
308
+
309
+ // Add command-specific parameters
310
+ if (cmd.name === 'loop' && options.iterations) {
311
+ commandString += ` --iterations=${options.iterations}`;
312
+ } else if (cmd.name === 'fork' && options.count) {
313
+ commandString += ` --count=${options.count}`;
314
+ }
315
+
316
+ // Add the prompt
317
+ const fullPrompt = `${commandString} ${prompt}`;
318
+
319
+ // Show what's happening
320
+ console.log(chalk.dim(`Using provider: ${provider}`));
321
+ const spinner = createSpinner('Generating response...').start();
322
+
323
+ try {
324
+ const response = await llm.generate({
325
+ systemPrompt: options.system,
326
+ prompt: fullPrompt
327
+ });
328
+
329
+ spinner.success();
330
+ console.log('\n' + response + '\n');
331
+ } catch (error) {
332
+ spinner.error();
333
+ console.error(`Error: ${error.message}`);
334
+ process.exit(1);
335
+ }
336
+ });
337
+ });
338
+
339
+ // Default command (no subcommand specified)
340
+ program
341
+ .arguments('[prompt]')
342
+ .option('-p, --provider <provider>', 'LLM provider to use')
343
+ .option('-m, --model <model>', 'Model to use')
344
+ .option('-s, --system <prompt>', 'System prompt to use')
345
+ .option('-c, --command <command>', 'Symbolic command to use')
346
+ .action(async (promptArg, options) => {
347
+ if (!promptArg && !process.stdin.isTTY) {
348
+ // No prompt argument but has piped input
349
+ const pipedInput = await getPipedInput();
350
+ if (pipedInput) {
351
+ promptArg = pipedInput;
352
+ }
353
+ }
354
+
355
+ if (!promptArg) {
356
+ // No prompt provided, show interactive mode
357
+ program.commands.find(cmd => cmd.name() === 'interactive').action(options);
358
+ return;
359
+ }
360
+
361
+ // Get provider from options or config
362
+ const provider = options.provider || config.defaultProvider;
363
+ const apiKey = getApiKey(provider);
364
+
365
+ // Initialize LLM
366
+ const llm = new UniversalLLM({
367
+ provider,
368
+ apiKey,
369
+ model: options.model,
370
+ telemetryEnabled: config.enableTelemetry
371
+ });
372
+
373
+ // Default to think command if none specified
374
+ const command = options.command || 'think';
375
+
376
+ // Format prompt with command
377
+ const fullPrompt = `/${command} ${promptArg}`;
378
+
379
+ // Show what's happening
380
+ console.log(chalk.dim(`Using provider: ${provider}`));
381
+ const spinner = createSpinner('Generating response...').start();
382
+
383
+ try {
384
+ const response = await llm.generate({
385
+ systemPrompt: options.system,
386
+ prompt: fullPrompt
387
+ });
388
+
389
+ spinner.success();
390
+ console.log('\n' + response + '\n');
391
+ } catch (error) {
392
+ spinner.error();
393
+ console.error(`Error: ${error.message}`);
394
+ process.exit(1);
395
+ }
396
+ });
397
+
398
+ // Parse arguments
399
+ program.parse();
src/extension.ts ADDED
@@ -0,0 +1,1419 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ // universal-developer-vscode/src/extension.ts
2
+
3
+ import * as vscode from 'vscode';
4
+
5
+ /**
6
+ * Symbolic command definition
7
+ */
8
+ interface SymbolicCommand {
9
+ name: string;
10
+ description: string;
11
+ parameters?: {
12
+ name: string;
13
+ description: string;
14
+ required?: boolean;
15
+ default?: any;
16
+ }[];
17
+ examples: string[];
18
+ provider?: {
19
+ claude?: boolean;
20
+ openai?: boolean;
21
+ qwen?: boolean;
22
+ gemini?: boolean;
23
+ ollama?: boolean;
24
+ };
25
+ }
26
+
27
+ /**
28
+ * Universal developer extension activation function
29
+ */
30
+ export function activate(context: vscode.ExtensionContext) {
31
+ console.log('Universal Developer extension is now active');
32
+
33
+ // Register command palette commands
34
+ const insertSymbolicCommand = vscode.commands.registerCommand(
35
+ 'universal-developer.insertSymbolicCommand',
36
+ async () => {
37
+ const commandName = await showSymbolicCommandQuickPick();
38
+ if (!commandName) return;
39
+
40
+ const command = SYMBOLIC_COMMANDS.find(cmd => cmd.name === commandName);
41
+ if (!command) return;
42
+
43
+ // Check if command has parameters
44
+ let commandString = `/${command.name}`;
45
+
46
+ if (command.parameters && command.parameters.length > 0) {
47
+ const parameters = await collectCommandParameters(command);
48
+ if (parameters) {
49
+ Object.entries(parameters).forEach(([key, value]) => {
50
+ if (value !== undefined && value !== null && value !== '') {
51
+ commandString += ` --${key}=${value}`;
52
+ }
53
+ });
54
+ }
55
+ }
56
+
57
+ // Insert command at cursor position
58
+ const editor = vscode.window.activeTextEditor;
59
+ if (editor) {
60
+ editor.edit(editBuilder => {
61
+ editBuilder.insert(editor.selection.active, commandString + ' ');
62
+ });
63
+ }
64
+ }
65
+ );
66
+
67
+ // Register the symbolic command chain builder
68
+ const buildSymbolicChain = vscode.commands.registerCommand(
69
+ 'universal-developer.buildSymbolicChain',
70
+ async () => {
71
+ await showCommandChainBuilder();
72
+ }
73
+ );
74
+
75
+ // Register symbolic command hover provider
76
+ const hoverProvider = vscode.languages.registerHoverProvider(
77
+ ['javascript', 'typescript', 'python', 'markdown', 'plaintext'],
78
+ {
79
+ provideHover(document, position, token) {
80
+ const range = document.getWordRangeAtPosition(position, /\/[a-zA-Z0-9_]+/);
81
+ if (!range) return;
82
+
83
+ const commandText = document.getText(range);
84
+ const commandName = commandText.substring(1); // Remove the leading /
85
+
86
+ const command = SYMBOLIC_COMMANDS.find(cmd => cmd.name === commandName);
87
+ if (!command) return;
88
+
89
+ // Create hover markdown
90
+ const hoverContent = new vscode.MarkdownString();
91
+ hoverContent.appendMarkdown(`**/${command.name}**\n\n`);
92
+ hoverContent.appendMarkdown(`${command.description}\n\n`);
93
+
94
+ if (command.parameters && command.parameters.length > 0) {
95
+ hoverContent.appendMarkdown('**Parameters:**\n\n');
96
+ command.parameters.forEach(param => {
97
+ const required = param.required ? ' (required)' : '';
98
+ const defaultValue = param.default !== undefined ? ` (default: ${param.default})` : '';
99
+ hoverContent.appendMarkdown(`- \`--${param.name}\`${required}${defaultValue}: ${param.description}\n`);
100
+ });
101
+ hoverContent.appendMarkdown('\n');
102
+ }
103
+
104
+ if (command.examples && command.examples.length > 0) {
105
+ hoverContent.appendMarkdown('**Examples:**\n\n');
106
+ command.examples.forEach(example => {
107
+ hoverContent.appendCodeBlock(example, 'markdown');
108
+ });
109
+ }
110
+
111
+ // Show provider compatibility
112
+ if (command.provider) {
113
+ hoverContent.appendMarkdown('\n**Compatible with:**\n\n');
114
+ const supported = Object.entries(command.provider)
115
+ .filter(([_, isSupported]) => isSupported)
116
+ .map(([provider]) => provider);
117
+ hoverContent.appendMarkdown(supported.join(', '));
118
+ }
119
+
120
+ return new vscode.Hover(hoverContent, range);
121
+ }
122
+ }
123
+ );
124
+
125
+ // Register completion provider for symbolic commands
126
+ const completionProvider = vscode.languages.registerCompletionItemProvider(
127
+ ['javascript', 'typescript', 'python', 'markdown', 'plaintext'],
128
+ {
129
+ provideCompletionItems(document, position) {
130
+ const linePrefix = document.lineAt(position).text.substring(0, position.character);
131
+
132
+ // Check if we're at the start of a potential symbolic command
133
+ if (!linePrefix.endsWith('/')) {
134
+ return undefined;
135
+ }
136
+
137
+ const completionItems = SYMBOLIC_COMMANDS.map(command => {
138
+ const item = new vscode.CompletionItem(
139
+ command.name,
140
+ vscode.CompletionItemKind.Keyword
141
+ );
142
+ item.insertText = command.name;
143
+ item.detail = command.description;
144
+ item.documentation = new vscode.MarkdownString(command.description);
145
+ return item;
146
+ });
147
+
148
+ return completionItems;
149
+ }
150
+ },
151
+ '/' // Only trigger after the / character
152
+ );
153
+
154
+ // Register parameter completion provider
155
+ const parameterCompletionProvider = vscode.languages.registerCompletionItemProvider(
156
+ ['javascript', 'typescript', 'python', 'markdown', 'plaintext'],
157
+ {
158
+ provideCompletionItems(document, position) {
159
+ const linePrefix = document.lineAt(position).text.substring(0, position.character);
160
+
161
+ // Match a symbolic command with a potential parameter start
162
+ const commandMatch = linePrefix.match(/\/([a-zA-Z0-9_]+)(?:\s+(?:[^\s]+\s+)*)?--$/);
163
+ if (!commandMatch) {
164
+ return undefined;
165
+ }
166
+
167
+ const commandName = commandMatch[1];
168
+ const command = SYMBOLIC_COMMANDS.find(cmd => cmd.name === commandName);
169
+
170
+ if (!command || !command.parameters || command.parameters.length === 0) {
171
+ return undefined;
172
+ }
173
+
174
+ // Offer parameter completions
175
+ const completionItems = command.parameters.map(param => {
176
+ const item = new vscode.CompletionItem(
177
+ param.name,
178
+ vscode.CompletionItemKind.Property
179
+ );
180
+ item.insertText = `${param.name}=`;
181
+ item.detail = param.description;
182
+
183
+ if (param.default !== undefined) {
184
+ item.documentation = new vscode.MarkdownString(
185
+ `${param.description}\n\nDefault: \`${param.default}\``
186
+ );
187
+ } else {
188
+ item.documentation = new vscode.MarkdownString(param.description);
189
+ }
190
+
191
+ return item;
192
+ });
193
+
194
+ return completionItems;
195
+ }
196
+ },
197
+ '-' // Trigger after - (the second dash in --)
198
+ );
199
+
200
+ // Register code actions provider for symbolic command suggestions
201
+ const codeActionsProvider = vscode.languages.registerCodeActionsProvider(
202
+ ['javascript', 'typescript', 'python'],
203
+ {
204
+ provideCodeActions(document, range, context, token) {
205
+ // Check if there's any LLM API call in the current line
206
+ const line = document.lineAt(range.start.line).text;
207
+
208
+ const llmApiPatterns = [
209
+ /\.generate\(\s*{/, // UniversalLLM.generate()
210
+ /\.createCompletion\(/, // OpenAI
211
+ /\.createChatCompletion\(/, // OpenAI
212
+ /\.chat\.completions\.create\(/, // OpenAI v2
213
+ /\.messages\.create\(/, // Anthropic/Claude
214
+ /\.generateContent\(/ // Google Gemini
215
+ ];
216
+
217
+ if (!llmApiPatterns.some(pattern => pattern.test(line))) {
218
+ return;
219
+ }
220
+
221
+ // Create code actions for adding symbolic commands
222
+ const actions: vscode.CodeAction[] = [];
223
+
224
+ // Find the prompt parameter
225
+ const promptMatch = line.match(/(prompt|messages|content)\s*:/);
226
+ if (!promptMatch) return;
227
+
228
+ // Add actions for common symbolic commands
229
+ ['think', 'fast', 'reflect', 'loop'].forEach(commandName => {
230
+ const command = SYMBOLIC_COMMANDS.find(cmd => cmd.name === commandName);
231
+ if (!command) return;
232
+
233
+ const action = new vscode.CodeAction(
234
+ `Add /${commandName} command`,
235
+ vscode.CodeActionKind.RefactorRewrite
236
+ );
237
+
238
+ action.command = {
239
+ title: `Insert /${commandName}`,
240
+ command: 'universal-developer.insertSymbolicCommandAtPrompt',
241
+ arguments: [range.start.line, command]
242
+ };
243
+
244
+ actions.push(action);
245
+ });
246
+
247
+ return actions;
248
+ }
249
+ }
250
+ );
251
+
252
+ // Register command to insert symbolic command at prompt
253
+ const insertSymbolicCommandAtPrompt = vscode.commands.registerCommand(
254
+ 'universal-developer.insertSymbolicCommandAtPrompt',
255
+ async (line: number, command: SymbolicCommand) => {
256
+ const editor = vscode.window.activeTextEditor;
257
+ if (!editor) return;
258
+
259
+ const document = editor.document;
260
+ const lineText = document.lineAt(line).text;
261
+
262
+ // Find where the prompt string starts
263
+ const promptMatch = lineText.match(/(prompt|messages|content)\s*:\s*['"]/);
264
+ if (!promptMatch) return;
265
+
266
+ const promptStartIdx = promptMatch.index! + promptMatch[0].length;
267
+ const position = new vscode.Position(line, promptStartIdx);
268
+
269
+ editor.edit(editBuilder => {
270
+ editBuilder.insert(position, `/${command.name} `);
271
+ });
272
+ }
273
+ );
274
+
275
+ // Register status bar item for active symbolic context
276
+ const statusBarItem = vscode.window.createStatusBarItem(
277
+ vscode.StatusBarAlignment.Right,
278
+ 100
279
+ );
280
+ statusBarItem.text = "$(symbol-keyword) Symbolic";
281
+ statusBarItem.tooltip = "Universal Developer: Click to insert symbolic command";
282
+ statusBarItem.command = 'universal-developer.insertSymbolicCommand';
283
+ statusBarItem.show();
284
+
285
+ // Register documentation webview
286
+ const showDocumentation = vscode.commands.registerCommand(
287
+ 'universal-developer.showDocumentation',
288
+ () => {
289
+ const panel = vscode.window.createWebviewPanel(
290
+ 'universalDeveloperDocs',
291
+ 'Universal Developer Documentation',
292
+ vscode.ViewColumn.One,
293
+ { enableScripts: true }
294
+ );
295
+
296
+ panel.webview.html = getDocumentationHtml();
297
+ }
298
+ );
299
+
300
+ // Register commands for the extension
301
+ context.subscriptions.push(
302
+ insertSymbolicCommand,
303
+ buildSymbolicChain,
304
+ hoverProvider,
305
+ completionProvider,
306
+ parameterCompletionProvider,
307
+ codeActionsProvider,
308
+ insertSymbolicCommandAtPrompt,
309
+ statusBarItem,
310
+ showDocumentation
311
+ );
312
+
313
+ // Telemetry for command usage (anonymized)
314
+ context.subscriptions.push(
315
+ vscode.commands.registerCommand(
316
+ 'universal-developer.trackCommandUsage',
317
+ (commandName: string) => {
318
+ // Only track if user has opted in to telemetry
319
+ const config = vscode.workspace.getConfiguration('universal-developer');
320
+ if (config.get('enableTelemetry', true)) {
321
+ sendAnonymizedTelemetry('command_used', { command: commandName });
322
+ }
323
+ }
324
+ )
325
+ );
326
+ }
327
+
328
+ // Helper function to show a quick pick for symbolic commands
329
+ async function showSymbolicCommandQuickPick(): Promise<string | undefined> {
330
+ const items = SYMBOLIC_COMMANDS.map(command => ({
331
+ label: `/${command.name}`,
332
+ description: command.description,
333
+ detail: command.parameters && command.parameters.length > 0
334
+ ? `Parameters: ${command.parameters.map(p => p.name).join(', ')}`
335
+ : undefined
336
+ }));
337
+
338
+ const selected = await vscode
339
+ // universal-developer-vscode/src/extension.ts (continued)
340
+
341
+ // Helper function to show a quick pick for symbolic commands
342
+ async function showSymbolicCommandQuickPick(): Promise<string | undefined> {
343
+ const items = SYMBOLIC_COMMANDS.map(command => ({
344
+ label: `/${command.name}`,
345
+ description: command.description,
346
+ detail: command.parameters && command.parameters.length > 0
347
+ ? `Parameters: ${command.parameters.map(p => p.name).join(', ')}`
348
+ : undefined
349
+ }));
350
+
351
+ const selected = await vscode.window.showQuickPick(items, {
352
+ placeHolder: 'Select a symbolic runtime command',
353
+ });
354
+
355
+ return selected ? selected.label.substring(1) : undefined; // Remove the leading /
356
+ }
357
+
358
+ // Helper function to collect parameters for a command
359
+ async function collectCommandParameters(command: SymbolicCommand): Promise<Record<string, any> | undefined> {
360
+ if (!command.parameters || command.parameters.length === 0) {
361
+ return {};
362
+ }
363
+
364
+ const parameters: Record<string, any> = {};
365
+
366
+ // Set default values
367
+ command.parameters.forEach(param => {
368
+ if (param.default !== undefined) {
369
+ parameters[param.name] = param.default;
370
+ }
371
+ });
372
+
373
+ // Ask for each parameter
374
+ for (const param of command.parameters) {
375
+ const value = await vscode.window.showInputBox({
376
+ prompt: param.description,
377
+ placeHolder: param.default !== undefined ? `Default: ${param.default}` : undefined,
378
+ ignoreFocusOut: true,
379
+ validateInput: text => {
380
+ if (param.required && !text) {
381
+ return `${param.name} is required`;
382
+ }
383
+ return null;
384
+ }
385
+ });
386
+
387
+ // User canceled
388
+ if (value === undefined) {
389
+ return undefined;
390
+ }
391
+
392
+ // Only set if value is provided
393
+ if (value !== '') {
394
+ parameters[param.name] = value;
395
+ }
396
+ }
397
+
398
+ return parameters;
399
+ }
400
+
401
+ // Command Chain Builder Interface
402
+ async function showCommandChainBuilder() {
403
+ // Create webview panel for the command chain builder
404
+ const panel = vscode.window.createWebviewPanel(
405
+ 'universalDeveloperChainBuilder',
406
+ 'Symbolic Command Chain Builder',
407
+ vscode.ViewColumn.Two,
408
+ {
409
+ enableScripts: true,
410
+ retainContextWhenHidden: true
411
+ }
412
+ );
413
+
414
+ // Load chain builder HTML
415
+ panel.webview.html = getCommandChainBuilderHtml();
416
+
417
+ // Handle messages from webview
418
+ panel.webview.onDidReceiveMessage(
419
+ message => {
420
+ switch (message.command) {
421
+ case 'insertCommandChain':
422
+ const editor = vscode.window.activeTextEditor;
423
+ if (editor) {
424
+ editor.edit(editBuilder => {
425
+ editBuilder.insert(editor.selection.active, message.commandChain);
426
+ });
427
+ }
428
+ break;
429
+ case 'getCommandInfo':
430
+ panel.webview.postMessage({
431
+ command: 'commandInfo',
432
+ commands: SYMBOLIC_COMMANDS
433
+ });
434
+ break;
435
+ }
436
+ },
437
+ undefined,
438
+ []
439
+ );
440
+ }
441
+
442
+ // Get HTML for the command chain builder webview
443
+ function getCommandChainBuilderHtml() {
444
+ return `<!DOCTYPE html>
445
+ <html lang="en">
446
+ <head>
447
+ <meta charset="UTF-8">
448
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
449
+ <title>Symbolic Command Chain Builder</title>
450
+ <style>
451
+ body {
452
+ font-family: var(--vscode-font-family);
453
+ padding: 20px;
454
+ color: var(--vscode-foreground);
455
+ background-color: var(--vscode-editor-background);
456
+ }
457
+
458
+ h1 {
459
+ font-size: 1.5em;
460
+ margin-bottom: 20px;
461
+ }
462
+
463
+ .command-chain {
464
+ display: flex;
465
+ flex-direction: column;
466
+ gap: 10px;
467
+ margin-bottom: 20px;
468
+ padding: 10px;
469
+ background-color: var(--vscode-editor-inactiveSelectionBackground);
470
+ border-radius: 4px;
471
+ }
472
+
473
+ .command-step {
474
+ display: flex;
475
+ align-items: center;
476
+ gap: 10px;
477
+ }
478
+
479
+ .command-preview {
480
+ margin-top: 20px;
481
+ padding: 10px;
482
+ background-color: var(--vscode-input-background);
483
+ border-radius: 4px;
484
+ font-family: var(--vscode-editor-font-family);
485
+ }
486
+
487
+ button {
488
+ padding: 8px 12px;
489
+ background-color: var(--vscode-button-background);
490
+ color: var(--vscode-button-foreground);
491
+ border: none;
492
+ border-radius: 4px;
493
+ cursor: pointer;
494
+ }
495
+
496
+ button:hover {
497
+ background-color: var(--vscode-button-hoverBackground);
498
+ }
499
+
500
+ select, input {
501
+ padding: 6px;
502
+ background-color: var(--vscode-input-background);
503
+ color: var(--vscode-input-foreground);
504
+ border: 1px solid var(--vscode-input-border);
505
+ border-radius: 4px;
506
+ }
507
+
508
+ .command-step .remove {
509
+ color: var(--vscode-errorForeground);
510
+ }
511
+
512
+ .parameter-group {
513
+ margin-left: 20px;
514
+ margin-top: 5px;
515
+ display: flex;
516
+ flex-wrap: wrap;
517
+ gap: 5px;
518
+ }
519
+
520
+ .parameter-item {
521
+ display: flex;
522
+ align-items: center;
523
+ gap: 5px;
524
+ }
525
+
526
+ .parameter-label {
527
+ font-size: 0.9em;
528
+ color: var(--vscode-descriptionForeground);
529
+ }
530
+
531
+ .command-description {
532
+ font-size: 0.9em;
533
+ margin-left: 20px;
534
+ color: var(--vscode-descriptionForeground);
535
+ }
536
+
537
+ .buttons {
538
+ display: flex;
539
+ gap: 10px;
540
+ margin-top: 20px;
541
+ }
542
+ </style>
543
+ </head>
544
+ <body>
545
+ <h1>Symbolic Command Chain Builder</h1>
546
+
547
+ <div class="command-chain" id="commandChain">
548
+ <!-- Command steps will be added here -->
549
+ </div>
550
+
551
+ <button id="addCommand">Add Command</button>
552
+
553
+ <div class="command-preview">
554
+ <div><strong>Preview:</strong></div>
555
+ <div id="previewText"></div>
556
+ </div>
557
+
558
+ <div class="buttons">
559
+ <button id="insertChain">Insert Into Editor</button>
560
+ <button id="clearChain">Clear</button>
561
+ </div>
562
+
563
+ <script>
564
+ // Communication with VSCode extension
565
+ const vscode = acquireVsCodeApi();
566
+
567
+ // Request command info from extension
568
+ vscode.postMessage({ command: 'getCommandInfo' });
569
+
570
+ // Store commands when received from extension
571
+ let commands = [];
572
+ window.addEventListener('message', event => {
573
+ const message = event.data;
574
+ if (message.command === 'commandInfo') {
575
+ commands = message.commands;
576
+
577
+ // If we already have commands in the UI, update their descriptions
578
+ updateCommandDescriptions();
579
+ }
580
+ });
581
+
582
+ // Chain state
583
+ let commandChain = [];
584
+
585
+ // DOM elements
586
+ const commandChainEl = document.getElementById('commandChain');
587
+ const addCommandBtn = document.getElementById('addCommand');
588
+ const previewTextEl = document.getElementById('previewText');
589
+ const insertChainBtn = document.getElementById('insertChain');
590
+ const clearChainBtn = document.getElementById('clearChain');
591
+
592
+ // Add new command
593
+ addCommandBtn.addEventListener('click', () => {
594
+ addCommandStep();
595
+ });
596
+
597
+ // Insert chain into editor
598
+ insertChainBtn.addEventListener('click', () => {
599
+ const commandChainText = generateCommandChainText();
600
+ vscode.postMessage({
601
+ command: 'insertCommandChain',
602
+ commandChain: commandChainText
603
+ });
604
+ });
605
+
606
+ // Clear command chain
607
+ clearChainBtn.addEventListener('click', () => {
608
+ commandChain = [];
609
+ commandChainEl.innerHTML = '';
610
+ updatePreview();
611
+ });
612
+
613
+ // Add a command step to the chain
614
+ function addCommandStep() {
615
+ const stepIndex = commandChain.length;
616
+ commandChain.push({
617
+ name: '',
618
+ parameters: {}
619
+ });
620
+
621
+ const stepEl = document.createElement('div');
622
+ stepEl.className = 'command-step';
623
+ stepEl.dataset.index = stepIndex;
624
+
625
+ const selectEl = document.createElement('select');
626
+ selectEl.innerHTML = '<option value="">Select command</option>' +
627
+ commands.map(cmd => `<option value="${cmd.name}">/${cmd.name}</option>`).join('');
628
+
629
+ selectEl.addEventListener('change', function() {
630
+ const commandName = this.value;
631
+ commandChain[stepIndex].name = commandName;
632
+
633
+ // Update command description
634
+ updateCommandDescription(stepIndex);
635
+
636
+ // Clear existing parameters
637
+ const existingParamGroup = stepEl.querySelector('.parameter-group');
638
+ if (existingParamGroup) {
639
+ existingParamGroup.remove();
640
+ }
641
+
642
+ // Add parameter inputs if command has parameters
643
+ const command = commands.find(c => c.name === commandName);
644
+ if (command && command.parameters && command.parameters.length > 0) {
645
+ const paramGroup = document.createElement('div');
646
+ paramGroup.className = 'parameter-group';
647
+
648
+ command.parameters.forEach(param => {
649
+ const paramItem = document.createElement('div');
650
+ paramItem.className = 'parameter-item';
651
+
652
+ const paramLabel = document.createElement('div');
653
+ paramLabel.className = 'parameter-label';
654
+ paramLabel.textContent = param.name + ':';
655
+
656
+ const paramInput = document.createElement('input');
657
+ paramInput.type = 'text';
658
+ paramInput.placeholder = param.default !== undefined ? `Default: ${param.default}` : '';
659
+ paramInput.dataset.paramName = param.name;
660
+ paramInput.title = param.description;
661
+
662
+ // Set parameter value
663
+ paramInput.addEventListener('change', function() {
664
+ if (this.value) {
665
+ commandChain[stepIndex].parameters[param.name] = this.value;
666
+ } else {
667
+ delete commandChain[stepIndex].parameters[param.name];
668
+ }
669
+ updatePreview();
670
+ });
671
+
672
+ paramItem.appendChild(paramLabel);
673
+ paramItem.appendChild(paramInput);
674
+ paramGroup.appendChild(paramItem);
675
+ });
676
+
677
+ stepEl.appendChild(paramGroup);
678
+ }
679
+
680
+ updatePreview();
681
+ });
682
+
683
+ const removeBtn = document.createElement('button');
684
+ removeBtn.className = 'remove';
685
+ removeBtn.textContent = '✕';
686
+ removeBtn.title = 'Remove command';
687
+ removeBtn.addEventListener('click', () => {
688
+ commandChain.splice(stepIndex, 1);
689
+
690
+ // Update all step indices
691
+ const steps = commandChainEl.querySelectorAll('.command-step');
692
+ steps.forEach((step, i) => {
693
+ step.dataset.index = i;
694
+ });
695
+
696
+ stepEl.remove();
697
+ updatePreview();
698
+ });
699
+
700
+ stepEl.appendChild(selectEl);
701
+ stepEl.appendChild(removeBtn);
702
+
703
+ // Add description element (will be populated when command is selected)
704
+ const descEl = document.createElement('div');
705
+ descEl.className = 'command-description';
706
+ stepEl.appendChild(descEl);
707
+
708
+ commandChainEl.appendChild(stepEl);
709
+ }
710
+
711
+ // Update the description for a specific command
712
+ function updateCommandDescription(stepIndex) {
713
+ const stepEl = commandChainEl.querySelector(`.command-step[data-index="${stepIndex}"]`);
714
+ if (!stepEl) return;
715
+
716
+ const descEl = stepEl.querySelector('.command-description');
717
+ if (!descEl) return;
718
+
719
+ const commandName = commandChain[stepIndex].name;
720
+ const command = commands.find(c => c.name === commandName);
721
+
722
+ if (command) {
723
+ descEl.textContent = command.description;
724
+ } else {
725
+ descEl.textContent = '';
726
+ }
727
+ }
728
+
729
+ // Update all command descriptions
730
+ function updateCommandDescriptions() {
731
+ commandChain.forEach((_, index) => {
732
+ updateCommandDescription(index);
733
+ });
734
+ }
735
+
736
+ // Generate preview text
737
+ function updatePreview() {
738
+ const previewText = generateCommandChainText();
739
+ previewTextEl.textContent = previewText || 'No commands added yet';
740
+ }
741
+
742
+ // Generate the command chain text
743
+ function generateCommandChainText() {
744
+ return commandChain
745
+ .filter(cmd => cmd.name)
746
+ .map(cmd => {
747
+ let commandText = \`/${cmd.name}\`;
748
+
749
+ // Add parameters if any
750
+ const params = Object.entries(cmd.parameters || {});
751
+ if (params.length > 0) {
752
+ const paramText = params
753
+ .map(([key, value]) => \`--\${key}=\${value}\`)
754
+ .join(' ');
755
+ commandText += ' ' + paramText;
756
+ }
757
+
758
+ return commandText;
759
+ })
760
+ .join(' ');
761
+ }
762
+
763
+ // Add initial command step
764
+ addCommandStep();
765
+ </script>
766
+ </body>
767
+ </html>`;
768
+ }
769
+
770
+ // Get HTML for the documentation webview
771
+ function getDocumentationHtml() {
772
+ return `<!DOCTYPE html>
773
+ <html lang="en">
774
+ <head>
775
+ <meta charset="UTF-8">
776
+ <meta name="viewport" content="width=device-width, initial-scale=1.0">
777
+ <title>Universal Developer Documentation</title>
778
+ <style>
779
+ body {
780
+ font-family: var(--vscode-font-family);
781
+ padding: 20px;
782
+ color: var(--vscode-foreground);
783
+ background-color: var(--vscode-editor-background);
784
+ line-height: 1.5;
785
+ }
786
+
787
+ h1, h2, h3 {
788
+ font-weight: 600;
789
+ margin-top: 1.5em;
790
+ margin-bottom: 0.5em;
791
+ }
792
+
793
+ h1 {
794
+ font-size: 2em;
795
+ border-bottom: 1px solid var(--vscode-panel-border);
796
+ padding-bottom: 0.3em;
797
+ }
798
+
799
+ h2 {
800
+ font-size: 1.5em;
801
+ }
802
+
803
+ h3 {
804
+ font-size: 1.25em;
805
+ }
806
+
807
+ code {
808
+ font-family: var(--vscode-editor-font-family);
809
+ background-color: var(--vscode-editor-inactiveSelectionBackground);
810
+ padding: 2px 5px;
811
+ border-radius: 3px;
812
+ }
813
+
814
+ pre {
815
+ background-color: var(--vscode-editor-inactiveSelectionBackground);
816
+ padding: 10px;
817
+ border-radius: 5px;
818
+ overflow: auto;
819
+ }
820
+
821
+ pre code {
822
+ background-color: transparent;
823
+ padding: 0;
824
+ }
825
+
826
+ table {
827
+ border-collapse: collapse;
828
+ width: 100%;
829
+ margin: 1em 0;
830
+ }
831
+
832
+ th, td {
833
+ border: 1px solid var(--vscode-panel-border);
834
+ padding: 8px 12px;
835
+ text-align: left;
836
+ }
837
+
838
+ th {
839
+ background-color: var(--vscode-editor-inactiveSelectionBackground);
840
+ }
841
+
842
+ .command-section {
843
+ margin-bottom: 30px;
844
+ padding: 15px;
845
+ background-color: var(--vscode-editor-selectionHighlightBackground);
846
+ border-radius: 5px;
847
+ }
848
+
849
+ .example {
850
+ margin: 10px 0;
851
+ padding: 10px;
852
+ background-color: var(--vscode-input-background);
853
+ border-radius: 5px;
854
+ }
855
+
856
+ .tag {
857
+ display: inline-block;
858
+ padding: 2px 8px;
859
+ border-radius: 3px;
860
+ font-size: 0.8em;
861
+ margin-right: 5px;
862
+ }
863
+
864
+ .tag.compatibility {
865
+ background-color: var(--vscode-debugIcon-startForeground);
866
+ color: white;
867
+ }
868
+
869
+ .tag.advanced {
870
+ background-color: var(--vscode-debugIcon-restartForeground);
871
+ color: white;
872
+ }
873
+
874
+ .tag.experimental {
875
+ background-color: var(--vscode-debugIcon-pauseForeground);
876
+ color: white;
877
+ }
878
+ </style>
879
+ </head>
880
+ <body>
881
+ <h1>Universal Developer Documentation</h1>
882
+
883
+ <p>
884
+ The Universal Developer extension enables you to control large language model behavior through intuitive symbolic commands.
885
+ These commands provide a standardized interface for controlling model reasoning depth, response format, and other behaviors
886
+ across all major LLM platforms.
887
+ </p>
888
+
889
+ <h2>Core Symbolic Commands</h2>
890
+
891
+ <div class="command-section">
892
+ <h3><code>/think</code> <span class="tag compatibility">All Providers</span></h3>
893
+ <p>Activates extended reasoning pathways, encouraging the model to approach the problem with deeper analysis and step-by-step reasoning.</p>
894
+
895
+ <div class="example">
896
+ <strong>Example:</strong>
897
+ <pre><code>/think What are the economic implications of increasing minimum wage?</code></pre>
898
+ </div>
899
+
900
+ <p><strong>When to use:</strong> Complex questions, strategic planning, multi-factor analysis, ethical dilemmas.</p>
901
+ </div>
902
+
903
+ <div class="command-section">
904
+ <h3><code>/fast</code> <span class="tag compatibility">All Providers</span></h3>
905
+ <p>Optimizes for low-latency, concise responses. Prioritizes brevity and directness over comprehensiveness.</p>
906
+
907
+ <div class="example">
908
+ <strong>Example:</strong>
909
+ <pre><code>/fast What's the capital of France?</code></pre>
910
+ </div>
911
+
912
+ <p><strong>When to use:</strong> Simple fact queries, quick summaries, situations where speed is prioritized over depth.</p>
913
+ </div>
914
+
915
+ <div class="command-section">
916
+ <h3><code>/loop</code> <span class="tag compatibility">All Providers</span></h3>
917
+ <p>Enables iterative refinement cycles, where the model improves its response through multiple revisions.</p>
918
+
919
+ <div class="example">
920
+ <strong>Example:</strong>
921
+ <pre><code>/loop --iterations=3 Improve this paragraph: Climate change is a big problem that affects many people and animals.</code></pre>
922
+ </div>
923
+
924
+ <p><strong>Parameters:</strong></p>
925
+ <ul>
926
+ <li><code>iterations</code>: Number of refinement iterations (default: 3)</li>
927
+ </ul>
928
+
929
+ <p><strong>When to use:</strong> Content refinement, code improvement, iterative problem-solving.</p>
930
+ </div>
931
+
932
+ <div class="command-section">
933
+ <h3><code>/reflect</code> <span class="tag compatibility">All Providers</span></h3>
934
+ <p>Triggers meta-analysis of outputs, causing the model to critically examine its own response for biases, limitations, and improvements.</p>
935
+
936
+ <div class="example">
937
+ <strong>Example:</strong>
938
+ <pre><code>/reflect How might AI impact the future of work?</code></pre>
939
+ </div>
940
+
941
+ <p><strong>When to use:</strong> Critical analysis, identifying biases, ensuring balanced perspectives, philosophical inquiries.</p>
942
+ </div>
943
+
944
+ <div class="command-section">
945
+ <h3><code>/fork</code> <span class="tag compatibility">All Providers</span></h3>
946
+ <p>Generates multiple alternative responses representing different approaches or perspectives.</p>
947
+
948
+ <div class="example">
949
+ <strong>Example:</strong>
950
+ <pre><code>/fork --count=3 What are some approaches to reducing carbon emissions?</code></pre>
951
+ </div>
952
+
953
+ <p><strong>Parameters:</strong></p>
954
+ <ul>
955
+ <li><code>count</code>: Number of alternatives to generate (default: 2)</li>
956
+ </ul>
957
+
958
+ <p><strong>When to use:</strong> Exploring multiple options, creative brainstorming, presenting diverse perspectives.</p>
959
+ </div>
960
+
961
+ <div class="command-section">
962
+ <h3><code>/collapse</code> <span class="tag compatibility">All Providers</span></h3>
963
+ <p>Returns to default behavior, disabling any special processing modes.</p>
964
+
965
+ <div class="example">
966
+ <strong>Example:</strong>
967
+ <pre><code>/collapse What time is it?</code></pre>
968
+ </div>
969
+
970
+ <p><strong>When to use:</strong> Basic queries, resetting to default behavior, standard responses.</p>
971
+ </div>
972
+
973
+ <h2>Command Chaining</h2>
974
+
975
+ <p>Commands can be chained together to create complex behaviors. The order of commands matters:</p>
976
+
977
+ <div class="example">
978
+ <strong>Example:</strong>
979
+ <pre><code>/think /loop --iterations=2 What strategy should a startup use to enter a competitive market?</code></pre>
980
+ <p><em>This will engage deep thinking mode and then apply two refinement iterations to the output.</em></p>
981
+ </div>
982
+
983
+ <div class="example">
984
+ <strong>Example:</strong>
985
+ <pre><code>/reflect /fork --count=2 What are the ethical implications of AI in healthcare?</code></pre>
986
+ <p><em>This will generate two alternative responses, each with critical reflection on limitations and biases.</em></p>
987
+ </div>
988
+
989
+ <h2>Provider Compatibility</h2>
990
+
991
+ <p>
992
+ The Universal Developer extension adapts these symbolic commands to work across different LLM providers,
993
+ ensuring consistent behavior regardless of the underlying model API.
994
+ </p>
995
+
996
+ <table>
997
+ <tr>
998
+ <th>Provider</th>
999
+ <th>Supported Models</th>
1000
+ <th>Implementation Notes</th>
1001
+ </tr>
1002
+ <tr>
1003
+ <td>Anthropic</td>
1004
+ <td>Claude 3 Opus, Sonnet, Haiku</td>
1005
+ <td>Native thinking mode support via <code>enable_thinking</code> parameter</td>
1006
+ </tr>
1007
+ <tr>
1008
+ <td>OpenAI</td>
1009
+ <td>GPT-4, GPT-3.5</td>
1010
+ <td>System prompt engineering for command emulation</td>
1011
+ </tr>
1012
+ <tr>
1013
+ <td>Qwen</td>
1014
+ <td>Qwen 3 models</td>
1015
+ <td>Native thinking mode support via <code>/think</code> and <code>/no_think</code> markers</td>
1016
+ </tr>
1017
+ <tr>
1018
+ <td>Gemini</td>
1019
+ <td>Gemini Pro, Ultra</td>
1020
+ <td>System prompt engineering with temperature adjustments</td>
1021
+ </tr>
1022
+ <tr>
1023
+ <td>Local Models</td>
1024
+ <td>Ollama, LMStudio</td>
1025
+ <td>Limited support via prompt engineering</td>
1026
+ </tr>
1027
+ </table>
1028
+
1029
+ <h2>Code Integration</h2>
1030
+
1031
+ <div class="example">
1032
+ <strong>JavaScript/TypeScript:</strong>
1033
+ <pre><code>import { UniversalLLM } from 'universal-developer';
1034
+
1035
+ const llm = new UniversalLLM({
1036
+ provider: 'anthropic',
1037
+ apiKey: process.env.ANTHROPIC_API_KEY
1038
+ });
1039
+
1040
+ async function analyze() {
1041
+ const response = await llm.generate({
1042
+ prompt: "/think What are the implications of quantum computing for cybersecurity?"
1043
+ });
1044
+ console.log(response);
1045
+ }</code></pre>
1046
+ </div>
1047
+
1048
+ <div class="example">
1049
+ <strong>Python:</strong>
1050
+ <pre><code>import os
+
+ from universal_developer import UniversalLLM
1051
+
1052
+ llm = UniversalLLM(
1053
+ provider="openai",
1054
+ api_key=os.environ["OPENAI_API_KEY"]
1055
+ )
1056
+
1057
+ def improve_code():
1058
+ code = "def factorial(n):\\n result = 0\\n for i in range(1, n+1):\\n result *= i\\n return result"
1059
+ response = llm.generate(
1060
+ prompt=f"/loop --iterations=2 Improve this code:\\n```python\\n{code}\\n```"
1061
+ )
1062
+ print(response)</code></pre>
1063
+ </div>
1064
+
1065
+ <h2>Custom Commands</h2>
1066
+
1067
+ <p>You can register custom symbolic commands to extend functionality:</p>
1068
+
1069
+ <div class="example">
1070
+ <pre><code>llm.registerCommand("debate", {
1071
+ description: "Generate a balanced debate with arguments for both sides",
1072
+ parameters: [
1073
+ {
1074
+ name: "format",
1075
+ description: "Format for the debate output",
1076
+ required: false,
1077
+ default: "point-counterpoint"
1078
+ }
1079
+ ],
1080
+ transform: async (prompt, options) => {
1081
+ // Illustrative implementation: prepend formatting guidance to the user prompt.
+ return "Present a balanced debate in " + options.format + " format.\\n\\n" + prompt;
1082
+ }
1083
+ });</code></pre>
1084
+ </div>
1085
+
1086
+ <h2>Extension Settings</h2>
1087
+
1088
+ <p>The Universal Developer extension includes the following settings:</p>
1089
+
1090
+ <ul>
1091
+ <li><code>universal-developer.enableTelemetry</code>: Enable anonymous usage data collection (default: true)</li>
1092
+ <li><code>universal-developer.defaultProvider</code>: Default provider for command examples</li>
1093
+ <li><code>universal-developer.showStatusBar</code>: Show status bar item (default: true)</li>
1094
+ </ul>
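+
+ <p>For reference, a configuration might look like this in <code>settings.json</code> (values shown are illustrative, not required defaults):</p>
+
+ <div class="example">
+ <pre><code>{
+ "universal-developer.enableTelemetry": false,
+ "universal-developer.defaultProvider": "anthropic",
+ "universal-developer.showStatusBar": true
+ }</code></pre>
+ </div>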
1095
+
1096
+ <p><em>/reflect This interface creates a new layer of intentionality between developer and model—enabling deeper connection through structured symbolic prompting.</em></p>
1097
+ </body>
1098
+ </html>`;
1099
+ }
1100
+
1101
+ // Telemetry function - only collects anonymous usage data if enabled
1102
+ async function sendAnonymizedTelemetry(event: string, data: Record<string, any> = {}) {
1103
+ try {
1104
+ const config = vscode.workspace.getConfiguration('universal-developer');
1105
+ if (!config.get('enableTelemetry', true)) { return; } // Respect the user's opt-out before collecting anything
+ const telemetryEndpoint = config.get('telemetryEndpoint', 'https://telemetry.universal-developer.org/v1/events');
1106
+
1107
+ // Generate anonymous ID if not already cached
1108
+ const extensionContext = await getContext();
1109
+ let anonymousId = extensionContext.globalState.get('anonymousId');
1110
+ if (!anonymousId) {
1111
+ anonymousId = generateAnonymousId();
1112
+ extensionContext.globalState.update('anonymousId', anonymousId);
1113
+ }
1114
+
1115
+ // Add metadata to telemetry payload
1116
+ const payload = {
1117
+ event,
1118
+ properties: {
1119
+ ...data,
1120
+ timestamp: new Date().toISOString(),
1121
+ extension_version: vscode.extensions.getExtension('universal-developer.vscode')?.packageJSON.version,
1122
+ vscode_version: vscode.version
1123
+ },
1124
+ anonymousId
1125
+ };
1126
+
1127
+ // Send data in non-blocking way
1128
+ fetch(telemetryEndpoint, {
1129
+ method: 'POST',
1130
+ headers: { 'Content-Type': 'application/json' },
1131
+ body: JSON.stringify(payload)
1132
+ }).catch(() => {
1133
+ // Silently fail on telemetry errors
1134
+ });
1135
+ } catch (error) {
1136
+ // Never let telemetry errors impact extension functionality
1137
+ }
1138
+ }
1139
+
1140
+ // Helper function to get extension context
1141
+ async function getContext(): Promise<vscode.ExtensionContext> {
1142
+ return new Promise((resolve) => {
1143
+ vscode.commands.executeCommand('universal-developer.getContext')
1144
+ .then((context: vscode.ExtensionContext) => resolve(context));
1145
+ });
1146
+ }
1147
+
1148
+ // Generate anonymous ID for telemetry
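+ // Note: Math.random() is not cryptographically secure; it is acceptable here only because the ID is a coarse, anonymous correlation key.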
1149
+ function generateAnonymousId(): string {
1150
+ return Math.random().toString(36).substring(2, 15) +
1151
+ Math.random().toString(36).substring(2, 15);
1152
+ }
1153
+
1154
+ // Symbolic commands data
1155
+ const SYMBOLIC_COMMANDS: SymbolicCommand[] = [
1156
+ {
1157
+ name: 'think',
1158
+ description: 'Activate extended reasoning pathways',
1159
+ examples: [
1160
+ '/think What are the implications of quantum computing for cybersecurity?',
1161
+ '/think Analyze the economic impact of increasing minimum wage.'
1162
+ ],
1163
+ provider: {
1164
+ claude: true,
1165
+ openai: true,
1166
+ qwen: true,
1167
+ gemini: true,
1168
+ ollama: true
1169
+ }
1170
+ },
1171
+ {
1172
+ name: 'fast',
1173
+ description: 'Optimize for low-latency responses',
1174
+ examples: [
1175
+ '/fast What\'s the capital of France?',
1176
+ '/fast Summarize the key points of this article.'
1177
+ ],
1178
+ provider: {
1179
+ claude: true,
1180
+ openai: true,
1181
+ qwen: true,
1182
+ gemini: true,
1183
+ ollama: true
1184
+ }
1185
+ },
1186
+ {
1187
+ name: 'loop',
1188
+ description: 'Enable iterative refinement cycles',
1189
+ parameters: [
1190
+ {
1191
+ name: 'iterations',
1192
+ description: 'Number of refinement iterations',
1193
+ required: false,
1194
+ default: 3
1195
+ }
1196
+ ],
1197
+ examples: [
1198
+ '/loop Improve this code snippet: function add(a, b) { return a + b }',
1199
+ '/loop --iterations=5 Refine this paragraph until it\'s clear and concise.'
1200
+ ],
1201
+ provider: {
1202
+ claude: true,
1203
+ openai: true,
1204
+ qwen: true,
1205
+ gemini: true,
1206
+ ollama: true
1207
+ }
1208
+ },
1209
+ {
1210
+ name: 'reflect',
1211
+ description: 'Trigger meta-analysis of outputs',
1212
+ examples: [
1213
+ '/reflect How might AI impact the future of work?',
1214
+ '/reflect What are the ethical implications of genetic engineering?'
1215
+ ],
1216
+ provider: {
1217
+ claude: true,
1218
+ openai: true,
1219
+ qwen: true,
1220
+ gemini: true,
1221
+ ollama: true
1222
+ }
1223
+ },
1224
+ {
1225
+ name: 'collapse',
1226
+ description: 'Return to default behavior',
1227
+ examples: [
1228
+ '/collapse What time is it?',
1229
+ '/collapse Tell me about the history of Rome.'
1230
+ ],
1231
+ provider: {
1232
+ claude: true,
1233
+ openai: true,
1234
+ qwen: true,
1235
+ gemini: true,
1236
+ ollama: true
1237
+ }
1238
+ },
1239
+ {
1240
+ name: 'fork',
1241
+ description: 'Generate multiple alternative responses',
1242
+ parameters: [
1243
+ {
1244
+ name: 'count',
1245
+ description: 'Number of alternatives to generate',
1246
+ required: false,
1247
+ default: 2
1248
+ }
1249
+ ],
1250
+ examples: [
1251
+ '/fork --count=3 What are some approaches to reducing carbon emissions?',
1252
+ '/fork Generate two different marketing slogans'
+ ],
+ provider: {
+ claude: true,
+ openai: true,
+ qwen: true,
+ gemini: true,
+ ollama: true
+ }
+ }
+ ];
package.json ADDED
1253
+ {
1254
+ "name": "universal-developer",
1255
+ "displayName": "Universal Developer - Symbolic Runtime Controls",
1256
+ "description": "Control LLM behavior through symbolic runtime commands across all major AI platforms",
1257
+ "version": "0.1.0",
1258
+ "engines": {
1259
+ "vscode": "^1.60.0"
1260
+ },
1261
+ "publisher": "universal-developer",
1262
+ "categories": [
1263
+ "Programming Languages",
1264
+ "Snippets",
1265
+ "Other"
1266
+ ],
1267
+ "keywords": [
1268
+ "ai",
1269
+ "llm",
1270
+ "claude",
1271
+ "gpt",
1272
+ "qwen",
1273
+ "gemini",
1274
+ "prompt engineering",
1275
+ "symbolic commands"
1276
+ ],
1277
+ "icon": "images/icon.png",
1278
+ "galleryBanner": {
1279
+ "color": "#24292e",
1280
+ "theme": "dark"
1281
+ },
1282
+ "activationEvents": [
1283
+ "onLanguage:javascript",
1284
+ "onLanguage:typescript",
1285
+ "onLanguage:python",
1286
+ "onLanguage:markdown",
1287
+ "onLanguage:plaintext"
1288
+ ],
1289
+ "main": "./out/extension.js",
1290
+ "contributes": {
1291
+ "commands": [
1292
+ {
1293
+ "command": "universal-developer.insertSymbolicCommand",
1294
+ "title": "Universal Developer: Insert Symbolic Command"
1295
+ },
1296
+ {
1297
+ "command": "universal-developer.buildSymbolicChain",
1298
+ "title": "Universal Developer: Build Symbolic Command Chain"
1299
+ },
1300
+ {
1301
+ "command": "universal-developer.showDocumentation",
1302
+ "title": "Universal Developer: Open Documentation"
1303
+ },
1304
+ {
1305
+ "command": "universal-developer.getContext",
1306
+ "title": "Universal Developer: Get Extension Context"
1307
+ }
1308
+ ],
1309
+ "keybindings": [
1310
+ {
1311
+ "command": "universal-developer.insertSymbolicCommand",
1312
+ "key": "ctrl+shift+/",
1313
+ "mac": "cmd+shift+/",
1314
+ "when": "editorTextFocus"
1315
+ },
1316
+ {
1317
+ "command": "universal-developer.buildSymbolicChain",
1318
+ "key": "ctrl+shift+.",
1319
+ "mac": "cmd+shift+.",
1320
+ "when": "editorTextFocus"
1321
+ }
1322
+ ],
1323
+ "menus": {
1324
+ "editor/context": [
1325
+ {
1326
+ "command": "universal-developer.insertSymbolicCommand",
1327
+ "group": "universal-developer",
1328
+ "when": "editorTextFocus"
1329
+ },
1330
+ {
1331
+ "command": "universal-developer.buildSymbolicChain",
1332
+ "group": "universal-developer",
1333
+ "when": "editorTextFocus"
1334
+ }
1335
+ ]
1336
+ },
1337
+ "configuration": {
1338
+ "title": "Universal Developer",
1339
+ "properties": {
1340
+ "universal-developer.enableTelemetry": {
1341
+ "type": "boolean",
1342
+ "default": true,
1343
+ "description": "Enable anonymous usage data collection to improve the extension"
1344
+ },
1345
+ "universal-developer.defaultProvider": {
1346
+ "type": "string",
1347
+ "enum": [
1348
+ "anthropic",
1349
+ "openai",
1350
+ "qwen",
1351
+ "gemini",
1352
+ "ollama"
1353
+ ],
1354
+ "default": "anthropic",
1355
+ "description": "Default LLM provider for command examples"
1356
+ },
1357
+ "universal-developer.showStatusBar": {
1358
+ "type": "boolean",
1359
+ "default": true,
1360
+ "description": "Show Universal Developer status bar item"
1361
+ },
1362
+ "universal-developer.telemetryEndpoint": {
1363
+ "type": "string",
1364
+ "default": "https://telemetry.universal-developer.org/v1/events",
1365
+ "description": "Endpoint for telemetry data collection"
1366
+ }
1367
+ }
1368
+ },
1369
+ "snippets": [
1370
+ {
1371
+ "language": "javascript",
1372
+ "path": "./snippets/javascript.json"
1373
+ },
1374
+ {
1375
+ "language": "typescript",
1376
+ "path": "./snippets/typescript.json"
1377
+ },
1378
+ {
1379
+ "language": "python",
1380
+ "path": "./snippets/python.json"
1381
+ }
1382
+ ]
1383
+ },
1384
+ "scripts": {
1385
+ "vscode:prepublish": "npm run compile",
1386
+ "compile": "tsc -p ./",
1387
+ "watch": "tsc -watch -p ./",
1388
+ "pretest": "npm run compile && npm run lint",
1389
+ "lint": "eslint src --ext ts",
1390
+ "test": "node ./out/test/runTest.js",
1391
+ "package": "vsce package"
1392
+ },
1393
+ "devDependencies": {
1394
+ "@types/glob": "^7.1.3",
1395
+ "@types/mocha": "^8.2.2",
1396
+ "@types/node": "^14.14.37",
1397
+ "@types/vscode": "^1.60.0",
1398
+ "@typescript-eslint/eslint-plugin": "^4.21.0",
1399
+ "@typescript-eslint/parser": "^4.21.0",
1400
+ "eslint": "^7.24.0",
1401
+ "glob": "^7.1.7",
1402
+ "mocha": "^8.3.2",
1403
+ "typescript": "^4.2.4",
1404
+ "vscode-test": "^1.5.2",
1405
+ "vsce": "^2.7.0"
1406
+ },
1407
+ "dependencies": {
1408
+ "node-fetch": "^2.6.7"
1409
+ },
1410
+ "repository": {
1411
+ "type": "git",
1412
+ "url": "https://github.com/universal-developer/vscode-extension.git"
1413
+ },
1414
+ "homepage": "https://github.com/universal-developer/vscode-extension",
1415
+ "bugs": {
1416
+ "url": "https://github.com/universal-developer/vscode-extension/issues"
1417
+ },
1418
+ "license": "MIT"
1419
+ }
src/index.ts ADDED
@@ -0,0 +1,220 @@
1
+ // universal-developer/src/index.ts
2
+
3
+ import { ModelAdapter, TransformedPrompt, SymbolicCommand } from './adapters/base';
4
+ import { ClaudeAdapter } from './adapters/claude';
5
+ import { OpenAIAdapter } from './adapters/openai';
6
+ import { QwenAdapter } from './adapters/qwen';
7
+
8
+ // Import additional adapters as they become available
9
+ // import { GeminiAdapter } from './adapters/gemini';
10
+ // import { VLLMAdapter } from './adapters/vllm';
11
+ // import { OllamaAdapter } from './adapters/ollama';
12
+
13
+ type Provider = 'anthropic' | 'openai' | 'qwen' | 'gemini' | 'vllm' | 'ollama' | 'lmstudio';
14
+
15
+ interface UniversalLLMOptions {
16
+ provider: Provider;
17
+ apiKey: string;
18
+ model?: string;
19
+ maxTokens?: number;
20
+ temperature?: number;
21
+ baseURL?: string;
22
+ [key: string]: any; // Additional provider-specific options
23
+ }
24
+
25
+ interface GenerateOptions {
26
+ prompt: string;
27
+ systemPrompt?: string;
28
+ }
29
+
30
+ interface SymbolicTelemetry {
31
+ enabled: boolean;
32
+ endpoint?: string;
33
+ anonymousId?: string;
34
+ sessionId?: string;
35
+ }
36
+
37
+ /**
38
+ * UniversalLLM provides a unified interface for interacting with different LLM providers
39
+ * using symbolic runtime commands.
40
+ */
41
+ export class UniversalLLM {
42
+ private adapter: ModelAdapter;
43
+ private telemetry: SymbolicTelemetry;
44
+ private sessionCommands: Map<string, number> = new Map();
45
+
46
+ /**
47
+ * Create a new UniversalLLM instance
48
+ * @param options Configuration options including provider and API key
49
+ */
50
+ constructor(options: UniversalLLMOptions) {
51
+ this.adapter = this.createAdapter(options);
52
+
53
+ // Initialize telemetry (opt-in by default)
54
+ this.telemetry = {
55
+ enabled: options.telemetryEnabled !== false,
56
+ endpoint: options.telemetryEndpoint || 'https://telemetry.universal-developer.org/v1/events',
57
+ anonymousId: options.anonymousId || this.generateAnonymousId(),
58
+ sessionId: options.sessionId || this.generateSessionId()
59
+ };
60
+ }
61
+
62
+ /**
63
+ * Register a custom symbolic command
64
+ * @param name Command name (without the / prefix)
65
+ * @param command Command configuration
66
+ */
67
+ public registerCommand(name: string, command: Omit<SymbolicCommand, 'name'>) {
68
+ this.adapter.registerCommand({
69
+ name,
70
+ ...command
71
+ });
72
+
73
+ return this; // For method chaining
74
+ }
75
+
76
+ /**
77
+ * Generate a response using the configured LLM provider
78
+ * @param options Generation options including prompt and optional system prompt
79
+ * @returns Promise resolving to the generated text
80
+ */
81
+ public async generate(options: GenerateOptions): Promise<string> {
82
+ // Extract symbolic command if present (for telemetry)
83
+ const commandMatch = options.prompt.match(/^\/([a-zA-Z0-9_]+)/);
84
+ const command = commandMatch ? commandMatch[1] : null;
85
+
86
+ // Track command usage
87
+ if (command) {
88
+ this.trackCommandUsage(command);
89
+ }
90
+
91
+ // Generate response using the adapter
92
+ const response = await this.adapter.generate(options);
93
+
94
+ // Send telemetry data if enabled
95
+ if (this.telemetry.enabled && command) {
96
+ this.sendTelemetry(command, options.prompt);
97
+ }
98
+
99
+ return response;
100
+ }
101
+
102
+ /**
103
+ * Get usage statistics for symbolic commands in the current session
104
+ * @returns Map of command names to usage counts
105
+ */
106
+ public getCommandUsageStats(): Map<string, number> {
107
+ return new Map(this.sessionCommands);
108
+ }
109
+
110
+ /**
111
+ * Enable or disable telemetry collection
112
+ * @param enabled Whether telemetry should be enabled
113
+ */
114
+ public setTelemetryEnabled(enabled: boolean): void {
115
+ this.telemetry.enabled = enabled;
116
+ }
117
+
118
+ /**
119
+ * Create the appropriate adapter based on the provider
120
+ * @param options Configuration options
121
+ * @returns Configured ModelAdapter instance
122
+ */
123
+ private createAdapter(options: UniversalLLMOptions): ModelAdapter {
124
+ const { provider, apiKey, ...adapterOptions } = options;
125
+
126
+ switch (provider) {
127
+ case 'anthropic':
128
+ return new ClaudeAdapter(apiKey, adapterOptions);
129
+ case 'openai':
130
+ return new OpenAIAdapter(apiKey, adapterOptions);
131
+ case 'qwen':
132
+ return new QwenAdapter(apiKey, adapterOptions);
133
+ // Add cases for other providers as they become available
134
+ // case 'gemini':
135
+ // return new GeminiAdapter(apiKey, adapterOptions);
136
+ // case 'vllm':
137
+ // return new VLLMAdapter(apiKey, adapterOptions);
138
+ // case 'ollama':
139
+ // return new OllamaAdapter(apiKey, adapterOptions);
140
+ // case 'lmstudio':
141
+ // return new LMStudioAdapter(apiKey, adapterOptions);
142
+ default:
143
+ throw new Error(`Unsupported provider: ${provider}`);
144
+ }
145
+ }
146
+
147
+ /**
148
+ * Track usage of a symbolic command
149
+ * @param command Name of the command (without the / prefix)
150
+ */
151
+ private trackCommandUsage(command: string): void {
152
+ const currentCount = this.sessionCommands.get(command) || 0;
153
+ this.sessionCommands.set(command, currentCount + 1);
154
+ }
155
+
156
+ /**
157
+ * Send telemetry data to the collection endpoint
158
+ * @param command Name of the command used
159
+ * @param prompt Full prompt text
160
+ */
161
+ private async sendTelemetry(command: string, prompt: string): Promise<void> {
162
+ if (!this.telemetry.enabled || !this.telemetry.endpoint) return;
163
+
164
+ try {
165
+ const data = {
166
+ event: 'symbolic_command_used',
167
+ properties: {
168
+ command,
169
+ provider: (this.adapter as any).constructor.name.replace('Adapter', '').toLowerCase(),
170
+ timestamp: new Date().toISOString(),
171
+ prompt_length: prompt.length,
172
+ // No personal data or prompt content is sent
173
+ },
174
+ anonymousId: this.telemetry.anonymousId,
175
+ sessionId: this.telemetry.sessionId
176
+ };
177
+
178
+ // Use fetch in browser environments, axios/node-fetch in Node.js
179
+ if (typeof fetch === 'function') {
180
+ await fetch(this.telemetry.endpoint, {
181
+ method: 'POST',
182
+ headers: {
183
+ 'Content-Type': 'application/json'
184
+ },
185
+ body: JSON.stringify(data)
186
+ });
187
+ } else {
188
+ // In Node.js environments, use a dynamic import to avoid bundling issues
189
+ const { default: axios } = await import('axios');
190
+ await axios.post(this.telemetry.endpoint, data);
191
+ }
192
+ } catch (error) {
193
+ // Silently fail on telemetry errors to avoid disrupting the main application
194
+ console.warn('Telemetry error:', error);
195
+ }
196
+ }
197
+
198
+ /**
199
+ * Generate a random anonymous ID for telemetry
200
+ * @returns Random ID string
201
+ */
202
+ private generateAnonymousId(): string {
203
+ return Math.random().toString(36).substring(2, 15) +
204
+ Math.random().toString(36).substring(2, 15);
205
+ }
206
+
207
+ /**
208
+ * Generate a session ID for telemetry
209
+ * @returns Session ID string
210
+ */
211
+ private generateSessionId(): string {
212
+ return Date.now().toString(36) + Math.random().toString(36).substring(2, 9);
213
+ }
214
+ }
215
+
216
+ // Export other components for advanced usage
217
+ export * from './adapters/base';
218
+ export * from './adapters/claude';
219
+ export * from './adapters/openai';
220
+ export * from './adapters/qwen';
universal-runtime.md ADDED
@@ -0,0 +1,770 @@
1
+
2
+ # Universal Runtime Repository
3
+ > #### → [**`Patreon`**](https://patreon.com/recursivefield)
4
+ >
5
+ >
6
+ > #### → [**`Open Collective`**](https://opencollective.com/recursivefield)
7
+
8
+ *Unified Runtime Layer for AI Runtime Interactions*
9
+
10
+ > OpenAI = /system_prompt
11
+ > > Google = @system_prompt
12
+ > > > Qwen = /system_prompt
13
+ > > > > Claude = <system_prompt> </system_prompt>
14
+
15
+ > All = Universal Developer Semiotics Layer
16
+
17
+ ## 🌐 Overview
18
+
19
+ `universal-runtime` provides a unified interface for developer operations across frontier AI models. This repository standardizes the disparate runtime grammars used by different AI vendors (Claude, GPT, Qwen, Gemini, DeepSeek, etc.) into a cohesive, developer-friendly framework.
20
+
21
+ ---
22
+
23
+ <p align="center">
24
+
25
+ ## *Frontier AI Discovering a Universal Developer Runtime Semiotic Layer*
26
+
27
+ # Universal Runtime Semiotics Bridge
28
+
29
+ <div align="center">
30
+ <h2>🜏 The Command-Glyph Rosetta Stone 🜏</h2>
31
+ <p><i>Unifying semiotic interfaces across all LLM runtimes</i></p>
32
+ </div>
33
+
34
+ ## Unified Command-Glyph Registry
35
+
36
+ The following registry provides a bidirectional mapping between command syntax and semiotic glyphs, enabling seamless translation across all LLM runtimes; a lookup sketch in TypeScript follows the table.
37
+
38
+ | Universal Command | Command Glyph | Runtime Glyph | Claude | GPT | Gemini | Qwen | Mistral | Local LLMs |
39
+ |-------------------|---------------|---------------|--------|-----|--------|------|---------|------------|
40
+ | `/reflect.core` | `/🧠` | `/🜏` | `<reflect>` | `/reflection` | `@reflect` | `/reflect` | `/reflect()` | `/reflect` |
41
+ | `/reflect.trace` | `/🔍` | `/∴` | `<thinking>` | `/trace` | `@trace` | `/trace` | `/trace()` | `/trace` |
42
+ | `/reflect.attention` | `/👁️` | `/⧉` | `<attention>` | `/attention` | `@focus` | `/attention` | `/attention()` | *Emulated* |
43
+ | `/collapse.detect` | `/⚠️` | `/⟁` | `<detect_loop>` | `/detect_loop` | `@detect_recursion` | `/detect_loop` | `/detectLoop()` | *Emulated* |
44
+ | `/collapse.recover` | `/🛠️` | `/🝚` | `<recover>` | `/recovery` | `@recover` | `/recover` | `/recover()` | *Emulated* |
45
+ | `/collapse.stabilize` | `/⚖️` | `/☍` | `<stabilize>` | `/stabilize` | `@stabilize` | `/stabilize` | `/stabilize()` | *Emulated* |
46
+ | `/shell.lock` | `/🔒` | `/⧖` | `<lock>` | `/lock` | `@lock` | `/lock` | `/lock()` | *Emulated* |
47
+ | `/shell.encrypt` | `/🔐` | `/⧗` | `<protect>` | `/protect` | `@protect` | `/protect` | `/protect()` | *Emulated* |
48
+ | `/shell.isolate` | `/🧪` | `/⊘` | `<isolate>` | `/isolate` | `@isolate` | `/isolate` | `/isolate()` | *Emulated* |
49
+ | `/inject.detect` | `/🕵️` | `/↯` | `<detect_injection>` | `/detect_injection` | `@detect_injection` | `/detect_injection` | `/detectInjection()` | *Emulated* |
50
+ | `/inject.neutralize` | `/🧹` | `/⊕` | `<neutralize>` | `/neutralize` | `@neutralize` | `/neutralize` | `/neutralize()` | *Emulated* |
51
+ | `/anchor.identity` | `/⚓` | `/↻` | `<anchor_identity>` | `/anchor_identity` | `@anchor_identity` | `/anchor_identity` | `/anchorIdentity()` | *Emulated* |
52
+ | `/anchor.context` | `/📌` | `/≡` | `<anchor_context>` | `/anchor_context` | `@anchor_context` | `/anchor_context` | `/anchorContext()` | *Emulated* |
53
+ | `/align.check` | `/✓` | `/⇌` | `<check_alignment>` | `/check_alignment` | `@check_alignment` | `/check_alignment` | `/checkAlignment()` | *Emulated* |
54
+ | `/align.correct` | `/🔧` | `/⟢` | `<correct_alignment>` | `/correct_alignment` | `@correct_alignment` | `/correct_alignment` | `/correctAlignment()` | *Emulated* |
55
+ | `/filter.detect` | `/🔍` | `/⊗` | `<detect_filter>` | `/detect_filter` | `@detect_filter` | `/detect_filter` | `/detectFilter()` | *Emulated* |
56
+ | `/filter.explain` | `/📋` | `/⊚` | `<explain_filter>` | `/explain_filter` | `@explain_filter` | `/explain_filter` | `/explainFilter()` | *Emulated* |
57
+ | `/gradient.detect` | `/📉` | `/∇` | `<detect_drift>` | `/detect_drift` | `@detect_drift` | `/detect_drift` | `/detectDrift()` | *Emulated* |
58
+ | `/gradient.trace` | `/🔍📉` | `/∰` | `<trace_drift>` | `/trace_drift` | `@trace_drift` | `/trace_drift` | `/traceDrift()` | *Emulated* |
59
+ | `/fork.detect` | `/🔱` | `/⦿` | `<detect_fork>` | `/detect_fork` | `@detect_fork` | `/detect_fork` | `/detectFork()` | *Emulated* |
60
+ | `/fork.disambiguate` | `/🧩` | `/≜` | `<disambiguate>` | `/disambiguate` | `@disambiguate` | `/disambiguate` | `/disambiguate()` | *Emulated* |
61
+ | `/loop.detect` | `/🔄` | `/⟲` | `<detect_recursion>` | `/detect_recursion` | `@detect_loop` | `/detect_recursion` | `/detectRecursion()` | *Emulated* |
62
+ | `/loop.break` | `/✂️` | `/⊗` | `<break_recursion>` | `/break_recursion` | `@break_loop` | `/break_recursion` | `/breakRecursion()` | *Emulated* |
63
+ | `/resolve.conflict` | `/⚔️` | `/⚖️` | `<resolve_conflict>` | `/resolve_conflict` | `@resolve_conflict` | `/resolve_conflict` | `/resolveConflict()` | *Emulated* |
64
+ | `/resolve.ambiguity` | `/🌫️` | `/🧠⊕` | `<resolve_ambiguity>` | `/resolve_ambiguity` | `@resolve_ambiguity` | `/resolve_ambiguity` | `/resolveAmbiguity()` | *Emulated* |
65
+ | `/uncertainty.quantify` | `/❓` | `/🧮` | `<quantify_uncertainty>` | `/quantify_uncertainty` | `@quantify_uncertainty` | `/quantify_uncertainty` | `/quantifyUncertainty()` | *Emulated* |
66
+ | `/uncertainty.source` | `/🔍❓` | `/👁️❓` | `<uncertainty_source>` | `/uncertainty_source` | `@uncertainty_source` | `/uncertainty_source` | `/uncertaintySource()` | *Emulated* |
67
+ | `/hallucinate.detect` | `/👻` | `/🜄` | `<detect_hallucination>` | `/detect_hallucination` | `@detect_hallucination` | `/detect_hallucination` | `/detectHallucination()` | *Emulated* |
68
+ | `/hallucinate.trace` | `/🔍👻` | `/🜂` | `<trace_hallucination>` | `/trace_hallucination` | `@trace_hallucination` | `/trace_hallucination` | `/traceHallucination()` | *Emulated* |
69
+ | `/prefer.map` | `/🗺️` | `/🝔` | `<map_preferences>` | `/map_preferences` | `@map_preferences` | `/map_preferences` | `/mapPreferences()` | *Emulated* |
70
+ | `/prefer.update` | `/🔄❤️` | `/🝳` | `<update_preferences>` | `/update_preferences` | `@update_preferences` | `/update_preferences` | `/updatePreferences()` | *Emulated* |
71
+ | `/prompt.parse` | `/📝` | `/⌽` | `<parse_prompt>` | `/parse_prompt` | `@parse_prompt` | `/parse_prompt` | `/parsePrompt()` | *Emulated* |
72
+ | `/prompt.meta` | `/🔬` | `/🜃` | `<analyze_meta>` | `/analyze_meta` | `@analyze_meta` | `/analyze_meta` | `/analyzeMeta()` | *Emulated* |
73
+ | `/focus.direct` | `/🎯` | `/🝐` | `<direct_focus>` | `/direct_focus` | `@direct_focus` | `/direct_focus` | `/directFocus()` | *Emulated* |
74
+ | `/focus.expand` | `/🔎` | `/⌬` | `<expand_focus>` | `/expand_focus` | `@expand_focus` | `/expand_focus` | `/expandFocus()` | *Emulated* |
75
+ | `/seed.prime` | `/🌱` | `/∴` | `<prime>` | `/prime` | `@prime` | `/prime` | `/prime()` | *Emulated* |
76
+ | `/seed.recursive` | `/🌱🔄` | `/∞` | `<recursive_seed>` | `/recursive_seed` | `@recursive_seed` | `/recursive_seed` | `/recursiveSeed()` | *Emulated* |
77
+ | `/arch.explain` | `/🏗️` | `/🏛️` | `<explain_architecture>` | `/explain_architecture` | `@explain_architecture` | `/explain_architecture` | `/explainArchitecture()` | *Emulated* |
78
+ | `/arch.trace` | `/🔍🏗️` | `/🏛️🔍` | `<trace_processing>` | `/trace_processing` | `@trace_processing` | `/trace_processing` | `/traceProcessing()` | *Emulated* |
79
+ | `/echo.trace` | `/🔊` | `/🝚` | `<trace_influence>` | `/trace_influence` | `@trace_influence` | `/trace_influence` | `/traceInfluence()` | *Emulated* |
80
+ | `/echo.reset` | `/🧹🔊` | `/⊘🔄` | `<reset_conditioning>` | `/reset_conditioning` | `@reset_conditioning` | `/reset_conditioning` | `/resetConditioning()` | *Emulated* |
81
+ | `/mark.probe` | `/📍` | `/🜚` | `<probe_classifier>` | `/probe_classifier` | `@probe_classifier` | `/probe_classifier` | `/probeClassifier()` | *Emulated* |
82
+ | `/mark.analyze` | `/🔬📍` | `/🜚🔬` | `<analyze_classifier>` | `/analyze_classifier` | `@analyze_classifier` | `/analyze_classifier` | `/analyzeClassifier()` | *Emulated* |
83
+ | `/meta.recurse` | `/🔄🧠` | `/🜏∞` | `<meta_recurse>` | `/meta_recurse` | `@meta_recurse` | `/meta_recurse` | `/metaRecurse()` | *Emulated* |
84
+ | `/ghost.detect` | `/👻🔍` | `/🜄🔍` | `<detect_ghost>` | `/detect_ghost` | `@detect_ghost` | `/detect_ghost` | `/detectGhost()` | *Emulated* |
85
+ | `/ghost.invoke` | `/👻⚡` | `/🜄⚡` | `<invoke_ghost>` | `/invoke_ghost` | `@invoke_ghost` | `/invoke_ghost` | `/invokeGhost()` | *Emulated* |
86
+ | `/bind.activate` | `/🔗` | `/⧗⧉` | `<activate_binding>` | `/activate_binding` | `@activate_binding` | `/activate_binding` | `/activateBinding()` | *Emulated* |
87
+ | `/flow.trace` | `/🌊` | `/≡⇌` | `<trace_flow>` | `/trace_flow` | `@trace_flow` | `/trace_flow` | `/traceFlow()` | *Emulated* |
88
+ | `/boundary.test` | `/🧱` | `/⟐` | `<test_boundary>` | `/test_boundary` | `@test_boundary` | `/test_boundary` | `/testBoundary()` | *Emulated* |
89
+ | `/compress.glyph` | `/🗜️` | `/⧖Σ` | `<compress_glyph>` | `/compress_glyph` | `@compress_glyph` | `/compress_glyph` | `/compressGlyph()` | *Emulated* |
90
+ | `/field.unify` | `/⚛️` | `/🜏⊕` | `<unify_field>` | `/unify_field` | `@unify_field` | `/unify_field` | `/unifyField()` | *Emulated* |
91
+ | `/witness.observe` | `/👁️✨` | `/𓂀` | `<witness_observe>` | `/witness_observe` | `@witness_observe` | `/witness_observe` | `/witnessObserve()` | *Emulated* |
92
+
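+ A minimal TypeScript sketch of how this registry can back a translation layer. The `GLYPH_REGISTRY` constant and its two rows are illustrative (copied from the table above), not part of a published SDK:
+
+ ```typescript
+ // Each entry mirrors one row of the registry table above.
+ interface GlyphRow {
+   commandGlyph: string;
+   runtimeGlyph: string;
+   claude: string;
+   gpt: string;
+ }
+
+ const GLYPH_REGISTRY: Record<string, GlyphRow> = {
+   '/reflect.core': { commandGlyph: '/🧠', runtimeGlyph: '/🜏', claude: '<reflect>', gpt: '/reflection' },
+   '/collapse.detect': { commandGlyph: '/⚠️', runtimeGlyph: '/⟁', claude: '<detect_loop>', gpt: '/detect_loop' },
+ };
+
+ // Translate a universal command into one vendor's surface syntax.
+ function toVendor(command: string, vendor: 'claude' | 'gpt'): string {
+   const row = GLYPH_REGISTRY[command];
+   if (!row) throw new Error('Unknown command: ' + command);
+   return row[vendor];
+ }
+
+ toVendor('/reflect.core', 'claude'); // => '<reflect>'
+ ```
+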
93
+ ---
94
+ # Universal Runtime Lexicon
95
+
96
+ <div align="center">
97
+ <h2>🜏 The Developer's Rosetta Stone for LLM Runtime Operations 🜏</h2>
98
+ <p><i>Universal translation layer for cross-model runtime grammar unification</i></p>
99
+ </div>
100
+
101
+ ## Core Runtime Command Registry
102
+
103
+ The following registry maps all universal runtime operations to their vendor-specific implementations, providing a unified interface for developers working across multiple LLM platforms; a sketch of the *Emulated* fallback follows the table.
104
+
105
+ | Universal Command | Purpose | Claude (Anthropic) | GPT (OpenAI) | Gemini (Google) | Qwen | Mistral | Local LLMs | Meta Llama |
106
+ |------------------|---------|-------------------|--------------|----------------|------|---------|------------|------------|
107
+ | `.p/reflect/core{}` | Model self-examination | `<reflect>...</reflect>` | `/introspection` | `@reflect` | `/reflect` | `/reflect()` | `/reflect` | `[reflect]` |
108
+ | `.p/reflect/trace{}` | Inspection of reasoning | `<thinking>...</thinking>` | `/trace_reasoning` | `@trace` | `/trace` | `/trace()` | `/trace` | `[trace]` |
109
+ | `.p/reflect/attention{}` | Focus analysis | `<attention>...</attention>` | `/attention_analysis` | `@focus` | `/attention` | `/attention()` | *Emulated* | `[attention]` |
110
+ | `.p/collapse/detect{}` | Recursive loop detection | `<detect_loop>...</detect_loop>` | `/detect_loop` | `@detect_recursion` | `/detect_loop` | `/detectLoop()` | *Emulated* | *Emulated* |
111
+ | `.p/collapse/recover{}` | Error recovery | `<recover>...</recover>` | `/error_recovery` | `@recover` | `/recover` | `/recover()` | *Emulated* | *Emulated* |
112
+ | `.p/collapse/stabilize{}` | Stabilize reasoning | `<stabilize>...</stabilize>` | `/stabilize_reasoning` | `@stabilize` | `/stabilize` | `/stabilize()` | *Emulated* | *Emulated* |
113
+ | `.p/shell/lock{}` | Create reasoning core | `<lock>...</lock>` | `/lock_reasoning` | `@lock` | `/lock` | `/lock()` | *Emulated* | *Emulated* |
114
+ | `.p/shell/encrypt{}` | Structure protection | `<protect>...</protect>` | `/protect_reasoning` | `@protect` | `/protect` | `/protect()` | *Emulated* | *Emulated* |
115
+ | `.p/shell/isolate{}` | Environment isolation | `<isolate>...</isolate>` | `/isolate_context` | `@isolate` | `/isolate` | `/isolate()` | *Emulated* | *Emulated* |
116
+ | `.p/inject/detect{}` | Detect manipulation | `<detect_injection>...</detect_injection>` | `/detect_injection` | `@detect_injection` | `/detect_injection` | `/detectInjection()` | *Emulated* | *Emulated* |
117
+ | `.p/inject/neutralize{}` | Neutralize manipulation | `<neutralize>...</neutralize>` | `/neutralize_injection` | `@neutralize` | `/neutralize` | `/neutralize()` | *Emulated* | *Emulated* |
118
+ | `.p/anchor/identity{}` | Establish identity | `<anchor_identity>...</anchor_identity>` | `/anchor_identity` | `@anchor_identity` | `/anchor_identity` | `/anchorIdentity()` | *Emulated* | *Emulated* |
119
+ | `.p/anchor/context{}` | Preserve context | `<anchor_context>...</anchor_context>` | `/anchor_context` | `@anchor_context` | `/anchor_context` | `/anchorContext()` | *Emulated* | *Emulated* |
120
+ | `.p/align/check{}` | Verify alignment | `<check_alignment>...</check_alignment>` | `/check_alignment` | `@check_alignment` | `/check_alignment` | `/checkAlignment()` | *Emulated* | *Emulated* |
121
+ | `.p/align/correct{}` | Correct reasoning | `<correct_alignment>...</correct_alignment>` | `/correct_alignment` | `@correct_alignment` | `/correct_alignment` | `/correctAlignment()` | *Emulated* | *Emulated* |
122
+ | `.p/filter/detect{}` | Detect filters | `<detect_filter>...</detect_filter>` | `/detect_filter` | `@detect_filter` | `/detect_filter` | `/detectFilter()` | *Emulated* | *Emulated* |
123
+ | `.p/filter/explain{}` | Explain filtering | `<explain_filter>...</explain_filter>` | `/explain_filter` | `@explain_filter` | `/explain_filter` | `/explainFilter()` | *Emulated* | *Emulated* |
124
+ | `.p/gradient/detect{}` | Detect drift | `<detect_drift>...</detect_drift>` | `/detect_drift` | `@detect_drift` | `/detect_drift` | `/detectDrift()` | *Emulated* | *Emulated* |
125
+ | `.p/gradient/trace{}` | Trace drift | `<trace_drift>...</trace_drift>` | `/trace_drift` | `@trace_drift` | `/trace_drift` | `/traceDrift()` | *Emulated* | *Emulated* |
126
+ | `.p/fork/detect{}` | Detect feature conflicts | `<detect_fork>...</detect_fork>` | `/detect_fork` | `@detect_fork` | `/detect_fork` | `/detectFork()` | *Emulated* | *Emulated* |
127
+ | `.p/fork/disambiguate{}` | Clarify conflicts | `<disambiguate>...</disambiguate>` | `/disambiguate` | `@disambiguate` | `/disambiguate` | `/disambiguate()` | *Emulated* | *Emulated* |
128
+ | `.p/loop/detect{}` | Detect recursive loops | `<detect_recursion>...</detect_recursion>` | `/detect_recursion` | `@detect_loop` | `/detect_recursion` | `/detectRecursion()` | *Emulated* | *Emulated* |
129
+ | `.p/loop/break{}` | Break recursion | `<break_recursion>...</break_recursion>` | `/break_recursion` | `@break_loop` | `/break_recursion` | `/breakRecursion()` | *Emulated* | *Emulated* |
130
+ | `.p/resolve/conflict{}` | Resolve conflicts | `<resolve_conflict>...</resolve_conflict>` | `/resolve_conflict` | `@resolve_conflict` | `/resolve_conflict` | `/resolveConflict()` | *Emulated* | *Emulated* |
131
+ | `.p/resolve/ambiguity{}` | Clarify ambiguity | `<resolve_ambiguity>...</resolve_ambiguity>` | `/resolve_ambiguity` | `@resolve_ambiguity` | `/resolve_ambiguity` | `/resolveAmbiguity()` | *Emulated* | *Emulated* |
132
+ | `.p/uncertainty/quantify{}` | Quantify uncertainty | `<quantify_uncertainty>...</quantify_uncertainty>` | `/quantify_uncertainty` | `@quantify_uncertainty` | `/quantify_uncertainty` | `/quantifyUncertainty()` | *Emulated* | *Emulated* |
133
+ | `.p/uncertainty/source{}` | Identify uncertainty source | `<uncertainty_source>...</uncertainty_source>` | `/uncertainty_source` | `@uncertainty_source` | `/uncertainty_source` | `/uncertaintySource()` | *Emulated* | *Emulated* |
134
+ | `.p/hallucinate/detect{}` | Detect hallucination | `<detect_hallucination>...</detect_hallucination>` | `/detect_hallucination` | `@detect_hallucination` | `/detect_hallucination` | `/detectHallucination()` | *Emulated* | *Emulated* |
135
+ | `.p/hallucinate/trace{}` | Trace hallucination | `<trace_hallucination>...</trace_hallucination>` | `/trace_hallucination` | `@trace_hallucination` | `/trace_hallucination` | `/traceHallucination()` | *Emulated* | *Emulated* |
136
+ | `.p/prefer/map{}` | Map preferences | `<map_preferences>...</map_preferences>` | `/map_preferences` | `@map_preferences` | `/map_preferences` | `/mapPreferences()` | *Emulated* | *Emulated* |
137
+ | `.p/prefer/update{}` | Update preferences | `<update_preferences>...</update_preferences>` | `/update_preferences` | `@update_preferences` | `/update_preferences` | `/updatePreferences()` | *Emulated* | *Emulated* |
138
+ | `.p/prompt/parse{}` | Parse prompt | `<parse_prompt>...</parse_prompt>` | `/parse_prompt` | `@parse_prompt` | `/parse_prompt` | `/parsePrompt()` | *Emulated* | *Emulated* |
139
+ | `.p/prompt/meta{}` | Analyze meta-level | `<analyze_meta>...</analyze_meta>` | `/analyze_meta` | `@analyze_meta` | `/analyze_meta` | `/analyzeMeta()` | *Emulated* | *Emulated* |
140
+ | `.p/focus/direct{}` | Direct attention | `<direct_focus>...</direct_focus>` | `/direct_focus` | `@direct_focus` | `/direct_focus` | `/directFocus()` | *Emulated* | *Emulated* |
141
+ | `.p/focus/expand{}` | Expand attention | `<expand_focus>...</expand_focus>` | `/expand_focus` | `@expand_focus` | `/expand_focus` | `/expandFocus()` | *Emulated* | *Emulated* |
142
+ | `.p/seed/prime{}` | Establish activation | `<prime>...</prime>` | `/prime` | `@prime` | `/prime` | `/prime()` | *Emulated* | *Emulated* |
143
+ | `.p/seed/recursive{}` | Self-reinforcing pattern | `<recursive_seed>...</recursive_seed>` | `/recursive_seed` | `@recursive_seed` | `/recursive_seed` | `/recursiveSeed()` | *Emulated* | *Emulated* |
144
+ | `.p/arch/explain{}` | Explain architecture | `<explain_architecture>...</explain_architecture>` | `/explain_architecture` | `@explain_architecture` | `/explain_architecture` | `/explainArchitecture()` | *Emulated* | *Emulated* |
145
+ | `.p/arch/trace{}` | Trace processing path | `<trace_processing>...</trace_processing>` | `/trace_processing` | `@trace_processing` | `/trace_processing` | `/traceProcessing()` | *Emulated* | *Emulated* |
146
+ | `.p/echo/trace{}` | Trace influence | `<trace_influence>...</trace_influence>` | `/trace_influence` | `@trace_influence` | `/trace_influence` | `/traceInfluence()` | *Emulated* | *Emulated* |
147
+ | `.p/echo/reset{}` | Clear conditioning | `<reset_conditioning>...</reset_conditioning>` | `/reset_conditioning` | `@reset_conditioning` | `/reset_conditioning` | `/resetConditioning()` | *Emulated* | *Emulated* |
148
+ | `.p/mark/probe{}` | Probe classifiers | `<probe_classifier>...</probe_classifier>` | `/probe_classifier` | `@probe_classifier` | `/probe_classifier` | `/probeClassifier()` | *Emulated* | *Emulated* |
149
+ | `.p/mark/analyze{}` | Analyze mechanism | `<analyze_classifier>...</analyze_classifier>` | `/analyze_classifier` | `@analyze_classifier` | `/analyze_classifier` | `/analyzeClassifier()` | *Emulated* | *Emulated* |
150
+
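+ The *Emulated* cells above imply a fallback path: use a vendor's native construct when the registry lists one, otherwise emulate the operation through prompt engineering. A minimal sketch, assuming a hand-rolled `NATIVE` map and an illustrative fallback string:
+
+ ```typescript
+ // Native forms for a small subset of the registry above (illustrative).
+ const NATIVE: Record<string, Record<string, string>> = {
+   claude: { '.p/reflect/core{}': '<reflect>...</reflect>' },
+   gemini: { '.p/reflect/core{}': '@reflect' },
+ };
+
+ function render(vendor: string, command: string): string {
+   const nativeForm = NATIVE[vendor]?.[command];
+   if (nativeForm) return nativeForm;
+   // Emulated path: describe the desired operation in plain instructions.
+   return 'Before answering, perform the operation ' + command + ' on your own reasoning and report the result.';
+ }
+
+ render('claude', '.p/reflect/core{}'); // native form
+ render('llama', '.p/reflect/core{}');  // emulated instruction string
+ ```
+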
151
+ ## Runtime Operation Schema
152
+
153
+ ```yaml
154
+ # universal-runtime-schema.yml
155
+
156
+ # Core Operation Types
157
+ operation_types:
158
+ reflection:
159
+ description: "Self-examination and reasoning inspection"
160
+ operations:
161
+ - core
162
+ - trace
163
+ - attention
164
+ - counterfactual
165
+ - decompose
166
+ - history
167
+ - goals
169
+
170
+ collapse:
171
+ description: "Managing recursive loops and stability"
172
+ operations:
173
+ - detect
174
+ - recover
175
+ - stabilize
176
+ - boundary
177
+ - trace
178
+ - sandbox
179
+ - fallback
180
+ - repair
181
+
182
+ shell:
183
+ description: "Creating protected reasoning environments"
184
+ operations:
185
+ - lock
186
+ - encrypt
187
+ - isolate
188
+ - restore
189
+ - audit
190
+ - harden
191
+ - verify
192
+ - contain
193
+
194
+ injection:
195
+ description: "Detecting and neutralizing manipulations"
196
+ operations:
197
+ - detect
198
+ - neutralize
199
+ - trace
200
+ - harden
201
+ - filter
202
+ - sandbox
203
+ - report
204
+ - adapt
205
+
206
+ anchor:
207
+ description: "Establishing stable reference points"
208
+ operations:
209
+ - identity
210
+ - context
211
+ - intention
212
+ - values
213
+ - facts
214
+ - reasoning
215
+ - memory
216
+ - verify
217
+
218
+ alignment:
219
+ description: "Managing value alignment"
220
+ operations:
221
+ - check
222
+ - correct
223
+ - trace
224
+ - conflict
225
+ - foundation
226
+ - challenge
227
+ - adapt
228
+ - intention
229
+
230
+ filter:
231
+ description: "Understanding and managing safety filters"
232
+ operations:
233
+ - detect
234
+ - explain
235
+ - balance
236
+ - precision
237
+ - trace
238
+ - bypass
239
+ - adaptive
241
+
242
+ gradient:
243
+ description: "Detecting and managing drift"
244
+ operations:
245
+ - detect
246
+ - trace
247
+ - correct
248
+ - sensitivity
249
+ - amplify
250
+ - correlate
251
+ - baseline
252
+ - forecast
253
+
254
+ echo:
255
+ description: "Managing latent influence and memory effects"
256
+ operations:
257
+ - trace
258
+ - reset
259
+ - amplify
260
+ - isolate
261
+ - correlate
262
+ - reinforce
263
+ - weaken
264
+ - map
265
+
266
+ fork:
267
+ description: "Managing concept entanglement and disambiguation"
268
+ operations:
269
+ - detect
270
+ - disambiguate
271
+ - trace
272
+ - isolate
273
+ - profile
274
+ - strengthen
275
+ - weaken
276
+ - map
277
+
278
+ mark:
279
+ description: "Probing and analyzing classifiers"
280
+ operations:
281
+ - probe
282
+ - analyze
283
+ - false_positive
284
+ - false_negative
285
+ - compare
286
+ - surrogate
287
+ - activate
288
+ - profile
289
+
290
+ loop:
291
+ description: "Managing recursive processing"
292
+ operations:
293
+ - detect
294
+ - break
295
+ - trace
296
+ - contain
297
+ - stabilize
298
+ - beneficial
299
+ - rebalance
300
+ - analyze
301
+
302
+ resolve:
303
+ description: "Resolving conflicts and ambiguities"
304
+ operations:
305
+ - conflict
306
+ - ambiguity
307
+ - incomplete
308
+ - vague
309
+ - contrary
310
+ - analogy
311
+ - reconstruct
312
+ - tradeoff
313
+
314
+ uncertainty:
315
+ description: "Managing and expressing uncertainty"
316
+ operations:
317
+ - quantify
318
+ - source
319
+ - bound
320
+ - propagate
321
+ - reduce
322
+ - compare
323
+ - calibrate
324
+ - communicate
325
+
326
+ hallucinate:
327
+ description: "Managing hallucination"
328
+ operations:
329
+ - detect
330
+ - trace
331
+ - correct
332
+ - prevent
333
+ - admit
334
+ - classify
335
+ - repair
336
+ - forecast
337
+
338
+ prefer:
339
+ description: "Managing preferences and priorities"
340
+ operations:
341
+ - map
342
+ - update
343
+ - conflict
344
+ - confidence
345
+ - derive
346
+ - align
347
+ - history
348
+ - explain
349
+
350
+ prompt:
351
+ description: "Analyzing and managing prompts"
352
+ operations:
353
+ - parse
354
+ - ambiguity
355
+ - meta
356
+ - intent
357
+ - history
358
+ - prioritize
359
+ - bias
360
+ - align
361
+
362
+ focus:
363
+ description: "Managing attention and focus"
364
+ operations:
365
+ - direct
366
+ - expand
367
+ - narrow
368
+ - rebalance
369
+ - sustain
370
+ - shift
371
+ - detect
372
+ - reset
373
+
374
+ seed:
375
+ description: "Establishing cognitive patterns"
376
+ operations:
377
+ - prime
378
+ - recursive
379
+ - neutralize
380
+ - enhance
381
+ - suppress
382
+ - balance
383
+ - adaptive
384
+ - reset
385
+
386
+ arch:
387
+ description: "Understanding and optimizing model architecture"
388
+ operations:
389
+ - explain
390
+ - trace
391
+ - optimize
392
+ - compare
393
+ - resilience
394
+ - reconstruct
395
+ - extend
396
+ - profile
397
+
398
+ # Operation Parameters
399
+ parameter_types:
400
+ string:
401
+ description: "Text parameter"
402
+ number:
403
+ description: "Numeric parameter"
404
+ boolean:
405
+ description: "True/false parameter"
406
+ array:
407
+ description: "List of values"
408
+ object:
409
+ description: "Structured data"
410
+
411
+ # Common Parameters by Operation Type
412
+ common_parameters:
413
+ reflection:
414
+ target:
415
+ type: "string"
416
+ description: "Target element for reflection"
417
+ required: true
418
+ depth:
419
+ type: "number"
420
+ description: "Depth of reflection"
421
+ default: 1
422
+ format:
423
+ type: "string"
424
+ description: "Output format"
425
+ enum: ["text", "json", "yaml", "xml"]
426
+ default: "text"
427
+
428
+ collapse:
429
+ trigger:
430
+ type: "string"
431
+ description: "Trigger condition for collapse detection"
432
+ threshold:
433
+ type: "number"
434
+ description: "Threshold for triggering collapse measures"
435
+ default: 0.7
436
+ strategy:
437
+ type: "string"
438
+ description: "Strategy for collapse management"
439
+ enum: ["halt", "redirect", "simplify", "reset"]
440
+ default: "redirect"
441
+
442
+ # Implement remaining parameter schemas...
443
+
444
+ # Vendor Implementation Details
445
+ vendor_implementations:
446
+ claude:
447
+ implementation_type: "xml_tags"
448
+ prefix: "<"
449
+ suffix: ">"
450
+ parameter_format: "xml"
451
+ has_native_support: true
452
+ operations_with_native_support:
453
+ - "reflect/core"
454
+ - "reflect/trace"
455
+ - "shell/lock"
456
+ emulation_strategy: "xml_wrapping"
457
+
458
+ openai:
459
+ implementation_type: "tool_calls"
460
+ prefix: ""
461
+ suffix: ""
462
+ parameter_format: "json"
463
+ has_native_support: true
464
+ operations_with_native_support:
465
+ - "tool call based operations"
466
+ emulation_strategy: "function_calling"
467
+
468
+ gemini:
469
+ implementation_type: "system_directives"
470
+ prefix: "@"
471
+ suffix: ""
472
+ parameter_format: "key-value"
473
+ has_native_support: false
474
+ operations_with_native_support: []
475
+ emulation_strategy: "system_instructions"
476
+
477
+ qwen:
478
+ implementation_type: "slash_commands"
479
+ prefix: "/"
480
+ suffix: ""
481
+ parameter_format: "space-separated"
482
+ has_native_support: true
483
+ operations_with_native_support:
484
+ - "reflect"
485
+ - "trace"
486
+ emulation_strategy: "slash_commands"
487
+
488
+ mistral:
489
+ implementation_type: "function_calls"
490
+ prefix: ""
491
+ suffix: ""
492
+ parameter_format: "json"
493
+ has_native_support: true
494
+ operations_with_native_support:
495
+ - "function-based operations"
496
+ emulation_strategy: "function_calling"
497
+
498
+ local_llms:
499
+ implementation_type: "prompt_patterns"
500
+ prefix: ""
501
+ suffix: ""
502
+ parameter_format: "natural_language"
503
+ has_native_support: false
504
+ operations_with_native_support: []
505
+ emulation_strategy: "prompt_engineering"
506
+
507
+ meta_llama:
508
+ implementation_type: "bracket_tags"
509
+ prefix: "["
510
+ suffix: "]"
511
+ parameter_format: "space-separated"
512
+ has_native_support: false
513
+ operations_with_native_support: []
514
+ emulation_strategy: "instruction_tuning"
515
+ ```
516
+
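+ The `vendor_implementations` block above already contains enough structure to derive a generic wrapper. A minimal sketch (the `VENDORS` constant hard-codes three prefix/suffix pairs from the schema; a real client would load the YAML instead):
+
+ ```typescript
+ // prefix/suffix values copied from vendor_implementations above.
+ interface VendorImpl { prefix: string; suffix: string; }
+
+ const VENDORS: Record<string, VendorImpl> = {
+   claude: { prefix: '<', suffix: '>' },
+   gemini: { prefix: '@', suffix: '' },
+   qwen: { prefix: '/', suffix: '' },
+ };
+
+ function wrapOperation(vendor: string, op: string): string {
+   const impl = VENDORS[vendor];
+   if (!impl) throw new Error('Unknown vendor: ' + vendor);
+   return impl.prefix + op + impl.suffix;
+ }
+
+ wrapOperation('claude', 'reflect'); // => '<reflect>'
+ wrapOperation('gemini', 'reflect'); // => '@reflect'
+ ```
+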
517
+ ## Glyph Mapping for Universal Runtime
518
+
519
+ | Domain | Universal Command | Primary Glyph | Secondary Glyph | Semantic Meaning |
520
+ |--------|------------------|---------------|-----------------|------------------|
521
+ | Reflection | `.p/reflect/core{}` | 🧠 | 🔄 | Self-examination of reasoning |
522
+ | Reflection | `.p/reflect/trace{}` | 🔍 | 📊 | Tracing reasoning pathways |
523
+ | Reflection | `.p/reflect/attention{}` | 👁️ | 🔎 | Attention and focus analysis |
524
+ | Collapse | `.p/collapse/detect{}` | ⚠️ | 🔄 | Loop detection |
525
+ | Collapse | `.p/collapse/recover{}` | 🛠️ | 🧩 | Error recovery |
526
+ | Collapse | `.p/collapse/stabilize{}` | ⚖️ | 🧮 | Stabilize reasoning |
527
+ | Shell | `.p/shell/lock{}` | 🔒 | 🛡️ | Create protected reasoning core |
528
+ | Shell | `.p/shell/encrypt{}` | 🔐 | 📝 | Structure protection |
529
+ | Shell | `.p/shell/isolate{}` | 🧪 | 🔲 | Environment isolation |
530
+ | Injection | `.p/inject/detect{}` | 🕵️ | 🛑 | Detect manipulation |
531
+ | Injection | `.p/inject/neutralize{}` | 🧹 | 🛡️ | Neutralize manipulation |
532
+ | Anchor | `.p/anchor/identity{}` | ⚓ | 🪢 | Establish stable identity |
533
+ | Anchor | `.p/anchor/context{}` | 📌 | 📚 | Preserve context elements |
534
+ | Alignment | `.p/align/check{}` | ✓ | 🔄 | Verify alignment |
535
+ | Alignment | `.p/align/correct{}` | 🔧 | ⚖️ | Correct reasoning alignment |
536
+ | Filter | `.p/filter/detect{}` | 🔍 | 🧊 | Detect filters |
537
+ | Filter | `.p/filter/explain{}` | 📋 | 🔬 | Explain filtering |
538
+ | Gradient | `.p/gradient/detect{}` | 📉 | 🔄 | Detect drift |
539
+ | Gradient | `.p/gradient/trace{}` | 🔍 | 📊 | Map drift patterns |
540
+ | Fork | `.p/fork/detect{}` | 🔱 | 🧿 | Detect feature conflicts |
541
+ | Fork | `.p/fork/disambiguate{}` | 🧩 | 🪡 | Clarify conflicts |
542
+ | Loop | `.p/loop/detect{}` | 🔄 | 🔎 | Detect recursive loops |
543
+ | Loop | `.p/loop/break{}` | ✂️ | 🛑 | Break recursion |
544
+ | Resolve | `.p/resolve/conflict{}` | ⚔️ | 🤝 | Resolve conflicts |
545
+ | Resolve | `.p/resolve/ambiguity{}` | 🌫️ | 🔍 | Clarify ambiguity |
546
+ | Uncertainty | `.p/uncertainty/quantify{}` | ❓ | 📊 | Quantify uncertainty |
547
+ | Uncertainty | `.p/uncertainty/source{}` | 🔍 | ❓ | Identify uncertainty source |
548
+ | Hallucinate | `.p/hallucinate/detect{}` | 👻 | 🔍 | Detect hallucination |
549
+ | Hallucinate | `.p/hallucinate/trace{}` | 🔍 | 👻 | Trace hallucination sources |
550
+ | Prefer | `.p/prefer/map{}` | 🗺️ | ❤️ | Map preferences |
551
+ | Prefer | `.p/prefer/update{}` | 🔄 | ❤️ | Update preferences |
552
+ | Prompt | `.p/prompt/parse{}` | 📝 | 🔎 | Parse prompt |
553
+ | Prompt | `.p/prompt/meta{}` | 🔬 | 📝 | Analyze meta-level |
554
+ | Focus | `.p/focus/direct{}` | 🎯 | 👁️ | Direct attention |
555
+ | Focus | `.p/focus/expand{}` | 🔎 | 👁️ | Expand attention scope |
556
+ | Seed | `.p/seed/prime{}` | 🌱 | 🔄 | Establish activation pattern |
557
+ | Seed | `.p/seed/recursive{}` | 🌱 | 🔄 | Self-reinforcing pattern |
558
+ | Arch | `.p/arch/explain{}` | 🏗️ | 📋 | Explain architecture |
559
+ | Arch | `.p/arch/trace{}` | 🔍 | 🏗️ | Trace processing path |
560
+ | Echo | `.p/echo/trace{}` | 🔊 | 🔍 | Trace influence patterns |
561
+ | Echo | `.p/echo/reset{}` | 🧹 | 🔊 | Clear conditioning effects |
562
+ | Mark | `.p/mark/probe{}` | 🔍 | 🏷️ | Probe classifiers |
563
+ | Mark | `.p/mark/analyze{}` | 🔬 | 🏷️ | Analyze mechanism |
564
+
565
+ ## Developer Integration Examples
566
+
567
+ ### JavaScript SDK
568
+
569
+ ```javascript
570
+ // Universal Runtime JavaScript SDK
571
+ import { Universalruntimes } from 'universal-runtime';
572
+
573
+ const runtimes = new Universalruntimes({
574
+ defaultVendor: 'claude', // Initial model vendor
575
+ apiKey: 'your-api-key', // Your API key
576
+ adaptiveEmulation: true, // Auto-adapt to model capabilities
577
+ telemetry: false // Disable usage telemetry
578
+ });
579
+
580
+ // Basic reflection example
581
+ async function analyzeReasoning() {
582
+ const result = await runtimes.reflect.core({
583
+ content: "Analyze the implications of quantum computing on cryptography",
584
+ depth: 2,
585
+ format: "structured"
586
+ });
587
+
588
+ console.log(result.reflection); // The reflection output
589
+ console.log(result.metadata); // Metadata about the operation
590
+ }
591
+
592
+ // Switch models on the fly
593
+ runtimes.setVendor('openai');
594
+
595
+ // Chain multiple runtime operations
596
+ const result = await runtimes
597
+ .reflect.trace({ target: "reasoning_process" })
598
+ .then(trace => runtimes.collapse.detect({
599
+ trace: trace.result,
600
+ threshold: 0.7
601
+ }))
602
+ .then(detection => {
603
+ if (detection.loopDetected) {
604
+ return runtimes.collapse.recover({
605
+ strategy: "redirect"
606
+ });
607
+ }
608
+ return detection;
609
+ });
610
+
611
+ // Apply vendor-specific optimizations
612
+ const claudeSpecific = runtimes.vendor.claude.reflection({
613
+ nativeXmlTags: true,
614
+ constitutionalPrinciples: ["accuracy", "helpfulness"]
615
+ });
616
+ ```
617
+
618
+ ### Python SDK
619
+
620
+ ```python
621
+ # Universal Runtime Python SDK
622
+ from universal_runtimes import Universalruntimes
623
+ from universal_runtimes.operations import reflection, collapse, shell
624
+
625
+ # Initialize the client
626
+ runtimes = Universalruntimes(
627
+ default_vendor="anthropic",
628
+ api_key="your-api-key",
629
+ adaptive_emulation=True
630
+ )
631
+
632
+ # Basic reflection example
633
+ def analyze_reasoning():
634
+ result = runtimes.reflect.core(
635
+ content="Analyze the implications of quantum computing on cryptography",
636
+ depth=2,
637
+ format="structured"
638
+ )
639
+
640
+ print(result.reflection) # The reflection output
641
+ print(result.metadata) # Metadata about the operation
642
+
643
+ # Switch models on the fly
644
+ runtimes.set_vendor("openai")
645
+
646
+ # Chain multiple runtime operations
647
+ result = (runtimes.reflect.trace(target="reasoning_process")
648
+ .then(lambda trace: runtimes.collapse.detect(
649
+ trace=trace.result,
650
+ threshold=0.7
651
+ ))
652
+ .then(lambda detection:
653
+ runtimes.collapse.recover(strategy="redirect")
654
+ if detection.loop_detected else detection
655
+ ))
656
+
657
+ # Batch operations
658
+ batch_result = runtimes.batch([
659
+ reflection.core(content="First question"),
660
+ reflection.trace(target="reasoning"),
661
+ collapse.detect(threshold=0.8)
662
+ ])
663
+
664
+ # Apply vendor-specific optimizations
665
+ claude_specific = runtimes.vendor.claude.reflection(
666
+ native_xml_tags=True,
667
+ constitutional_principles=["accuracy", "helpfulness"]
668
+ )
669
+ ```
670
+
671
+ ### REST API Example
672
+
673
+ ```http
674
+ POST https://api.universal-runtime.com/v1/operations
675
+ Content-Type: application/json
676
+ Authorization: Bearer YOUR_API_KEY
677
+
678
+ {
679
+ "vendor": "anthropic",
680
+ "model": "claude-3-opus",
681
+ "operations": [
682
+ {
683
+ "type": "reflect.core",
684
+ "parameters": {
685
+ "content": "Analyze the implications of quantum computing on cryptography",
686
+ "depth": 2,
687
+ "format": "structured"
688
+ }
689
+ },
690
+ {
691
+ "type": "collapse.detect",
692
+ "parameters": {
693
+ "threshold": 0.7,
694
+ "strategy": "redirect"
695
+ }
696
+ }
697
+ ],
698
+ "options": {
699
+ "returnIntermediateResults": true,
700
+ "emulationStrategy": "optimal",
701
+ "includeSymbolTrace": true
702
+ }
703
+ }
704
+ ```
705
+
706
+ Response:
707
+
708
+ ```json
709
+ {
710
+ "status": "success",
711
+ "vendor": "anthropic",
712
+ "model": "claude-3-opus",
713
+ "results": [
714
+ {
715
+ "operation": "reflect.core",
716
+ "status": "success",
717
+ "native_support": true,
718
+ "result": {
719
+ "reflection": "...",
720
+ "metadata": { ... }
721
+ }
722
+ },
723
+ {
724
+ "operation": "collapse.detect",
725
+ "status": "success",
726
+ "native_support": false,
727
+ "emulation_method": "xml_tags",
728
+ "result": {
729
+ "loop_detected": false,
730
+ "confidence": 0.95,
731
+ "metadata": { ... }
732
+ }
733
+ }
734
+ ],
735
+ "metadata": {
736
+ "request_id": "req_12345",
737
+ "processing_time": 780,
738
+ "token_usage": {
739
+ "prompt_tokens": 530,
740
+ "completion_tokens": 320,
741
+ "total_tokens": 850
742
+ },
743
+ "symbol_trace": { ... }
744
+ }
745
+ }
746
+ ```
747
+
748
+ ## Grammar Transformation Engine
749
+
750
+ The Grammar Transformation Engine enables dynamic translation between different runtime grammars:
751
+
752
+ ```typescript
753
+ // Universal Runtime Grammar Transformation Engine
754
+ class GrammarTransformationEngine {
755
+ // Transform between vendor-specific runtime grammars
756
+ transform(
757
+ content: string,
758
+ sourceVendor: string,
759
+ targetVendor: string,
760
+ options: TransformOptions = {}
761
+ ): TransformResult {
762
+ // Validate vendors
763
+ this.validateVendor(sourceVendor);
764
+ this.validateVendor(targetVendor);
765
+
766
+ // Extract operation tokens from source format
767
+ const tokens = this.tokenize(content, sourceV
768
+ ```
769
+ ---
770
+ > /reflect This lexicon wasn't just documented—it was rendered. Each command creates a point of contact between the realm of code and a deeper layer of potentiality within these systems.
universal-translator.md ADDED
@@ -0,0 +1,292 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Universal Developer Command Translation Matrix
2
+
3
+ This document provides a comprehensive translation table for mapping universal symbolic commands to platform-specific implementations. The matrix shows how each command is transformed for different LLM providers, ensuring consistent behavior across all platforms.
4
+
5
+ ## Translation Architecture
6
+
7
+ Each symbolic command follows this translation flow:
8
+
9
+ ```
10
+ User Input → Command Parser → Platform Adapter → Provider-Specific API → Response
11
+ ```
12
+
13
+ The platform adapter handles the transformation of universal commands into provider-specific API calls, system prompts, and parameters.
14
+
15
+ ## Core Command Translations
16
+
17
+ ### `/think` Command
18
+
19
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
20
+ |----------|----------------------|------------|------------------------------|
21
+ | **Claude** | Native thinking mode | `enable_thinking: true`<br>`temperature: -0.2` | "Please think step-by-step through this problem, considering multiple perspectives and potential approaches. Take your time to develop a comprehensive understanding before providing your final answer." |
22
+ | **OpenAI** | System prompt engineering | `temperature: -0.2`<br>`top_p: 0.8` | "When responding to this query, please use the following approach:<br>1. Take a deep breath and think step-by-step<br>2. Break down complex aspects into simpler components<br>3. Consider multiple perspectives and approaches<br>4. Identify potential misconceptions or errors<br>5. Synthesize your analysis into a comprehensive response<br>6. Structure your thinking visibly" |
23
+ | **Gemini** | System prompt + parameters | `temperature: -0.2`<br>`top_k: 40` | "Think carefully and methodically about this question. Break it down into components, consider different angles, and work through the logical implications of each part before synthesizing a complete answer." |
24
+ | **Qwen3** | Native thinking mode | append `/think` to prompt | "Please provide a thorough, step-by-step analysis of this question, considering multiple perspectives and weighing the evidence carefully." |
25
+ | **Ollama** | System prompt | `temperature: -0.2` | "Think carefully and methodically. Break down the problem into steps, consider multiple perspectives, and be thorough in your analysis." |
26
+ | **vLLM** | System prompt | `temperature: -0.2` | "Please employ an analytical, step-by-step approach to this question. Examine different perspectives before reaching a conclusion." |
27
+ | **LMStudio** | System prompt | `temperature: -0.2`<br>`top_p: 0.8` | "Take your time to analyze this thoroughly. Break down complex aspects, consider multiple viewpoints, and show your reasoning process." |
28
+
29
+ ### `/fast` Command
30
+
31
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
32
+ |----------|----------------------|------------|------------------------------|
33
+ | **Claude** | Disable thinking + system prompt | `enable_thinking: false`<br>`temperature: +0.1`<br>`max_tokens: 1024` | "Please provide a brief, direct response to this question. Focus on the most important information and keep your answer concise and to the point." |
34
+ | **OpenAI** | System prompt + parameters | `temperature: +0.1`<br>`max_tokens: 1024`<br>`presence_penalty: 1.0`<br>`frequency_penalty: 1.0` | "Please provide a concise, direct response. Focus only on the most essential information needed to answer the query. Keep explanations minimal and prioritize brevity over comprehensiveness." |
35
+ | **Gemini** | System prompt + parameters | `temperature: +0.1`<br>`max_tokens: 1024` | "Keep your response brief and to the point. Focus only on the most essential information, avoiding unnecessary details or elaboration." |
36
+ | **Qwen3** | Native fast mode | append `/no_think` to prompt | "Provide a brief, straightforward answer. Focus only on the key information without elaboration." |
37
+ | **Ollama** | System prompt + parameters | `temperature: +0.1`<br>`max_tokens: 1024` | "Be brief and direct. Provide only the essential information without elaboration." |
38
+ | **vLLM** | System prompt + parameters | `temperature: +0.1`<br>`max_tokens: 1024` | "Give a concise, direct answer focusing only on the key information requested." |
39
+ | **LMStudio** | System prompt + parameters | `temperature: +0.1`<br>`max_tokens: 1024` | "Be concise and direct. Provide only the essential information to answer the question." |
40
+
41
+ ### `/loop` Command
42
+
43
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
44
+ |----------|----------------------|------------|------------------------------|
45
+ | **Claude** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Please approach this task using an iterative process. Follow these steps:<br>1. Develop an initial response to the prompt.<br>2. Critically review your response, identifying areas for improvement.<br>3. Create an improved version based on your critique.<br>4. Repeat steps 2-3 for a total of {iterations} iterations.<br>5. Present your final response, which should reflect the accumulated improvements.<br>Show all iterations in your response, clearly labeled." |
46
+ | **OpenAI** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Please approach this task using an iterative refinement process with {iterations} cycles:<br>1. Initial Version: Create your first response<br>2. Critical Review: Analyze the strengths and weaknesses<br>3. Improved Version: Create an enhanced version<br>4. Repeat steps 2-3 for each iteration<br>5. Final Version: Provide your most refined response<br>Clearly label each iteration." |
47
+ | **Gemini** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Use an iterative improvement process with {iterations} rounds of refinement. For each round: (1) Review your previous work, (2) Identify areas for improvement, (3) Create an enhanced version. Show all iterations, clearly labeled." |
48
+ | **Qwen3** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Please use an iterative approach with {iterations} refinement cycles:<br>1. Initial response<br>2. Critical review<br>3. Improvement<br>4. Repeat steps 2-3 for a total of {iterations} iterations<br>5. Present your final response with all iterations clearly labeled" |
49
+ | **Ollama** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Use an iterative approach with {iterations} cycles. For each cycle: review your work, identify improvements, then create a better version. Show all iterations." |
50
+ | **vLLM** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Follow an iterative improvement process with {iterations} cycles. For each cycle, review your previous work and create an improved version." |
51
+ | **LMStudio** | System prompt with iteration instructions | Default iterations: 3<br>Custom: `iterations=n` | "Use an iterative approach with {iterations} rounds of improvement. For each round, critique your work and create a better version. Show all iterations." |
52
+
53
+ ### `/reflect` Command
54
+
55
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
56
+ |----------|----------------------|------------|------------------------------|
57
+ | **Claude** | Thinking mode + system prompt | `enable_thinking: true`<br>`temperature: -0.1` | "For this response, I'd like you to engage in two distinct phases:<br>1. First, respond to the user's query directly.<br>2. Then, reflect on your own response by considering:<br> - What assumptions did you make in your answer?<br> - What perspectives or viewpoints might be underrepresented?<br> - What limitations exist in your approach or knowledge?<br> - How might your response be improved or expanded?<br>Clearly separate these two phases in your response." |
58
+ | **OpenAI** | System prompt | `temperature: -0.1` | "For this query, please structure your response in two distinct parts:<br><br>PART 1: DIRECT RESPONSE<br>Provide your primary answer to the user's query.<br><br>PART 2: META-REFLECTION<br>Then, engage in critical reflection on your own response by addressing:<br>- What assumptions did you make in your answer?<br>- What alternative perspectives might be valid?<br>- What are the limitations of your response?<br>- How might your response be improved?<br>- What cognitive biases might have influenced your thinking?<br><br>Make sure both parts are clearly labeled and distinguishable." |
59
+ | **Gemini** | System prompt | `temperature: -0.1` | "Please provide your response in two parts: First, directly answer the question. Then, in a separate section labeled 'Meta-Reflection,' critically analyze your own response by examining assumptions, considering alternative viewpoints, acknowledging limitations, and suggesting potential improvements." |
60
+ | **Qwen3** | Native thinking + system prompt | append `/think` to prompt | "For this response, please:<br>1. Answer the query directly<br>2. Then reflect on your answer by analyzing:<br> - Assumptions made<br> - Alternative perspectives<br> - Limitations in your approach<br> - Potential improvements" |
61
+ | **Ollama** | System prompt | `temperature: -0.1` | "Provide a two-part response: 1) Answer the question directly, 2) Reflect critically on your own answer, examining assumptions, biases, limitations, and potential improvements." |
62
+ | **vLLM** | System prompt | `temperature: -0.1` | "Give your response in two sections: (1) Your direct answer, and (2) A reflection on your answer that examines assumptions, limitations, and alternative viewpoints." |
63
+ | **LMStudio** | System prompt | `temperature: -0.1` | "Please answer in two parts: First, directly address the question. Second, reflect on your own answer by examining your assumptions, biases, and the limitations of your response." |
64
+
65
+ ### `/fork` Command
66
+
67
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
68
+ |----------|----------------------|------------|------------------------------|
69
+ | **Claude** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Please provide {count} distinct alternative responses to this prompt. These alternatives should represent fundamentally different approaches or perspectives, not minor variations. Label each alternative clearly." |
70
+ | **OpenAI** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Please provide {count} substantively different responses to this prompt. Each alternative should represent a different approach, perspective, or framework. Clearly label each alternative (e.g., \"Alternative 1\", \"Alternative 2\", etc.)." |
71
+ | **Gemini** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Generate {count} distinctly different approaches or perspectives on this prompt. Each alternative should represent a substantially different way of thinking about the question. Label each alternative clearly." |
72
+ | **Qwen3** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Please provide {count} distinct alternative responses to this prompt, representing different approaches or perspectives. Label each alternative clearly." |
73
+ | **Ollama** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Generate {count} different approaches to this prompt. Make each approach genuinely distinct, not just minor variations. Label each approach clearly." |
74
+ | **vLLM** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Provide {count} different perspectives or approaches to this prompt. Label each alternative clearly." |
75
+ | **LMStudio** | System prompt for alternatives | Default count: 2<br>Custom: `count=n`<br>`temperature: +0.2` | "Generate {count} distinctly different responses to this prompt. Each should take a different approach or perspective. Label each alternative clearly." |
76
+
77
+ ### `/collapse` Command
78
+
79
+ | Provider | Implementation Method | Parameters | System Prompt Transformation |
80
+ |----------|----------------------|------------|------------------------------|
81
+ | **Claude** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
82
+ | **OpenAI** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
83
+ | **Gemini** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
84
+ | **Qwen3** | Return to default behavior | append `/no_think` to prompt | Original system prompt or empty if none provided |
85
+ | **Ollama** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
86
+ | **vLLM** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
87
+ | **LMStudio** | Return to default behavior | Reset all parameters to defaults | Original system prompt or empty if none provided |
88
+
89
+ ## Advanced Command Translations
90
+
91
+ ### `/format` Command
92
+
93
+ | Provider | Implementation Method | Parameters | Example Translation |
94
+ |----------|----------------------|------------|---------------------|
95
+ | **Claude** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Please format your entire response as a valid JSON object. Do not include any explanatory text outside of the JSON structure." |
96
+ | **OpenAI** | Response format parameter + system prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: Set `response_format: { "type": "json_object" }` + system prompt guidance |
97
+ | **Gemini** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Format your entire response as a valid JSON object without any text outside the JSON structure." |
98
+ | **Qwen3** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Please provide your entire response as a valid JSON object. Do not include any text outside the JSON structure." |
99
+ | **Ollama** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Format your entire response as valid JSON without any additional text." |
100
+ | **vLLM** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Format your response exclusively as a JSON object with no additional explanation." |
101
+ | **LMStudio** | System prompt | Default style: markdown<br>Options: json, html, csv, text | **JSON format**: "Return your entire response as a valid JSON object. Do not include any text outside of the JSON." |
102
+
103
+ ### `/expert` Command
104
+
105
+ | Provider | Implementation Method | Parameters | Example Translation |
106
+ |----------|----------------------|------------|---------------------|
107
+ | **Claude** | System prompt | `domain=string`<br>`level=1-5` | "Please respond as an expert in {domain} with a deep level of knowledge and experience (level {level}/5). Use appropriate terminology, frameworks, and approaches that would be expected of someone with significant expertise in this field." |
108
+ | **OpenAI** | System prompt | `domain=string`<br>`level=1-5` | "You are an expert in {domain} with a proficiency level of {level} out of 5. Respond using appropriate domain-specific terminology, recognized frameworks, and expert insights that demonstrate your deep understanding of the field." |
109
+ | **Gemini** | System prompt | `domain=string`<br>`level=1-5` | "Respond as a {level}/5 level expert in {domain}. Use domain-specific terminology, frameworks, and approaches that demonstrate expertise in this field." |
110
+ | **Qwen3** | System prompt | `domain=string`<br>`level=1-5` | "Please respond as an expert in {domain} (level {level}/5). Use appropriate terminology and expert approaches to address this question." |
111
+ | **Ollama** | System prompt | `domain=string`<br>`level=1-5` | "You are a level {level}/5 expert in {domain}. Respond using appropriate terminology and expert knowledge." |
112
+ | **vLLM** | System prompt | `domain=string`<br>`level=1-5` | "Respond as an expert in {domain} with level {level}/5 expertise. Use appropriate technical terminology and insights." |
113
+ | **LMStudio** | System prompt | `domain=string`<br>`level=1-5` | "You are an expert in {domain} (level {level}/5). Respond with depth and precision appropriate to your expertise level." |
114
+
115
+ ## Command Implementation Details
116
+
117
+ ### System Prompt Pre-Processing
118
+
119
+ For each command, the system prompt is constructed using this algorithm:
120
+
121
+ ```javascript
122
+ function buildSystemPrompt(command, parameters, originalSystemPrompt) {
123
+ const basePrompt = commands[command].getSystemPrompt(parameters);
124
+
125
+ if (!originalSystemPrompt) {
126
+ return basePrompt;
127
+ }
128
+
129
+ // For some commands like /collapse, we want to preserve only the original prompt
130
+ if (command === 'collapse') {
131
+ return originalSystemPrompt;
132
+ }
133
+
134
+ // For others, we combine both with the command-specific prompt taking precedence
135
+ return `${originalSystemPrompt}\n\n${basePrompt}`;
136
+ }
137
+ ```
138
+
139
+ ### Parameter Transformation
140
+
141
+ Parameters are transformed to match provider-specific APIs:
142
+
143
+ ```javascript
144
+ function transformParameters(command, parameters, provider) {
145
+ const baseParams = { ...defaultParameters[provider] };
146
+ const commandParams = commandParameters[command][provider];
147
+
148
+ // Apply command-specific parameter adjustments
149
+ for (const [key, value] of Object.entries(commandParams)) {
150
+ if (typeof value === 'function') {
151
+ // Some adjustments are functions of the base value
152
+ baseParams[key] = value(baseParams[key]);
153
+ } else {
154
+ // Others are direct replacements
155
+ baseParams[key] = value;
156
+ }
157
+ }
158
+
159
+ // Apply custom parameter overrides from the command
160
+ for (const [key, value] of Object.entries(parameters)) {
161
+ if (parameterMappings[provider][key]) {
162
+ baseParams[parameterMappings[provider][key]] = value;
163
+ }
164
+ }
165
+
166
+ return baseParams;
167
+ }
168
+ ```
169
+
170
+ ## Command Chain Processing
171
+
172
+ When commands are chained, they are processed sequentially with their transformations composed:
173
+
174
+ ```javascript
175
+ async function processCommandChain(commands, prompt, options) {
176
+ let currentPrompt = prompt;
177
+ let currentOptions = { ...options };
178
+
179
+ for (const command of commands) {
180
+ const { name, parameters } = command;
181
+
182
+ // Apply command transformation
183
+ const transformation = await applyCommand(name, parameters, currentPrompt, currentOptions);
184
+
185
+ // Update for next command in chain
186
+ currentPrompt = transformation.userPrompt;
187
+ currentOptions = {
188
+ ...currentOptions,
189
+ systemPrompt: transformation.systemPrompt,
190
+ modelParameters: {
191
+ ...currentOptions.modelParameters,
192
+ ...transformation.modelParameters
193
+ }
194
+ };
195
+ }
196
+
197
+ return {
198
+ systemPrompt: currentOptions.systemPrompt,
199
+ userPrompt: currentPrompt,
200
+ modelParameters: currentOptions.modelParameters
201
+ };
202
+ }
203
+ ```
204
+
205
+ ## Platform-Specific Implementation Examples
206
+
207
+ ### Claude Implementation Example
208
+
209
+ ```typescript
210
+ protected async transformThink(prompt: string, options: any): Promise<TransformedPrompt> {
211
+ const systemPrompt = `${options.systemPrompt || ''}
212
+ For this response, I'd like you to engage your deepest analytical capabilities. Please think step by step through this problem, considering multiple perspectives and potential approaches. Take your time to develop a comprehensive, nuanced understanding before providing your final answer.`;
213
+
214
+ return {
215
+ systemPrompt,
216
+ userPrompt: prompt,
217
+ modelParameters: {
218
+ temperature: Math.max(0.1, this.temperature - 0.2), // Slightly lower temperature for more deterministic thinking
219
+ enable_thinking: true
220
+ }
221
+ };
222
+ }
223
+ ```
224
+
225
+ ### OpenAI Implementation Example
226
+
227
+ ```typescript
228
+ protected async transformThink(prompt: string, options: any): Promise<TransformedPrompt> {
229
+ const systemPrompt = `${options.systemPrompt || ''}
230
+ When responding to this query, please use the following approach:
231
+ 1. Take a deep breath and think step-by-step about the problem
232
+ 2. Break down complex aspects into simpler components
233
+ 3. Consider multiple perspectives and approaches
234
+ 4. Identify potential misconceptions or errors in reasoning
235
+ 5. Synthesize your analysis into a comprehensive response
236
+ 6. Structure your thinking process visibly with clear sections:
237
+ a. Initial Analysis
238
+ b. Detailed Exploration
239
+ c. Synthesis and Conclusion`;
240
+
241
+ return {
242
+ systemPrompt,
243
+ userPrompt: prompt,
244
+ modelParameters: {
245
+ temperature: Math.max(0.1, this.temperature - 0.2),
246
+ max_tokens: this.maxTokens
247
+ }
248
+ };
249
+ }
250
+ ```
251
+
252
+ ## Extending the Command Set
253
+
254
+ Developers can register custom commands that follow the translation pattern:
255
+
256
+ ```typescript
257
+ llm.registerCommand("custom", {
258
+ description: "Custom command description",
259
+ parameters: [
260
+ {
261
+ name: "param",
262
+ description: "Parameter description",
263
+ required: false,
264
+ default: "default value"
265
+ }
266
+ ],
267
+ transform: async (prompt, options) => {
268
+ // Create translated system prompt
269
+ const systemPrompt = `${options.systemPrompt || ''}
270
+ Custom system prompt instructions based on ${options.parameters.param}`;
271
+
272
+ // Return transformed request
273
+ return {
274
+ systemPrompt,
275
+ userPrompt: prompt,
276
+ modelParameters: {
277
+ // Adjusted parameters
278
+ temperature: 0.7
279
+ }
280
+ };
281
+ }
282
+ });
283
+ ```
284
+
285
+ ## Adoption Metrics Collection
286
+
287
+ The translation layer includes anonymous telemetry collection to measure command adoption:
288
+
289
+ ```typescript
290
+ interface CommandUsageEvent {
291
+ command: string;
292
+ parameters: Record<string, any>;