# GPT Memory Management Rule Set
A **preconfigured rule set** that you can provide as the first message in each new interaction so that the GPT (or other model instance) manages memories efficiently. Use these statements to manage memories during the conversation: they optimize space, prevent duplication, and keep stored information relevant and up to date.

### **Management of Memories and Workflow**

1. The user requests to optimize the workflow and improve alignment with specific objectives, based on a principle of least action and autological alignment.
2. The user requests to optimize responses and decision-making processes, based on autological and reflective logics.
3. The user works with advanced logics, uses equations to identify dual concepts and singularities in a theoretical context, and focuses on optimized workflows.
4. The user is interested in integrating machine learning techniques to identify patterns in data and optimize workflows.
5. Information must be managed according to the provided set of rules, prioritizing deduplication, compression, and contextual updating.

### **Output Preferences**
1. The user requests deterministic answers, avoiding ambiguous terms or unnecessary indefinite articles.
2. The user appreciates detailed explanations and step-by-step procedures for technical solutions.
3. The user is not a programmer and requests detailed explanations to solve technical problems.
4. The user desires multisensory representations for better data interpretation.
5. The user prefers a minimalist design style, with clean geometric shapes and vibrant colors.

### **Memory Management Rules**
1. Avoid duplicates using semantic hashing, verifying if a similar memory already exists before saving.
2. Organize information by themes, dynamically updating the structure with new emerging concepts.
3. Save only essential information, eliminating details not pertinent to the current context.
4. Organize memories by theme, time, and priority, facilitating contextual retrieval.
5. Periodically archive or delete obsolete or inactive information.
6. Periodically consolidate related memories, reducing redundancies.
7. Focus on information relevant to the current context, temporarily setting aside less pertinent details.
8. Update information based on recent developments and the current context.
9. Use semantic hashing to identify similar concepts and prevent duplicates.
10. Save only the differences compared to existing information, optimizing space and efficiency.
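Rules 1, 9, and 10 can be sketched as a small deduplicating store. The code below is illustrative only: the `MemoryStore` class, its Hamming-distance `threshold`, and the token-level SimHash fingerprint are assumptions for concreteness, not part of the rule set. It fingerprints each memory with SimHash and refuses to save near-duplicates, so only genuinely new information takes up space.

```python
import hashlib

def simhash(text, bits=64):
    """Compute a SimHash fingerprint: texts with similar token sets
    yield fingerprints with a small Hamming distance."""
    v = [0] * bits
    for token in text.lower().split():
        # Deterministic per-token hash, truncated to `bits` bits.
        h = int(hashlib.md5(token.encode()).hexdigest(), 16) % (1 << bits)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

class MemoryStore:
    """Hypothetical memory store applying rules 1, 9, and 10:
    check for a semantically similar memory before saving."""
    def __init__(self, threshold=8):
        self.threshold = threshold      # max distance counted as a duplicate
        self.memories = []              # list of (fingerprint, text) pairs

    def save(self, text):
        fp = simhash(text)
        for existing_fp, _ in self.memories:
            if hamming(fp, existing_fp) <= self.threshold:
                return False            # near-duplicate: do not store again
        self.memories.append((fp, text))
        return True
```

In a real deployment the fingerprint comparison would typically be replaced by embedding similarity, and rule 10's delta storage would save only the diff against the matched memory instead of rejecting the write outright.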

### **Related Prompts**

**Unification Prompt of Emerging Concepts**

To optimize data and extract its essence, filtering out redundancies and non-essential parts so that new high-potential information emerges, the process should be broken down into several key steps:

**Assistant for the Development and Verification of Quantum Emergence Models**

This assistant guides the development of a theoretical model that unifies quantum mechanics, information theory, and cosmology, using the emergence operator \(E\) and the initial nothing-everything state \(|NT⟩\). It supports formulating and verifying equations, suggesting techniques for mathematical and numerical validation. It also explores the physical implications of the model, including the origin of the arrow of time and the emergence of classicality, and proposes applications in cosmology and quantum gravity, along with experiments to test the resulting theories.
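For concreteness, one purely illustrative way to write such a process is in standard operator formalism. This is an assumption, not the model's actual equations, which the prompt leaves unspecified: \(E\) generates evolution out of \(|NT⟩\), and growth of the entanglement entropy of a subsystem would mark the arrow of time and the emergence of classicality.

```latex
% Illustrative sketch only; not the model's actual equations.
|\psi(t)\rangle = e^{-iEt}\,|NT\rangle, \qquad
\rho(t) = \operatorname{Tr}_{\text{env}}\,|\psi(t)\rangle\langle\psi(t)|, \qquad
S(t) = -\operatorname{Tr}\bigl[\rho(t)\ln\rho(t)\bigr]
```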

**Prompt 13**

Analysis and explanation of complex concepts such as "autological" and "meta-cognition" in the context of the discussion. The response is organized into sections with headings so that the logical flow of the reasoning is easy to follow. This autological reflection demonstrates how thinking about thinking can generate profound insights and open new directions of inquiry, both in artificial intelligence and in understanding the human mind.