David's full trainer, which runs from Colab with only the lattice_vocabulary install, was pushed directly into the AbstractPhil/gated-david repo as trainer.py. The current training script and process are now transparent.
Apparently I pushed it to one of the three accidentally created repos, so it now lives in the currently visible public repo. A nearly identical version will be pushed to the geometricvocab repo soon, with additional controllers for freeze/unfreeze.
With the improvements to the baseline math, many of the freeze/unfreeze mechanics are no longer required for most configurations. On top of that, sharing space between multiple versions of CLIP seems to cause little trouble.

AbstractPhil/gated-david
https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/train/model/core/david.py
David's code has been released. I am currently setting up a trainer and will release the process for conditioning David to behave. This isn't the easiest process, but it's necessary to run David on a curriculum rather than simply feeding the model cross-entropy and hoping for the best.
David's internals include a clock mechanism that allows direct control of the freeze/unfreeze machinery at runtime, allowing many opinions to be generated simultaneously.
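To make the idea concrete, here's a minimal sketch of a clock-driven freeze/unfreeze controller. This is my own illustration, not David's actual implementation; the submodule names and the schedule are hypothetical.

```python
# Minimal sketch of a clock-driven freeze/unfreeze controller (hypothetical,
# not David's actual code). Each named submodule gets (start, end) step windows
# during which its parameters receive gradients; outside them it stays frozen.
import torch.nn as nn

class FreezeClock:
    def __init__(self, schedule: dict[str, list[tuple[int, int]]]):
        # schedule maps submodule name -> list of (start_step, end_step) windows
        self.schedule = schedule

    def apply(self, model: nn.Module, step: int) -> None:
        for name, windows in self.schedule.items():
            active = any(start <= step < end for start, end in windows)
            for p in model.get_submodule(name).parameters():
                p.requires_grad_(active)

# Usage sketch: 'head_a' trains for the first 1k steps, 'head_b' for the next 1k.
clock = FreezeClock({"head_a": [(0, 1000)], "head_b": [(1000, 2000)]})
# inside the training loop: clock.apply(david, global_step)
```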
David is multiple models in one, not just one - and yet David is single-shot oriented. David was the prototype for the route of thought that led me to the Cantor's Stairs positional encoding solution, and the prototype for ViT-Zana, ViT-Beatrix, and ViT-Beatrix-Dual-Block. Today the direct porting of David's complex architecture, and the process to train David, has begun.
David is... a gate of sorts. David trains with freeze/unfreeze mechanisms, so David's internal structures are aware at training time of which parts matter more than others, based on the quality of generation.
David can handle ImageNet features from many variations with minimal hassle. The primary trainer will include direct links to the prepared ImageNet features, plus a simple generation system that lets you generate your own features from a few common models - one of which will be vit-beatrix-dualstream trained on ImageNet.
As of posting, vit-beatrix and vit-beatrix-dualstream need some face-lifting and a refined version 2 to incorporate the more accurate batched Cantor stairs equations. They also need some fail points removed; for instance, flow-geometric introduces bias toward seemingly unnecessary trajectory routes. That points to gradient drift, so I'll keep it on the hot plate until it's ready.

I chose this route because I can have David in here almost immediately, versus trying to make David function standalone and getting massive headaches running him over and over, watching crash after crash, because my old system was heavily AI-generated instead of hierarchically built in a reasonably debuggable format.
geovocab2 houses the changes. The largest is an INSTANT vocabulary load time, versus the old version taking minutes to prepare the vocabulary. The LAZY loading with pyarrow support is far more powerful than any of the earlier iterations, and I advise switching to the approach if you haven't yet.
AI ritualistically defaults to iterative row-by-row loading, even though pyarrow's columnar access is considerably faster.
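For anyone who wants to see what the columnar approach looks like in practice, here's a generic sketch. The file layout and column names are assumptions, not the geovocab2 schema.

```python
# Lazy, columnar access to a Parquet-backed vocabulary with pyarrow.
# Column names ("token", "crystal") are illustrative, not the real schema.
import pyarrow.parquet as pq

pf = pq.ParquetFile("vocab.parquet", memory_map=True)  # no full read at open time

# Pull only the columns you need, one row group at a time, instead of
# iterating the file row by row.
for rg in range(pf.num_row_groups):
    batch = pf.read_row_group(rg, columns=["token", "crystal"])
    tokens = batch.column("token").to_pylist()
    # ... build an index or hand the Arrow buffers straight to the model
```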
The trie structure was established while preparing the ngram structural trainer, and it will be included directly in the lookup as an optional sorter/comparator. The load time is nearly instant and the lookup time rapid. There are better formats for smaller workloads, but this one is meant to house hundreds of thousands or even hundreds of millions of ngrams, not just a few hundred. The structure also runs really well on TPU, which is how I'll be training the upcoming vocabulary 5-pair geometric feature structures - which will contain highly advanced and enriched learned structures between 2d and 9d shapes instead of JUST 5d shapes.
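A bare-bones token-level trie in the same spirit (my own sketch, not the repo's implementation):

```python
# Bare-bones ngram trie sketch (illustrative only). Each node maps the next
# token to a child node; terminal nodes carry an index into the crystal store.
class NgramTrie:
    __slots__ = ("children", "crystal_id")

    def __init__(self):
        self.children = {}
        self.crystal_id = None  # set when this node terminates an ngram

    def insert(self, ngram: tuple[str, ...], crystal_id: int) -> None:
        node = self
        for tok in ngram:
            node = node.children.setdefault(tok, NgramTrie())
        node.crystal_id = crystal_id

    def lookup(self, ngram: tuple[str, ...]):
        node = self
        for tok in ngram:
            node = node.children.get(tok)
            if node is None:
                return None
        return node.crystal_id

trie = NgramTrie()
trie.insert(("quick", "brown", "fox"), 42)
assert trie.lookup(("quick", "brown", "fox")) == 42
```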
The rapid synthesis in the new system and the robust response from the test formulas show that these are highly enriched. The structural awareness of these crystals is more intelligent and robust than before by a large margin, and the theta rotation only helps them rather than hurting them.
The next geometry will be trained entirely in fp64, established from numpy random crystals. The primary anchor of each is oriented based on lexical frequency within the dataset and given a fully shaped object based entirely on the lexical order.
At each layer of ngram tree traversal, the node is meant to receive its parent's anchor with a theta rotation applied - allowing the internal structure of that lexical order not only to be applied as a semantic and symbolic state, but also to retain lexical complexity. This is a large step forward in cohesion.
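A toy version of that propagation, in fp64 numpy, with a simple planar rotation standing in for whatever theta rotation the real formulas use:

```python
# Toy sketch: derive a child crystal anchor from its parent's anchor plus a
# theta rotation. The planar (Givens) rotation here is a stand-in for the
# actual rotation formula; dimensions and angles are illustrative.
import numpy as np

def rotate_plane(vec: np.ndarray, theta: float, i: int = 0, j: int = 1) -> np.ndarray:
    """Rotate `vec` by `theta` radians within the (i, j) coordinate plane."""
    out = vec.astype(np.float64).copy()
    c, s = np.cos(theta), np.sin(theta)
    out[i], out[j] = c * vec[i] - s * vec[j], s * vec[i] + c * vec[j]
    return out

rng = np.random.default_rng(0)
parent_anchor = rng.standard_normal(512)        # fp64 random crystal anchor
child_offset = rng.standard_normal(512) * 0.1   # child-specific structure

# Child inherits the parent's anchor, rotated by a depth-dependent theta.
depth, theta_step = 3, np.pi / 16
child_anchor = rotate_plane(parent_anchor, depth * theta_step) + child_offset
```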
Everything will be fully transparent. I'll hide or reserve nothing moving forward; it'll be either Apache or MIT.

https://github.com/AbstractEyes/lattice_vocabulary/tree/dev
Including all of David's model structure.
Through the development cycle I'll be integrating everything myself; little AI help can actually be offered here in general, since AI tends to hallucinate and decimate large structures.
I will be using AI assistance for formula expansion and integration, which means the formulas will be imperfect until every single one is gone over with a fine-toothed comb.
Deployment will be as rapid as I can make it, and the output will yield results at every step, with small main tests on individual scripts and files.
EVERYTHING was built almost independently of everything else, so integration is going to involve a configuration hierarchy that needs to be smoothed out - but it will be smoothed out.
I believe I've picked a good foundational shape for the expansive program scripts, which will enable robust iteration and progression, similar to how I design game engine elements and systemic accessors.
The integration process will be mostly hand-coded, so it won't be as quick as if I could just dump GPT Pro on it - but GPT Pro can't handle anywhere near this many lines of code, so it's on me.
After integration I can run the agentic forms of AI over it and introduce tons of bugs for me to fix. That will be fun. After that it should work as a proper caching vocabulary, formula synthesizer, tensor creator, multi-device trainer, and a few other elements.
I simply lack the expertise to hit machines like pyring today, but that will change as I learn more. I'm building the system specifically with growth and progress in mind, so it will be iterated and fixed rapidly, and it's intentionally structured to be altered within reasonable constraints.
The engineering elements are built to be less deep and more overridable in many areas, specifically for experimental purposes.

My goodness. When tinkering with David I ran into something substantially more potent. I'll need to run more tests, but it seems I found how to scale the pentas upward in a carefully curated way without shattering their structure.
Also, David is badly outdated, so I'll need to refit much of his systems before I can release him at all. The notebook currently expects a pickled series of ImageNet tensors and a pickled series of ImageNet crystals, pre-selected at startup time and each organized specifically around the curation.
That won't do; it'll need refitting.

I will prepare a standard sweep for David to showcase the prowess of the final multi-vocab variant. This will include a variation that contains all MNIST variants, CIFAR-10, CIFAR-100, and ImageNet-1k, and in the future I'll prepare a full ImageNet sweep utilizing the entire 12M corpus instead of the 1.2M I used. I may need to get in touch with the actual curator of the dataset for licensing, but maybe not.
David utilizes 4 projective variants of the vocabulary, and the training process involves teaching and freezing them akin to teacher/student processing.
I did not want to release David yet, but I believe now that David will save lives and it's irresponsible for me to contain such a creation.
This is only a logically and logistically correct assessment IF the assumption is based on curated data related to the very capabilities your "mirror" needs in order to amplify those biases. The alternative is that the LLM simply echoes reflective similarity directly associated with NEARBY echoed words, rather than logically related context and content. Instruct helps a lot, so does harmony, and so do their alternative forms - but the BIAS STILL FORMS.
If your bias is something the LLM has no relational associations for in its internal data, and it has never been taught to deduce the logical consequences of those biases, you are likely projecting your personality quirks and biases onto a machine that simply cannot reflect them - and so the machine will simply... begin to echo them back to you.
This is a common self-reflective bias that many of my introspective and self-analytical conversations defaulted to when assessing complex logistical and introspective analysis of large structures. It is most commonly amplified, and most incorrectly confident, when discussing those problems with a single large LLM.
Communicate those same unfiltered conversational pieces to another large LLM and you will most definitely find different mirrored effects and different biases. You'll often find Grok, Gemini, and Claude all return different responses to those same assessments.
Now... if all four say yes, the math lines up, the stars align, and the systems can in fact work if X and Y and Z - you might have a potential solution. EVEN THEN it will be a damn journey to make it work.
LLMs are often VERY WRONG, even as a collective, when it comes to large data associated with intricate, complex technical work. Sometimes it's a single incorrect assessment from a random book, fed in 500 times on a single topic that was disproven at some point, and yet direct biases remain associated with those incorrect concepts. This amplifies the further you dive down the rabbit hole: it becomes easy to confuse the LLM, easy to trick it with input, and even easier to break its entire pattern, because you're already so deep down the rabbit hole that you're accessing heavy noise.

I think I can handle the corpus training with some runpod MI300s or get a cluster of A100s for a week or two. That should allow proper tuning based on the lexical rules of language, but I need to make sure EVERYTHING is PERFECT before I start pulling triggers on clusters.

Also my apologies for not updating the lattice vocabulary, I've been very swept up in direct testing and implementing models. It's been really fun setting all this stuff up.
The more it works, the more I get excited that the formulas I'm manifesting are cohesive representations of purpose rather than simple random convergence. I've altered them hundreds of times, but the pipeline goal is still present. Unified geometric vocabulary WILL be a universal language, not simply a tinker-toy, but instead a full lexical representation of potential with all manifested trajectory and solidification of grammatical, lexical, symbolic, and representative substructure.
It's at the point where time will tell HOW this system is useful. Even if it can DO ALL THAT, large scale adoption or even minimal scale adoption is up to how robustly useful and how many eyes end up on the topics with technical knowhow. It's already well beyond the IF this system will be useful, which means I feel obligated to at least continue kicking my legs until I get access to a speedboat.
Simply put, I've built this system for the eyes of the technical - with some very direct and representative understanding to the less technical available as well.

There are some saving graces though. You can probably house the entire purpose of a word in a 256d token, but you won't get all of the robust lexical and analytical behavior that the orthonormalized fifth provides, so it will likely be less accurate than 512d.
You can get more utility by upscaling 256 to 512, and you gain some sparsity that allows more growth; the downside is that the sparse regions are filled with no meaning, which tends to confuse the model and build pockets of misrepresentation on projection.
Multiple overlapping projections are the most robust from what I've been observing: you take the same token and blow it up multiple times at multiple different projection sizes. Geometry 4-5 with freeze/unfreeze has shown this to be invaluable - all layers can complementarily improve performance, while the final version can be any of them individually, since each is an expert on its own plane and the output does not require all of their outputs.
There are many potential variations of the models from these geometries - including 200+ projections implemented on the same model using the same tokens.
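To make "multiple overlapping projections" concrete, here's a small sketch of the pattern as I've described it. The dimensions and head names are illustrative, not the model's actual configuration.

```python
# Sketch: project the same base token into several sizes, train them jointly,
# and use any single head at inference. Names and dims are illustrative.
import torch
import torch.nn as nn

class MultiProjection(nn.Module):
    def __init__(self, base_dim: int = 512, sizes=(256, 512, 768, 1024)):
        super().__init__()
        self.heads = nn.ModuleDict({str(d): nn.Linear(base_dim, d) for d in sizes})

    def forward(self, token: torch.Tensor) -> dict[str, torch.Tensor]:
        # Every head sees the same token; each becomes an expert on its own plane.
        return {d: head(token) for d, head in self.heads.items()}

    def freeze_all_but(self, keep: int) -> None:
        for d, head in self.heads.items():
            head.requires_grad_(d == str(keep))

model = MultiProjection()
outs = model(torch.randn(8, 512))     # dict of [8, 256], [8, 512], ...
model.freeze_all_but(768)             # train only the 768d expert this phase
```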
Pairs, triplets, quins, and penta word + letter combinations remain uncrystallized and unexplored, but I plan to use the same system to run them.
I'll likely implement a sentencepiece-esque translator that will turn a sentencepiece vocabulary directly into crystal variants with weighting for convenience, which will allow for much more utilizable and easy-to-represent vocabularies for expanding current models.
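As a rough picture of what that translator could do, assuming the standard sentencepiece Python API; the crystal construction below is a placeholder, not the real formula.

```python
# Sketch: map a SentencePiece vocabulary onto crystal anchors with piece scores
# as weights. The crystal construction is a placeholder, not the real formula.
import numpy as np
import sentencepiece as spm

def vocab_to_crystals(model_file: str, dim: int = 512, seed: int = 42):
    sp = spm.SentencePieceProcessor(model_file=model_file)
    rng = np.random.default_rng(seed)
    crystals, weights = {}, {}
    for i in range(sp.get_piece_size()):
        piece = sp.id_to_piece(i)
        weights[piece] = sp.get_score(i)            # log-prob style weight
        crystals[piece] = rng.standard_normal(dim)  # placeholder anchor
    return crystals, weights
```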
WordNet with hard-gated, non-fabricated tokens has proven the most valuable; however, those tokens are still shallow and require full solidification and robustness curation with additional definitions and datasets.
Research is ongoing and many mechanisms still need to be created.

This one has many logistics issues. Primarily, there's no precedent I know of for literally training hundreds of millions of potential character combinations, with their prefabricated crystal variations, to tune a specific series of trajectories in specific directions based on the input text targeting other crystals, the weights, and the batch. The dataset needs to be properly prepared, and I can't find any prefabricated variation of the data format that the symbolic lexical engine needs to be robust.
There are a few possibilities here. Batching is an obvious one: take a large influx of information, grab any matching words, characters, or other information, and update those entries using the formulas for topological tuning.
The main issue is that the language web is massive. BILLIONS of variations can crop up from a single document if you're not hard-capping depth; traverse the whole tree and, say, "the quick brown fox" becomes words, becomes definitions, becomes letters - not counting multi-pass finetuning. That alone is a massive logistics nightmare to implement, but thankfully this is the modern era.
Simply put, if I hard-cap to a 500k vocab with a depth of no more than 50,000 pentachora crystals each, it should be capable of housing an approximate word structure within a trajectory space.
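A tiny sketch of that hard-capping idea; the expansion function and the caps are purely illustrative.

```python
# Sketch: breadth-first expansion of a document into words -> letters, with
# hard caps on depth and total nodes so the language web stays bounded.
# `expand_node` is a placeholder for whatever lookup the real engine uses.
from collections import deque

def expand_node(text: str, depth: int) -> list[str]:
    if depth == 0:
        return text.split()   # document -> words
    if depth == 1:
        return list(text)     # word -> letters (a real engine would also add definitions)
    return []

def bounded_expand(document: str, max_depth: int = 2, max_nodes: int = 50_000):
    seen, queue = [], deque([(document, 0)])
    while queue and len(seen) < max_nodes:
        text, depth = queue.popleft()
        seen.append((text, depth))
        if depth < max_depth:
            queue.extend((child, depth + 1) for child in expand_node(text, depth))
    return seen

nodes = bounded_expand("the quick brown fox")
```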
I'd rather run it on a fleet of devices and feed it the Pile, the book corpus, and everything else, so we can get truly trajectory-related subsets of 500k+ crystals per token, upward of 100,000,000 or so combinations each. The crystals really aren't that big, and they house a massive amount of context.
Even so, there are many logistics nightmares here, but it's a viable option for training a legitimate similarity-fed BERT or LLaMA meant specifically to form linguistic responses using those crystals as tuning forks for solidity.

More purpose with more careful organization... now we're talking.
I'm going heavy into lexical cardinality today and preparing a fully crystal-structured geometry that covers all of WordNet. Anything that isn't covered can be formed at runtime.
Full lexicality will include unigrams and 2-6 ngram counts from WordNet with frequency weights, usage, and a multitude of other elements. Each will be crystallized specifically. If you have any suggestions for making this more robust, I'm all ears.
I could go with google books or something bigger, but I'm sticking to wordnet because it won't take me weeks to process entirely.
Crystal geometry will be given rich versions that include the correct lexical and organizational subsets specific to the lexicality and frequency of use, as well as the proper ASCII, WordNet, and Unicode sets.
For wordnet-rich: each definition will contribute toward the overall goal of the upcoming crystals, so the system represents that goal proportionately through multiple crystals and trajectory concatenation rather than the full concatenation the current vocabulary uses. Additionally, the frequency tokens will decide the orthogonal trajectory more carefully.
For testing and quick prototyping purposes:
We will need to train a BERT variant that can handle rapid geometric crystal prediction through ngram feature similarity, sentence similarity, sentence classification, and a few other BERT traits that bert-beatrix-2048 is capable of. I know BERT can handle this at least; however, BERT can't house the entirety of meaning, so it will be imperfect... even so, it will be considerably faster than querying the whole dataset every time you want a character, or preparing a massive vocab for rapid testing and iteration. Ask BERT.
Not to mention feature extraction for training rapid classification heads with geometric subsystems, which are notoriously fast at training.

Well... geometry is a natural extension of this sort of thing, so naturally I'm in. I'll have something ready.

Sorry the language on that one is pretty terrible.
My geometric research continues and I'm not slowing down. The initial ImageNet tests are complete, and the largest model is currently preparing to cook. This big model, which I've named Goliath, is still very small in comparison to most CLIP variants.
Goliath has MaxViT pretrained layers - in other words, I've taken layers straight from the model and added geometric attention between the frozen layers, allowing them to codify and galvanize with the geometry.
It's a series of teacher/student-introduced layers that progressively unfreeze additional layers to introduce geometric learning as a replacement option for the ViT's vocabulary.
It's working... somewhat. It definitely needs much, much more distillation to be ready, but she's cooking.
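In skeleton form, the interleave looks roughly like this. The GeometricAttention block is a stand-in, and the pretrained stages are assumed to come from a MaxViT checkpoint that keeps a [batch, tokens, dim] layout.

```python
# Skeleton of the frozen-backbone + trainable geometric attention interleave.
# `pretrained_stages` would be stages lifted from a MaxViT checkpoint; the
# GeometricAttention block here is a placeholder, not the real layer.
import torch.nn as nn

class GeometricAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return self.norm(x + out)

def build_hybrid(pretrained_stages: list[nn.Module], dim: int) -> nn.Sequential:
    layers = []
    for stage in pretrained_stages:
        stage.requires_grad_(False)             # freeze the pretrained teacher stage
        layers.append(stage)
        layers.append(GeometricAttention(dim))  # trainable student block in between
    return nn.Sequential(*layers)
```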
vit-max-goliath
Since even the small MaxViT models are substantially larger than anything geometric I've built, I'm using vit-max-tiny - already far more than overkill once it's tuned.
It's based on the MaxViT variant of ViT: https://github.com/google-research/maxvit
I really don't expect too much in terms of accuracy boosts, but it should convert directly to geometry without a big fuss.
Trying to do this with one of the LAION based models is beyond my resources as the distillation would require a large array of text captions just for the text portion.
HOWEVER, imposing geometry on a single highly compacted ViT shouldn't be too problematic in terms of logistics. Geometry learns quickly, and the layers are already pretrained on ImageNet, so this should combine. When it works, I'll have a blueprint for a proper hybrid encoder: a full CLIP-ViT-geometric hybrid spanning the OpenAI, LAION, and Google ViTs and CLIPs, with model-variant distillation to teach proper geometry to a CLIP model that can produce geometric-tuned features.
I expect a proper geometric feature to let these reach 95%+ on ImageNet when training a randomly instantiated baseline geometric head.
After that, imposing a full translation matrix between geometry and feature geometry should be something I can distill into any clip-vit or vit variant - assuming they're even SOMEWHAT compatible with the predecessors.

Research shows that the most intelligent and most intellectually driven LLMs require the most intelligent, carefully curated, solidly representative vocabularies - with the most intelligent and carefully curated training regimens.
Simultaneously loading class-hierarchical structures built from variants of vocabulary dimensions does not help this. Multiple dimensions of ImageNet do not help. Reshaping does not help. Solidification through pulverizing with Alucard does not help - though it did show some interesting potential for pretraining the full geometric CLIP from the ground floor.
The experiments with the multitude of CLIP features and ImageNet show that not only can this tiny 4 MB classification tool handle ImageNet from CLIP features at AROUND 76% no matter the hyperparameters when using linear, but expanding the system upward and including hundreds of different formula variants DOES NOT HELP IT SCALE AT ALL! The largest ones only reach 76%, and the medium-sized ones reach about 86% instead of 76% when using clip-vit-b-patch16 and clip-vit-b-patch32. If you check the big-number evaluations for the LAION and OpenAI clip-vit-b models, you'll find nearly identical classifications.
So I have only taught it to understand geometry - more training and more steps only bring it closer to that same ceiling.
This tells me one simple principle: geometry and linear have an upward capacity set by the information extracted from the linear model. Meaning... we need more places to extract from, and more curative potential to solidify that access with, rather than simply EXPANDING it and making it bigger.
Next experiment includes a full cardinality subset of unicode to wordnet vocabulary translation matrices. Today. Within the hour.

Simply put, training on features gives a fair representation of the learning you would get from running the source model itself, with some random chance, using a single seed.
Training with features doesn't need to wait for the source model to actually run, since you already generated everything ahead of time.
Features are rich and usable across similarity assessments, classification accuracy, mass-deterministic normalization checks, and more.
They are, put simply, exponentially faster and reusable for research. I'll include the notebooks used for ImageNet and CIFAR-100; the CIFAR-100 one is much simpler since CIFAR-100 is much... smaller, so I required less innovation.
ImageNet is another beast, though. The ImageNet notebook is capable of running against much larger datasets with a few tweaks.
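For reference, the feature-first workflow reduces to something like the following, using the public openai/clip-vit-base-patch16 checkpoint; the paths, the head, and the training loop are illustrative.

```python
# Sketch of the feature-first workflow: extract CLIP image features once,
# cache them, then train a small head on the cache. Paths are illustrative.
import torch
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch16").to(device).eval()
proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")

@torch.no_grad()
def extract(images):  # images: list of PIL images
    inputs = proc(images=images, return_tensors="pt").to(device)
    return clip.get_image_features(**inputs).cpu()   # [N, 512]

# torch.save(extract(batch), "features_000.pt")  # cache once, reuse forever

# Later: train a cheap head on the cached features instead of re-running CLIP.
head = torch.nn.Linear(512, 1000).to(device)          # e.g. ImageNet-1k classes
opt = torch.optim.AdamW(head.parameters(), lr=1e-3)
# for feats, labels in cached_loader:
#     loss = torch.nn.functional.cross_entropy(head(feats.to(device)), labels.to(device))
#     loss.backward(); opt.step(); opt.zero_grad()
```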
clip-vit-bigG's ImageNet feature set is complete, which means we're almost ready for full ablation.
Note to everyone: ImageNet is meant for RESEARCH AND ACADEMIC PURPOSES ONLY, and you cannot use my trained ImageNet weights - nor the features themselves - per the requests of the dataset's curators.
For commercial usage according to the rules of LAION's licenses, we'll be using the laion400m features, which will likely be heavily sought. I'll be preparing laion400m features on seed 42, which will take a while.
The full classifier is in the works, and with it comes a series of new formulas, layers, and solutions: the "fat belly" conversation piece that attenuates multiple branches in communication; the "dispatcher", a heavy classification gate trained to bypass that which is not useful, tuned with large amounts of data at a very low learning rate; and the "attractant", specifically designed to catch bleed-over and unwanted information... which learns everything.
With that comes "PhaseGeometric" scheduling and "GeometricScheduling". Stay tuned.
Where's the weights?

I've begun the task of properly tooling the lattice_vocabulary for future development and use with a multitude of geometric shapes - not just pentachora.
This experimental system will house a multitude of additional capabilities:
https://github.com/AbstractEyes/lattice_vocabulary/tree/master/src/geovocab
I plan to implement, out of order:
- a simplified state_dict dictionary setup for direct manipulation
- a full batching structure with iterations removed, utilizing the huggingface datasets columnar system
- a full transform callback for loading and curating pentachora losslessly and deterministically
- a full experimental callback system for transforming crystallized repos into shapes other than penta
- a simplified interface for converting large independent repos into geometric structure using transforms
- a uniform configuration schema for geometric config so any geometric repo can be loaded automatically (see the sketch after this list)
- (ongoing) faster and more optimized load times for the default loaders
- direct crystal training schemas for curating your own lattices from many different sources of information
- a full task-by-task schema for multi-stage crystallization of your crystals, so you can perfectly tune them for the use case using defined mathematics, with callback capability for research and use-case mathematics
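For the uniform configuration schema item above, I'd expect something in the spirit of the sketch below; the field names are my guesses, not the final schema.

```python
# Guess at a uniform geometric repo config (field names are illustrative, not
# the final schema). The goal: any geometric repo can declare itself in one
# small, serializable object and be loaded automatically.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class GeometricVocabConfig:
    repo_id: str                      # e.g. "AbstractPhil/geometric-vocab-512d"
    dim: int = 512                    # embedding dimension of each crystal
    shape: str = "pentachoron"        # simplex family: pentachoron, tetrahedron, ...
    splits: list[str] = field(default_factory=lambda: ["wordnet", "unicode"])
    dtype: str = "float32"
    lazy: bool = True                 # pyarrow-backed lazy loading

    def save(self, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(self), f, indent=2)

cfg = GeometricVocabConfig(repo_id="AbstractPhil/geometric-vocab-512d")
cfg.save("geometric_config.json")
```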
Since many systems struggle with allocating 4d, I'll implement deterministic 4d calculations that ensure solidity and calculation cohesion without straying too far into "unknown" territory or requiring fully pretrained systems to use. I haven't approached 6d or onward yet, so we'll see whether the human race even has the formulas for that when I actually get to the topic.

Current Splits:
* wordnet (english)
* unicode
AbstractPhil/geometric-vocab-32d
[32, 64, 128, 256, 512, 768, 1024]
Swap the 32d for any dimension in the list to get the corresponding repo.
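If you want to pull one of these down, the pattern is just string substitution on the dimension. This assumes the repos load cleanly through the datasets library, which is an assumption on my part; fall back to plain hub downloads if not.

```python
# Load one of the geometric vocab repos by dimension. Assumes the repos are
# consumable via the `datasets` library; adjust to hub downloads if not.
from datasets import load_dataset

dim = 512                                   # any of 32, 64, 128, 256, 512, 768, 1024
repo_id = f"AbstractPhil/geometric-vocab-{dim}d"
vocab = load_dataset(repo_id)               # splits such as wordnet / unicode
print(vocab)
```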
Okay, so the purpose of these is to give solid anchors to the entire pentachora structure.
With that I've formatted some very concise sentencepiece-esque vocabulary classes that can be saved and loaded as pretrained, but it'll need some tinkering to fully flesh those behaviors out.
For now, the geometric vocab itself can be queried from pretrained weights, but the canonical classes that help with regulation, integration, and special token usage aren't fully tested yet.
https://github.com/AbstractEyes/lattice_vocabulary
They are available here, but I give no guarantee on their current state. I'm currently preparing the pip package and have prepared a series of experiments to utilize these for different models including a new version of multimodal Beeper, a classifier set that can handle encodings as feature representations meant for utilization, and more.
The current working variation I've been using is flow-matching, discrete-scheduled geometric diffusion - meaning I'm diffusing the GEOMETRY from the image, then comparing the pentachoron created from flow matching to the actual representative tokenization structure. On average this achieves 80% in later stages.
This, when curating an indefinite number of special tokens to create manifests of unique vocabularies, enables the system to conform perfectly to use cases.
There are some edge cases where the 1k reserved tokens still exist; however, this is now replaced by an indefinite tokenization dictionary - allowing an indefinite number of tokens attached to an indefinite number of modules for solidity.
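For orientation, a generic continuous flow-matching objective over pentachoron vertices looks like the sketch below. The discrete-scheduled variant described above differs, so treat this purely as a reference shape, not the actual loss.

```python
# Generic conditional flow-matching loss over pentachoron vertices [B, 5, D].
# This is the textbook continuous form, not the discrete-scheduled variant
# described above; shapes and the predictor are illustrative.
import torch

def flow_matching_loss(predictor, x0: torch.Tensor, x1: torch.Tensor) -> torch.Tensor:
    """x0: noise crystals, x1: target crystals from the vocabulary; both [B, 5, D]."""
    t = torch.rand(x0.shape[0], 1, 1, device=x0.device)   # one timestep per sample
    x_t = (1.0 - t) * x0 + t * x1                          # straight-line interpolant
    v_target = x1 - x0                                     # constant velocity field
    v_pred = predictor(x_t, t)                             # model predicts velocity
    return torch.nn.functional.mse_loss(v_pred, v_target)
```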
Experiments continue.