🚩 Report: Spam

#3
by kodecreer - opened

It's just Phi-3 with fake claims.

I'm just confused about what this is. The upload notes say:

> Upload fine-tuned model with vector memory layer

OK, so let's say it is fine-tuned. Then why does the SHA-256 of the second safetensors file match the original Phi-3 file exactly?

https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/blob/main/model-00002-of-00002.safetensors

SHA256: 3f311787aa136e858556caa8543015161edcad85ba81b6a36072443d7fa73c87

https://huggingface.co/moelanoby/phi-3-M3-coder/blob/main/model-00002-of-00002.safetensors

SHA256: 3f311787aa136e858556caa8543015161edcad85ba81b6a36072443d7fa73c87
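
For anyone who wants to reproduce the comparison locally, here's a minimal sketch. The local paths are placeholders; it assumes you've already downloaded both shards (e.g. with `huggingface_hub.hf_hub_download`):

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so a multi-GB shard doesn't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# The hash shown on both repos' file pages:
EXPECTED = "3f311787aa136e858556caa8543015161edcad85ba81b6a36072443d7fa73c87"

# Placeholder local paths; point these at your downloaded copies.
for path in ("phi3/model-00002-of-00002.safetensors",
             "m3-coder/model-00002-of-00002.safetensors"):
    print(path, sha256_of(path) == EXPECTED)
```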

I will say that the first safetensors file does appear to be unique... but is it even possible to fine-tune just a single safetensors shard? That would amount to fine-tuning only half the layers, I suppose.
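
To make the "half the layers" point concrete: sharded Hugging Face checkpoints ship a `model.safetensors.index.json` whose `weight_map` records which tensors live in which shard, so you can list exactly what sits in the byte-identical file. A rough sketch, run against a local copy of either repo's index:

```python
import json

# Standard index file that every sharded HF checkpoint ships; it maps
# each tensor name to the shard file that stores it.
with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

# Every tensor stored in the shard whose SHA-256 matches upstream Phi-3:
unchanged = sorted(name for name, shard in weight_map.items()
                   if shard == "model-00002-of-00002.safetensors")
print(f"{len(unchanged)} tensors live in the byte-identical shard, e.g.:")
print(unchanged[:5])
```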

To be fair, I think it's cool that the author is trying new things, so I do respect that part.

But it would be good if the description were clearer about what this actually is.
The scores look misleading too, based on the results that other people have reported.

It's actually very simple: the author only fine-tuned the memory layer, to adapt the model to that layer.
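
If that's the case, it would look roughly like the standard freeze-the-base recipe. A minimal sketch, using a plain linear layer as a hypothetical stand-in; the actual "memory layer" is whatever the repo's architecture code defines, so this is only illustrative:

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# Freeze every base parameter so training never touches the original weights...
for p in model.parameters():
    p.requires_grad = False

# ...and optimize only the added module. "memory_layer" is a hypothetical
# stand-in here, not the repo's real implementation.
memory_layer = nn.Linear(model.config.hidden_size, model.config.hidden_size)
optimizer = torch.optim.AdamW(memory_layer.parameters(), lr=1e-4)
```

Under that recipe the frozen base weights never change, which would at least be consistent with shipping one of the original shard files byte-for-byte.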

Many people don't even take the time to look at the author's architecture code; they just throw around false claims. It's ridiculous.
