OK, you have a moderately complex math problem you need to solve. You give the problem to 6 LLMs, all paid versions. All 6 return the same numbers. Would you trust the answer?
Why would I bother?
Calculators exist, logic exists, so no. LLMs are a laughably bad fit for directly doing math; they are bullshit engines. They cannot “store” a value without fundamentally exposing it to their hallucinating tendencies, which is the worst property a calculator could possibly have.
It was about all six models getting the same answer from different accounts. I was testing it: over a hundred runs each, same numbers.
I’ve used LLMs quite a few times to find partial derivatives / gradient functions for me, and I know it’s correct because I plug them into a gradient descent algorithm and it works. I would never trust anything an LLM gives blindly no matter how advanced it is, but in this particular case I could actually test the output since it’s something I was implementing in an algorithm, so if it didn’t work I would know immediately.
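The kind of check described above can also be done directly, before ever running gradient descent: compare the LLM-supplied gradient against a numerical finite-difference estimate. A minimal sketch, where the function `f` and the “LLM gradient” are just illustrative examples, not anything from the original comment:

```python
# Verify a claimed gradient against a central finite-difference estimate.
# f and llm_gradient here are illustrative stand-ins.

def f(x, y):
    return x**2 + 3 * x * y

def llm_gradient(x, y):
    # Gradient an LLM might hand back for f: (df/dx, df/dy)
    return (2 * x + 3 * y, 3 * x)

def numerical_gradient(func, x, y, h=1e-6):
    # Central differences: (f(p+h) - f(p-h)) / 2h per coordinate
    dx = (func(x + h, y) - func(x - h, y)) / (2 * h)
    dy = (func(x, y + h) - func(x, y - h)) / (2 * h)
    return (dx, dy)

def gradients_agree(func, grad, points, tol=1e-4):
    # The claimed gradient passes only if it matches the numerical
    # estimate at every sample point.
    for x, y in points:
        gx, gy = grad(x, y)
        nx, ny = numerical_gradient(func, x, y)
        if abs(gx - nx) > tol or abs(gy - ny) > tol:
            return False
    return True

print(gradients_agree(f, llm_gradient, [(1.0, 2.0), (-0.5, 3.0)]))
```

Plugging the gradient into a working descent loop is an end-to-end test of the same idea; the finite-difference check just localizes the failure faster when the formula is wrong.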
That’s rad, dude. I wish I knew how to do that. Hey, dude, I imagined a cosmological model that fits the data with two fewer parameters than the standard model. Planck data. I’ve checked the numbers, but I don’t have the credentials, so I need somebody to check it out. Here is a verbal explanation of the model; the paper is on Academia.edu. It’s way easier to listen first before looking. I don’t want recognition or anything, just for someone to review it. It’s a short paper. https://youtu.be/_l8SHVeua1Y
How trustworthy the answer is depends on knowing where the answers come from, which is unknowable. If the probability of the answer being generated from the original problem is high because it occurred in many different places in the training data, then maybe it’s correct. Or maybe everyone who came up with the answer is wrong in the same way, and that’s why there is so much correlation. Or perhaps the probability match is simply because lots of math problems tend towards similar answers.
The core issue is that the LLM is not thinking or reasoning about the problem itself, so trusting it with anything amounts to assuming the likelihood of it being right rather than wrong is high. In some areas this is safe to do; in others it’s a terrible assumption to make.
I’m a little confused after listening to a podcast with… Damn, I can’t remember his name. He’s English. They call him the godfather of AI. A pioneer.
Well, he believes that GPT-2 through GPT-4 were major breakthroughs in artificial intelligence. He specifically said ChatGPT is intelligent, that some type of reasoning is taking place, and that the end of humanity could come anywhere from a year to 50 years away. If the fellow who imagined a neural net mapped on the human brain says it is doing much more, who should I listen to? He didn’t say some hidden AI. HE SAID CHATGPT. HONESTLY, NO OFFENSE. I JUST DON’T UNDERSTAND THIS EPIC SCENARIO ON ONE SIDE AND TOTALLY NOTHING ON THE OTHER.
Using a calculator or Wolfram Alpha or similar tools, I don’t trust the answer unless it passes a few sanity checks. Frequently I am the source of error, and no LLM can compensate for that.
It checked out. But is all six getting the same answer still likely incorrect?
No. Once I tried to do binary calculations with ChatGPT and it kept giving me wrong answers. Good thing I had some unit tests around that part, so I realised quickly it was lying.
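This is the kind of unit test that catches a wrong binary calculation quickly. The `bin_add` helper below is a hypothetical stand-in for whatever the real code was; the point is that asserting against known-good values exposes a bad answer immediately:

```python
# Hypothetical helper: add two binary strings, return the binary result.
def bin_add(a: str, b: str) -> str:
    return bin(int(a, 2) + int(b, 2))[2:]

def test_bin_add():
    # Assertions against values you can verify by hand.
    assert bin_add("101", "11") == "1000"    # 5 + 3 = 8
    assert bin_add("0", "0") == "0"
    assert bin_add("1111", "1") == "10000"   # 15 + 1 = 16

test_bin_add()
```

If an LLM-produced implementation (or LLM-produced answer) disagrees with tests like these, you know within seconds rather than after it has silently corrupted downstream results.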
But if you gave the problem to all the top models and got the same answer, is it still likely incorrect? I checked 6 models, a bunch of times, from different accounts. I was testing whether it’s possible, and I wanted others’ opinions. I actually had to check over a hundred times with each, and all got the same numbers.