Meet's Blog

Complaining about the quality of LLMs? You might have a skill issue

LLMs know a lot of stuff, but intuition is a uniquely human gift.

That is the argument we usually make to put ourselves above LLMs.

Do LLMs have (artificial) wisdom now?

But is that still true? LLMs can reason now, so can they really have intuition? Depending on what a reasoning model is trained on, its reasoning is ideally human thought put on paper. That gives LLMs a huge advantage, a real leap toward creating artificial wisdom out of their knowledge. Now, you might be thinking I am being idiotic, that LLMs and wisdom can't be put in the same sentence. Maybe that was true a couple of months back.

New Reasoning Models

But now models like DeepSeek-R1, o3, and o4 have given the reasoning-model ecosystem a head start.

And Google's Gemini 2.5 series of models has taken a huge leap in quality. Try it before complaining, before forming your opinion on this. Give it a shot.

I am not saying this series of models gives a human-like experience, but it is quite good at deep research, thinking, and reasoning. It might not give you straight-up excellent answers, but it will give you a direction, an intuition that only humans could develop (at least until now). If you think LLMs are stupid, it's a skill issue.

Prompting is a skill

Knowing how to prompt, and applying the output to your own research, is a skill we need to adapt to. No one is a master at it, but day after day it is becoming something to practise and iterate on.

Happy Coding :)