Great use is subjective. But I use them to better understand university lectures. I can have a real discussion, ask questions, ask for examples and so on. I had countless situations where web searches would not have helped me because the resources cannot do reasoning to explain intuitions. I’m also using it for coding. It’s awesome for boilerplate code. I also sometimes ask it to improve my existing code, so I can learn new coding practices and tricks from that.
None of these applications require the LLM to be correct 100% of the time. It’s still great value for me. And when I suspect that it’s wrong about something or it’s hallucinating or bad at explaining something, I’ll just do some web searches for validation.
You might not find it useful because you’re using it wrong, or simply because you have no application for the value it can provide. But that doesn’t mean it’s all bad. OP certainly doesn’t know how to use it. I would never even think about asking it about historical events.
So literally you use it for information retrieval hahahaha. I did use Copilot, Codium, and the JetBrains one for a bit. But I had to disable each one; the amount of wrong code simply doesn’t justify the little boilerplate it generates.
That’s not information retrieval. There is a difference between asking it about historical events and asking it to come up with its own stuff based on reasoning. I know that it can be wrong about factual questions and I embrace that. OP and many others don’t understand that and think it’s a problem when the AI gives a wrong answer to a specific question. You’re simply using it wrong.
It’s been a while since GPT-4 has spit out non-working bullshit code for me. And if it does, I immediately notice it, and it’ll still be a time-saver because there is at least something I can take from every response, even a wrong one. I’m using it as intended, and I see value in it. So keep convincing yourself it’s terrible, but stop being annoying about it when others disagree.