Narrow-purpose models seem to be the most successful, which supports the idea that general AI isn't going to emerge from LLMs alone. It's interesting that hallucinations are treated as a problem when they're probably part of why LLMs can be creative (much like humans). The goal shouldn't be to stop them entirely, but to control when they happen and to recognize when the model has gone off the rails. A group of different models working together and checking each other's output might work (and has probably already been tried; it's hard to keep up).
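For what it's worth, a toy sketch of that cross-checking idea in Python, with placeholder stubs standing in for real model calls (any actual setup would plug in whatever LLM API it uses):

```python
# Toy sketch: one model drafts an answer, a second model acts as an
# independent verifier, and the draft is only accepted if the verifier
# signs off. Both functions below are placeholders, not a real API.

def draft_model(prompt: str) -> str:
    # Placeholder for a creative, hallucination-prone generator model.
    return "The Eiffel Tower is about 330 metres tall."

def verifier_model(claim: str) -> bool:
    # Placeholder for a second, independently trained model asked only
    # to fact-check the claim, not to rewrite it.
    return "Eiffel Tower" in claim and "330" in claim

def answer(prompt: str, max_retries: int = 3) -> str:
    # Retry a few times; if no draft passes verification, flag it
    # instead of guessing. That's the "aware it's off the rails" part.
    for _ in range(max_retries):
        draft = draft_model(prompt)
        if verifier_model(draft):
            return draft
    return "[no verified answer]"

print(answer("How tall is the Eiffel Tower?"))
```

The point of the structure is that creativity and checking are separated: the generator is free to hallucinate, and the verifier decides when that's acceptable.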