The problem is that, in LLMs, words (symbols) are not grounded in experiences of the real world, so the meaning of a word must be inferred entirely from its relations to other words. Those meanings float in an abstract space, which leaves them prone to misinterpretation, to say nothing of hallucination.
Max Riesenhuber is the co-director of Georgetown’s Center for Neuroengineering.
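To make the "meaning from relations" point concrete, here is a minimal, purely illustrative sketch in Python. The vectors are made up for the example (real models learn embeddings with hundreds of dimensions from co-occurrence statistics); the point is only that similarity between words is defined by geometry in an abstract space, with nothing tying the vectors to real-world referents.

```python
# Toy illustration: word "meaning" as position in an embedding space.
# The numbers below are hand-picked for the example, not real model weights.
import numpy as np

embeddings = {
    "cat":   np.array([0.9, 0.1, 0.3, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.4, 0.1]),
    "stone": np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity is defined purely by the angle between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cat" comes out close to "dog" and far from "stone" -- but only because of
# how the vectors relate to each other, not because the model has ever
# experienced a cat.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))    # ~0.98
print(cosine_similarity(embeddings["cat"], embeddings["stone"]))  # ~0.17
```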