flebitz a day ago

> Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth.

I’d say LLMs may understand, better than we do, that belief and fact are tightly interwoven, precisely because they lack our grandstanding classification of information.

There is a tension here: truth can exist while fiction is widely accepted as truth, with humans unable to distinguish which is which, all the while believing that some or most of us can.

I’m not pushing David Hume on you, but I think this is a learning opportunity.

  • scrubs a day ago

    Pretty boys talking nonsense on TV (or social media), with all the implied grandstanding, is a problem. But good lord, we have to aim a lot higher than that.

more_corn 2 hours ago

I mean people suck at it too.

The only way we’ve learned is by referencing previously established, trustworthy knowledge. The scientific consensus is merely a system that vigorously tests and discards previously held beliefs when they don’t match new evidence. We’ve spent thousands of years living in a world of make-believe; we only learned to emerge from it relatively recently.

It would be unreasonable to expect an LLM to do it without the tools we have.

It shouldn’t be hard to teach an LLM that if you can’t verify something by reference to an evidence-based source, it isn’t fact.
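
A minimal sketch of that rule, purely as an assumption about how you might wire it up (the tiny EVIDENCE set and the exact-match lookup are hypothetical stand-ins; a real system would use retrieval over a curated corpus plus an entailment check):

    # Sketch of "no evidence-based source, no fact".
    # EVIDENCE stands in for a trusted corpus; exact-match lookup
    # stands in for retrieval + entailment checking.
    EVIDENCE = {
        "water boils at 100 c at sea level",
    }

    def label_claim(claim: str) -> str:
        # Only assert the claim as fact when a supporting source is found.
        if claim.lower() in EVIDENCE:
            return f"fact (supported by evidence): {claim}"
        return f"unverified (no evidence-based source found): {claim}"

    print(label_claim("Water boils at 100 C at sea level"))
    print(label_claim("The moon is made of cheese"))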

fuzzfactor a day ago

Naturally, language models can leverage fiction more strongly than fact, much as people fluent in natural language have done since the beginning of time.

Often, the more fluent the speaker, the more easily the fiction flies under the radar.

For AI, this will likely be most true when they are as close to human as possible.

Anything less, and the performance will be judged lower by some opinion or another.

mock-possum a day ago

Well no, of course not - people seemingly can’t, or don’t care to, do that; and LLMs can only generate what they’ve seen people say.

It’s just another round of garbage in garbage out.