karlperera 14 hours ago

This is a non-invasive approach from researchers at the University of Texas at Austin that uses fMRI scans and a language model to reconstruct the 'gist' of what a person is hearing or thinking. It doesn't get a word-for-word transcript, but it captures the essence. The interesting part is that it worked even when participants were told to actively resist by thinking of other things. Raises some pretty significant privacy implications for the future.

myaccountonhn 5 hours ago

A classic case of scientists not stopping to ask whether they should, just because they could.

This tool in the wrong hands is absolutely terrifying.

  • otikik 5 hours ago

    I am not a particularly pro-AI person myself; I definitely have misgivings about the technology as a whole.

    That said, any tool in the wrong hands can do a lot of damage. People can do a lot of damage with a simple hammer.

    A friend of mine's brother recently had an accident and became tetraplegic. Any advancement in direct brain-to-computer interfaces has the potential to improve his quality of life immensely.

    • karlperera 4 hours ago

      As with most technology, there is both a potential for harm and a potential for good. Knowledge can be incredibly dangerous in the wrong hands, but we can't exactly unlearn things.