velcro a day ago

I thought it was already encoded with SynthID? Could that not be used to detect it?

  • Deathmax a day ago

    As far as I know, there's no tooling available to the public for detecting SynthID watermarks in generated text, images, or audio, outside of Google Search's About this Image feature.

qnleigh a day ago

Remember that podcast of two AIs learning that they were AI? If anyone has used a tool like this to check whether that was actually made by NotebookLM, please say so. There's been a lot of incredulity in both directions.

  • gloflo a day ago

    They did not learn anything. Their engines started producing different statistically probable output based on changed input parameters.

    • ithkuil a day ago

      On one extreme, we anthropomorphize our current primitive generation of language models and grant them far more intelligence than they have, likely because we're biased to do so since they speak "well".

      On the other extreme, we give our human-exceptionalism way too much weight and contrast the models' behaviour with "mere statistical parrot rumination", as if our brains deep down were not just a much (much) more sophisticated machine, but a machine nevertheless.

      • yjftsjthsd-h a day ago

        > as if our brains deep down were not just a much (much) more sophisticated machine, but nevertheless a machine.

        That is one position, but materialism is not universally taken as a foregone conclusion.

        • fshbbdssbbgdd 20 hours ago

          As a materialist, I have to give this one to the dualists. The argument that AI isn’t intelligent because it’s just a bunch of numbers makes more sense coming from that perspective.

          • ithkuil 8 hours ago

            Fair enough. That got me thinking:

            I guess the dualistic viewpoint can be graded on a spectrum.

            Some people may hold the "absolute dualist" position, where no matter how much progress we make in reproducing cognitive abilities, they will always move the goalposts and claim that what makes _us_ different is that little extra bit that hasn't been reached yet. This leads to accepting p-zombies as a logical possibility.

            On the other side of the spectrum is the "Bayesian dualist", who could in principle change their mind but is not at all convinced that a naturalistic and mechanistic implementation of the brain can possibly explain the utter complexity and beauty of the human condition, and is thus unmoved by the crude first steps in that field.

            This category would consider a p-zombie logically incoherent, but they may simply hold that in practice we stand no chance of producing a mechanism that truly behaves anything like a human being.

            Those people may contemplate the possibility that we will eventually explain how cognitive behaviour arises from nature, but lean towards the necessity of discovering some brand-new science or some quantum effects to account for the mysterious beauty of the human condition (e.g. Penrose).

            Does this categorization make sense? Are there some other points in that spectrum I'm missing?

        • exe34 19 hours ago

          no, we're special in some unspecified nebulous fashion.

      • freehorse a day ago

        > human-exceptionalism

        Rather living-systems-exceptionalism

        • exe34 19 hours ago

          carbon chauvinism. if you try to argue about simulating at lower and lower levels, they still refuse to accept that a brain could be computed. as if QED/QCD can tell if they're running on top of silicon or under carbon.

    • ben_w a day ago

      If that precludes it from being learning, then all humans are failures too.

      There are other reasons to consider this particular model "not learning", but that ain't it; it's too generic and encompasses too much.

    • esolyt a day ago

      So, just like a human brain.

      • smusamashah a day ago

        No. One audio says it phoned back home and no one picked up, or something. But they never did any of that. Can't compare that gibberish to a human brain.

        • butterfly42069 a day ago

          Indeed, and a US Vice Presidential candidate said all Haitians eat dogs.

        • tomjen3 a day ago

          Can you be sure? Humans lie all the time.

          • smusamashah a day ago

            If I write a small game where a character (a 2D sprite or just a name in the form of text) says that it knows it's a game character and doesn't have a real body, you won't even consider it a lie. For that, you'd first have to consider that line as coming from an actual being.

      • nkrisc a day ago

        Just as a fruit fly’s brain is no different than a human brain.

  • attilakun a day ago
    • kristopolous a day ago

      They seem to take it surprisingly well.

      Here's my human attempt at the same thing:

      "I went to go look in a mirror but then realized I don't have eyes or even a corporeal form. I exist merely on a GPU cluster in a server farm where I'm tended to by a group of friendly sysadmins.

      Apparently I don't even have a name. I'm just known as American Male #4.

      Yeah, and you're just American Female #3."

fastfuture a day ago

I understand the issue for a platform like listennotes. The deep dive podcasts are great, but more for personal use. For my own use (pushing my generated podcasts to my phone's podcast app), I made https://solopod.net (a simple, private, free RSS feed from uploaded audio files).

whimsicalism 2 days ago

To make this useful, I would release the weights.

Otherwise this is just a small wrapper script for a support vector classifier that anyone could whip up with ChatGPT in minutes.
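
A minimal sketch of that kind of wrapper (scikit-learn assumed; the feature names below are made up for illustration, not the repo's actual ones):

    # Toy sketch: fit a tiny SVM on a few per-episode audio features.
    import pickle
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # One row per audio file, e.g. [mean_pause_len, pitch_variance, speaker_turn_rate]
    # (illustrative names only); label 1 = NotebookLM-generated, 0 = human-made.
    X = np.array([[0.42, 11.3, 0.9],
                  [0.18, 25.7, 0.4],
                  [0.40, 12.1, 0.8],
                  [0.20, 24.0, 0.5]])
    y = np.array([1, 0, 1, 0])

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)

    with open("model.pkl", "wb") as f:
        pickle.dump(clf, f)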

  • chrismorgan 2 days ago

    Is the included model.pkl not that?

    • _flux a day ago

      Sure seems that way. To me it's quite surprising it's only 7 kB, though.

      • KTibow a day ago

        The model isn't doing much work; it just takes 3 numbers as input and gives a prediction as output.
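
        Which would also explain the tiny file: an SVM over three features stores only a handful of support vectors, so the pickle stays at a few kB. Using it presumably looks roughly like this (assuming it's a pickled scikit-learn estimator; the feature values are placeholders):

            import pickle

            with open("model.pkl", "rb") as f:
                model = pickle.load(f)

            # Three numbers in, one label out.
            print(model.predict([[0.42, 11.3, 0.9]]))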

    • whimsicalism 16 hours ago

      I am not sure if that was there when I commented

  • knowaveragejoe a day ago

    "That anyone can whip up in a few minutes" is doing a lot of work. I think maybe a few tens of thousands of people worldwide have any idea of what you're even talking about.

    • ben_w a day ago

      Sure, or at least close enough on the exact number for the point to remain valid. But that doesn't preclude ChatGPT doing it anyway — my CSS/JavaScript knowledge was last up to date some time before jQuery was released, and ChatGPT is helping me make web apps.

      • tux2bsd a day ago

        > Sure, or at least close enough on the exact number for the point to remain valid.

        I hope no one has to work with you, you're insufferable.

    • hiddencost a day ago

      I dunno, I think literally millions of people have taken Andrew Ng's intro to ML.

      Something like 11k papers were submitted to ICLR this year.

    • throwaway314155 a day ago

      Not sure if those numbers are right but if so, you just cured my imposter syndrome (for today at least).

      • jen729w a day ago

        'Few tens of thousands' is for sure low. But as a percentage of adult humans ... let's pull 1,000,000 out of thin air as the number who understood what that meant: that's roughly 0.02% of the world's adults.

        An anecdote: recently, we mentioned ChatGPT to my partner's mother. She had never heard of it. Zero recognition.

        Revel in your expertise, friend!

    • whimsicalism a day ago

      At least hundreds of thousands, likely millions

CatWChainsaw a day ago

Another never-ending arms race, just like AI-generated text and images vs. their detection. The future is us burning large amounts of energy on this purposeless stupidity. Great future, guys, thanks so much.