
Research shows people who rely on computer algorithms for assistance with language-related, creative tasks didn’t improve their performance and were more likely to trust low-quality advice


  1. Important to note that the advice given is not algorithmically generated. Instead, the same advice is phrased differently for two groups of participants. From the study:

    > In the instructions, participants in the social condition saw an advice source that read: “the consensus of other people who have completed this task.” Participants in the algorithmic condition saw an advice source that read: “the output of an algorithm that has been trained to solve similar word association tasks.” The specific delivery format of the advice was varied across the questions.

    Hence, this study is not about the quality of computer algorithm results, but about how people find algorithmic advice more trustworthy than human advice even if the advice is not necessarily more helpful.

  2. This is a really important topic, but the coverage discussing the study unfortunately extrapolates way too far over the actual science.

    This was *not* a creative task as used, but rather a test that *has been used* to measure creativity when executed very differently. Simplifying, it’s the difference between:

    – Consider the phrases “annoyingly loud,” “frustratingly quiet,” and “obnoxiously nasal.” What would be another phrase that fits this pattern?


    – Consider the phrases “annoyingly loud,” “frustratingly quiet,” and “obnoxiously nasal.” Is “happily orange” a good fit for this pattern?

    (There’s more to the methods than this, but that’s really what it boils down to.)

    The observation was that people accept bad or completely incorrect AI-generated suggestions with surprisingly high frequency, and some speculation is offered about contributing factors like trained complacency. That’s a really interesting and provocative topic all on its own, given we usually laud AI as helping *improve* human decisions. But the second task is *not* the same as the first: humans doing the second thing are doing a much less intensive, much less skilled, much less *engaging* task than people doing the first, so extending these observations all the way to “writers are going to just start tab-complete auto-filling their novels and we’re never going to see high-quality original work with mass market appeal ever again” is way too much of a reach. For now.

    The thing is, we *do* need to start asking questions about that, because we’re truly at a critical precipice in the application of AI that’s going to render a lot of traditionally “skilled/creative, non-automatable work” obsolete, even if it’s just via cost efficiency at first.

    We already have AI that can generate visual art which beats humans in some contests. Generated writing samples, at least constrained ones, can already fool a lot of skilled readers. Synthesized voices can already pass for the people they’re modeled against in a lot of situations.

    And that’s just *the tip of the iceberg*. When you look at what technologies like GPT-3/GPT-4 can do, it’s creepy; and when you see how accessible things like OpenAI are poised to make this kind of technology, we’re almost certainly going to hit an explosion in creative and skilled tasks removing most of their human component in the “few years to couple of decades” timeframe.

    Sociologists have long been evaluating what automation’s growth into “safe,” skilled professions is going to do to employment and economic stability, but not nearly as much attention has been paid to the downstream consequences for our collective capacity for novel, creative work.

    It’s completely plausible that, ten years from now, some or even most of the most popular music will be primarily or entirely created by AI. Once that’s the case, what happens to the hopes and dreams of creative people hoping to “make it big,” now hollowly told that they still can and should be creative but need to invent their own internal motivations? Would an author still want to slave over the details of a new book for years after seeing it drowned out by dozens of computer-generated derivative works within weeks, days, or even hours of release?

    Those are the kinds of questions that will inform what ubiquitous availability of “good enough” computer-generated creative work does to original human creativity. It’s big, it’s scary, it’s important — but it’s not really what this study was looking at.

  3. Science explains why Millennial writers have yet to surface an original idea as they near the middle of their lackluster professional careers.