New research reveals people automatically downgrade content they believe is AI-generated, even when humans wrote it.

    Why it matters: As AI writing tools become more sophisticated, understanding public perception and bias against AI-created content is crucial. This research exposes a significant credibility gap that could impact AI adoption across industries.

    • The study comes as organizations increasingly explore AI for content creation and storytelling.

    Key finding: ChatGPT-written stories performed nearly as well as human-written ones, but people rated stories lower when told they were AI-generated, regardless of actual authorship.

    “People don’t like when they think a story is written by AI, whether it was or not.”

    Haoran “Chris” Chu, Ph.D., professor of public relations at the University of Florida

    The process:

    • Researchers created two versions of each story, one written by AI and one by a human
    • Participants rated how engaging the stories were while researchers varied whether each story was labeled as human-written or AI-generated
    • The study measured “counterarguing” (how much readers mentally push back against a story’s message) and “transportation” (how fully readers are drawn into the story world)

    Keep in mind: AI still lags behind humans in creating truly engaging narratives that transport readers into the story world.

    Real-world impact: The findings have significant implications for:

    • Public health communications
    • Content marketing
    • Creative industries
    • Public trust in AI-generated content

    The bias against AI authorship suggests organizations may need to carefully consider how they disclose AI use in content creation.

    TL;DR

    • AI can write coherent and logical stories that nearly match human quality
    • People show strong bias against content they believe is AI-generated, regardless of actual authorship
    • The findings could impact how organizations implement and disclose AI use in communications

    Read the Paper
    Can AI tell good stories? Narrative transportation and persuasion with ChatGPT
