I’ve noticed a surge of interest in jazz. Vampire Weekend’s new album, Only God Was Above Us, features many riffs within structured songs that give it a jazzy feel. Artists like Berlioz are gaining popularity for their fusion of house, pop, and jazz into subtle club music. With all the time I spend with AI tools, I sense a similar chaos, and I think we can learn from art like jazz or the works of Pollock to find a sense of beauty and comfort in all the noise. However, as with every new iteration, the old heads come to reject modernity and embrace tradition, which I don’t always agree with.
The Kids are Cheating
A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. - Students Are Likely Writing Millions of Papers With AI, Wired.
As an avid user of AI tools, I’ve become a bit of a skeptic when it comes to restricting AI use in education, as well as how these detection tools are even able to identify the writing. It’s true that AI-generated copy has a pattern to the way it generates text. I’ve noticed it avoids pronouns, always referring to proper nouns by their full names. It’s also quite wordy; even when I tell it to stop yapping, it’ll continuously yap away. First, I wanted to know how exactly Turnitin is doing this work.
According to Google’s AI search:
Turnitin's AI writing detection system uses a deep-learning architecture called the transformer model to identify statistical patterns of writing generated by AI. The model breaks submitted content into smaller chunks of text, then overlaps those segments to help capture each portion in context. The portions of text are then run against the AI writing indicator in Turnitin, which analyzes factors like sentence structure, vocabulary choices, and semantic patterns. The model uses the average scores of all the segments within the document to generate an overall prediction of how much text has been generated by AI.
Before we get into it, does that sound AI-generated to you? Look at how it starts each sentence with “The”. It’s subtle but when read aloud, you can hear something’s off. It’s like a robot is talking to you. Because it is.
The key point is how the model breaks down the content into smaller chunks of text. At the core of language models is natural language processing (NLP). Since computers only understand math, NLP enables computers to understand, interpret, and generate human language. Words are broken down into vectors that capture their semantic meaning and context, allowing for more accurate processing and analysis of textual data. So tools like Turnitin use the model against itself, training it on sets of human language and AI-generated language to teach the machine to tell the difference. But if you understand this, I imagine you can make enough tweaks to remove this embedded watermark, which is what I do. And that’s where things get fuzzy. If I summarize an article in my own words to make a point, then it’s mine, but if I ask Chat to do it for me and then tweak the output, is it cheating? Is all the code being written today with the assistance of tools like Copilot considered cheating?
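The pipeline described above (chunk the document, score each chunk, average the scores) can be sketched in a few lines of Python. To be clear, this is a toy illustration of the shape of the approach, not Turnitin’s actual system: the real per-chunk scorer is a trained transformer operating on word vectors, while the one below is just a hypothetical stand-in that counts a few stereotypical AI tells.

```python
# Toy sketch of a chunk-and-average AI-writing detector.
# The scorer is a stand-in heuristic, NOT a real trained model.

def overlapping_chunks(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Break text into overlapping windows of words, so each portion
    is scored with some surrounding context (as the Turnitin summary describes)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def ai_score(chunk: str) -> float:
    """Hypothetical per-chunk classifier returning 0.0 (human) to 1.0 (AI).
    A real detector would embed the chunk as vectors and run a model
    trained on human vs. AI-generated text; this just counts tell words."""
    tells = ("furthermore", "moreover", "in conclusion", "delve")
    hits = sum(chunk.lower().count(t) for t in tells)
    return min(1.0, hits / 3)

def document_prediction(text: str) -> float:
    """Average the chunk scores into one document-level estimate,
    mirroring the 'average scores of all the segments' step."""
    scores = [ai_score(c) for c in overlapping_chunks(text)]
    return sum(scores) / len(scores)
```

The interesting part is the averaging: a document can contain a single heavily AI-flavored paragraph and still come out with a low overall score, which is one reason detectors report percentages rather than verdicts.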
Embracing Modernity
Part of the triumph of technology is the ability to do more. Let’s think about book publishing. The printing press allowed for mass distribution and took away the grueling work that came with the physical act of writing out an entire book. The internet took the challenges of paper supply away and allowed for an even wider distribution, again taking away the grueling work of having to obtain and print copies. Writing with AI frees up the time spent just sitting behind a blank page in an attempt to convey your thoughts into words. Are skills lost in the process? Yes, but new skills are gained. Lazy writing will always be lazy writing, and AI won’t fix that. A prompt is still needed. An idea is still needed.
Here’s how I wrote out this very post with the help of ChatGPT:
First, after reading the article that led me to want to write about this topic, I fed it to Chat and prompted:
New article. Using the attached post, put together a paragraph with the following structure:
3 sentences summarizing the entire article
1 sentence on the positive sides of AI writing
1 sentence on the negative sides of AI writing
I read through the response and made extensive edits to improve the style and get a better sense of how I wanted to write my message.
Then I asked Google, whose new AI search feature summarizes an answer using the sources surfaced in a normal Google search, like Reddit:
Turnitin ai checker how does it work?
This gave me the response above which I posted in full to show how AI writes without prompting for style, revealing the patterns.
Next prompt for Chat:
Now 2 sentences on how natural language processing works, with a focus on how words are broken down into vectors.
I wanted to explain the method behind AI detection tools with a focus on how it breaks down words into math, the basis for NLP.
How does an AI detector reverse engineer NLP to detect AI writing? Keep the answer to 2 sentences.
This allowed for the first draft of the post to be done in less than an hour, freeing up time for me to think about structure and style, the art of writing. Notice how I would limit the response to a set number of sentences. This is where prompt design comes in; it lies at the heart of AI use. While I understand the fear that students will have AI do their work and thus learn nothing, the true skill to be gained for a future with AI is prompt design. While there are plenty of examples out there, this skill can only be gained through practice.
This is where regulation can make a difference. One example would be to have all AI use produce an audit trail, which would allow readers to see how AI was used in a work and decide for themselves how much is real or fake. In our short time with the internet, we’ve already produced all sorts of standards for privacy and terms of use. While not perfect, they’re a step in the right direction. Another idea would be for schools to rethink the assignments they give, striving for more open-ended questions that allow students to think more creatively about their responses. Having AI produce such a response requires extensive prompting around an original idea, which inadvertently has the student think about the topic and learn by doing. Or we could just ban the use altogether and have our students fall behind. That never works.
Thank you
In my research, I’ve found that communication, in the form of prompt design, is the future of how the general public will use AI. Whenever I ask a friend about ChatGPT, they usually say something along the lines of “Oh yeah, I’ve heard of it, used it a bit, and just stopped.” ChatGPT and other model UIs, as they currently stand, are too broad. Like a Google search, they’re open-ended and provide a lot of unnecessary information when unrestricted. I see it flourishing like the App Store did. Apple provided the basis for apps, which led to an entire market economy built around the App Store (now the subject of antitrust suits, lol). We are still very early. As always, if you have any questions, want more explanations, or strongly disagree, comment below, follow me on Twitter (X), follow me on Instagram, or shoot me an email.
Disclaimer: These views are my own and do not necessarily reflect the views of any organization with which I am affiliated. This article is written with AI assistance.
In the case of jazz, I’m a conservative: I love bebop, cool, and hard bop; anything after the mid-60s bores me. I love GPT though.