AI Text Generation
For school, I had a blog assignment where I needed to use an AI tool to generate a 250–300 word post, then go in and edit it myself. I used ChatGPT to create a short blog post about the importance of copy in social media. At first glance, the response felt polished and usable. It had a clear structure, decent flow, and touched on relevant points like engagement and brand voice. But once I started editing it in Google Docs using Suggesting mode, the limitations became more obvious.
The biggest thing I noticed was that while the writing sounded correct, it lacked specificity and depth. It didn't include real examples, data, or sources to support its claims. Everything was broad and safe. Nothing was technically wrong, but nothing felt especially insightful either. This made the editing process less about fixing obvious errors and more about rewriting the post to sound like an actual person with a clear perspective. Adding credible sources and links also forced me to think more critically about what was being said instead of just accepting it at face value.
This experience showed me that AI can be a helpful starting point, especially for brainstorming or getting past writer’s block. However, I don’t fully trust it to produce final, publishable content on its own. The risk is not always obvious misinformation. It is the subtle lack of originality and authority. Without human input, the writing can feel generic and disconnected.
Overall, I see tools like ChatGPT as assistants, not replacements. They can speed up the process, but they still require careful editing, fact-checking, and personalization to be truly effective.
