ChatGPT is undoubtedly a groundbreaking tool.
But as someone who uses it almost daily, I’ve started noticing a few areas where it still falls short.
Here are some of those limitations, based on my experience.


1. Image Generation Frustrations – “My name came out wrong”

When I ask GPT to generate an image, it often gets names or text wrong.
For instance, I requested the name “Uyeol” to be included in an image, but it came out as “Ual.”
Compared to text generation, image creation takes longer, consumes more data, and often yields underwhelming results.

This likely stems from the fact that GPT is fundamentally a language model, not an image-first system: the image generator paints letters as visual shapes rather than composing them as typed characters, so an unusual name is easy to mangle.


2. Structural Limitations – Everything is horizontal?

I once experimented by asking GPT to generate a “3×3 word puzzle,” and it struggled.
Requests that involve visual structure or spatial layout expose a basic constraint: because the output is a linear stream of text, GPT is not well suited to puzzles, diagrams, or anything that requires two-dimensional arrangement.

Simply put, GPT still lacks the ability to handle “structured language” that needs to be seen, not just read, though forcing a monospaced layout can partly compensate, as the sketch below shows.
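For what it’s worth, one partial workaround is to push the structure into fixed-width text. Below is a minimal Python sketch of the idea; the grid contents are made-up placeholders, and the point is only that padding every cell to the same width keeps rows and columns aligned even in a purely linear text stream.

```python
# A minimal sketch: build a 3x3 grid as fixed-width text, which
# survives a line-by-line text output format. The words below are
# placeholder content, not an actual puzzle.
grid = [
    ["cat", "arc", "tin"],
    ["ape", "rye", "ink"],
    ["toe", "end", "nun"],
]

# Pad every cell to the widest word so the columns line up vertically.
width = max(len(word) for row in grid for word in row)

for row in grid:
    print(" | ".join(word.ljust(width) for word in row))
```

Asking GPT to answer in exactly this kind of monospaced block is often the difference between a usable grid and a jumbled one.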


3. The Freshness Problem – A staircase-style update system?

Sometimes GPT explains a policy or law based on outdated information.
Even when a regulation has been recently revised, it still gives the older version.

This happens because GPT doesn’t receive real-time updates; its knowledge advances in discrete training stages, through what we might call “staircase-style updates,” with each step ending at a fixed cutoff.
For time-sensitive topics, external verification is still necessary.
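To make the “staircase” idea concrete, here is a small, purely illustrative Python sketch. The cutoff date and function are hypothetical placeholders, not parameters of any real model; the point is simply that anything revised after the last training step is invisible to the model and should be checked elsewhere.

```python
from datetime import date

# Hypothetical placeholder, not the actual cutoff of any specific model.
MODEL_CUTOFF = date(2024, 6, 1)

def needs_external_check(last_revised: date) -> bool:
    """True if a topic may have changed after the model's training cutoff."""
    return last_revised > MODEL_CUTOFF

# Example: a regulation revised after the cutoff should be verified
# against an official source rather than taken from the model.
print(needs_external_check(date(2025, 1, 15)))  # True -> verify externally
```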


4. Gaps in Local Knowledge – Confusing cities?

When I mentioned “Gammunguk,” an ancient statelet in the city of Gimcheon in Korea, GPT repeatedly associated it with the neighboring city of Mungyeong.
This often happens when digital records or structured data are sparse or inconsistent.

Especially with local stories, oral history, or newly uncovered archaeological findings, GPT often lacks clear, region-specific context and can mislead readers by drawing the wrong associations.


5. When Lesser-Known Facts Get Lost – Goguryeo vs. Goryeo

I explicitly referred to the Cheolli Jangseong (Thousand-Li Wall) built by the ancient kingdom of Goguryeo,
but GPT conflated it with the later Goryeo dynasty’s wall of the same name, which is more commonly referenced online.

This reflects a deeper problem: GPT tends to prioritize frequently mentioned data, even at the expense of precise, lesser-known facts.
When the model favors frequency over accuracy, niche truths risk being overwritten.

Just because something is widely cited doesn’t make it true, and that is a critical limitation of data-driven AI.


🧩 Final Thoughts

GPT is undeniably a powerful assistant.
But to make the most of it, we as users need to recognize its blind spots — and approach it with both curiosity and caution.

It’s far from perfect, but if we keep using it thoughtfully, sharing feedback, and highlighting its flaws,
maybe that effort alone makes the whole journey worthwhile.


💬 Coming up next:
In the next post, I’ll be sharing some of the things I love about ChatGPT and the potential I see in large language models. Stay tuned!


☕️ If you found this post insightful or helpful,
you can support me over at Buy Me a Coffee.
Every cup fuels more honest writing like this. Thank you!
