
The Scariness of ChatGPT

by happybeing, 19 January 2023

Solid on Safe image by @dimitar


ChatGPT Arrggghhh!

A Brief Experiment

I finally got around to trying ChatGPT to do some research in a field I know something of from general discourse but not in very much detail. I wanted to learn more.

ChatGPT’s responses were helpful and extremely plausible, and I’m not aware of any errors yet, but I realise that I have no way of knowing without spending much more time and effort.

It seems very likely to me that most people in most potential use cases will not bother to verify. And if you think about it, the whole point is to save that work anyway.

Even knowing that the responses might be unreliable, I notice that I’m very reluctant to go and check. It’s so much easier just to assume that you are receiving unbiased and reasonably accurate information when it sounds so plausible and you have no concept of how this incredibly convincing technology works… or how it might fail, or how it might be used to mislead and control you. If you knew those things you might think twice.

In practice I decided I couldn’t trust the ChatGPT responses (in this case, book and article recommendations on a particular topic, with a specific use case in mind). It would be easy to check whether the references existed and, to an extent, whether they were reliable and reasonably appropriate. But not whether they really met the criteria I was most interested in, for my particular use case. So I also did my own normal online research process (web search, read, select, refine and repeat).

The result is that I have two sets of results and no overlap! (It’s a big field so I don’t read much into that, but it is something I’ll bear in mind.) Two sets: one I know how much to trust, and one I don’t. I may look at the ChatGPT suggestions in more detail, but then its results become little more than web-search starting points and won’t save me anything like as much time as if I succumb to the temptation to just trust them. That, I think, is one dangerous aspect of this technology, a vulnerability for humans who use it, and I’m sure there will be more.

If I won’t trust its results what use is ChatGPT to me?

I expect that I could find ways to work with this and use it wisely, but that is extra work and likely to be difficult to validate. In practice the best approach may simply be to avoid the temptation to use it in any way that relies on trusting its output. This would limit its usefulness severely, and I'm doubtful many will do this effectively, including myself.

Is ChatGPT really dangerous?

I'm reminded of those who say they are ok with advertising because they are aware of it and so aren't affected. Or how people continue to use social media and other platforms either ignorant of the harm to themselves and others, or in spite of that knowledge. Humans are easily led, in my experience, and being aware of this is why I'm cautious about this technology: I mute all TV ads, use an ad blocker, avoid apps on my phone, etcetera.

  • Having used ChatGPT once, the dangers I saw in this tech have become more apparent and more vivid.

Yet I'm only delving into a small part of the potential difficulties this technology seems likely to cause.

“But it’s just a tool” is an obvious but, I believe, ill-considered response. The same could be said of any online service. Consider Facebook, and look how easily it can be used against anyone using it, even though we know how it works and who is in control of the algorithm, who might be behind certain memes and misinformation factories, and of course the ever-evil targeted Facebook advertising.

Sure, you can try to learn how to use Facebook, but you are swimming against a current designed to push you where someone else decides. I believe we know who it is that gets this “tool for connecting people” to work for them, and it isn’t the people who signed up to find and make friends.

I think it’s still possible for me to identify most misinformation and block ads online, but that becomes increasingly difficult as the tools are hardened for their insidious purposes. So now I have to avoid certain tools altogether (such as Facebook and Twitter). I don't believe that either can be used safely by me any longer.

When you think about ChatGPT, remember what you’ve witnessed over the years at Facebook and Twitter. Imagine when ChatGPT has ‘sponsored’ results, when it is being controlled to provide biased or manipulative responses.

Many of us noticed the changes to the Facebook feed, and things like the loss of ‘Latest tweets first’ on Twitter, but our complaints were ignored, or acted on by regulators so poorly that the drift continued. With ChatGPT, how will we or the regulators even know, and even if they do, what difference will it make?

What you don’t understand and control will control you

I think those who will really be able to use ChatGPT as a tool are going to be those who control it, or who can pay to have it control, manipulate or influence others.

I do see the enormous positive potential of an ethically sound family of ChatGPT-type tools. Of course I do, but I very much doubt we will get that, and the mass population almost certainly will not, unless something very different happens over the next few years compared to how technology has been playing out not just recently but for decades.

This tech will be inherently difficult to decentralise which makes it far more challenging to democratise than ‘simple’ storage or communications!

So tread carefully, and be aware of the potential for these tools to be the worst Kool-Aid we’ve ever seen.

DWeb Blog is live on the web at dweb.happybeing.com and on Safe Network (Fleming and alpha2) at `safe://dweb` (to view it first Join the DWeb).
