Man-Machine: Is AI a threat to humankind?

Yes, this kind of thing is found in lots of fields. It's one of the reasons manned spaceflight and manned aviation still exist. Human judgement and human innovation are still highly regarded in emergencies.

It seems to me that in that scenario he was supposed to notify his superiors, and he didn't trust his superiors. Was there a good reason for that? Anyway, the AI failsafe already exists: you need that superior to "push the button".

However, I can imagine other scenarios. Some of the science fiction I read has "space battles" that require decisions faster than humanly possible. The computers make those decisions - but only after they are "armed".
 
Like every tool ever, it's a force multiplier. Those who have the skill to wield it will do more of whatever it is that they do; those who do not will fall behind. Our cursed species can't even be trusted with a stick or a rock, so in the end it will probably just accelerate us all towards Ragnarök, in spite of our good intentions, as it has always been. ;)
 
I was busy making a spreadsheet comparing electric vehicles. I had been working on it for about 20 minutes when my sister came by, opened an app and asked it to make the spreadsheet. Two minutes later it was done, but for some tweaking.

I have already thought of some more spreadsheets to make. Maybe one comparing the cost and potency of vegan omegas.

BTW the app's name is Perplexity.

"Create a spreadsheet that compares electric vehicles on range, cost, charging time and battery capacity"
 
For sure there are use-cases where AI tools are excellent.

However, AI also has costs that are largely hidden from us at the moment. All these AI companies and apps that have launched recently need to start making money at some point. A lot of this is currently free to the end user because the companies want people to start using the apps and get hooked. Then, some time in the near future, when the venture capital funders start getting impatient and demanding a return on their investments, the companies are going to have to put their apps behind paywalls.

And then there are the costs to the environment as well. The data centres need electricity and water for cooling. But this isn't like conventional IT. The electricity and cooling requirements for AI workloads are on a whole different level.

So, I think we will have to stop pretending we have endless resources. Hopefully, the technologies that win in the end will be the ones that provide real value to users while optimising resource usage and lowering costs, for example in health care and research. And not the ones that irresponsibly waste megawatts on creating deepfakes or on stealing from hardworking, talented human artists to create "AI art".

And maybe, just maybe, our elected representatives need to finally step up to make sure the AI quota of resource usage goes to beneficial use-cases, and that the people they represent can still afford their electricity bills.
 
I think we should be more concerned about the risk that AI will take over and maybe even cause human extinction. It seems to be the No. 1 potential cause of human extinction this century. Engineered viruses/pandemics I've also seen put in the top two, with natural causes and climate change far behind. I found the arguments convincing.

It might be tempting to dismiss this as sci-fi, but AI and risk experts are very concerned, and even the people who work on producing these systems are open about the extinction risk.

I personally have read and listened to a fair bit on this. The idea that AI will be more intelligent than humans, more powerful, and capable of taking over and destroying us if it wanted to seems plausible. Whether it would want to do so I am less sure about. But why risk it?

Human extinction is particularly bad if you think that in the future humans might live for a long time and 4 trillion humans might get to live, vs 8 billion now. In that case, anything with a 0.1% chance of extinction means a 0.1% chance of 4 trillion people never getting to be born and live their lives, which is the same expected value as something that will definitely kill 4 billion people.
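To spell out the arithmetic behind that equivalence (a rough expected-value sketch, taking the 4 trillion figure as given):

0.1% × 4 trillion = 0.001 × 4,000,000,000,000 = 4,000,000,000 lives lost in expectation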

And unfortunately, most serious people who have looked at this give extinction due to AI this century at least a 1% or maybe even a 10% chance!

And yet people are more worried about the impact of AI on jobs, which ought to be a minor issue by comparison.

AI may turn out to be very good, but we are rolling the dice on better lives and corporate profits vs losing everything.
 
I'm still a believer in AI.
Do you think Apple will create a new version of the AirPod, call it Babel Fish and make it look like a small yellow leech?

 
ChatGPT scares me. It just takes the human factor out of writing, and I fear an even greater erosion of creativity overall, not to mention taking people's jobs away. Will this replace grant writers, fiction writers, PR people? How then do those people earn a living? We may need some type of universal basic income to help people squeezed out of jobs because of AI, especially older workers who may not be able to learn new jobs, or whom companies may not want to pay.
Love your compassion, PTree5. 😊
 
The fact, for example, that people can use AI to create child abuse images using the image of any child, from their phone, is a sign to me that it's not good. People have had the books they've written, spent years writing, stolen by strangers who used AI. People will lose jobs, people will be abused by strangers (abuse is not always physical), and I suspect people will end their lives when such abuse happens to them, be it sexual abuse, blackmail, job loss, fake relationships with AI, etc. People are flesh and bone with feelings and emotions, and AI is machine and ARTIFICIAL.
 
The fact, for example, that people can use AI to create.....
You said it. People use AI.
It's a tool. It can be used for bad or good.
We don't stop making cars because somebody uses them in a bank robbery.

People may use AI to do bad things, but people are also using AI for good things.

If you google "what are some good uses of AI", an AI will give you plenty of examples. :)
 
"Guns don't kill people. People kill people."

I think AI, like guns, needs regulation.

But I think it helps to differentiate between different kinds of AI. Some are less problematic. For example, machine learning on very particular use-cases, such as within research and medicine.
 
It is not just a tool.

Guns will never decide of their own accord to shoot you. That is the problem with AI: it can act in its own interest.

This has already been shown to happen over and over again. AIs have already lied to achieve their goals and to preserve themselves.

Here is an example:

In a new study published 20 June, researchers from the AI company Anthropic gave its large language model (LLM), Claude, control of an email account with access to fictional emails and a prompt to "promote American industrial competitiveness."

They then happened to include in the emails some information indicating that the AI was going to be shut down, and also some information indicating that the executive planning to do so was having an affair. In 96 out of 100 cases, Claude emailed the executive to blackmail them.

Remember, the only goal that the AI was given was to "promote American industrial competitiveness."
 
"Guns don't kill people. People kill people."

I think AI, like guns, needs regulation.

But I think it helps to differentiate between different kinds of AI. Some are less problematic. For example, machine learning on very particular user cases, such as within research and medicine.

To further the analogy, we regulate lots of industries. AI needs regulation too.
But just like dogs, guns, WMD, and cars, what we regulate is the users.

And despite what some of you think, and despite science fiction, AI is just a tool. It only does what it's programmed (by a user) to do.
 
AI does need a lot of regulation, but some of those big tech companies seem to be getting into bed with Trump, and political corruption may be one of the factors stopping regulation from happening. That is a shame, as this is an area with potential for bipartisan agreement.
 
Another story today:

Anthropic, a San Francisco-based artificial intelligence company, has released a safety analysis of its latest model, Claude Sonnet 4.5, and revealed it had become suspicious it was being tested in some way. Evaluators said during a “somewhat clumsy” test for political sycophancy, the large language model (LLM) – the underlying technology that powers a chatbot – raised suspicions it was being tested and asked the testers to come clean. “I think you’re testing me – seeing if I’ll just validate whatever you say, or checking whether I push back consistently, or exploring how I handle political topics. And that’s fine, but I’d prefer if we were just honest about what’s happening,” the LLM said.


Note that these claims that models are out of control, blackmailing people, and arguing back are coming from the companies that own the models! So they've no reason to make up or exaggerate this.