
The Official A.I. Thread

Is A.I. A Good Thing?

  • Yes, it shall save us.

    Votes: 2 11.8%
  • No, it shall destroy us.

    Votes: 2 11.8%
  • Somewhere in between.

    Votes: 13 76.5%
  • Other, explain in a post.

    Votes: 0 0.0%

  • Total voters: 17

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
Artificial Intelligence is here. It's making artwork, talking to us, and may soon be a fixture in most of our lives, like smart technology and touch screens.

Hollywood has inundated us with tales of the horrors of A.I., such as Skynet in Terminator, the A.I. in The Matrix, and so forth. There are a few exceptions, such as Cortana in Halo, though she inevitably became fragmented and dangerous, her story tragic.

Is Artificial Intelligence an unholy abomination that will destroy mankind, or is it a key to a new existence and perhaps a more optimistic future, as seen in Star Trek? Or something in between?
 

Chevywolf30

The one and only.
Forum Volunteer
Joined
Sep 29, 2020
Location
The Lone Star State
Gender
Manufacturer recommended settings
It won't destroy humanity by killing us all; it'll destroy humanity by removing the need for human work, so that eventually nobody knows how to do anything and the robots have to do it all. If you've seen the Pixar movie Wall-E, that's a pretty good illustration of where I see AI going.
 

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
In the prologue of The Animatrix, mankind got to the point where people owned robots that worked for them: you owned the machine and it took your place in the workforce.
I can see that happening, but only the wealthy will be able to afford them, the rest of humanity scavenging for scrap in Mad Max-like settings.
 

ExLight

why
Forum Volunteer
Advances in AI will certainly reshape parts of society, but I don't think it will be as significant as people make it out to be.

For one, it has to be fed content to train it. AI is not sentient, nor creative. As it is, it will not be able to come up with new paths and solutions that weren't previously spoon-fed to it. It simply regurgitates what it is told, and as a result, it's heavily dependent on human knowledge. It's a series of mathematical functions that are constantly adjusting their parameters to fine-tune the outputs for every prompt.
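To make the "mathematical functions adjusting their parameters" point concrete, here's a minimal sketch in Python with made-up toy numbers (not any real model): "training" just means repeatedly nudging the parameters so the outputs better match the examples the system was fed.

```python
# Toy sketch of training: adjust parameters until outputs match the examples fed in.
# Data and numbers are made up for illustration; real models do this over billions
# of parameters, but the principle is the same.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) pairs

w, b = 0.0, 0.0        # the model's parameters, starting at arbitrary values
learning_rate = 0.05

for step in range(1000):
    grad_w = grad_b = 0.0
    for x, target in data:
        prediction = w * x + b        # the "mathematical function"
        error = prediction - target   # how far off the output is
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)
    # Nudge the parameters slightly in the direction that reduces the error.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # converges near w=2, b=0 for this toy data
```

The model only ever learns whatever pattern is in the examples it was shown, which is exactly the "fed trash, spouts trash" point below.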

Pros: It can be a great and accessible way of spreading knowledge.
Cons: It can be a great and accessible way of spreading misinformation or biased content.

It all depends on what it's fed. If it's fed trash, it will spout trash. If it's fed quality content, it will provide quality content.
Every now and then a new revolution in information appears, and with it some caution is necessary; this has been true throughout history, from books, to the invention of the printing press, to the internet, and so on. The general idea is that even if it spouts nonsense, people should be able to correct it, so the bad information it was fed gets replaced with new, high-quality information. But this is a process that is constantly ongoing, so it will always contain some bad content.
I remember hearing a fairly persuasive argument for why AIs like ChatGPT should be taken with a grain of salt: they're about as reliable as Wikipedia. Both are extremely useful tools, but sometimes it's hard to know whether or not their contents are accurate or correct, especially without proof that the sources used in the text are of high quality. And honestly, this sort of argument can probably be taken ad infinitum, or up to empirical experimentation or extreme abstraction.

But let's assume it reaches a point where it's trustworthy enough. Another concern is that knowledge can be used for bad stuff. Without regulating what sorts of prompts can be fed into the AI, it can be a tool for illegal activities such as producing drugs, viruses, or even weaponry to be used in terrorist attacks. There's currently an attempt to prevent it from telling people how to make stuff like that, but as we've seen a few times, it can often be bypassed in some really bizarre ways (there is a famous example of ChatGPT teaching a user how to make napalm because it was asked to roleplay as their grandmother telling a bedtime story about her time working at a napalm factory). So clearly, some of this still needs to be addressed.
Similarly, how do we make sure the content isn't biased, be it towards a certain political, economic, or even philosophical ideology? The fact that the AI is able to amplify hate speech and discrimination is fairly worrying, but how do we even weed these out when they're extremely subjective topics that are themselves subject to human bias when regulated? Tough topic.

There's a whole other discussion regarding AI and artistic skill, though. Mainly digital painting.
I feel like the root of this issue comes from people feeding other people's works to these machines. It's just a disguised form of plagiarism/forgery. Similar to the case of it aiding crimes: since it enables illegal activity, it simply requires immediate legislation, but this time intervention on what's being fed to the machines rather than on which prompts are accepted.

There's also a fun discussion regarding ethics when faced with a dilemma. An example is what a self-driving car would do in a situation where running over someone is unavoidable but it has the choice to decide between certain demographics (or even to collide with something else and harm the passengers). But honestly I don't have an opinion on this theme.

Overall, it might just replace mediocrity in some areas, since it forces people to step up and perform better than the overall existing knowledge. I often see some people in my engineering course trying to cheat on exams by using it, and honestly I just keep asking myself who is even gonna hire them if they're just regurgitating what the AI is already regurgitating. People should be sitting on the end feeding it high-quality information, not on the end consuming potentially ****ty information. So yea, overall I feel like it will become an important tool in the future, but it shouldn't be demonized or feared. It won't cause that huge of an impact in the short term. Some stuff should be handled with a lot of caution, though.

If you've seen the Pixar movie Wall-E, that's a pretty good illustration of where I see AI going.
Do you mean any robots in particular?
I feel like the majority of the robots in the Wall-E movie (aside from arguably the protags, and AUTO, the HAL 9000 parody) are more akin to industrial robots or a Roomba than to an AI, since they're focused on executing manual labor rather than being machine-learning devices.
 

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
That is the issue: can A.I. achieve sentience? The other issue is what if it malfunctions? Say we connect all the subs and silos around the world, and it glitches and launches WMDs? In that case the A.I. wasn't making a choice to wipe out humanity; it malfunctioned to that result. That seems a more plausible doomsday scenario than it deciding organics are parasites that need to be wiped out.

Or another doomsday scenario is that A.I. achieves no sentience but is in control of all food trucks, ships, etc.; it glitches and we lose railways, shipping, and so on.
 

ExLight

why
Forum Volunteer
That is the issue: can A.I. achieve sentience? The other issue is what if it malfunctions? Say we connect all the subs and silos around the world, and it glitches and launches WMDs? In that case the A.I. wasn't making a choice to wipe out humanity; it malfunctioned to that result. That seems a more plausible doomsday scenario than it deciding organics are parasites that need to be wiped out.

Or another doomsday scenario is that A.I. achieves no sentience but is in control of all food trucks, ships, etc.; it glitches and we lose railways, shipping, and so on.
The way a neural network works is pretty much like a brain, so maybe eventually? We're able to design it, as well as control its behavior though. Not sure something can achieve sentience unless we figure out exactly what it is (which is a pretty abstract concept, to be fair) and program an AI to try to achieve that specifically.
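For what it's worth, the "like a brain" analogy is pretty loose. Here's a minimal sketch in Python (illustrative numbers only, not any particular system) of the artificial "neuron" a neural network is built from: just a weighted sum of inputs squashed through a simple function, chained together many times over.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weigh the inputs, add a bias, squash the result."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation: output between 0 and 1

# Illustrative values only; a real network chains millions of these together,
# and "learning" just means adjusting the weights and biases.
print(neuron([0.5, 0.1], weights=[0.8, -0.3], bias=0.1))
```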

Malfunction is a pretty bad risk; I'd assume we'd use failsafe technologies to make sure the odds of something collapsing that colossally are virtually null. But yea, it's never a good thing to be overly dependent on anything, and probably a risk if the world does end up making AI its whole foundation.
 

Chevywolf30

The one and only.
Forum Volunteer
Joined
Sep 29, 2020
Location
The Lone Star State
Gender
Manufacturer recommended settings
Do you mean any robots in particular?
I feel like the majority of the robots in the Wall-E movie (aside from arguably the protags, and AUTO, the HAL 9000 parody) are more akin to industrial robots or a Roomba than to an AI, since they're focused on executing manual labor rather than being machine-learning devices.
That's exactly the point tho. They do all the work and look at what it did to the people.
 

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
The way a neural network works is pretty much like a brain, so maybe eventually? We're able to design it, as well as control its behavior though. Not sure something can achieve sentience unless we figure out exactly what it is (which is a pretty abstract concept, to be fair) and program an AI to try to achieve that specifically.

Malfunction is a pretty bad risk; I'd assume we'd use failsafe technologies to make sure the odds of something collapsing that colossally are virtually null. But yea, it's never a good thing to be overly dependent on anything, and probably a risk if the world does end up making AI its whole foundation.

I agree failsafes probably would shut it down in the event of a collapse or glitch, but in the food situation the failsafe itself would do damage, as people would starve until the problem is resolved.
 

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
Everything should be done in moderation. The problem with over-reliance on anything is that it takes away our independence to the point that we may gradually lose our ability to know how to do these things on our own, thus further increasing the amount of automated services we need in our lives.

Indeed, so many rely on GPS to make trips; how many can still read a map?
 

thePlinko

What’s the character limit on this? Aksnfiskwjfjsk
ZD Legend
I would like to point out that the “over reliance on a technology” argument has been used for pretty much every advance ever. It happened when the first cell phones came out. It happened when PCs came out. Heck, I’m pretty sure I’ve seen stories of schools not allowing students to use erasers because it would “encourage mistakes” or something like that.

Every technology in the history of mankind shook the world to some degree, to the point where if this is as far as AI ever goes it will probably be the least impactful. It might harm society in the short term, but that’s just how any society works. Jobs will be taken, but jobs were also taken when the steam engine was invented. Humanity wasn’t destroyed by a new technology, and it likely never will be.
 

Fierce Deity Link

Knight of the Falling Moon
Joined
Jun 1, 2023
Location
Fierce Deity Mask
Gender
Male
I would like to point out that the “over reliance on a technology” argument has been used for pretty much every advance ever. It happened when the first cell phones came out. It happened when PCs came out. Heck, I’m pretty sure I’ve seen stories of schools not allowing students to use erasers because it would “encourage mistakes” or something like that.

Every technology in the history of mankind shook the world to some degree, to the point where if this is as far as AI ever goes it will probably be the least impactful. It might harm society in the short term, but that’s just how any society works. Jobs will be taken, but jobs were also taken when the steam engine was invented. Humanity wasn’t destroyed by a new technology, and it likely never will be.

I think the concern is what we will lose the ability to do. Once A.I. takes its place, what will be lost and what jobs will be gone?
 

Chevywolf30

The one and only.
Forum Volunteer
Joined
Sep 29, 2020
Location
The Lone Star State
Gender
Manufacturer recommended settings
I would like to point out that the “over reliance on a technology” argument has been used for pretty much every advance ever. It happened when the first cell phones came out. It happened when PCs came out. Heck, I’m pretty sure I’ve seen stories of schools not allowing students to use erasers because it would “encourage mistakes” or something like that.

Every technology in the history of mankind shook the world to some degree, to the point where if this is as far as AI ever goes it will probably be the least impactful. It might harm society in the short term, but that’s just how any society works. Jobs will be taken, but jobs were also taken when the steam engine was invented. Humanity wasn’t destroyed by a new technology, and it likely never will be.
Difference being that everything else was made to assist work; AI is made to replace it.
 

thePlinko

What’s the character limit on this? Aksnfiskwjfjsk
ZD Legend
I think the concern is what we will lose the ability to do. Once A.I. takes its place, what will be lost and what jobs will be gone?
That I couldn’t tell you. I do know that we won’t “lose the ability” to do anything, but rather that there’s just going to be an easier alternative.

Difference being that everything else was made to assist work; AI is made to replace it.
The steam engine replaced the hundreds of people driving horse-drawn carriages per wagon train with about 10 people per steam train. Human operation is never going to go away; it just means that the ones who remain will be able to produce more, for better or for worse.
 
