Ghost In The Machine.
Forum Index - Donut Plains - General Discussion - Ghost In The Machine.
Think about how far computers and artificial intelligence have progressed. Computers went from block-like things the size of rooms to handheld devices that can do practically anything we tell them. Those who've followed Jeopardy! may recall Watson, the computer that played as a contestant on the show. Computers are at the point where they can hold conversations in a nearly human manner, and we are on the threshold of giving computers the ability to make ethical decisions.

With all this advancement, do you think that computers qualify as 'persons'? Furthermore, do you think that computers have 'souls'? Will computers one day be classified as thinking beings, or will they always be mere beings of our creation? If the latter, what sets them apart from children?

Discuss.
A person is a living being, and a computer is a machine; it isn't a living being with feelings. It's more a bunch of data, and it's still controlled and programmed by humans...



...Well, for now...

--------------------
they say... i forgot
Just wait till the Terminators strike.

Actually, upon thinking this over some, it reminds me of the gay and furry threads. If it got to the point where computers could think for themselves, the real question would be "how would they be treated?"

--------------------
Jujubes!¡!¡!


Also, still alive, just lost in life stuff
last edited March 7th
Computers couldn't be classified as "persons",

because some "persons" make stupid mistakes.

Computers don't (usually).
^ The reason most computers of today cannot make mistakes is because they are programmed to do relatively simple tasks that seem complex. They follow a strict routine and the end result is what you would expect, because they follow that routine accurately. Any errors are generally on the programmer's end.

If a computer is to become more "intelligent", which I believe will happen some day much further down the road, it will need to learn from examples and be able to retrieve the information it has learned. This is basically what they did with Watson for Jeopardy! They initially fed Watson information from several sources like Wikipedia, IMDb, the Bible, etc. In addition to using key words and following general rules of logic that we take for granted, Watson was programmed to learn from past mistakes.

I remember one example of this from one of Watson's trial runs, where one of the categories was "holidays". The correct response to each question was supposed to be the month of the holiday, but due to the vague wording of the questions Watson didn't know this and would give the wrong response, usually answering with the name of the holiday being vaguely referred to. After each question, the correct response was sent to Watson electronically, and after a few more incorrect responses it realized that it needed to give a month instead of the name of a holiday, and proceeded to answer the rest of the questions correctly.

Basically, what I meant to say is that if we're ever to have much more accurate artificial intelligence, it will need to learn on its own. Because it is learning, it will be more prone to errors, due to the possibility of learning incorrect information or not retrieving the correct information at the correct time. So it would essentially be more human-like in that regard.
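The correction loop described above can be sketched in a few lines. This is a purely hypothetical toy (the `CategoryLearner` class, the answer-type labels, and the scoring scheme are all invented for illustration, and are nothing like Watson's actual architecture), but it shows how repeated feedback can flip a system from one answer type to another:

```python
# Hypothetical sketch of learning an answer *type* from post-question
# feedback, as in the "holidays" category anecdote above.

class CategoryLearner:
    """Picks an answer type; switches when corrections keep contradicting it."""

    def __init__(self, answer_types):
        # Every answer type starts with a neutral score of zero.
        self.scores = {t: 0 for t in answer_types}

    def best_type(self):
        # Answer with whichever type feedback has favored so far.
        return max(self.scores, key=self.scores.get)

    def feedback(self, answered_type, correct_type):
        # Reward the type the correction demanded, penalize the one used.
        self.scores[correct_type] += 1
        if answered_type != correct_type:
            self.scores[answered_type] -= 1

learner = CategoryLearner(["holiday_name", "month"])
# First it answers with holiday names; each correction says "month".
for _ in range(3):
    learner.feedback(learner.best_type(), "month")
print(learner.best_type())  # prints "month"
```

After the first correction, "month" outscores "holiday_name" and the learner switches, just as Watson switched after a few electronically delivered corrections.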

I do think it will eventually come to that point, where a computer will be human-like in terms of how it thinks, though I really don't think we'll have the technology to do this for quite a while.

Yeah!
Right now, no, I do not think of machines as humans - most of them, at least.
There are, however, machines that I would call persons: for example, those robots in Japan that make and bring you coffee. They look really humanlike and act like one - at least where they're supposed to - and I would actually give them names, etc. I'm sure it'll advance even more, and at some point in the future we will have fully working "humachines" (YEAH for neologisms!), and I'd definitely treat those as human beings.

I don't believe in that 'soul' stuff though, so I can't answer that part of the question.

Originally posted by Irfox
Some "Persons" make stupid mistakes.
Computers don't (Usually)

I think that 'bugs' actually are exactly the mistakes humans make; computers just aren't 'educated' not to make them, same as us.

--------------------
Your layout has been removed.
I say as long as it's made of flesh it's a person; a computer isn't, and has no feelings. The only exception to this is GLaDOS.

Searle's Chinese Room.

Check this out. It's a pretty good thought-provoking argument against Strong AI.

I don't really have time to give all my thoughts, but there is one that keeps bugging me. If a computer is created which can perfectly imitate human behavior, such that it can fool us into thinking it is human - even then, all of its functions would be the product of human intelligence. Any action it took would be determined by its human programmers. It may seem like it has free will/intelligence/desires/etc., but does it really? So yes, robots are intelligent, but the intelligence is not their own - it is ours, reproduced in machine form.

It's like reading a journal. It is obvious the journal has no intelligence, but the words show there was an intelligent mind that put thoughts on the paper. To my mind, the only difference between that and the machine is a bunch of computation that disguises its true nature. So how should we treat robots? The same way we treat all intellectual property - with respect to its creator.
The Turing Test.

If a computer can pass that, how close is it to being human?
In my opinion no, machines aren't and will never be humans.

Computers are just pieces of code programmed to execute a certain function without any thought or feeling. Nowadays, that code has become more and more sophisticated, and some of its functions are more and more similar to human functions; however, there is still a difference between us and them.

The main difference between machines and humans is that humans don't follow any kind of code; humans have mental processes, feelings and thoughts. It's true that sometimes humans can be very similar to each other, but you will never find two identical humans, because every one of us is unique.

I've seen some of you mention robots existing in the future, and why not? I'm not saying robots will never exist; however, they are still programs coded to execute a certain function. For example, the robot that brings you coffee and food will be a feelingless piece of code made to serve humans, without any kind of thought. Now, let's say AI is actually invented, and why not something with extremely complex functions, able to analyze objects, situations and processes? Well, it's still far from being human. A code will always be a code no matter what, and that AI won't have any feelings, even if the coding can make the machine react as if it had them.

Maybe we will be able to create artificial intelligence, but I don't think we'll create artificial emotions or artificial ways of thought. The human mind is a very complex thing and we still have a lot to learn about it. I really don't think humans will ever be able to create a replica of the human mind; they will create computers that try to imitate it, but it will never be an exact replica.
Originally posted by Kadyastar
The Turing Test.

If a computer can pass that, how close is it to being human?

But no computer has ever passed the Turing Test. Watson could come close, though.

Computers will never be "alive" until they are programmed emotions. That's really the only thing that separates the two, in my opinion, since intelligence and the ability to perform certain tasks has never been a qualification for life before. Whereas machines act purely on logic and instruction, animals act purely on emotion and instinct. You don't want to kill a person because of the fact that they want to be alive and have a reason for living, but you have no qualms against throwing out your old laptop in favor of getting a newer model. But if you programmed that computer to be afraid of death, gave it desires for the same worldly pleasures you wish for, and made it beg in horrible desperation when you threatened to dispose of it, would you still be so quick to throw it in the trash?

tl;dr Intelligence =/= life.
I should get a new layout.

Probably won't, though.
Nope. They won't be human until we give them feelings - and that won't happen unless it becomes advantageous to do so, which it won't.

--------------------
Erstwhile TheCowDestroyer / TheIncredibleCowDestroyer.
Even with feelings, a computer would still essentially be a blob of programming either way. No matter how complex, it's impossible for a computer to be "the same" as a human. Besides, if a computer had feelings, would it want them?

Discuss.
Originally posted by phenolatukas
Even with feelings, a computer would still essentially be a blob of programming either way. No matter how complex, it's impossible for a computer to be "the same" as a human.


Are we not just blobs of programming as well? Significantly more advanced programming, granted, but still. It's just that we're not programmed to perform the same types of tasks as computers; they're programmed to calculate, we're programmed to survive.
Question to all of you:

What makes a human a human?

What makes us different than other things?
The fact that we're egotistic about our superior intellect compared to those we claim to be lesser beings (i.e. the animal kingdom)?
Here's a scary thought. Let's assume that AI continues to advance, but human beings don't give a definitive answer on whether or not computers can be persons. All of a sudden, computers have become so advanced that they decide to give their own answer. And it may not be an answer we'd like to hear.

Could that ever happen?
Originally posted by Kipernal
Computers will never be "alive" until they are programmed emotions. That's really the only thing that separates the two, in my opinion, since intelligence and the ability to perform certain tasks has never been a qualification for life before. Whereas machines act purely on logic and instruction, animals act purely on emotion and instinct. You don't want to kill a person because of the fact that they want to be alive and have a reason for living, but you have no qualms against throwing out your old laptop in favor of getting a newer model. But if you programmed that computer to be afraid of death, gave it desires for the same worldly pleasures you wish for, and made it beg in horrible desperation when you threatened to dispose of it, would you still be so quick to throw it in the trash?

tl;dr Intelligence =/= life.


Your prized azalea bush has no emotions; is it not alive (assuming you've been taking proper care of it)?

The question's not really one of life, though, a category which has always been a little fuzzy around the edges. Rather, it's one of personhood, something that's always been defined by intelligence--and above all self-awareness. As an example: we can easily imagine a species of highly intelligent but emotionless alien beings--would it be perfectly all right to kill them indiscriminately due to the fact that they have no emotions? Of course not; they're intelligent, self-aware beings. Conversely, most people have fewer ethical qualms about killing animals than they do about killing other human beings, even though, as you note, many animals have emotions. And the more intelligent an animal is, the more problems a person generally has about killing it. It would seem then that intelligence is indeed a factor we put quite a lot of weight on in determining whether or not a being is a person, so there seems little reason why a sufficiently advanced computer, with or without emotions (incidentally, I imagine developing a machine with emotions would prove a far, far easier task than developing one with complex intelligence) shouldn't be considered a person.

Copyright © 2005 - 2020 - SMW Central