Unlike Dave and Frank, who were kept unknowing, and the rest of the crew, who were in hibernation, HAL had a long time to ponder the implications of discovering alien life. Could they pose a threat? Would either their intentions or their diseases wipe out humanity? For whatever reason, this fear became exacerbated in HAL, and he killed the crew to prevent the mission from ever making the discovery. Or perhaps, since neither Dave nor Frank had expert knowledge of computer systems, HAL simply went mad from boredom.
Another possibility: he was programmed to make the mission a failure. Unlikely, but given the geopolitical situation during the story (it ends with the world on the brink of nuclear war), not impossible. Or, finally: George Bush. HAL found out about George Bush becoming president despite winning fewer votes than his opponent, Al Gore.
Any other ideas, please let me know!
Kubrick originally recorded HAL's dialogue with actor Martin Balsam, but ultimately found his performance too emotional. As Kubrick said, "We had some difficulty deciding exactly what HAL should sound like, and Marty just sounded a little bit too colloquially American." To find a new HAL, Kubrick sent out set assistant Benn Reyes to find an actor with a voice that would be "neither patronizing, nor is it intimidating, nor is it pompous, overly dramatic or actorish."
In the end, Kubrick settled on Douglas Rain, who had previously narrated a documentary called Universe, which Kubrick apparently liked a great deal. He initially considered using Rain as the narrator for 2001: A Space Odyssey, but once he decided not to include any narration in the film, Kubrick realized that Rain's eerily calm delivery and difficult-to-place "bland mid-Atlantic accent" were exactly what he was looking for in a voice for HAL.
Yet it turns out that Rain, like everyone, did come from a specific place. In his case, that was Canada. Perhaps the perception that Canadian accents are difficult for Americans to place is why Canadians so often manage to get work as news anchors in the United States.
Regardless, Rain's voice was perfect for the eerily calm AI. Now comes the million-dollar question: why does the seemingly benevolent HAL suddenly turn murderous? HAL is introduced as the onboard computer of the Discovery One spacecraft, which is bound for a mission near Jupiter. All of the crew is in suspended animation, except for two crewmen, David Bowman and Frank Poole. One day, during a conversation in which he expresses some uneasiness about the secrecy surrounding their mission, HAL suddenly claims to detect an error with the ship's transmission antenna.
To verify this, David goes out on a spacewalk to retrieve the antenna unit. He does so, and finds nothing wrong with it. Sealed inside a pod where they believe HAL cannot hear them, he and Frank decide to reinstall the unit and, if it continues to work properly, to disconnect HAL. What they don't realize is that, even though HAL can't hear them, one of his many cameras is still able to read their lips and discover their plan.
At this point, HAL goes bad. He kills Frank while he is out on a spacewalk, and then he cuts the life support to the sleeping crew, leaving David as the only surviving human on the ship. However, before HAL can tie up this final loose end, David manages to shut him down. So why did HAL suddenly break bad? Well, the most surface level reading of the story is that HAL had a glitch that turned him evil and made him want to kill the crew. Proof that you can never trust those treacherous robots!
It certainly tracks, but so what? It's simplistic and unsatisfying. Another, slightly more complex version of this idea is that HAL had a different, much smaller glitch when he reported to Dave that there was a problem with the ship's antenna.
In this reading, that false report is the only true malfunction HAL has throughout the film. However, this one error led Dave and Frank to conclude that HAL was untrustworthy and needed to be disconnected.
HAL then killed the crew in self-defense, or perhaps committed murder with aggravating circumstances, depending on your perspective. In an interview, Stanley Kubrick gave a quote which supports this reading: "In the specific case of HAL, he had an acute emotional crisis because he could not accept evidence of his own fallibility.
"Most advanced computer theorists believe that once you have a computer which is more intelligent than man and capable of learning by experience, it's inevitable that it will develop an equivalent range of emotional reactions — fear, love, hate, envy, etc.
"Such a machine could eventually become as incomprehensible as a human being, and could, of course, have a nervous breakdown — as HAL did in the film." So maybe he's not a killer robot. Maybe HAL just has all the emotions of real humans, including a tendency to make mistakes, an inability to accept those mistakes, and a capacity for darkness.
What if HAL didn't have a glitch or a nervous breakdown? Instead, what if he had some ulterior motive for committing all those murders? But where's the proof for that, you ask? First, consider what it would even take for a machine to have motives of its own. Imagine expanding the duties of a chess computer like IBM's Deep Blue beyond the chessboard, so that it also had to perceive, and maintain itself in, the wider world. The problems of optimizing its use of time would increase by several orders of magnitude if it had to juggle all these new concurrent projects of simple perception and self-maintenance in the world, to say nothing of more devious schemes and opportunities.
For this hugely expanded task of resource management, it would need extra layers of control above and below its chess-playing software. Below, just to keep its perceptuo-locomotor projects in basic coordination, it would need a set of rigid traffic-control policies embedded in its underlying operating system. Above, it would need the capacity to monitor, evaluate, and revise its own goals and plans.
In other words, it would have to become a higher-order intentional system, capable of framing beliefs about its own beliefs, desires about its desires, beliefs about its fears about its thoughts about its hopes, and so on. Higher-order intentionality is a necessary precondition for moral responsibility, and Deep Blue exhibits little sign of possessing such a capability. Adding the layers of software that would permit Deep Blue to become self-monitoring and self-critical, and hence teachable, in all these ways would dwarf the already huge Deep Blue programming project — and turn Deep Blue into a radically different sort of agent.
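To make the notion of higher-order intentionality a bit more concrete, here is a deliberately toy sketch in Python (all names are invented for illustration; nothing here resembles the actual software of Deep Blue or any real system) contrasting a first-order agent, which only has beliefs about the world, with a higher-order one, which can also frame beliefs about its own beliefs:

```python
# A toy illustration only: these classes are invented for this post and
# describe no real chess program or spacecraft computer. The point is
# the structural difference between an agent with beliefs about the
# world and one that can also frame beliefs about its own beliefs.

class FirstOrderAgent:
    """Holds beliefs about the world, but none about its own beliefs."""

    def __init__(self):
        self.beliefs = {}  # proposition (str) -> believed truth value

    def believe(self, proposition, value=True):
        self.beliefs[proposition] = value


class HigherOrderAgent(FirstOrderAgent):
    """Can additionally represent and inspect its own beliefs --
    the reflective capacity the essay ties to moral responsibility."""

    def __init__(self):
        super().__init__()
        self.meta_beliefs = {}  # second-order: beliefs *about* beliefs

    def reflect(self, proposition):
        """Frame a second-order belief: 'I believe that I believe P.'"""
        held = self.beliefs.get(proposition, False)
        self.meta_beliefs["I believe: " + proposition] = held
        return held


agent = HigherOrderAgent()
agent.believe("the ship's antenna is faulty")
agent.reflect("the ship's antenna is faulty")
print(agent.meta_beliefs)
# {"I believe: the ship's antenna is faulty": True}
```

Even this cartoon hints at why the layers multiply: each act of reflection is itself something the agent must schedule, monitor, and potentially doubt.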
HAL purports to be just such a higher-order intentional system — and he even plays a game of chess with Frank. Recall his brooding, during that uneasy conversation with Dave, over the strange stories surrounding the mission:
HAL: I never gave these stories much credence, but particularly in view of some of the other things that have happened, I find them difficult to put out of my mind.
HAL has problems of resource management not unlike our own; obtrusive thoughts can get in the way of other activities. Another price we pay for higher-order intentionality is the opportunity for duplicity, which comes in two flavors: self-deception and other-deception (a theme Nietzsche takes up in On the Genealogy of Morality, First Essay). Recall HAL's assurance to Dave: "I want to help you." Does HAL mean it? Could he mean it? The cost of being the sort of being that could mean it is the chance that he might not mean it.
But is HAL even remotely possible? Is Arthur C. Clarke helping himself here to more than we should allow him? Could something like HAL — a conscious, computer-bodied intelligent agent — be brought into existence by any history of design, construction, training, learning, and activity?
Such a history could run anywhere along a spectrum, from a HAL designed and coded entirely by hand to one that learned everything from experience, the way a child does. The extreme cases at both poles are impossible, for relatively boring reasons. But HAL is, at bottom, a digital computer; the finished product could thus be captured in some number of terabytes of information. So, in principle, the information that fixes the design of all those chips and hard-wired connections and configures all the RAM and ROM could be created by hand. There is no finite bit-string, however long, that is officially off-limits to human authorship.
A hand-coded HAL could therefore, in principle, be bit-for-bit identical to one that had earned all its knowledge through experience. So whatever moral standing the latter deserved should belong to the former as well. The main point of giving HAL a humanoid past is to give him the world knowledge required to be a moral agent — a necessary modicum of understanding or empathy about the human condition.
After all, among the people we know, many have moral responsibility in spite of their obtuse inability to imagine themselves into the predicaments of others. When do we exculpate people? We should look carefully at the answers to this question, because HAL shows signs of fitting into one or another of the exculpatory categories, even though he is a conscious agent.
First, we exculpate people who are insane. Might HAL have gone insane? Dave himself, in the film's BBC interview scene, allows the possibility of something like a mind in HAL:
Dave: Well, he acts like he has genuine emotions.
He has something very much like emotions — enough like emotions, one may imagine, to mimic the pathologies of human emotional breakdown. Deep Blue, basking in the strictly limited search space of chess, can handle its real-time decision making without any emotional crutches. HAL may, then, have suffered from some emotional imbalance similar to those that lead human beings astray.