Level 6 Enquiry/Report/Essay (Screen)
Table of contents
Abstract
Dissertation
Introduction
Brief history of consciousness
Machine consciousness today
Condition of success for MC
Conclusion
Bibliography
Appendix
Abstract
Artificial Consciousness (AC), also known as Machine Consciousness, has the central objective of producing consciousness in an artificial system and determining what the process of consciousness actually is. Many hypotheses about consciousness exist, but it is not known which of them, if any, could replicate the function of the human mind. In this work the plausibility of AC is explored by studying the progress that has been made in robotics, computer science, neuroscience and related fields. With reference to issues of ethics and morality, the paper focuses on AC's potential impact on video games.
Dissertation
Introduction
The attempt to develop a systematic approach to the study of consciousness was started by René Descartes (1596–1650), who made a sharp distinction between the physical and the mental, known as Cartesian dualism (Velmans and Schneider, 2008:9). From Descartes to the present day, psychologists, computer scientists, philosophers, physicists, neuroscientists, engineers and many other theorists have devoted their efforts to understanding the product of the most complicated machine on Earth: the living brain. Increasingly, scientists have shown interest in Artificial Consciousness (AC), also known as Machine Consciousness (MC), a field which covers both the development of consciousness and subjectivity in machines and the use of computer software and robotics to try to understand what a conscious process actually is (Raiko et al. 2008). The possibility of creating a conscious machine will be addressed by looking into the latest prototypes and other achievements in the field of AC. With advances towards AC, new moral and ethical problems will arise, and new modes of thinking will need to be developed in order to solve them. Clowes et al. (2007:13) classify AC as weak or strong, based on its complexity. If a machine were ever imbued with "strong" AC, our understanding of human consciousness would be revolutionised.

The possible applications of AC within video games and computer simulation are staggering. Games already attempt to simulate conscious interaction with players using AI, so the implementation of AC within a game environment would dramatically improve the believability of the experience. Progress in related technology is already visible: devices that analyse the electrical signals produced by the brain can detect a player's thoughts, feelings and expressions and use them to control a computer application (EPOC Neuroheadset). Such devices also allow the player to communicate with other players, or with non-playable characters in a game environment, by sending and receiving information such as emotions and even more complex concepts (Wright, P. 2010).
Brief history of consciousness
What is the conscious mind? This question has been explored by philosophers since it was first asked, and yet no definite answers have been found (Haikonen, 2003). The obvious answer is that the mind is that which is within a person's head, or more specifically their observations, thoughts, imaginings, reasoning, will, emotions and the unconscious. The mind is what makes an individual person. According to Haikonen, the unique thing about the mind is that it is aware of itself: it is conscious.
The question of what consciousness is remains a philosophically and ethically sensitive debate, as it has serious implications for the human perception of self.

The results of the MC project could reveal aspects of human consciousness that we have not so far presumed. This might make a huge difference to how we perceive the mind, and it further questions the notion that human existence is unique by virtue of personal thought.
By introducing conscious agents into a videogame, the experience offered by realism-focused applications would grow enormously. Every computer game has modules which support and determine the outcome of the game, so in theory the game could be shaped by the player's consciousness and behaviour. Furthermore, artificial players (known as bots) would be able to learn and behave differently every time the player interacts with them. If the bots are enemies, they would be able to set unpredictable challenges for the player and perhaps even plan a different strategy to achieve their goal. This raises questions such as how much influence these agents should be afforded over the player, and whether the player should be allowed to kill them.
Julien Offray de La Mettrie (1709–51) extended Descartes's central idea of consciousness by proposing that conscious and voluntary processes result from more complex mechanisms than involuntary and instinctive processes (Velmans et al. 2008:10). This belief is still held, in essence, by many followers of AC and by the scientists searching for the neural correlates of consciousness in the twenty-first century. In the philosophy literature today, the most common taxonomy of consciousness distinguishes "access" from "phenomenal" consciousness, a division also described as "weak" versus "strong" AC by Clowes et al. (2007), or as "thin" versus "thick" phenomenality. There are many different words for it, but in essence it is what La Mettrie proposed: there are at least two different elements that make up consciousness. The "thin" or "access" conception, also known as the "cognitive" conception, covers the part of the conscious mind which handles philosophically simple tasks, sometimes described as the unconscious. That is the part of the mind where cognition, reflexes and other functions vital to preserving sensorimotor activity are performed. Tasks like these have already been performed by robots and computer systems with Artificial Intelligence (AI), because they work in a mathematical or logical fashion and are easier to understand, given how much we know about that particular field of science. Examples of machines with "access" capabilities are all around us today, completing tasks such as image recognition, computation and anything else that is a straightforward function. Access consciousness is a process which does not require complex activity like sentience. Clowes et al. (2007:12) describe the "thin" conception as explaining consciousness as a super-layer upon the physical or functional aspects of an agent. An example of an agent with only "access" consciousness, which has been a subject of interest for neuroscientists and psychologists, is the philosophical zombie, sometimes likened to a Haitian zombie (Velmans and Schneider 2008:18): a being which is physically and behaviourally identical to a human being but is not conscious.
In computer games, artificial intelligence (AI) is used to simulate human-like actions such as decision making, producing an illusion of intelligence in the behaviour of non-playable characters (NPCs). The player can interact with NPCs in the form of bots, as enemies or as allies in cooperative gameplay. Across different styles of game, the game AI handles a wide range of actions, from decision theory, problem solving and environment awareness to squad tactics and army control. But the capabilities of game AI only stretch so far: achieving something more complex with this approach, such as common-sense knowledge (the concern of situated AI), requires enormous amounts of ontological engineering. One way it could be done is to have the computer understand enough concepts that it can learn from sources like the internet. Introduced into gaming, this could in theory extend a game AI to actions such as abstract thinking, language interpretation, adaptation, awareness, subjective experience and will.
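To make the contrast with consciousness concrete, the sketch below shows the kind of rule-based decision making that underlies much conventional game AI: a simple finite-state machine in Python. The states, thresholds and behaviours are hypothetical, but the pattern is typical of how NPCs produce an illusion of intelligence without anything resembling subjective experience.

from dataclasses import dataclass

# A minimal finite-state machine for an NPC. Each update inspects the
# world and picks a state from fixed rules; nothing is learnt and no
# inner experience is involved.
@dataclass
class Bot:
    health: int = 100
    state: str = "patrol"

    def update(self, player_visible: bool, player_distance: float) -> str:
        if self.health < 25:
            self.state = "flee"        # self-preservation overrides all
        elif player_visible and player_distance < 10.0:
            self.state = "attack"
        elif player_visible:
            self.state = "chase"
        else:
            self.state = "patrol"
        return self.state

bot = Bot()
print(bot.update(player_visible=True, player_distance=5.0))   # attack
bot.health = 20
print(bot.update(player_visible=True, player_distance=5.0))   # flee

However complex such rule sets become, they remain straightforward functions of their inputs, which is precisely why common-sense knowledge is so hard to reach this way.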
On the other hand, everything that distinguishes a person from a philosophical zombie is phenomenal consciousness, and it is these parts of experience that defy functional depiction (Block 1997). Phenomenal consciousness is precisely the area which has fascinated scientists and which has been hard to describe or understand, even with recent discoveries in science (Haikonen 2012:12). That is exactly why supporters of AC are using all available means to achieve consciousness in a machine: it would provide many answers as to how the phenomenal conception is formed. Igor Aleksander (Velmans and Schneider 2008:95) describes the state of being conscious as stemming from five features of consciousness, dubbed axioms. His approach aims to identify mechanisms which, through neurons (physical or virtual), are able to represent the accuracy that is felt in reporting a sensation. The five axioms are:
"1 perception of oneself in an 'out-there' world; 2 imagination of past events and fiction; 3 inner and outer attention; 4 volition and planning; 5 emotion."

(Velmans and Schneider 2008:95)
This is not an exhaustive list, but for a modelling study it is necessary to have such an approach. In the belief that consciousness is the name given to a composition of the listed sensations, most methodology today seeks a variety of mechanistic or virtual models, each of which is an imitation of one of the above basic sensations.
One such means of approaching machine consciousness, building a depictive model of how thought can be synthesised in an artefact, is defined by Igor Aleksander (Velmans and Schneider 2008:87) as machine modelling of consciousness (MMC). It refers to the work of those who use both their analytic skills and their ability to design machines to come closer to understanding what "being conscious" might mean.
Later in this paper, the ethical, moral and social issues that might arise from accurately simulating conscious thought will be discussed.
Machine consciousness today
It would be easy to think that in this age of scientific progress the old mind–body theories would be irrelevant to modern research, and that most of the questions asked before significant scientific discoveries would have been answered or negated. But even with the convenience of modern technology and instruments, and a better understanding of biology, Descartes's dualism still applies (Haikonen 2012:12). Science today can establish that the brain contains material neurons, synapses and glia, where physical and chemical reactions take place, and it can show that without those processes no consciousness or subjective experience could occur. The neural areas of the brain that are correlated with conscious experience are called the neural correlates of consciousness (NCC). Haikonen notes that research analysing those areas, attempting to discover evidence that a specific group activates when certain mental states arise, still does not explain how the subjective experience is created. Some functionalists believe this is due to a lack of scientific progress: phenomena such as quantum entanglement at a macroscopic level may still need to be understood before consciousness can be explained. Research has not been able to explain how these physical processes in our brain, real or simulated, could host and create inner subjective experience. Haikonen (2012) explains how the existence of this so-called explanatory gap has been recognised. This is the same subject that has been debated, in philosophical terms, since Descartes's separation of the mental and the physical.
On the other hand, with the improvement and better understanding of scientific disciplines such as neuroscience, theoretical physics, machine learning and artificial intelligence, many hypotheses have been formed. Most importantly, a great deal of MC research has contributed to the MC project, and many difficulties have been resolved.
Despite all the progress that has been made on the subject, the contributors Chris Frith and Geraint Rees (Velmans and Schneider 2008:18) explain that consciousness is still as wondrous as it has always been. The subject of the human-like mind has generated a vast number of hypotheses attempting to explain how it works and how it might be created. If Aleksander's (2012) model for depicting consciousness is followed, then the phenomenal concept of the mind breaks down into yet more modules, each as complicated as the concept itself. According to Frith and Rees, eliminative materialists claim that consciousness, like the discredited notion of a vital essence, does not actually exist. For functionalists, following La Mettrie, consciousness is produced by a complex computational algorithm which creates its phenomenal characteristics; they believe that if the same physical complexity that exists in the human brain were recreated in another medium, such as silicon, consciousness would arise. Others believe that a necessary scientific discovery still has to be made before we can understand consciousness, machine or otherwise. Mysterians think that consciousness is so complex a subject that the human brain can never arrive at an explanation or understanding of the process. Others again believe that consciousness could be a mere epiphenomenon, with no impact on the physical world we live in. Followers of Charles Darwin, by contrast, believe that consciousness, like any product of evolution, evolved because it gives some advantage to those who have it. Bringing this line of enquiry to a point: consciousness could be associated with language and the creation of culture, and, through their necessity, it grew and developed in human beings.
As Nietzsche put it, "consciousness in general developed itself only under the pressure of the need to communicate" (Velmans and Schneider 2008:18).
Condition of success for MC
As stated, there are many theories as to how consciousness could have arisen, and, consciousness being the main point of interest in MC, many of them have been applied in building potential machines. In his research, Igor Aleksander (1996) proposes that the principles for creating a conscious machine already exist, but that it would take forty years to train such a machine to understand language. From this another problem common to all MC research becomes obvious: when can it be declared that consciousness is present? Aleksander (2012) proposes the following criteria which have to be fulfilled in order to claim conditions of success for MC:
1 There needs to be a demonstrable representation that the agent is aware of the world around it and that it understands its role within it.
2 The machine must show a sufficient understanding of its human interlocutors.
3 Reactive, contemplative and supervisory levels of reasoning must be discernible in the process of committing an action or making a choice.
4 The machine could be characterized by low-level mechanisms that have the same function as the processes that are proven to be crucial to consciousness in the neurology of living organisms.
5 The machine must have means of demonstrably depicting and using the out-thereness of the perceived world and be able to use such depictions to imagine worlds and the effect of its actions.
6 The design must qualify what is meant by an emotional evaluation of the content of consciousness.
Aleksander emphasises that this list is open-ended, but together with his five axioms (mentioned above) it covers almost all the approaches that have been used over the years to form a depictive model, which is essential to the MC project. There have been many theories and prototypes which try to achieve one or more of the features mentioned by Aleksander.
If consciousness is decidedly present in a virtual being, this raises the ethical issue of how it should be treated. Should it be afforded the same rights as a living person? If it were implemented in a video game as a virtual mind, it would not need to be distributed separately with every game. Computer systems such as the personal computer, PlayStation or Xbox are environments which provide specific capabilities that developers program games against. The virtual consciousness would exist on that system (such as a console) and would be used to simulate the exact environment for which the game was designed, so that it can be played properly. As the system learned behavioural information about its user, it would be able to run the game with the user's preferred settings: the difficulty would already be known, along with the language, the user's account on online servers and other game-specific options. Furthermore, if the computer system provided the game with its consciousness, it would be able to create virtual bots which think as the system does and appear conscious, but when they are killed the player would not be killing an artificial individual or a person. The system itself would always be "alive"; if disconnected from a power source, it would simply pause its process until reconnected.
Almost all the engineers and computer scientists involved in machine consciousness take a more or less conventional computational or neurally inspired approach, concentrating on the functions associated with machine consciousness. In his editorial introduction, Holland (2003:2) discusses Rodney Cotterill's project, CyberChild, which brings together many recent methods for the MC problem by computer-simulating the brain, body and environment of a very young infant. The architecture of the child's brain is a close neural model of what Cotterill has identified as the relevant parts of the mammalian nervous system. What is interesting about the approach is that it is developmental and interactive: the simulated child has to signal its needs to the experimenter, for example by crying appropriately, and the experimenter must respond. Furthermore, the agent has a simulated metabolism along with its brain and body, so it must learn to deal with its environment under a simulated risk of life and death.
If consciousness were achieved with this method, this kind of virtual simulation could in theory be run on portable devices such as smartphones to create a virtual pet. It would be able to simulate everything a real-life pet would do, so the owner would have to take care of its needs. A toy of this sort (though of course not conscious) was developed in Japan in 1996 and is known as the Tamagotchi; as of 2010, some 76 million Tamagotchis had been sold worldwide. An obvious ethical issue arises here: if every pet were the equivalent of a real animal, it could not be entrusted to just anyone, because it could die. A solution would be for the consciousness that deals with the simulation of all the personal "Tamagotchis" to be hosted in a central place, with every user connecting over the internet, for example, to access their pet; even if the pet died, it would just be a virtual simulation powered by the MC system.
Holland explains that Cotterill's approach is very open-minded, as it looks for any answers in the field of consciousness that his MC project might uncover. In contrast to Cotterill's MMC is the approach of Professor Alexander Stoytchev, whose team at Iowa State University has created a robot at the infant stages of learning (Robots Become Human). Their robot does not perform pre-programmed functions; instead it learns about its environment and the objects around it. By observing and testing, it comes to conclusions about the behaviour of the items it deals with. It is a physical robot, unlike the CyberChild, and can perform simple tasks such as picking objects up, shaking, scribbling and listening. Combining those tasks, it can perform more complex actions, the first of which is acoustic object recognition: by performing five exploratory behaviours, similar to what a child would do, it comes to understand how an object sounds. Another feature, concerning visual cognition, is to try to scribble with the object. One of the goals is to have the robot identify which set of actions to perform in order to understand and discriminate an object.
If the robot made at Iowa State University continues to develop, at a certain point, as a physical agent, it will have to follow a set of laws so that the living beings it interacts with are protected, and so that the robot itself is protected as well. It is also very important that the robot follows certain rules to avoid developing into an irreversible state or destroying itself. According to Pitrat (2010:183), in 1941 Isaac Asimov introduced his laws of robotics, initially three, later extended with a zeroth law. The laws are:
“0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
1. a robot may not injure a human being or, through inaction, allow a human being to come to harm;
2. a robot must obey orders given to it by human beings, except where such orders would conflict with the first law;
3. a robot must protect its own existence as long as such protection does not conflict with the first or second law." (Pitrat 2010:183)
Pitrat (2010:184) further notes that these laws alone are not sufficient, as they can neutralise one another. This can lead to a serious drift, as in the film "I, Robot", where a robot seeks to control mankind in order to protect men against their greatest enemy: man. To overcome this, as described by Pitrat (2010), Weld and Etzioni proposed two primitive actions built on the concept of "harm". The first is "don't-disturb", whose only argument is a part of the state of the world: if the system has received the order "don't-disturb File-X", it will reject any instruction that could lead to the modification or destruction of file X. The second, "restore", is less strict: the robot may modify or even delete the resource given in the argument, provided it restores it to its initial state at the end. With these two orders, one can indicate to a system which objects are crucial: it can use them, but it must not destroy them. A minimal sketch of the idea follows.
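The sketch below illustrates the "don't-disturb" and "restore" constraints in Python. The agent, resource names and methods are hypothetical illustrations, not Weld and Etzioni's actual formalism, which is stated in terms of planning logic.

# An agent that refuses to touch protected resources ("don't-disturb")
# and puts "restore" resources back to their initial state when done.
class SafeAgent:
    def __init__(self, world):
        self.world = world          # mutable state the agent acts on
        self.protected = set()      # don't-disturb: never modify these
        self.restorable = {}        # restore: may modify, must restore

    def dont_disturb(self, resource):
        self.protected.add(resource)

    def restore(self, resource):
        # Remember the initial state so it can be put back at the end.
        self.restorable[resource] = self.world[resource]

    def execute(self, resource, new_value):
        if resource in self.protected:
            raise PermissionError(f"refused: {resource} is protected")
        self.world[resource] = new_value

    def finish(self):
        # Restore every marked resource to the state in which it was found.
        for resource, state in self.restorable.items():
            self.world[resource] = state

world = {"File-X": "crucial data", "scratch": "old contents"}
agent = SafeAgent(world)
agent.dont_disturb("File-X")
agent.restore("scratch")
agent.execute("scratch", "temporary work")   # allowed while working
agent.finish()                               # "scratch" restored afterwards
# agent.execute("File-X", "overwritten")     # would raise PermissionError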
Further research, described by Aleksander (2012:93), is a project by Pentti Haikonen of the Nokia Company in Helsinki, Finland. Haikonen has created an architectural model that captures consciousness through a comprehensive set of cognitive competencies, relying on the ability of recursive or re-entrant neural networks to memorise and retrieve mental states. His neurologically focused research operates in the same way a brain cell does, creating artificial neurons that receive an input signal and "learn" to output an appropriate response. By connecting the neurons together, the system is able to reconstruct a learnt image from observing only parts of it. The architecture allows the system to represent both sensory input and inner reconstructions of meaningful states in the absence of input. Furthermore, it is capable of associating, for example, the visual representation of a bike with the word "bike". Working on a multi-level scale, it can associate certain states with words and can then make use of the word "I" in a meaningful way.
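The core idea of a memory that completes a learnt pattern from a fragment can be illustrated with a classic Hopfield-style associative network, sketched below in Python. This is only a toy stand-in, assuming binary +1/-1 patterns; Haikonen's actual architecture is considerably richer.

import numpy as np

# A content-addressable memory: Hebbian learning stores a pattern in
# the connection weights, and recall reconstructs it from a fragment.
class AssociativeMemory:
    def __init__(self, size):
        self.weights = np.zeros((size, size))

    def learn(self, pattern):
        p = np.asarray(pattern, dtype=float)
        self.weights += np.outer(p, p)       # strengthen co-active links
        np.fill_diagonal(self.weights, 0)    # no self-connections

    def recall(self, partial, steps=10):
        state = np.asarray(partial, dtype=float)
        for _ in range(steps):
            state = np.sign(self.weights @ state)
        return state

memory = AssociativeMemory(8)
bike = np.array([1, -1, 1, 1, -1, -1, 1, -1])   # a learnt "image"
memory.learn(bike)

fragment = bike.astype(float)
fragment[4:] = 0                 # observe only part of the image
print(memory.recall(fragment))   # the full pattern is reconstructed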
Such a system could be used in videogames as a consulting assistant for the player's experience of the game. Suppose a player starts a sophisticated game with many variables and options: a large set of skills, many possible locations, extensive dialogue and explanations, and a huge variety of items. By the middle of the game, the player is likely to have forgotten much of what he or she has seen or learnt along the way. Computer games simulate a 3D environment by rendering it as a 2D image, and that image could be remembered by the system and associated with appropriate keywords, phrases, colours and game-specific details (location, chapter, stage). At any time in the game, the player could then ask the virtual assistant for any such information and be given a relevant result, for example finding a symbol or sign seen previously in the game, along with the exact location, time and state in which it was seen.
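A sketch of how such an assistant's memory might be organised follows: rendered frames are stored with game-specific tags and can later be queried by keyword. All class names, tags and fields here are hypothetical.

# Hypothetical frame memory for an in-game assistant: each remembered
# moment is a frame id plus tags and game-specific details, and a
# keyword query returns every matching moment.
class GameMemory:
    def __init__(self):
        self.moments = []   # (frame_id, tags, location, chapter, time)

    def remember(self, frame_id, tags, location, chapter, time):
        self.moments.append((frame_id, set(tags), location, chapter, time))

    def ask(self, keyword):
        return [(loc, ch, t)
                for _, tags, loc, ch, t in self.moments
                if keyword in tags]

memory = GameMemory()
memory.remember("frame_0412", {"symbol", "red", "door"},
                location="Old Library", chapter=2, time="00:42:10")
print(memory.ask("symbol"))   # [('Old Library', 2, '00:42:10')]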
Since we are not certain what constitutes consciousness, it is hard to know whether or not machines can possess it. There is no reason why human consciousness should be the only kind; indeed, as Geraci (2010:112) argued, as robots become more complex, it would seem that there must be something that it is like to be a robot. This raises the question of whether such a consciousness, achieved in a computer simulation or a physical robot, should be treated as a biological being.
Apocalyptic AI is a study of the aspect of machine consciousness which looks into the possibility of transferring oneself into a machine, where he or she can continue living, free of physical constraints. Predictions on the subject have garnered so much attention that, in combination with rapidly progressing robotic technology, widespread public attention has focused on how human beings and robots should and will relate to one another as machines get smarter. Debates over robotic consciousness transition smoothly into the question of what kinds of legal rights and personal ethics are at stake in the rise of intelligent robots. In response to the movement, philosophers, lawyers, governments and theologians have all reconsidered their positions. Geraci (2010) quotes the Scottish AI researcher David Levy on the subject, who argues:
“we are in sight of the technologies that will endow robots with consciousness, making them as deserving of human-like rights as we are; robots who will be governed by ethical constraints and laws, just as we are; robots who love, and who welcome being loved, and who make love, just as we do; and robots who can reproduce. This is not fantasy—it is how the world will be, as the possibilities of Artificial Intelligence are revealed to be almost without limit”
(Geraci 2010:118)
Artificial beings other than robots are made of programs and data, exactly like a game program, a text processor or an operating system: to clone one, it is sufficient to copy its files. Thus, cloning an artificial being is a cheap and fast operation which can be performed a large number of times, possibly generating billions of clones at an acceptable cost (Pitrat 2010:57).
In many circumstances we have to make choices which are difficult to evaluate. For instance, if we want to learn to play chess, we can use several methods: we can try to play as often as possible, read many theoretical books, play against very strong players, and so on. Whatever our choice, we may wonder whether another choice would not have been better; to settle the question, the exact circumstances would have to be replicated and another route taken. According to Pitrat (2010), psychologists answer such questions by taking several people, each using a different learning method, and observing their progress. With a virtual agent, the same experiment can be run directly: at the exact moment the agent has to make a choice, a clone of that instance can be created and forced to use another method, so that the program effectively takes multiple decisions; only the better one is allowed to progress, and the other is deleted. Iterated, this creates a system that keeps improving, approaching a "perfect" agent. This is the principle of the genetic algorithm, which is vital to much MC research, and which the sketch below illustrates.
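The following is a minimal genetic-algorithm sketch of the clone-and-select idea in Python. The "agents", target behaviour and fitness function are all hypothetical; real MC research would evolve far richer structures.

import random

TARGET = [3, 1, 4, 1, 5, 9, 2, 6]   # the behaviour we want agents to learn

def fitness(agent):
    # Higher is better: the negative distance to the target behaviour.
    return -sum(abs(a - t) for a, t in zip(agent, TARGET))

def mutated_clone(agent, rate=0.3):
    # Clone the agent, randomly perturbing some of its decisions.
    return [a + random.choice([-1, 0, 1]) if random.random() < rate else a
            for a in agent]

# Start from a random population of candidate agents.
population = [[random.randint(0, 9) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    # Keep the better half of the clones, delete the worse half.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutated_clone(a) for a in survivors]

print(population[0], fitness(population[0]))   # best agent found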
If introduced into gaming, the genetic algorithm could help improve the knowledge of enemies. For example, in a single-player shooting game, the system would construct the best plan for fighting back against the player. After the player neutralises an enemy, the consciousness of the game would update the intelligence of all the other artificial players, so that in a sense they learn from their experience and remain a challenge to the user. In a massively multiplayer online role-playing game (MMORPG), where multiple users fight cooperatively against enemies, the main characters that have to be defeated would again learn from the players' attacks and adapt accordingly each time, bringing greater challenge and making the game more immersive.
Conclusion
Since Descartes and his phrase "Cogito ergo sum" ("I think, therefore I am"), the subject of consciousness, human or artificial, has been the object of lively debate among philosophers, psychologists, scientists, roboticists and many others in that sphere of knowledge. It has moved beyond religious belief and science-fiction writing to become a vital part of modern science. Disciplines like artificial intelligence, machine learning, robotics, neuroscience and machine consciousness would not exist if consciousness were not such a crucial part of understanding the mind. The notion of consciousness has tantalised theorists because of its intrinsic link to our sense of self and our perception of our existence in the world. Thanks to developments in the fields discussed in this dissertation, this seemingly mystical concept has become a highly developed area of research with the potential to uncover the very process of biological phenomenology. If machine consciousness is achieved, it will change the way we perceive existence and the way we experience games, dictate new ways of describing mental states and make numerous new experiments possible, to say nothing of requiring entirely new paradigms of philosophical, ethical and moral thought. Isaac Newton lends this investigation an appropriate closing statement: "What we know is a drop, what we don't know is an ocean."
Bibliography:
Books:
Aleksander, I. (1996) Impossible Minds: My Neurons, My Consciousness. London: Imperial College Press.
Haikonen, P. (2003) The Cognitive Approach to Conscious Machines. Exeter: Imprint Academic.
Geraci, R. (2010) Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality. New York: Oxford University Press. Available at: http://herts.eblib.com/patron/FullRecord.aspx?p=477697 [last accessed 22/01/2013]
Wright, J. Talmadge (2010) Utopic Dreams and Apocalyptic Fantasies. Plymouth: Lexington Books. Available at: http://herts.eblib.com/patron/FullRecord.aspx?p=634254 [last accessed 22/01/2013]
Nordlinger, J. and Cuddy, L. (2009) World of Warcraft and Philosophy. USA: Carus Publishing Company. Available at: http://www.herts.eblib.com/patron/FullRecord.aspx?p=547560 [last accessed 16/01/2013]
Lewis, M., Weber, R. and Bowman, N. (2008) ''They May Be Pixels, but They're MY Pixels': Developing a Metric of Character Attachment in Role-Playing Video Games'.
Meadows, M. S. (2007) I, Avatar: The Culture and Consequences of Having a Second Life. Available at: http://proquest.safaribooksonline.com/book/web-applications-and-services/9780321550231 [last accessed 15/12/2012]
Castronova, E. (2006) Synthetic Worlds: The Business and Culture of Online Games.
Pitrat, J. (2010) Artificial Beings: The Conscience of a Conscious Machine. Available at: http://herts.eblib.com/patron/FullRecord.aspx?p=477697 [last accessed 19/11/2012]
Holland, O. (ed.) (2003) 'Editorial Introduction', Journal of Consciousness Studies, 10 (4–5), special issue on machine consciousness, pp. 1–6.
Clowes, R., Torrance, S. and Chrisley, R. (2007) 'Machine Consciousness: Embodiment and Imagination', Journal of Consciousness Studies, 14 (7), pp. 1–6.
Haikonen, P. O. A. (2012) Consciousness and Robot Sentience. Available at: http://HERTS.eblib.com/patron/FullRecord.aspx?p=1069824 [last accessed 18/03/2013]
Block, N. (1997) 'On a Confusion about a Function of Consciousness', in Block, N., Flanagan, O. and Güzeldere, G. (eds.) The Nature of Consciousness: Philosophical Debates. Cambridge, MA: MIT Press.
Raiko, T., Haikonen, P. and Väyrynen, J. (2008) 'AI and Machine Consciousness'. Espoo: Multiprint Oy. Available at: http://www.stes.fi/step2008/proceedings/step2008proceedings.pdf [last accessed 15/02/2013]
Velmans, M. and Schneider, S. (eds.) (2008) The Blackwell Companion to Consciousness. Available at: http://HERTS.eblib.com/patron/FullRecord.aspx?p=351498 [last accessed 18/03/2013]
Science journals and articles:
Gaglio, S. 'Intelligent Artificial Systems'. Available at: http://www.consciousness.it/iwac2005/Material/Gaglio.pdf [last accessed 20/01/2013]
Aleksander, I. (1995) 'Artificial Neuroconsciousness: An Update'. Available at: http://web.archive.org/web/20050408042834/http://www.ee.ic.ac.uk/research/neural/publications/iwann.html [last accessed 10/01/2013]
Buttazzo, G. (2001) 'Artificial Consciousness: Utopia or Real Possibility?'. Available at: http://retis.sssup.it/~giorgio/paps/2001/ieeecm01.pdf [last accessed 05/01/2013]
EPOC Neuroheadset. Available at: http://www.emotiv.com/apps/epoc/299/ [last accessed 22/12/2012]
Wright, P. (2010) 'EmoChat: Emotional Instant Messaging with the EPOC Headset'. Available at: http://www.slideshare.net/fwrigh2/emochat-emotional-instant-messaging-with-the-epoc-headset [last accessed 22/01/2013]
Gonzalez-Sanchez, J., Chavez-Echeagaray, M. E., Atkinson, R. and Burleson, W. (2011) 'ABE: An Agent-Based Software Architecture for a Multimodal Emotion Recognition Framework', 9th Working IEEE/IFIP Conference on Software Architecture (WICSA), 20–24 June 2011, pp. 187–193. Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5959690&isnumber=5959683 [last accessed 10/12/2012]
Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R. and Curio, G. (2007) 'The Non-Invasive Berlin Brain–Computer Interface'. Available at: http://www.ncbi.nlm.nih.gov/pubmed/17475513 [last accessed 10/01/2013]
Esfahani, T. (2010) 'Using Brain–Computer Interfaces to Detect Human Satisfaction in Human–Robot Interaction'. Available at: http://www.me.ucr.edu/~etarkeshesfahan/IJHR2011.pdf [last accessed 15/12/2012]
'Robots Become Human', Brink. Available at: http://science.discovery.com/tv-shows/brink/videos/brink-robots-become-human.htm [last accessed 15/12/2012]
Appendix
Ivan Phillips – 31st October 2012
Individual tutorial in which I was helped with the selection of my proposed enquiry. The meeting helped me choose a topic which was most relevant to my course and in which I was most interested. I was advised to look into the topic in relation to games, and warned that the subject concerns a complex philosophical area, which called for a careful literature search, sharp scoping of the topic and a clear, well-structured question. I was directed to a couple of books which I found to be very closely related to my research: Edward Castronova's Synthetic Worlds, Mark Meadows's I, Avatar and Erik Davis's Techgnosis.
Mark Broughton – 16th March 2013
Feedback from "Mapping the Field". This was really helpful, as it helped me to focus my research on a specific area and strengthen my approach. It further made me aware of the need to define my theoretical approach clearly and choose a focused research question. I was also advised to be more careful about referencing. I have taken all these notes on board and found them important to achieving a better essay structure and research.