Miles Arnett
Section B
11/24/15

Introduction

For hundreds of years, scientists have been trying to find ways to make people's lives better, especially the lives of those who suffer from some sort of disability. To a large extent, they have succeeded. Deaf people can hear again with the help of cochlear implants, and those unfortunate enough to have lost a limb can be rehabilitated with neural prosthetics. However, despite all the amazing technological advancements of the last decade, blind people have still been left in the dark. There have been attempts to restore sight to those who lack it, but none has seen widespread success. This is why I decided to center my project on producing a device that would allow blind people to regain some independence and navigate the world on their own.

The first field that is important to my project is neurobiology, specifically the neurobiology of vision. This field has benefited tremendously from recent technological advances, with important new discoveries being made every year. As such, there is a great deal of recent literature on the inner workings of the central nervous system, as well as on how those functions relate to vision. In order to build a product that simulates vision, it is important to first understand how natural vision works.

The other area of major importance to my project is the technology that will compose the final product. The most important component of the current model is an Arduino Uno, a microcontroller board that can convert various kinds of input into easily understandable output. Understanding this technology means being familiar with both its hardware and its software, as the two are equally important to the device's functionality.

Literature Review

Neurobiology

An understanding of neurobiology is necessary for any project involving the central nervous system, which is one of the most complex systems within the human body.
Scientists have been struggling to understand it for decades. However, recent advancements in technology have allowed researchers to get a much better grasp on how this system works, opening the door for new types of studies.

The nervous system is made up of specialized cells called neurons, which are the basic working units of the brain. The properties of neurons, in both structure and function, are what give the brain its unique abilities. The basic structure of a mammalian neuron is a cell body, which contains the nucleus and cytoplasm; an axon, which extends from the cell body; and dendrites, which extend from the cell body in the opposite direction. Mammalian brains contain between 100 million and 100 billion of these cells, with the exact number varying from species to species. One of the most important features of neurons is their ability to communicate with each other. Axons end at nerve terminals, which send messages to the dendrites in contact with them. Dendrites are usually covered in these contact points, which are called synapses (Society for Neuroscience, 2012).

Figure 1: A diagram of two connected neurons, showing the most important features of the cells.

Neurons communicate through the transmission of electrical impulses, which are sent along the cell's axon. These impulses are produced by the opening and closing of ion channels: water-filled, selectively permeable molecular tunnels that traverse the cell membrane and allow small molecules and electrically charged atoms known as ions to enter or leave the cell. This movement generates an electrical current, which creates small voltage changes across the neuron's surface (Society for Neuroscience, 2012). In order for a neuron to send electrical impulses, there must be a difference between the charges on the inside and outside of the cell.
Nerve impulses are produced by sudden reversals in the electrical potential of a neuron's membrane, which occur as the cell's interior switches from a negative charge to a positive one. This change is referred to as an action potential, and it can move along the axon's membrane at speeds up to several hundred miles per hour. This rapid pace allows a single neuron to generate and release impulses many times every second (Society for Neuroscience, 2012).

The next step of this process occurs when the voltage change reaches the end of an axon, prompting the release of neurotransmitters. These chemicals, which act as the brain's messengers, travel from their point of release at the nerve terminals across the synapse. When they reach another cell, which is generally a neuron but could also be a gland or muscle cell, they bind to receptors on its surface. Neurotransmitters fit into receptors much as a key fits into a lock: each receptor has a distinct shape that recognizes only a particular chemical messenger. This structure allows the receptor to function as an on-off switch, altering its cell's membrane potential when the neurotransmitter is in place. The result is a reaction particular to the cell's function, which could be anything from the contraction of a muscle to the generation of another action potential (Society for Neuroscience, 2012).

To facilitate the travel of these electrical signals, which sometimes have to cover as much as three feet along a single neuron, many axons are wrapped by specialized cells called glia. These glia, known as oligodendrocytes or Schwann cells depending on their location in the nervous system, form a layered myelin sheath (Society for Neuroscience, 2012). While glia are important throughout the nervous system, they serve far more functions in the brain than in the peripheral nervous system.
Though the specific ratio of glia to neurons varies in different parts of the brain (e.g., the 3:2 ratio found in the gray matter of the cerebral cortex), it is estimated that there is approximately one glial cell for every neuron in the average human brain. Glia serve many important functions in the nervous system, acting as partners to neurons in order to optimize overall brain function. More specifically, one role that glial cells play is in regulating the repair of neurons and neural pathways after injury. Glia are also vital for processes such as synaptogenesis and synaptic plasticity, as well as for ensuring that the nervous system develops normally. A specific type of star-shaped glial cell, known as an astrocyte, is even capable of communicating directly with neurons and modifying the signals those neurons send and receive. This means that glial cells can affect not only the signaling of a given synapse, but the processing of information as well. Researchers are currently engaged in uncovering still more roles for glia in brain function (Society for Neuroscience, 2012).

The Neuroscience of Vision

As a result of extensive research, neuroscientists likely know more about vision than about any other sensory system. This research has mainly been conducted on animals: the majority of information on how light is converted into electrical signals, a process known as visual transduction, comes from studies performed on Drosophila (fruit flies), while visual processing has mostly been studied in monkeys and cats (Society for Neuroscience, 2012).

The cornea and the lens combine to form a clear image of the visual world. Light passes through the cornea first, which does the majority of the focusing before the light reaches the lens, which finishes the job. The final image is produced on the retina, which is essentially a sheet of photoreceptors.
In contrast to the lens and cornea, the retina is actually part of the central nervous system, located at the back of the eye (Society for Neuroscience, 2012).

Figure 2: A diagram of the composition of the inner eye.

In order to gather visual information, photoreceptors absorb light and then send electrical signals to other retinal neurons. There, the information is processed and integrated before being sent on to other parts of the brain via the optic nerve. It is the brain that ultimately processes the image from the retina and allows us to see (Society for Neuroscience, 2012).

There are two types of photoreceptors in the retina, known as rods and cones, both of which are secondary sense cells. Rods are far more numerous, with each human eye containing approximately 120 million of them. The photopigment (the molecule that absorbs photons) in rods is called rhodopsin, and its peak absorption is around 500 nm. These properties make rods the primary photoreceptors when the eye is in dim or nearly absent light, a mode known as scotopic vision (Society for Neuroscience, 2012).

Cones, on the other hand, are most useful in daytime conditions (photopic vision). The human eye contains three types of cones, which work in combination to transmit information about all visible colors. Each cone type is sensitive to a different range of wavelengths, corresponding roughly to red, green, and blue light. The three types are concentrated in different parts of the eye. For example, the fovea, the inner section of the human retina where most light is focused and the image of primary interest is projected, contains only red and green cones (Iaizzo, 2013). The macula, the area around the fovea, is critical for reading and driving, which makes the death of photoreceptors in that area a serious problem.
In fact, macular degeneration is one of the leading causes of blindness among the elderly in developed countries (Society for Neuroscience, 2012).

Because of the clear differences between rods and cones in both function and number, the two types of photoreceptors are not evenly distributed across the eye. For example, the aforementioned fovea has a high density of cones and a notable lack of rods, a discrepancy that makes the fovea essential for daytime vision. This high cone density, coupled with the fact that each cone's output converges onto its own associated ganglion cell, allows for much greater spatial resolution (Iaizzo, 2013).

Vision Beyond the Eye

Visual acuity, however, depends on factors beyond the eye itself. Without simultaneous contrasts in the scene being observed, spatial resolution decreases sharply. Certain elements of vision, such as color perception, also depend on processing within the central nervous system, not just on the stimulus and the receptors (Iaizzo, 2013). While scientists have quite a bit of information on how visual information is encoded in the retina, they know relatively little about the lateral geniculate nucleus (LGN), an intermediate stage between the retina and the visual cortex. They also have only a small amount of data on the visual cortex itself, which means that the best knowledge we have to date on how sensory information is analyzed and processed in the brain comes from studies of the retina (Iaizzo, 2013).

The primary visual cortex is a sheet of tissue less than one-tenth of an inch thick located at the back of the brain, in the occipital lobe. Visual information from the retina is relayed there through the LGN, which is part of the thalamus. The primary visual cortex is arranged in several layers, each of which performs a different function. The middle layer receives messages from the LGN and shows responses similar to those seen in the retina.
Cells above and below this layer, however, prefer stimuli with edges at a particular orientation, or stimuli in a bar shape. Further studies have indicated that preferences in angle or direction of motion vary even between individual cells (Iaizzo, 2013).

Figure 3: The path that visual information follows to get from the eye to the brain.

Recent physiological and anatomical studies performed on monkeys have helped increase our understanding of visual processing mechanisms. These studies suggest that there are at least three separate systems: one that mainly processes color, one that mainly processes shape, and a third that mainly processes movement, location, and spatial organization. These findings are corroborated by psychological studies performed on humans, which show that perception of shading, perspective, movement, the relative movement and size of objects, depth, and gradations in texture all depend primarily on contrasts in light intensity rather than on color. The brain is able to group parts of an image together, as well as to separate images from each other and from their backgrounds; perception thus requires the various elements of a scene to be organized so that related ones are grouped together. In order to merge all of these systems into a vivid image, the brain extracts relevant information at each stage and uses past experience to decide on the correct neuronal firing pattern (Iaizzo, 2013).

Brain-Computer Interface

The purpose of brain-computer interfacing, which remains a relatively new technology, is to aid people who have been disabled by neuromuscular disorders and to enhance function in healthy individuals. When Grey Walter used a scalp-recorded electroencephalogram (EEG) to control a slide projector in 1964, it marked the first true demonstration of brain-computer interface (BCI) technology.
At around the same time, Eberhard Fetz taught monkeys to earn food rewards by using changes in the firing rate of a single cortical neuron to control a meter needle. Later, in the 1970s, Belgian scientist Jacques Vidal used the scalp-recorded visual evoked potential (VEP) over the visual cortex to build a system that could determine a person's visual fixation point, that is, where their eyes were looking. This was used to determine the direction in which the person wanted to move a computer cursor. Vidal was also the first person to use the term "brain-computer interface" to describe his creation. After this advancement, however, BCI research slowed, with a new study appearing only once every few years, likely because researchers of the time lacked the complex imaging and computer systems available today. One study from this quiet period was the research of Elbert et al., who showed that people could learn to use slow cortical potentials (SCPs) in scalp-recorded EEG activity to control the vertical position of an image on a TV screen. Farwell and Donchin likewise showed that people could learn to spell words on a computer screen using scalp-recorded P300 event-related potentials (ERPs). Some other research took place, but interest in BCIs remained generally low during this period (He et al., 2013).

Since the mid-1990s, however, the scope and pace of BCI research have been increasing at a tremendous rate. In the past 20 years, scientists have explored BCIs in relation to a wide variety of fields, ranging from applied neuroscience to materials engineering. Many studies and books have been published detailing the function and importance of BCI research and development, and interest in the field has begun to spread to the general public as well (He et al., 2013).
The main goal of recent research into BCI systems has been the development of potent new assistive technology for those whose lives have been severely impacted by conditions such as spinal cord injury or multiple sclerosis. Increased interest in the field has been well supported by growing societal awareness of the needs of people with neuromuscular disabilities, as well as by the enthusiasm of disabled people themselves for any technology that can enhance their ability to live enjoyable and productive lives. Beyond aiding the disabled, recent research has begun to explore the possibility of developing BCIs for the general population, with the aim of enhancing human performance in demanding tasks. BCIs also have the potential to expand and enhance media access, computer gaming, and artistic expression, which means they could have a major impact on people's everyday lives. Furthermore, recent studies have explored the assistive potential of BCIs beyond neuromuscular disorders, meaning that BCI technology could also benefit victims of stroke or other acute events that would otherwise severely limit their ability to lead normal lives (He et al., 2013).

Linking Technology to the Senses

The most successful attempt so far to link technology to a human sense is the cochlear implant, which is capable of largely restoring hearing to the deaf. Cochlear implants represent the strongest link between biology and technology to date, as well as one of the great success stories of modern medicine. They have developed immensely since their conception, with particular progress made in the 1980s, when systems with multiple processing channels were invented that supported noticeably higher levels of speech reception than older models. Since then, the implant has been redesigned with more electrodes and even better processing strategies, which have improved the system massively.
These improvements gave users speech recognition rates of over 80%, according to a study by the National Institutes of Health. This level of success is far greater than that achieved by any other neural prosthesis system to date (Wilson & Dorman, 2008).

The most important elements of a cochlear implant system are:
(1) a microphone that senses sound;
(2) a processor that turns sound input into electrical stimuli for the electrode array;
(3) a way for power and stimulus information to be transmitted across the skin, usually a transcutaneous link;
(4) an implanted receiver/stimulator that decodes the information sent over the radio-frequency signal and generates the appropriate stimuli;
(5) links from the receiver/stimulator to the electrode array; and
(6) the array itself.
All of these components must work in tandem for the system to function: if any one part breaks down, the entire system breaks down as well (Wilson & Dorman, 2008).

It is notable that cochlear implant electrodes can only be inserted so far into the body, due to limits created by other parts of the inner ear. The furthest that any electrode array has ever been inserted is just 30 mm, while typical insertions fall somewhere between 18 mm and 26 mm; the total length of the human cochlea is approximately 35 mm. In certain cases, such as when a bony obstruction is in the way, only very shallow insertions are possible (Wilson & Dorman, 2008).

In order to approximate normal hearing, different electrodes in the array are responsible for stimulating different clusters of neurons. This mirrors normal hearing, in which different parts of the cochlea respond to different sound frequencies: certain electrodes fire in response to lower frequencies and others in response to higher frequencies.
The electrode arrangement used in all modern implants, known as a "monopolar coupling configuration," features an array of single electrodes that are each referenced to a remote electrode outside the cochlea. This configuration is used so ubiquitously largely because of its relatively low current and battery power requirements (Wilson & Dorman, 2008).

The reason cochlear implants are so relevant to a project involving the visual system is that hearing and sight are two very similar biological systems. A comparable system for sight might involve a camera as its input, along with a similar processor, receiver, and power link, and, of course, the necessary electrode array.

Arduino Software

Arduino processors are programmed in a language that is a variant of C/C++. Some Arduino syntax is identical to its parent language, but much of it is specific to the functions that Arduino offers, so it is helpful to be fairly well versed in both types of code.

    #include <iostream>
    using namespace std;

    int main()
    {
        cout << "Hello, World! I'm alive!\n";
        cin.get();
    }

The sample C++ code above outlines many of the fundamental elements of a program in this language. Starting from the top, #include is a "preprocessor" directive, which tells the compiler to do something before it actually creates the executable. In this case, it is asked to include code from the header iostream, which gives access to a number of functions such as cout, which is used later in the snippet. The next part, "using namespace std;", tells the compiler to use names from the standard library (std); this line is what allows cout to be written without a prefix. The semicolons found throughout the program perform the same function that they do in Java: they tell the compiler that it has reached the end of a statement (Allain, n.d.).
The next line of code contains "int main()". This line tells the compiler that a function named main exists and that it returns an integer. The curly braces that accompany this function can be thought of as meaning "begin" and "end," respectively (Allain, n.d.).

The next line contains the "cout" object (pronounced "C out"), which performs the same function as the "print" command in other programming languages: it displays the text it is given. It uses the << symbols, known as insertion operators, to indicate what it is supposed to output; the output, in this case, is whatever is typed between the quotation marks. The quotes are necessary so that the compiler does not try to interpret the message as code. The "\n" at the end is treated as a single character, and it moves the cursor to the next line (Allain, n.d.).

The last command in this piece of code is "cin.get()", another function call, which reads input and waits for the user to hit the return key. This command serves an especially important purpose in compiler environments that open a new console window to run the program: cin.get() keeps the window from closing until the user has hit enter, so the user actually has time to watch the program run before it disappears (Allain, n.d.).

At this point, the program reaches the "end" brace and returns the integer 0 (this is why main was declared to return an integer) to the operating system. This return value tells the operating system whether or not the program succeeded. Because this is the main function, a return value of zero is supplied automatically to indicate a successful run; other functions would require a return value to be written explicitly. The return value can be changed with a return statement, as can be seen at the end of the code below (Allain, n.d.).
    #include <iostream>
    using namespace std;

    int main()
    {
        cout << "HEY, you, I'm alive! Oh, and Hello World!\n";
        cin.get();

        return 1;
    }

Arduino Hardware

The Arduino board itself is a microcontroller development platform, and it forms the heart of any project that involves it. It uses sensors to gain an understanding of its environment, and then affects that environment by controlling motors, lights, and other actuators. To use an Arduino in a larger project, it is necessary to build circuits and interfaces to interact with the board, and to tell the microcontroller how to interface with the other components. An example of an Arduino can be seen below (Arduino, n.d.).

Digital I/O Pins (2-13) – These pins can be used with the digitalRead() and digitalWrite() commands, and also with analogWrite() if they are marked with the PWM symbol.

Pin 13 LED – This pin has the only actuator that is built into the board, which makes it very useful for debugging.

Power LED – This LED indicates that the board is receiving power, which is also useful for debugging.

ATmega328 Microcontroller – This is the heart of the Arduino, which allows the board to perform its various functions.

Analog In Pins – These pins can be used with the analogRead() command.

GND and 5V Pins – These pins can be used to provide +5V power and ground to circuits.

External Power Supply – This is used to power the Arduino when it is not plugged into a USB port. It accepts voltages from 7-12V.

TX and RX LEDs – These LEDs indicate a connection between the board and a computer by flickering rapidly during serial communication and sketch upload. They are also useful for debugging.

USB Plug – This can also be used to power the Arduino, as well as to upload sketches and to communicate with a running sketch via Serial commands such as println().

Reset Button – This button resets the ATmega microcontroller (Arduino, n.d.).