is at the back of the brain. From V1, signals diverge to subsequent levels of visual
cortex, where higher-level processing takes place. In a blind individual, parts of the
visual pathway may not function, and visual signals therefore fail to reach the visual cortex. A
successful prosthesis would bypass these inoperative sections in order to deliver signals
to V1.
The Australian Research Council funded a new collaborative research initiative in
2009 to develop a functional visual prosthesis. One of the two proposals accepted for
this initiative was by a Monash University led team of researchers, now known as the
Monash Vision Group (MVG) [16]. Established in 2010, the MVG aims to develop a
visual prosthesis (Monash Bionic Eye) centred on a cortical implant, making use of ap-
proximately 600 electrodes.
As research grows in this new area of bionics, there is a great need for simulation
or visualisation of the possible results of such an implant. Bionic eye simulators serve
as good platforms for researchers to investigate the effectiveness of implemented al-
gorithms, tune parameters, and realise the importance of certain parameters prior to
actual clinical trials. The simulators would be used most in psychophysical trials:
trials in which normally sighted individuals attempt to complete tasks with the limited
vision provided by a simulator. However, the simulators would also be useful to the
general public for educational purposes and for managing the expectations of the families
and friends of potential patients. Input to the system is in the form of an image or image
stream. This image data goes through processing that transforms it into a representation
that attempts to mimic the elicitation of phosphenes through electrode stimulation. The
processed image data is then stored and/or displayed on a screen for viewing by the
user.
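As a rough illustration of this transform (a minimal sketch, not MVG's actual algorithm), the processing stage can be approximated by sampling the input image at a coarse grid of electrode positions and re-rendering each sample as a Gaussian blob standing in for a phosphene. The grid size and blob width below are illustrative assumptions:

```python
import numpy as np

def simulate_phosphenes(image, grid=24, sigma=4.0):
    """Reduce a grayscale image to a coarse grid of brightness samples and
    re-render each sample as a Gaussian blob (a stand-in phosphene).
    grid=24 yields 576 phosphenes, close to the ~600 electrodes cited;
    sigma controls the apparent phosphene size in pixels."""
    h, w = image.shape
    out = np.zeros_like(image, dtype=float)
    ys = np.linspace(0, h - 1, grid).astype(int)   # sample row positions
    xs = np.linspace(0, w - 1, grid).astype(int)   # sample column positions
    yy, xx = np.mgrid[0:h, 0:w]
    for cy in ys:
        for cx in xs:
            level = image[cy, cx] / 255.0          # sampled brightness, 0..1
            blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
            out = np.maximum(out, level * blob)    # brightest blob wins per pixel
    return (out * 255).astype(np.uint8)
```

A real simulator would replace the naive per-blob loop with a single rendering pass, but the input-sample-render structure is the same.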
Many visual prosthesis simulators have already been developed and some of the
more recent work is found in [6,12,22,23,26]. Nevertheless, there are some significant
limitations in their implementations. Most of these simulators perform their image
processing in software on a computer using image-processing libraries, and so their use
is often confined to the vicinity of a stationary computer. Depending on the
complexity of processing and the available processing power of the equipment in use,
these systems may sometimes suffer from latency and frame rate issues. In the case of
simulators for cortical visual prostheses, visuotopic mapping (the mapping from electrode
placement on the visual cortex to the elicitation of phosphenes in the visual field) has
often been overlooked or treated with overly simplified models.
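To make the mapping concrete: one widely used approximation of human V1 retinotopy is Schwartz's complex-logarithm (monopole) model, w = k·ln(z + a), where z is a visual-field location (in degrees, as a complex number) and w the corresponding cortical position (in mm). Inverting it predicts where a phosphene should appear for an electrode at a given cortical site. The function and parameter values below are illustrative estimates, not MVG's fitted model:

```python
import numpy as np

def cortex_to_visual_field(w, k=15.0, a=0.7):
    """Invert Schwartz's monopole model w = k*ln(z + a).
    w: cortical position in mm (complex); returns the visual-field
    location z in degrees of eccentricity (complex).
    k and a are illustrative human-V1 estimates, not fitted values."""
    return np.exp(w / k) - a

# Electrodes spaced evenly along a 10 mm strip of cortex map to
# exponentially growing eccentricities in the visual field.
electrodes = np.linspace(0, 10, 5) + 0j   # cortical positions in mm
eccentricities = cortex_to_visual_field(electrodes).real
```

The exponential inverse is what makes a uniform electrode grid on cortex correspond to a non-uniform phosphene layout in the visual field, which is precisely the effect simplified simulators miss.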
Our system aims to address these shortcomings. Compared with other cortical FPGA-based
systems [12,22], the HatPack is highly mobile and has already been used to conduct
untethered preliminary psychophysics testing. The system is implemented on a Field
Programmable Gate Array (FPGA). FPGAs are integrated circuits that provide very dense
arrays of electronically reconfigurable logic gates. FPGA systems offer the advantages
of low latency, highly parallel implementation, and the ability to integrate with large
numbers of external devices through a high availability of peripheral interface pins.
Figure 1 shows the main components of our simulator
system. A CMOS camera captures a stream of image data, which is then processed on
an FPGA development board and finally displayed on a head-mounted display and op-
tionally on an external monitor as well. An infra-red remote control interface is used to
 