Image Analysis for Automatically-Driven Bionic Eye (Bioengineering in Neurological Disorders) Part 3

Circuit and system approach

Principle and objective

The proposed solution is based on Prof. Sawan's research [Coulombe (2007); Sawan (2008)]. The implementation is a visual prosthesis implanted into the human cortex. The principle of this application consists in stimulating the visual cortex by implanting a silicon micro-chip on a network of electrodes made of biocompatible materials [Kim (2010); Piedade (2005)]; each electrode injects a stimulating electrical current in order to provoke a series of luminous points (an array of pixels) in the field of vision of the sightless person [Piedade (2005)]. This system is composed of two distinct parts:

• The implant, lodged in the visual cortex, wirelessly receives dedicated data and the associated energy from the external controller. This electro-stimulator generates the electrical stimuli and oversees the changing microelectrode/biological-tissue interface,

• The battery-operated external controller includes a micro-camera which captures the image, as well as a processor and a command generator. They process the imaging data in order to: 1. select and translate the captured images,


Fig. 11. Image context and points of interest


Fig. 12. Scene exploration process

2. generate and manage the electrical stimulation process

3. oversee the implant.

The topology is based on the schematic of Fig. 13.

An analog signal captured by the camera provides information to the DSP (Digital Signal Processor) component. The image is transmitted through the FPGA, which performs the first image pre-processing step. A DMA (Direct Memory Access) controller is placed at the input of the DSP card in order to transfer the pre-processed image to the SDRAM. The DSP then performs the image processing in order to reproduce the eye behavior and part of the cortex operation. An LCD screen is added in order to debug the image processing; it will be removed in the final version. The FPGA drives two motors along two axes (horizontal, vertical) in order to reproduce the eye movements. We will now focus on the different components of this bionic eye topology.


Fig. 13. Schematic principle of the bionic eye topology.

Camera component

With the development of the mobile phone, CMOS cameras have become more compact and lower-powered, with higher resolution and faster frame rates. Since biomedical systems face similar constraints, this solution retained our attention. For example, OmniVision has created a 14-megapixel CMOS camera with a frame rate of 60 fps at 1080p, in a 9 mm × 7 mm package. In this project, we retained a 1.3-megapixel camera at a frame rate of 15 fps for two main reasons: the package is easy to implement, and the internal registers of the camera offer a large number of different outputs. The registers allow us to select many standard resolutions (SXGA, VGA, QVGA, etc.), the output format (RGB or YUV) and the frame rate (15 fps or 7.5 fps). These registers are initialized by the I2C controller of the DSP, which allows a dynamic configuration of the camera by the DSP. The camera outputs 8-bit parallel data, allowing a datastream of up to 0.3 Gb/s, with 3 control signals (horizontal, vertical and pixel clocks). For the prototype we output at VGA resolution in RGB565 at 15 fps.

In order to reproduce the eye movement, two analog servo motors (horizontal and vertical), mounted on a steel frame and controlled by the FPGA, have been used.

FPGA (Field-Programmable Gate Array) component

The FPGA realizes two processes in parallel. The first one consists in controlling the servo motors. The FPGA transforms an angle into a pulse width with a refresh rate of 50 Hz (Fig. 14). The angle is incremented or decremented by two pulse-width steps on each new-frame signal (Fig. 15). At 15 fps, a step corresponds to 2 degrees, which uses the servo motor at its maximal speed (0.15 s per 60°).


Fig. 14. Time assignment of the pulse width


Fig. 15. New frame: increment/decrement signal

The second process is the image pre-processing. It transforms a 16-bit-per-pixel image with 2 clocks per pixel into a 24-bit-per-pixel image with one clock per pixel. For this, we divide the pixel clock by two and interpolate each 5- or 6-bit color channel to an 8-bit color channel.

DSP (Digital Signal Processor) component

For a fully embedded product, we need a core that can sustain the heavy load of real-time image processing. This is why we focused our attention on a DSP solution, and precisely on a DSP with an integrated ARM core by Texas Instruments: the OpenCV library is not optimized for DSP cores (the mainline OpenCV development targets the x86 architecture), but it has been successfully ported to ARM platforms. Nevertheless, several algorithms require floating-point computation, and the DSP is the most suitable core for this thanks to its native floating-point unit (Fig. 16).

[Table: OpenCV function execution times (ms) on ARM9™, ARM Cortex™-A8 and C674x DSP]
Fig. 16. Operation execution times

Moreover, the parallelism due to the dual core speeds up the image processing (Fig. 17). Finally, we use a pipelined architecture for an efficient use of the CPU, thanks to the multiple controllers included in the DSP. The first controller used is the direct memory access (DMA) controller, which records the frame from the FPGA into a ping-pong buffer without involving the CPU. The ping-pong buffer records the second frame at a different address, so the CPU can work on the first frame while the second frame is being recorded, without two concurrent uses of the same buffer.

[Table: OpenCV function execution times (ms) on ARM Cortex™-A8 with NEON and on ARM Cortex™-A8 with C674x DSP]
Fig. 17. Dual-core operation execution times

The second controller used is the SDRAM controller, which drives two external 256 Mb SDRAMs. It manages the access priorities, the SDRAM refresh and the control signals. The third controller used is the LCD controller, which displays the frame at the end of the image processing in order to verify the result and present the product. This architecture leaves the CPU exclusively dedicated to the image processing (Fig. 18).


Fig. 18. Image processing

Electronic prototype

A prototype has been realized, as shown in Fig. 19. As introduced before, this prototype is based on: (i) a camera, (ii) an FPGA card, (iii) a DSP card and (iv) an LCD screen.

Its size is 20 × 14 × 2 cm, which is due to the use of development cards. For the FPGA and DSP cards we chose a Spectrum Digital EVM OMAP-L137, but of these two cards we only need the FPGA, the DSP, the memories and the I/O ports. Indeed, the objective is to validate the software image processing. The LCD screen on the left of Fig. 19 is added to see the resulting image; it will not be present on the final product. For the tests, we chose a Sharp LQ043T3DX02 TFT.

So, the objective for the final product is first a large size reduction, by removing the unused parts of these two cards (80%), and then the use of an integrated-circuit solution. The supporting technology will be a standard 0.35 µm CMOS technology, which provides low current leakage [Flandre (2011)] and thus reduced consumption.

Another advantage of this technology is the possibility to develop analog and digital circuits on the same wafer. In this case, it is possible to realize powerful functions with low consumption and small size.


Fig. 19. Bionic Eye prototype

Image processing and analysis

The two main steps in HVS data processing that will be mimicked are focus of attention and detection of points of interest. Focus of attention enables directing the gaze at a particular point. In this way, the image around the focusing point is very clear (central vision) and becomes more and more blurred as the distance to the focusing point increases (peripheral vision).

Detection of points of interest is the stage where a sequence of focusing points is determined in order to explore a scene.
