ABSTRACT:
This paper presents a method for lossless compression of multichannel electroencephalogram (EEG) signals. A multichannel wavelet transform is used to exploit the inter-correlation among the EEG channels, and Huffman coding is applied to further reduce the temporal redundancy. The compression algorithm is built on the principle of 'lossy plus residual coding', with a matrix/tensor-decomposition-based coder in the lossy layer. This approach guarantees a specifiable maximum absolute error between the original and reconstructed signals. The algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with a different sampling rate and resolution. Compression of electroencephalographic (EEG) signals is of great interest to many in the biomedical community: collecting EEG produces large amounts of data, which requires substantial memory for storage and high bandwidth for transmission, and lossless compression is essential because exact recovery of the data is required for diagnostic purposes.
Keywords: Brain Computer Interface (BCI), Electroencephalogram (EEG)
INTRODUCTION
People with severe motor impairments (such as late-stage head trauma or spinal injuries) cannot express their thoughts as healthy individuals do, because they are not capable of talking or moving. They are nevertheless conscious and capable of performing mental tasks equivalent to those of healthy individuals, using brain signals. A brain-computer interface (BCI) is a communication system between the human brain and a computer: it generates control signals from brain signals, such as sensorimotor rhythms and evoked potentials, taking EEG signals as input and producing output signals via the inverse DWT. Such a communication system helps a person with severe disabilities to express thoughts by translating intentions into actions using the BCI. An advantage of the proposed work is that no caretaker is needed: the system can be controlled by the user alone and constitutes a novel communication option for people with severe motor disabilities. In general, a non-invasive approach is followed because of its easy applicability and low procedural risk; EEG electrodes placed on the surface of the scalp capture brain signals that can then be processed. The recording of the electrical activity of the brain is called an electroencephalogram (EEG). We use motor-imagery and non-motor-imagery signals to perform certain tasks and to represent the different thoughts of the user; the user's thoughts are then expressed as speech using a speech synthesizer. People who are paralyzed or have other severe movement disorders need alternative methods for communication and control, methods that do not depend on muscle control and do not rely on the brain's normal output pathways of peripheral nerves and muscles.
METHODOLOGY
EXISTING METHOD
1. Discrete Cosine Transform (DCT) compression
The two-dimensional DFT can be computed by performing a one-dimensional DFT on the result of another one-dimensional DFT. This important property is called separability, and it also applies to other transforms derived from the Fourier transform, including the DCT. It means that the 2D DFT of a two-dimensional signal can be computed by taking a one-dimensional DFT of the rows of the input matrix, followed by a one-dimensional DFT of the columns of the result. This simple procedure for computing multidimensional separable transforms is called row-column decomposition, and any separable multidimensional transform can be computed in this way as a series of one-dimensional transforms. Row-column decomposition is, however, a rather naive approach and not the most efficient way of computing multidimensional transforms; more sophisticated algorithms exploit the properties of the transform itself to compute the result directly, without decomposition.
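As a numerical sketch of row-column decomposition (using NumPy, which is not part of the original text), the 2D DFT computed row-wise then column-wise matches the direct 2D DFT:

```python
import numpy as np

# Row-column decomposition: a 2-D DFT computed as 1-D DFTs on the rows,
# then 1-D DFTs on the columns of the intermediate result.
def dft2_row_column(x):
    rows = np.fft.fft(x, axis=1)      # 1-D DFT of each row
    return np.fft.fft(rows, axis=0)   # 1-D DFT of each column

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
# The separable computation matches the direct 2-D DFT.
assert np.allclose(dft2_row_column(x), np.fft.fft2(x))
```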
2. SPATIAL ENCODING
There are common methods of encoding naturally occurring data, such as data sampled from a sound wave: spatial and spectral encoding. Temporal encoding is somewhat different and is covered later. This module covers spatial and spectral encoding and first introduces the Fourier transform, from which the Discrete Cosine Transform can be derived.
Spatial encoding is probably the best-known and most intuitive coding method. When spatially encoding, the power (amplitude) of a sample at a particular point in time or space is recorded, i.e., over time for audio waves and over space for images. As the samples accumulate over time or area, a digital approximation of the original waveform is produced. The frequency and accuracy of the samples obviously affect the quality of the approximation.
Example: take an audio wave. It is one-dimensional and varies over time. If this wave is sampled at the standard CD rate of 44.1 thousand samples per second, with each sample an 8-bit value (range 0-255), a digital representation of the waveform is built up. Each sample explicitly records the amplitude of the wave at a point in time, one sample every 1/44,100th of a second. The resulting sequence of numbers is a digital representation of the original waveform.
To re-create the original waveform, the samples are simply played back in the correct order and at the correct speed. In the music/audio industry, this sort of encoding is referred to as Pulse Code Modulation (PCM). Obviously, when spatially encoding a waveform, the higher the sampling frequency and the greater the number of bits available to each sample value, then the higher the digitized wave quality will be.
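A minimal PCM sketch of the sampling and quantization just described (the 440 Hz tone and the mapping of [-1, 1] onto 8-bit codes are illustrative assumptions):

```python
import numpy as np

fs = 44100             # CD sampling rate, samples per second
duration = 0.01        # 10 ms of audio
t = np.arange(int(fs * duration)) / fs
wave = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone, amplitude in [-1, 1]

# 8-bit quantization: map the range [-1, 1] onto integer codes 0..255.
codes = np.round((wave + 1.0) * 127.5).astype(np.uint8)

# Playback/decoding is the reverse mapping; the rounding error is at most
# half a quantization step.
decoded = codes / 127.5 - 1.0
assert np.max(np.abs(decoded - wave)) <= 0.5 / 127.5
```

Increasing the sampling rate or the bits per sample tightens this approximation, which is exactly the quality trade-off described above.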
3. KL TRANSFORM
Transform coding is one of the most important methods for lossy image compression. However, although the Karhunen-Loeve transform (KLT) [1], or principal component analysis (PCA), is the optimal linear dimension-reducing transform in the mean-squared (reconstruction) error sense, it can hardly be used directly in image compression because deriving the transform from the covariance matrix of the training data is slow. Given a set of n-dimensional training vectors, the KLT is obtained from the eigenvectors of their covariance matrix and can then be applied to any n-dimensional vector x. The larger the dimension of the covariance matrix, the slower the computation of the eigenvectors (and hence of the transform), and the slower the compression or encoding. To mitigate this problem, two approaches are usually adopted. The first is to replace the KLT with the Discrete Cosine Transform (DCT); although much faster, the DCT yields noticeably worse compression quality than the KLT at the same compression ratio. The second is to use parallelism. Experimental results show that the MatKLT method requires much less computation time than the full KLT at the price of a slight degradation of compressed-image quality, so it is potentially a faster method for data reduction.
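The KLT construction described above can be sketched as follows (the correlated 2-D training data are synthetic, an assumption for illustration): the basis is the set of covariance eigenvectors, and the transformed coefficients come out decorrelated with energy compacted into the leading component.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic correlated 2-D training data (e.g. adjacent-sample pairs).
z = rng.standard_normal((1000, 2)) @ np.array([[2.0, 0.0], [1.5, 0.5]])

# KLT basis: eigenvectors of the covariance matrix of the training data,
# ordered by decreasing eigenvalue (variance).
cov = np.cov(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalues
basis = eigvecs[:, np.argsort(eigvals)[::-1]]

# Transform coefficients are decorrelated: their covariance is diagonal,
# and energy is compacted into the first coefficient.
y = (z - z.mean(axis=0)) @ basis
cov_y = np.cov(y, rowvar=False)
assert abs(cov_y[0, 1]) < 1e-9
assert cov_y[0, 0] > cov_y[1, 1]
```

The eigendecomposition cost is what the text identifies as the bottleneck: it grows quickly with the dimension of the covariance matrix.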
DRAWBACKS IN EXISTING METHOD:
1. DCT-based coding is applied to one signal at a time and is inherently lossy; since EEG has multiple correlated channels, it cannot exploit the inter-channel redundancy and is not well suited here.
2. SPIHT compression needs more time to execute.
3. At the receiver side, the data cannot be recovered exactly if only spatial methods are used.
PROPOSED METHOD
1. Multiwavelet based EEG compression:
Lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented, based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to utilize those correlations effectively: the multichannel EEG is represented either as an image (matrix) or as volumetric data (tensor), and a wavelet transform is then applied to these representations. The compression algorithms follow the principle of 'lossy plus residual coding', consisting of a wavelet-based lossy coding layer followed by arithmetic coding of the residual. This approach guarantees a specifiable maximum error between the original and reconstructed signals. EEG signals are typically analyzed in two ways: visual inspection by human experts and automatic analysis using signal-processing algorithms. Consequently, any compression technique is suitable as long as the reconstructed EEG signals do not introduce errors into such analysis. Near-lossless compression techniques are of particular use, as they limit the distortion to a user-defined maximum. We propose compression schemes for multichannel EEG that alleviate some of those shortcomings. In particular, our algorithms have the following properties:
- They exploit the inter- and intra-channel correlations simultaneously, by arranging the multichannel EEG as a matrix (image) or tensor (volume).
- They support progressive transmission.
- They guarantee a maximum amount of distortion in the L∞ (maximum absolute error) sense, by means of a two-stage coding procedure.
We represent the EEG as an image (matrix) or volume (tensor); such representations help to exploit both the spatial and temporal correlations. We follow a 'two-stage' coding philosophy: the EEG data is first coded at an optimal rate using a wavelet-based scheme, and the residuals are then further encoded by an entropy-coding scheme (specifically, modified arithmetic coding). We achieve attractive compression ratios for low error values.
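The two-stage idea can be sketched in miniature. This is not the paper's wavelet coder: the lossy layer here is just coarse quantization (a stand-in assumption), but the residual stage shows how quantizing the residual with step 2·max_err bounds the reconstruction error by max_err, which is the near-lossless guarantee described above.

```python
import numpy as np

def two_stage_code(signal, max_err):
    """'Lossy plus residual' sketch: a coarse lossy layer plus a quantized
    residual, guaranteeing |original - reconstructed| <= max_err."""
    # Lossy layer: heavily quantized signal (the paper uses a
    # wavelet-based coder at this stage).
    lossy = np.round(signal / 8.0) * 8.0
    # Residual layer: quantizing with step 2*max_err bounds the final
    # reconstruction error by max_err (the residual codes would then be
    # entropy-coded, e.g. with arithmetic coding).
    step = 2.0 * max_err
    residual_q = np.round((signal - lossy) / step).astype(int)
    return lossy, residual_q, step

rng = np.random.default_rng(0)
eeg = rng.normal(scale=50.0, size=1024)     # synthetic single-channel "EEG"
lossy, residual_q, step = two_stage_code(eeg, max_err=0.5)
reconstructed = lossy + residual_q * step
assert np.max(np.abs(eeg - reconstructed)) <= 0.5
```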
2. HUFFMAN ENCODING:
EEG (electroencephalogram) signals record the activity of the brain. In telemedicine, using transmission media and compression techniques to deliver biosignals such as ECG and EEG for long-distance medical services has become both a reality and a challenge. For urgent treatment, ordinary healthcare, or patient-monitoring systems, it is necessary to compress these data before transmission for efficient use of bandwidth; transmitting large amounts of data in compressed form through a limited-bandwidth channel remains difficult. The reconstructed EEG signal is evaluated and assessed using parameters such as PRD, SNR, cross-correlation, and power spectral density: with a low PRD value, the significant information in the reconstructed signal is preserved, and such a scheme produces better compression results than a purely lossless one. Sampling-based algorithms can thus provide the data rates that EEG requires; different transform techniques are used to improve the compression ratio; noise cancellation and improvement of performance parameters such as the signal-to-noise ratio (SNR) can be achieved by filtering and amplification; and coding can be used to encode the data in real time. Together, these techniques and methods can produce the best compression ratio.
Compression of EEG data remains an important issue despite the vast increase in storage capacity and transmission speed of communication pathways. This is due to the diagnostic character of the data, which sets a common goal for all compression approaches: efficient EEG compression and transmission with unaffected diagnostic characteristics. In this paper, both commonly used and recently developed algorithms are discussed. Detection algorithms have improved over the years, but their performance is still not perfect. To address the challenge of real-time compression and transmission of EEG data, there is still scope to improve parameters such as compression ratio and bandwidth.
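The evaluation parameters named above, PRD and SNR, have standard definitions that can be sketched directly (the synthetic signal and noise level are illustrative assumptions):

```python
import numpy as np

def prd(original, reconstructed):
    # Percent root-mean-square difference between original and reconstruction.
    return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                           / np.sum(original ** 2))

def snr_db(original, reconstructed):
    # Signal-to-noise ratio of the reconstruction, in decibels.
    noise = original - reconstructed
    return 10.0 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=2048)                        # stand-in EEG segment
x_hat = x + rng.normal(scale=0.01, size=2048)    # a near-lossless reconstruction
assert prd(x, x_hat) < 5.0       # low PRD: significant information preserved
assert snr_db(x, x_hat) > 30.0   # high SNR: small reconstruction noise
```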
ADVANTAGES OF PROPOSED SYSTEM
1. More of the low-frequency data, where most of the EEG energy lies, may be retained.
2. Huffman coding is a lossless entropy-coding technique, yet it can still produce a good compression ratio.
Fig. 1.2: Block diagram of EEG signal compression.
1. EEG signal Transforming and Filtering:
The input to the system is an EEG signal; the EEG waveforms are prepared here for the transformation and filtering operations that follow.
FILTERING
The input arrives as raw, full-band signals, so filtering is applied to remove unwanted components before the signal is recomposed for further processing.
MULTICHANNEL WAVELET TRANSFORMATION
Here a wavelet transformation is applied to the input to produce the coefficients for the coding stage. The multichannel wavelet transform is used to exploit the inter-correlation among the EEG channels. The transform is approximated using the lifting scheme, which yields a reversible realization under finite-precision processing. Huffman coding is applied to further reduce the temporal redundancy. The compression algorithm is built on the principle of 'lossy plus residual coding'.
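The reversibility of a lifting realization under finite precision can be sketched with an integer-to-integer Haar transform (a minimal sketch in Python; the original work used MATLAB, and the predict/update steps here follow the common S-transform form, an assumption rather than the paper's exact filters):

```python
import numpy as np

def lifting_haar_forward(x):
    """Integer-to-integer Haar transform via lifting: exactly reversible
    under finite precision, as required for the lossless layer.
    x must have even length."""
    a, b = x[0::2].astype(int), x[1::2].astype(int)
    d = b - a             # predict step: detail coefficients
    s = a + (d >> 1)      # update step: integer averages (floor division)
    return s, d

def lifting_haar_inverse(s, d):
    a = s - (d >> 1)      # undo the update step
    b = d + a             # undo the predict step
    out = np.empty(2 * len(s), dtype=int)
    out[0::2], out[1::2] = a, b
    return out

x = np.array([13, 15, 7, 2, 100, 101, -5, -4])
s, d = lifting_haar_forward(x)
assert np.array_equal(lifting_haar_inverse(s, d), x)   # perfect reconstruction
```

Because every step uses the same integer floor operation forward and backward, no precision is lost, which is what makes the lossless layer possible.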
2. EEG signal encoding by Huffman encoding and compressing
After the transformation, Huffman coding is used for the encoding stage, and the coded output forms the compressed bit stream. This produces better compression results than leaving the coefficients uncoded. The reconstructed EEG signal is evaluated and assessed using parameters such as PRD, SNR, cross-correlation, and power spectral density.
Compression of electroencephalographic (EEG) signals with Huffman coding is of great interest to many in the biomedical community. The motivation for this research is the large amount of low-amplitude data involved in collecting EEG information, which requires substantial storage space and high bandwidth for transmission. Lossless compression of EEG provides the exact recovery of the data needed for diagnostic and analysis purposes. Efficient compression and transmission of the EEG signal is a difficult task because of the unpredictability of the signal and its very low amplitude, so suitable data-compression and transmission methods for EEG are of great value.
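A compact Huffman-encoding sketch over a skewed symbol stream (the residual values below are hypothetical stand-ins for quantized coefficients): a frequency-ordered heap merges the two least frequent subtrees until one tree remains, and codewords are read off the root-to-leaf paths.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a Huffman code table for the symbols in `data`."""
    freq = Counter(data)
    # Heap entries are (frequency, tiebreak, tree); a tree is either a
    # symbol (leaf) or a pair of subtrees (internal node).
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Quantized residuals are highly skewed toward zero, which is exactly
# what makes entropy coding effective.
residuals = [0, 0, 0, 0, 1, 0, -1, 0, 0, 2, 0, -1, 0, 0, 0, 1]
codes = huffman_code(residuals)
# The most frequent symbol receives the shortest codeword.
assert len(codes[0]) <= min(len(c) for c in codes.values())
bits = "".join(codes[s] for s in residuals)
assert len(bits) < 2 * len(residuals)   # beats a 2-bit fixed-length code
```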
Fig. 1.4: Huffman coding flow.
3. EEG signal Decoding by Huffman Decoding and Decompressing
For the inverse transformation, the Huffman-decoding process is performed first, and the inverse transformation is then applied to the decoded output. The decoder simulates prediction processes identical to those of the encoder.
Decompressing:
The process of decompression is simply a matter of translating the stream of prefix codes back into individual byte values, by traversing the Huffman tree node by node as each bit is read from the input stream.
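That tree traversal can be sketched as follows; the three-symbol tree is a hypothetical example for illustration, not one taken from the paper:

```python
# Decoding walks the Huffman tree one bit at a time: 0 -> left child,
# 1 -> right child; reaching a leaf emits a symbol and restarts at the root.
# Hypothetical code: 'a' -> 0, 'b' -> 10, 'c' -> 11.
tree = ('a', ('b', 'c'))   # internal nodes are pairs, leaves are symbols

def huffman_decode(bits, tree):
    out, node = [], tree
    for bit in bits:
        node = node[0] if bit == '0' else node[1]
        if not isinstance(node, tuple):   # reached a leaf
            out.append(node)
            node = tree                   # restart at the root
    return out

assert huffman_decode("010110", tree) == ['a', 'b', 'c', 'a']
```

Because the code is prefix-free, no lookahead is needed: the first leaf reached is always the correct symbol.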
4. Inverse wavelet transformation by decoded output:
Here the inverse of the transformation is performed: the received (possibly noisy) coefficients are processed by the inverse wavelet transformation, which also serves a denoising purpose. If c = Ω(p) denotes the wavelet transform of the image p, the decoder can obtain a reconstructed image by taking the inverse wavelet transform of the received coefficients ĉ:

p̂ = Ω⁻¹(ĉ)    (1.1)

A scheme in which the coefficient information is transmitted in stages, with the reconstruction refined after each stage, is called 'progressive transmission'.
A major objective in a progressive transmission scheme is to select the most important information, i.e., that which yields the largest distortion reduction, to be transmitted first. For this selection, we use the mean squared-error (MSE) distortion measure

D_MSE(p − p̂) = ||p − p̂||² / N = (1/N) Σ_{i,j} (p_{i,j} − p̂_{i,j})²    (1.2)

where N is the number of image pixels, p_{i,j} is the original pixel value, and p̂_{i,j} is the reconstructed pixel value. Furthermore, we use the fact that the Euclidean norm is invariant to the unitary transformation Ω, i.e.,

D_MSE(p − p̂) = D_MSE(c − ĉ) = (1/N) Σ_{i,j} (c_{i,j} − ĉ_{i,j})²    (1.3)

From this equation, it is clear that if the exact value of the transform coefficient c_{i,j} is sent to the decoder, then the MSE decreases by |c_{i,j}|²/N. This means that the coefficients with larger magnitude should be transmitted first, because they carry a larger content of information; this is the progressive transmission method. Extending this approach, we can see that the value of |c_{i,j}| can also be ranked according to its binary representation, and the most significant bits should be transmitted first. This idea is used, for example, in the bit-plane method for progressive transmission. In the following, we present a progressive transmission scheme that incorporates these two concepts: ordering the coefficients by magnitude and transmitting the most significant bits first. To simplify the exposition, we first assume that the ordering information is explicitly transmitted to the decoder; later, we show a much more efficient method to code the ordering information.
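The magnitude-ordering argument can be checked numerically. This sketch uses synthetic Gaussian coefficients (an illustrative assumption) and transmits them in decreasing magnitude order; each transmitted coefficient reduces the MSE by |c|²/N, so the MSE falls monotonically and the per-step reductions are nonincreasing:

```python
import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(scale=100.0, size=64)   # stand-in transform coefficients
N = len(c)

# Transmit coefficients in decreasing magnitude order; after each one
# the MSE drops by |c_k|^2 / N.
order = np.argsort(-np.abs(c))
received = np.zeros(N)
mses = []
for k in order:
    received[k] = c[k]
    mses.append(np.mean((c - received) ** 2))

# The MSE decreases at every step...
assert all(m2 <= m1 for m1, m2 in zip(mses, mses[1:]))
# ...and the earliest steps give the largest distortion reductions.
drops = [m1 - m2 for m1, m2 in zip([np.mean(c ** 2)] + mses, mses)]
assert all(d2 <= d1 + 1e-9 for d1, d2 in zip(drops, drops[1:]))
```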
5. Comparison of the signals using the Haar filter
Haar transform
The lifting scheme is a useful way of looking at the discrete wavelet transform. It is easy to understand, since it performs all operations in the time domain rather than in the frequency domain, and it has other advantages as well. This section illustrates the lifting approach using the Haar transform.
The Haar transform is based on calculating averages (approximation coefficients) and differences (detail coefficients). Given two adjacent pixels a and b, the principle is to calculate the average s = (a + b)/2 and the difference d = (a − b)/2. If a and b are similar, s will be similar to both and d will be small, i.e., it will require few bits to represent. This transform is reversible, since a = s + d and b = s − d, and it can be written using matrix notation as

(s, d) = (a, b)A,   A = [1/2 1/2; 1/2 −1/2]

Consider a row of 2^n pixel values. There are 2^(n−1) pairs of adjacent pixels; each pair is transformed into an average s and a difference d. The result is a set of 2^(n−1) averages and a set of 2^(n−1) differences.
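One level of this pairwise average/difference transform on a sample row (the pixel values are an illustrative assumption) can be sketched and checked for exact reversibility:

```python
import numpy as np

# One level of the Haar transform on a row of 2^n pixel values: each
# adjacent pair (a, b) becomes an average s = (a + b)/2 and a difference
# d = (a - b)/2, recovered as a = s + d and b = s - d.
row = np.array([96.0, 98.0, 100.0, 104.0, 50.0, 52.0, 200.0, 198.0])
a, b = row[0::2], row[1::2]
s = (a + b) / 2    # approximation coefficients
d = (a - b) / 2    # detail coefficients: small where neighbors are similar

restored = np.empty_like(row)
restored[0::2], restored[1::2] = s + d, s - d
assert np.allclose(restored, row)              # the transform is reversible
assert np.all(np.abs(d) < np.abs(s))           # details need few bits here
```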
SPEECH SYNTHESIZING
The textual data corresponding to the trained signal is given to the speech synthesizer, the component that produces artificial speech from the given text. This allows Java applications to incorporate speech technology into the user interface; the API is cross-platform and supports command-and-control and dictation systems. Speech synthesis is the artificial production of human speech. A computer system used for this purpose is called a speech synthesizer and can be implemented in software or hardware. A text-to-speech (TTS) system converts normal-language text into speech; other systems render symbolic linguistic representations, such as phonetic transcriptions, into speech.
CONCLUSION AND FUTURE ENHANCEMENTS
The project was developed using MATLAB for feature extraction and classification, while the speech synthesizer was developed using the Java Speech API. The raw EEG signals used as datasets had previously been recorded from subjects using an Emotiv headset and stored on a computer. We have proposed an implementation of three tasks. Preprocessing is performed before feature extraction to increase the signal-to-noise ratio (SNR); this step decomposes or denoises the captured signal in order to remove noise and enhance the EEG. The length of the data is decreased by reducing the number of values in the signal while retaining the original waveform. Thus, best-basis selection and optimization of the mother wavelet through parameterization improve signal-compression performance compared with a random selection of the mother wavelet. The method provides an adaptive approach to optimal signal representation for the purpose of compression and can thus be applied broadly. As future enhancements, different algorithms can be used to implement this innovative technique for compressing EEG signals, extracting features, translating them to text, and synthesizing speech.
REFERENCES:
[1] S. K. Mitra and S. N. Sarbhadhikari, "Iterative function system and genetic algorithm based EEG compression," Med. Eng. Phys., vol. 19, no. 7, pp. 605-617, 1997.
[2] H. Gürkan, U. Guz, and B. S. Yarman, "EEG signal compression based on classified signature and envelope vector sets," Int. J. Circuit Theory Appl., vol. 37, no. 2, pp. 351-363, 2009.
[3] K.-K. Poh and P. Marziliano, "Compressive sampling of EEG signals with finite rate of innovation," EURASIP J. Adv. Signal Process., vol. 2010, pp. 1-12, 2010.
[4] Y. Wongsawat, S. Oraintara, T. Tanaka, and K. Rao, "Lossless multi-channel EEG compression," in Proc. IEEE Int. Symp. Circuits and Systems, Sep. 2006, pp. 1611-1614.
[5] D. Gopikrishna and A. Makur, "A high performance scheme for EEG compression using a multichannel model," in High Performance Computing, ser. LNCS, vol. 2552. Springer, Berlin/Heidelberg, 2002, pp. 443-451.
[6] I. Guler, M. K. Kiymik, M. Akin, and A. Alkan, "AR spectral analysis of EEG signals by using maximum likelihood estimation," Computers in Biology and Medicine, vol. 31, pp. 441-450, 2001.
[7] M. Zoubir and B. Boashash, "Seizure detection of newborn EEG using a model approach," IEEE Transactions on Biomedical Engineering, vol. 45, pp. 673-685, 1998.
[8] H. Adeli, Z. Zhou, and N. Dadmehr, "Analysis of EEG records in an epileptic patient using wavelet transform," J. Neurosci. Methods, vol. 123, no. 1, pp. 69-87, 2003.
[9] American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, Third Edition, Revised, American Psychiatric Association, Washington, DC, 1987.
Essay: SPEECH SYNTHESIZER AND FEATURE EXTRACTION USING LOSSLESS MULTIWAVELET EEG COMPRESSION
Published: 25 November 2015; Last Modified: 15 October 2024.