EEG Based Body Area Networks
Chapter 1
Introduction
1.1 Background
With today's busy work schedules, regularly visiting a doctor or a nutritionist for an update on one's health is not something most people can manage. In addition, the elderly population and the number of patients with chronic diseases have grown in recent years. The traditional health-care system cannot scale to meet this demand, as it relies on a physical, one-to-one relationship between caregiver and patient.
Moreover, the test equipment and clinical setup involved in monitoring many health signals are time-consuming, typically requiring video monitoring along with long cables to the amplifier and recording unit, which makes the process cumbersome and trying for old and sick patients. An EEG test setup at a hospital is shown in Fig. 1.1. There is therefore a need for a handy, portable device that can monitor a person's health. Such a device lets one sit comfortably at home, reduces the burden on medical staff and hospitals, and allows patients to carry out most of their activities normally while saving trips to doctors and hospitals. Once the monitored data is captured, it can be sent to a hospital for analysis.
Figure 1.1: EEG monitoring Clinical Setup
Figure 1.2: WBAN setup
With the rapid development of wireless communication, tele-monitoring of health signals using Wireless Body Area Networks (WBANs) has emerged as a promising approach to home-based e-health, in which various sensors are placed on the patient's body [2, 3], as shown in Fig. 1.2.
In the figure, the first stage, the Wearable Wireless Body Area Network (WWBAN), integrates wireless medical sensor nodes that monitor health signals such as the electroencephalogram and the electrocardiogram. The second stage is a personal server running on a cell phone or a home personal computer; it provides an interface between the sensors and the medical server, stores the data collected from the medical sensors locally, and uploads it to the medical server whenever a network or internet connection is available. The last stage, the medical server, retrieves the data over the internet and stores it in medical records. This stage may also include other servers, for example for informal caregivers and emergency services.
1.2 EEG Based Body Area Network
1.2.1 Electroencephalogram Signals
In this work, the health signal we are particularly interested in monitoring over WBANs is the electroencephalogram (EEG). EEG signals record the electrical activity generated within the brain and are captured by placing sensors at various locations on the scalp.
EEG signals provide high temporal resolution but poor spatial resolution. EEG is an important and commonly used brain-imaging technique for diagnosing various neurological disorders such as epileptic seizures. Monitoring EEG with WBANs offers several advantages in seizure detection: since seizures occur rarely, detecting them requires continuous monitoring over long durations, which makes the process resource-intensive for the hospital. Using a WBAN allows patients to monitor their EEG themselves and consult the doctor once the relevant data has been collected.
EEG signals are also widely used in non-medical applications such as Brain Computer Interfaces (BCI) [1], wherein the EEG pattern associated with a particular activity is detected to infer a person's response.
1.2.2 Challenges and Shortcomings in EEG WBANs
In a WBAN, EEG is collected by a number of sensors placed on the scalp and transmitted via a gateway node over an existing wireless channel, as shown in Fig. 1.2. The main problem in any wireless sensor network (including a WBAN) is the limited energy available at the sensor nodes, so to operate the device for long durations the energy consumption must be reduced as much as possible. Simply using larger batteries would make the device bulkier, and overheating might create safety concerns for the patient. If we look at the power-consumption profile of the system, which involves capturing, sampling, digitizing and transmitting samples, some stages have higher power needs than others. On this basis, the entire process can be grouped into three components in increasing order of power consumption:
Processing
Sensing
Transmission
Energy savings in the processing stage can be achieved by using algorithms with low computational complexity, which are therefore less power hungry. Transmission power can be reduced by reducing the number of samples that need to be transmitted, i.e., by compressing the signal. We therefore need a simple system that consumes little power and uses a suitable compression technique.
Transform coding is not appropriate for obtaining signal compression in this scenario: its encoder (which performs the compression) is complex, i.e., power hungry, whereas its decoder (which recovers the original signal from the compressed one) is simple. This is exactly the opposite of what our situation requires, where the encoder must be simple (less power hungry) while the decoder, located at the base station (hospital), has no premium on computational resources.
Compressed Sensing (CS) based methods are suited to this scenario. CS uses random projections (from a higher to a lower dimension) for compression; this operation is computationally cheap, reducing processing power, and since the signal is compressed, transmission power is reduced as well. Decoding, however, is computationally expensive and requires solving a non-linear optimization problem; as mentioned before, this poses no issue since powerful computers are available at the base station. Still, CS is not the complete solution for power saving: the power expended in sensing remains unaltered, and it contributes significantly to total power consumption. Hence, a more energy-efficient WBAN system can be built if the sensing power is reduced as well.
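To make the encoder's workload concrete, the following minimal Python sketch treats CS compression as a single random projection; the window length, compression ratio, and variable names are illustrative assumptions, not parameters taken from this work.

import numpy as np

rng = np.random.default_rng(0)
n, m = 256, 128                     # original and compressed window lengths (assumed)
x = rng.standard_normal(n)          # stand-in for one EEG window
Phi = rng.standard_normal((m, n))   # random projection from higher to lower dimension
y = Phi @ x                         # encoding: a single matrix-vector product
print(y.shape)                      # (128,) -- only m values need to be transmitted

The entire encoder is one matrix-vector product, which is why CS shifts the computational burden to the decoder at the base station.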
1.3 Aim of the Thesis
This work aims to significantly improve power saving in WBANs over existing CS-based methods. As discussed before, designing a comfortable, power-efficient wearable device requires minimizing both computation and circuitry. Since sensing is the second most power-hungry component, a great deal of encoder-side power can be saved if it is brought down. Based on this brief discussion of the problem, the goals of our work can be divided into the following sub-parts:
First, we review recent CS-based compression and reconstruction techniques for EEG signals.
Second, we present a technique that significantly reduces the total energy requirement of the EEG sensor nodes. Prior techniques could only reduce transmission energy; our method also reduces acquisition energy and entirely eliminates the processing energy required for compression.
Third, we compare different techniques and identify an accurate and efficient recovery algorithm.
These general-purpose algorithms should also find use in other areas of signal processing, such as multi-spectral imaging, X-ray CT and Magnetic Resonance Imaging, to name a few.
1.4 Organization of Thesis
This thesis is organized as follows:
Chapter 2 provides a brief introduction to Compressed Sensing and describes the concepts needed to develop algorithms for the various proposed signal-recovery formulations.
Chapter 3 presents a literature survey of the work done on CS-based techniques for the acquisition and reconstruction of EEG signals.
Chapter 4 proposes a method to reduce sensing power, along with formulations for signal recovery.
Chapter 5 describes the simulation environment and the performance parameters used to compare the prior algorithms with the proposed one.
Chapter 6 applies the formulations proposed in Chapter 4 to EEG data to test the performance of the proposed work against existing methods. This chapter also comments on the power saving achieved by the proposed work compared with existing CS-based methods.
Finally, Chapter 7 concludes the thesis by summarizing the contributions of the work and outlining possible extensions that can be explored.
Chapter 2
Compressive Sensing
This chapter briefly discusses the key theoretical concepts of Compressed Sensing and provides the background needed to understand our proposed algorithm. The two main pillars of our work, the Alternating Direction Method of Multipliers (ADMM) and the Split Bregman algorithm, are described in this chapter.
2.1 Background on Compressed Sensing
One of the central elements of signal processing is the Nyquist-Shannon sampling theory, which states that the number of samples needed to reconstruct a signal without error is governed by the smallest time interval, or equivalently the maximum frequency, present in the signal. Nearly all signal acquisition protocols rest on the requirement that the sampling rate be at least twice the maximum frequency component. In recent years, the theory of compressed sensing has shown that signals and images can be recovered from far fewer samples than Nyquist sampling requires. Although there is a large body of literature on compressed sensing, this section provides only a brief overview and focuses on the basics needed for our work.
2.1.1 Sparse Recovery
Compressed Sensing studies the problem of solving an under-determined system of linear equations, when the solution is known to be sparse:
y = Ax,  A ∈ R^(m×n), m < n    (2.1)
In general, such an under-determined system has an infinite number of solutions. However, when the solution is sparse, the situation is more tractable. We assume the solution is k-sparse, i.e., it has only k non-zero elements, so the k-sparse vector has at most 2k unknowns (k positions and the corresponding k values). When the number of equations m exceeds the number of independent unknowns (2k here), intuition suggests we should be able to solve Eq. 2.1. This raises the question of whether the problem is in fact solvable.
Even though an under-determined system has infinitely many solutions, if the solution is known to be sparse it is, under mild conditions, unique: except in degenerate cases, Eq. 2.1 will not admit two solutions that are both k-sparse, or one that is k-sparse while another is (k+1)-sparse.
Thus, when it is known that the solution to Eq. 2.1 is sparse, one may as well search for the sparsest solution.
min ||x||_0    (2.2)
subject to y = Ax
The l0-norm is not strictly a norm but a diversity measure: it counts the number of non-zeros in the solution, so minimizing it yields the sparsest solution. Unfortunately, minimizing the l0-norm is an NP-hard problem [4], and thus infeasible for any practical large-scale system.
There are two ways to address this. The first is to approximately solve the NP-hard problem with greedy algorithms such as Orthogonal Matching Pursuit (OMP) [18]; however, these cannot recover ALL k-sparse solutions and have comparatively weak recovery guarantees. A minimal sketch of the greedy idea follows.
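The sketch below is a bare-bones Python OMP loop, assuming the sparsity level k is known in advance; it illustrates the mechanism rather than reproducing the exact variant of [18].

import numpy as np

def omp(A, y, k):
    # Greedy recovery of a k-sparse x from y = A x.
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the chosen support, then refresh the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat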
The other way to address the problem in Eq. 2.2 is to relax the NP-hard l0-norm by its nearest convex surrogate, the l1-norm:
min ||x||_1    (2.3)
subject to y = Ax
The l1-norm is convex, and the minimization problem can be solved via linear programming. It has been shown in [4] and other works that, when certain conditions are met, the solutions of Eq. 2.2 and Eq. 2.3 coincide, i.e., both are guaranteed to recover the sparsest solution. l1-norm minimization is also dubbed Basis Pursuit (BP); BP can recover ALL k-sparse solutions.
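Because Eq. 2.3 can be recast as a linear program, a generic LP solver suffices. A hedged sketch using the standard split x = u − v with u, v ≥ 0 follows (scipy is assumed to be available):

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # Solve min ||x||_1 subject to A x = y by writing x = u - v, u, v >= 0.
    m, n = A.shape
    c = np.ones(2 * n)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])          # equality constraint: A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]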
However, nothing comes for free. The minimum number of equations required by l0-minimization to recover the solution is m = 2k + 1, whereas the minimum number required by l1-minimization is:
m = O(k log n)    (2.4)
The required number of measurements has increased by a factor of log n. Thus, ease of solvability (the convex l1-norm versus the NP-hard l0-norm) is bought at the price of more measurements.
2.1.2 Noisy Compressible Signals
Most practical signals are approximately sparse, or compressible: their coefficients decay rapidly when sorted in decreasing order of magnitude. Moreover, in most situations the signals are corrupted by noise. This more realistic situation can be modelled as follows:
y = Ax + η    (2.5)
Here, η denotes normally distributed noise. In this situation, the l1-minimization problem is modified as below:
min ||x||_1, subject to ||y − Ax||_2 ≤ ε    (2.6)
In theory, it has been shown that the error between the solution to Eq. 2.6, call it x̂, and the actual solution x_0 is bounded as in Eq. 2.7 [11]:
||x̂ − x_0||_2 ≤ C_1 ε + C_2 ||x_0 − x_0,s||_1 / √s    (2.7)
Here, x_0,s denotes the best s-term approximation of x_0, i.e., x_0 with only its s largest coefficients retained.
The expression above implies that the error between the actual and reconstructed signals is bounded by the noise level and the best s-term approximation error of the compressible signal: the first term (with constant C_1) arises from the noise in the system, and the second from the fact that the signal is not exactly sparse but merely compressible.
2.1.3 Practical Signal Recovery
So far we have discussed signals that are themselves sparse. Unfortunately, practical signals are rarely sparse in the physical domain, but they do have sparse representations in a transform domain: medical images are sparse in the wavelet domain, seismic waveforms in curvelets, speech in the short-time Fourier transform, and so on. CS exploits the transform-domain sparsity of these signals in order to recover them.
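As a toy illustration of transform-domain sparsity, the snippet below builds a signal with only five non-zero DCT coefficients: it is dense in time yet trivially sparse in the transform domain (the transform choice and sizes are assumptions for illustration only).

import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(1)
n = 256
alpha = np.zeros(n)
alpha[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)  # 5-sparse coefficients
x = idct(alpha, norm='ortho')   # time-domain signal: almost every sample is non-zero
print(np.count_nonzero(alpha), np.count_nonzero(np.abs(x) > 1e-12))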
There are two variants of CS-based reconstruction: the synthesis prior and the analysis prior. Transforms that are orthogonal or tight-frame can be used with either variant. These transforms satisfy the following properties:
Orthogonal: W^T W = I = W W^T    (2.8)
Tight-frame: W^T W = I ≠ W W^T
Here, W denotes the sparsifying basis. The analysis and synthesis equations hold for both kinds of transforms and are given by
Analysis: α = Wx    (2.9)
Synthesis: x = W^T α
Here, α is the vector of sparse transform coefficients. Using the synthesis equation, one can express Eq. 2.5 as follows:
y = A W^T α + η    (2.10)
Since α is the sparse representation of the signal x, the solution to Eq. 2.10 can be obtained by solving an l1-minimization problem.
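Before moving on, the analysis-synthesis relationship for an orthogonal transform can be sanity-checked numerically; the sketch below uses the orthonormal DCT as a stand-in for W (an illustrative assumption).

import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(2)
x = rng.standard_normal(64)
alpha = dct(x, norm='ortho')        # analysis:  alpha = W x
x_back = idct(alpha, norm='ortho')  # synthesis: x = W^T alpha
print(np.allclose(x, x_back))       # True, since W^T W = I for this transform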
The formulation in Eq. 2.10 is the synthesis prior form, the most popular technique for CS recovery. Unfortunately, the synthesis prior is very restrictive: it can only be formulated for orthogonal and tight-frame transforms.
However, images are well modelled as piecewise-smooth functions and are sparse under finite differencing, while biomedical signals like EEG and ECG are sparse in the Gabor basis; these transforms (finite differencing, Gabor) are neither orthogonal nor tight-frame. The synthesis prior formulation therefore precludes such powerful transforms. For every linear transform the analysis equation holds, i.e., even for Gabor or finite differencing one can write α = Wx, but the synthesis equation does not hold, i.e., x ≠ W^T α.
For such transforms, the analysis prior formulation needs to be used:
min ||Wx||_1 subject to ||y − Ax||_2 ≤ ε
The analysis prior is more general than the synthesis prior and holds for all linear sparsifying transforms. It does not enjoy the widespread popularity of the synthesis prior; only recently has a concerted effort been made to understand this form. Note that the synthesis and analysis forms are theoretically identical for orthogonal transforms but differ for tight-frames. It has been empirically verified that the analysis prior with tight-frame transforms yields better results than orthogonal or tight-frame transforms used with the synthesis prior [7].
2.1.4 Measurement Basis
One of the key components of signal compression in CS is the measurement basis. CS relies on two principles: sparsity, i.e., the signal should have a sparse representation in some sparsifying basis, and incoherence, which requires the measurement and sparsifying bases to be incoherent, i.e., a signal with a sparse representation in the sparsifying basis must be spread out in the measurement basis [9]. Random matrices have been found to be maximally incoherent with any fixed basis. The two most commonly used random bases are i.i.d. Gaussian and binary.
Gaussian Matrices: The entries are chosen independently from a normal distribution with zero mean and variance 1/n. This is the most commonly used sensing or measurement basis. Being dense, i.e., with a value at every location, Gaussian matrices require large storage and involve a lot of computation when a signal is projected onto them.
Binary Matrices: The entries of a random binary matrix take the value 1 or 0 with equal probability. These matrices require less storage and computation than Gaussian random matrices while performing equally well [5].
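The contrast between the two bases can be seen in a short sketch (sizes are illustrative assumptions); the binary projection amounts to additions of selected samples, while the Gaussian one is a dense multiply-accumulate.

import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 256
G = rng.normal(0.0, 1.0 / np.sqrt(n), size=(m, n))  # i.i.d. Gaussian, variance 1/n
B = rng.integers(0, 2, size=(m, n)).astype(float)   # binary: 0 or 1 with equal probability

x = rng.standard_normal(n)
y_gauss = G @ x   # dense multiply-accumulate
y_bin = B @ x     # effectively just additions of selected samples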
2.2 Alternating Direction Method of Multipliers
The ADMM algorithm combines the decomposability of the dual ascent method with the superior convergence properties of the method of multipliers [11]. Dual ascent decomposes the problem, provided some strict assumptions on the objective function hold; it solves large problems iteratively but converges relatively slowly.
To achieve faster convergence, the method of multipliers relaxes the strict assumptions on the function. It is based on the augmented Lagrangian method, which adds a quadratic penalty term to the Lagrangian of the function; however, the quadratic penalty destroys the decomposability. The alternating direction method of multipliers emerged to combine the strengths of the two methods.
ADMM is a variant of the augmented Lagrangian method that uses partial updates for the dual variable. Before turning to ADMM, let us spend a moment on the method of multipliers to gain more insight. Consider an equality-constrained minimization problem:
min f(x)
subject to y = Ax    (2.11)
The augmented Lagrangian for Eq. 2.11 is given by
L_ρ(x, λ) = f(x) + λ^T (Ax − y) + (ρ/2) ||Ax − y||_2^2    (2.12)
where ρ > 0 is the penalty parameter. The method of multipliers consists of the following steps:
x^(k+1) = argmin_x L_ρ(x, λ^k)    (2.13)
λ^(k+1) = λ^k + ρ (Ax^(k+1) − y)    (2.14)
The method of multipliers converges under much more relaxed conditions than dual ascent (f may even be non-differentiable), but the quadratic term added in the augmented Lagrangian destroys the splitting. ADMM provides the robustness of the method of multipliers while retaining the decomposability of dual ascent; it solves problems of the form given in Eqs. 2.15-2.16, with two sets of variables x and z.
min f(x) + g(z)    (2.15)
subject to Ax + Bz = c    (2.16)
Here, the original variable is split into two parts, x and z, giving two separable objective terms f(x) and g(z). ADMM iterates the following steps:
x^(k+1) = argmin_x L_ρ(x, z^k, λ^k)
z^(k+1) = argmin_z L_ρ(x^(k+1), z, λ^k)    (2.17)
λ^(k+1) = λ^k + ρ (Ax^(k+1) + Bz^(k+1) − c)
The algorithm closely resembles dual ascent and the method of multipliers, with two minimization steps (over x and z) and a dual-variable update. As the equations show, ADMM retains decomposability by solving for x and z alternately in two separate steps, whereas the method of multipliers minimizes over both variables simultaneously; this is what makes f(x) and g(z) separable.
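As a concrete instance, the sketch below applies the ADMM iterations to the lasso problem min 0.5||Ax − y||_2^2 + λ||z||_1 subject to x − z = 0; this choice of f and g, and the parameter values, are illustrative assumptions rather than the formulation used later in this thesis.

import numpy as np

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, y, lam=0.1, rho=1.0, iters=200):
    m, n = A.shape
    z, u = np.zeros(n), np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse every iteration
    for _ in range(iters):
        rhs = Aty + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (quadratic subproblem)
        z = soft(x + u, lam / rho)                         # z-update (separable prox)
        u = u + x - z                                      # scaled dual update
    return z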
The only two assumptions made by ADMM are:
1. f and g are closed, proper and convex functions; this implies that the x- and z-update steps are solvable.
2. The Lagrangian without the penalty term possesses a saddle point.
In the context of our work, ADMM is used in conjunction with Split Bregman for problem solving. The next section briefly explains the Split Bregman approach and its connection to ADMM.
Chapter 3
Literature Survey
A significant amount of work has been done on EEG signal compression. Broadly speaking, it falls under two heads: lossless and lossy compression. Lossless methods recover the signal perfectly, but at the expense of greater computational complexity and lower compression ratios, which makes them unsuitable for WBAN applications. Lossy techniques, on the other hand, do not aim for perfect reconstruction and are acceptable as long as the recovery error stays within tolerance; they are simpler and provide higher compression ratios, making them apt for our situation. Since our proposed method is a lossy technique, this chapter reviews previous work on lossy compression; lossless compression is outside the scope of our work. Most lossy techniques build on the Compressed Sensing framework described in Section 2.1.
Over the past few years, various CS-based techniques for EEG signal compression and reconstruction have been proposed; this chapter reviews them. The studies differ in their compression and reconstruction methods, and since there is no straightforward way to organize them, we discuss them in chronological order.
3.1 Gaussian Compression Basis with Gabor Sparsifying Transform
One of the earliest studies in this area is [12], where the EEG signal is fully sampled and then compressed by projection onto an independent and identically distributed (i.i.d.) Gaussian basis, an optimal compression basis for CS recovery [13]. Formally, the compression can be modelled as
y = Gx + η    (3.1)
Here, G is the i.i.d Gaussian basis, y is the compressed signal and x is the EEG signal that has been compressed and needs to be reconstructed at the base station.
The work in [12] assumed that the EEG signal is sparse in the Gabor basis (H) and posed the reconstruction (at the base station) as a synthesis prior problem:
min_α ||α||_1    (3.2)
subject to ||y − G H^T α||_2 ≤ ε
Once the Gabor coefficients are recovered, the signal is computed using the synthesis equation,
i.e., x̂ = H^T α.
However, this approach is not theoretically correct. The Gabor basis is neither orthogonal nor tight-frame (in general); the analysis equation holds but the synthesis equation does not, i.e., x ≠ H^T α. Hence it is not correct to rewrite Eq. 3.1 as
y = G H^T α + η    (3.3)
and the synthesis prior form given in Eq. 3.2 is therefore not theoretically justified.
This work proposed the Gaussian basis for signal compression. Although mathematically optimal, it is practically infeasible: the Gaussian basis is dense, and storing and operating on it is therefore inefficient.
The work in [12] also suggested the possibility of jointly reconstructing the EEG signals from all the probes by exploiting their correlation: since EEG signals are collected over multiple channels monitoring the same brain activity, the channels are correlated. However, no concrete formulation was provided.
A recent study [14] posed the same reconstruction formulation as [12]; the only difference is that, prior to compression, the EEG signal is denoised using Independent Component Analysis, which slightly improves reconstruction.
3.2 Non-Uniform Sampling and Reconstruction using Slepian Basis
The work in [15] is theoretical and makes no attempt to compress the signal; hence it contributes nothing to compression or transmission. However, it provides a CS-based approach for EEG reconstruction.
In this work, the EEG signal is assumed to be sparse in the Slepian basis. The Slepian basis functions, or prolate spheroidal wave functions (PSWF), are derived by a time-limiting, low-pass filtering, and second time-limiting operation.
Let Q_T denote the time-truncation operator, such that x = Q_T x if x is time-limited within [−T/2, T/2]. Similarly, let P_W denote an ideal low-pass filtering operator, such that x = P_W x iff x is band-limited within [−W, W]. The operator Q_T P_W Q_T is linear, bounded and self-adjoint.
For n = 1, 2, …, we denote by φ_n the nth eigenfunction, defined by
Q_T P_W Q_T φ_n = λ_n φ_n    (3.4)
Here, λ_n denotes the associated eigenvalue. The time-limited functions {φ_n}_n are the Prolate Spheroidal Wave Functions.
These functions are connected with the sinc functions as eigenfunctions of the integral operator:
φ_n(t) = (1/λ_n) ∫_{−T/2}^{T/2} φ_n(τ) s(t − τ) dτ    (3.5)
where s(t) = sinc(t) and the eigenvalues λ_n are ordered in decreasing magnitude.
This work studies an unusual problem: samples are not taken uniformly at {nT_s} but at random times around these values, i.e., at t_0 = 0 and t_n = c n T_s + Δ, with c = N/M and Δ a random variable uniformly distributed in [−0.5T_s, 0.5T_s].
Using the M-orthogonal expansion, the non-uniform samples can be written as:
x(t_m) = Σ_{n=1}^{M} α_n φ_n(t_m),  m = 1, …, M    (3.6)
In matrix form,
x = Φα    (3.7)
Here, Φ is the Slepian basis evaluated at the random sampling instants. The straightforward solution is obtained via the pseudo-inverse, i.e., the coefficients are estimated as
α̂ = Φ^† x = (Φ^T Φ)^(−1) Φ^T x    (3.8)
from which the signal can be reconstructed using Eq. 3.6, i.e.,
x̂ = Φ̃ α̂    (3.9)
where Φ̃ denotes the Slepian basis evaluated at the uniform sampling instants.
In this work, an alternate CS-based formulation is proposed as well: instead of estimating the Slepian coefficients via the pseudo-inverse, they are obtained by solving
min_α ||α||_1 subject to ||x − Φα||_2 ≤ ε    (3.10)
The reconstruction equation remains the same as Eq. 3.9. This work claims that CS reconstruction yields better results than pseudo-inverse reconstruction.
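A minimal sketch of the two estimators contrasted here, assuming the matrix Phi of Slepian basis functions evaluated at the random sampling instants has been precomputed (its construction is outside this sketch):

import numpy as np

def slepian_coeffs_pinv(Phi, samples):
    # Pseudo-inverse estimate of the Slepian coefficients (cf. Eq. 3.8).
    return np.linalg.pinv(Phi) @ samples

# The CS alternative (cf. Eq. 3.10) would instead solve
#   min ||alpha||_1  subject to  ||samples - Phi @ alpha||_2 <= eps,
# e.g. with a basis-pursuit solver such as the one sketched in Chapter 2.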
This is a pedagogic work and, strictly speaking, not relevant to our topic; we have discussed it in detail here because it is at least remotely related.
3.3 Comparative Study of Different Sparsifying Basis
In [16], a comparative study of a variety of sparsifying transforms is performed. The data compression model is the same as Eq. 3.1; we repeat it for convenience:
y=Gx+ η (3.11)
In [12, 14] it is assumed that the EEG signal is sparse in the Gabor basis. In [18], several other sparsifying bases are compared: cubic and linear splines, cubic and linear B-splines, Mexican hat and Gabor. This work suffers from the same problem as [12, 14]: although none of these transforms is orthogonal or tight-frame, the reconstruction is still posed as a synthesis prior problem. We will not repeat why the synthesis prior is theoretically erroneous; the argument is the same as in Section 3.1.
In the previous section we described the Slepian basis, which is not widely used in the signal processing literature, whereas the transforms used in [18] are widely used and need no introduction. It was empirically verified in [18] that the Gabor dictionary yields the best reconstruction among all the sparsifying transforms tested. The i.i.d. Gaussian compression basis was used in [18] as well.
3.4 Block Sparse Bayesian Learning
In [16], the EEG signal is modelled as block sparse in the wavelet / DCT domain, i.e., the transform-domain coefficients are assumed to be divided into blocks as defined below:
α = [α_1 … α_{d_1} | α_{d_1+1} … α_{d_2} | … | α_{d_{b−1}+1} … α_n]^T    (3.12)
Here α = Wx (the analysis equation), and the vector α is assumed to be divided into b blocks. Block sparsity means that only a few blocks are non-zero, but within a non-zero block all elements are non-zero.
The studies discussed so far assumed that the EEG signal is sparse in a transform domain (Gabor, spline, etc.); in [16] it is assumed to be approximately block sparse in the DCT domain. There are well-established algorithms for recovering block-sparse signals, but they all require the blocks to be explicitly delineated. Unfortunately, this is not the case for EEG signals: even though such signals are approximately block sparse, the block divisions are not known a priori, and in this situation standard block-sparse recovery algorithms fail.
To recover block-sparse signals whose block structure is not available a priori, a Bayesian framework was proposed in [17]; the method is called Block Sparse Bayesian Learning (BSBL). Applied to EEG signals compressed as above, BSBL yields by far the best reconstruction results. It should be kept in mind that the block-sparse nature of the EEG signal (in the transform domain) is an assumption; there is no physical explanation for why it should hold.
The other novelty of [16] is the introduction of a sparse binary random matrix as the compression matrix. Prior studies [13, 14] and [15] used i.i.d. Gaussian matrices to project the EEG signal onto a lower dimension. A sparse binary random matrix contains a fixed number of ones at random positions in each row, with zeros elsewhere. Storing and applying such a matrix is very efficient (linear complexity) and practical for our problem.
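A sketch of such a compression matrix, stored sparsely so that applying it costs time linear in the number of ones (the number of ones per row here is an assumed illustrative value):

import numpy as np
from scipy.sparse import csr_matrix

def sparse_binary(m, n, ones_per_row=4, seed=0):
    # Fixed number of ones at random positions in each row, zeros elsewhere.
    rng = np.random.default_rng(seed)
    cols = np.concatenate([rng.choice(n, ones_per_row, replace=False)
                           for _ in range(m)])
    rows = np.repeat(np.arange(m), ones_per_row)
    data = np.ones(m * ones_per_row)
    return csr_matrix((data, (rows, cols)), shape=(m, n))

B = sparse_binary(128, 256)
y = B @ np.random.default_rng(1).standard_normal(256)  # O(m * ones_per_row) work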
3.5 Joint Reconstruction
EEG signals are acquired over multiple probes. The possibility of reconstructing the signal using multi-channel correlation information was mentioned in [12], but no concrete formulation was proposed there. A method to jointly reconstruct the EEG signals from all the channels was proposed later; however, this work did not actually exploit the inter-channel correlations.
Extending the compression model of Eq. 3.1 to the multi-channel case (with a sparse binary matrix B in place of G):
y_i = B x_i + η_i,  i = 1, …, C    (3.13)
where C is the total number of channels/probes and B denotes a sparse random binary matrix.
This work, too, used a sparse binary random matrix for compression, and assumed the EEG signal to be sparse in the wavelet basis. Since the wavelet basis is orthogonal and the analysis-synthesis equations hold, the synthesis equation can be substituted into Eq. 3.13:
y_i = B W^T α_i + η_i,  i = 1, …, C    (3.14)
In matrix-vector form, this can be concisely expressed as:
ỹ = (I_C ⊗ B W^T) α̃ + η̃    (3.15)
where ỹ = [y_1^T … y_C^T]^T, α̃ = [α_1^T … α_C^T]^T, and ⊗ denotes the Kronecker product.
Since each α_i is sparse, the concatenated vector α̃ is sparse as well. Therefore, it can be reconstructed using l1-minimization:
min_α̃ ||α̃||_1 subject to ||ỹ − (I_C ⊗ B W^T) α̃||_2 ≤ ε    (3.16)
This method jointly recovers the transform coefficients of the full EEG ensemble, but does not really account for the correlations among the different channels.
3.6 Analysis Prior
In [13, 15] and [16], the Gabor basis is used as the sparsifying basis for reconstructing EEG signals, albeit incorrectly, as a synthesis prior recovery. Later work [17] used DCT and wavelets, so its synthesis prior formulation is justified; however, DCT and wavelets are not optimal choices.
EEG signal reconstruction is the first step of the information-processing pipeline. After reconstruction, the signal is analyzed either manually or automatically, and automatic EEG analysis is almost always carried out in the Gabor basis. Preserving the high-valued Gabor coefficients is therefore of paramount importance for EEG analysis.
When wavelets or DCT are used for recovery, CS only guarantees preservation of the high-valued wavelet or DCT coefficients; it cannot guarantee preservation of the Gabor coefficients. To ensure that the high-valued Gabor coefficients are preserved, an analysis prior recovery was proposed:
min_x ||Hx||_1 subject to ||y − Bx||_2 ≤ ε    (3.17)
This is a channel-by-channel reconstruction technique, where B denotes the sparse binary random matrix and H the Gabor transform matrix.
Our contribution in this work is two-fold. First, as discussed in the introduction, sensing is the second most energy-hungry component of a WBAN after transmission, and much energy can be saved if it is brought down. We propose a method that goes a step beyond the compression provided by the existing lower-dimensional projections of CS, which only reduce transmission power: the proposed method compresses the signal during acquisition itself, eliminating the further processing required by CS-based compression. The proposed technique thus reduces both transmission and sensing power. Second, to recover the original signal from the compressed measurements at the base station, several recovery formulations are proposed; they are built on exploiting the inter-channel correlation inherent in EEG data and outperform the existing CS methods by a noticeable margin.
Chapter 4
Proposed Algorithm
4.1 Basic Introduction
The proposed algorithm combines joint basis selection with sparse parameter estimation and is called Fast Bayesian Matching Pursuit (FBMP). Every unknown coefficient of the signal x is active or inactive with known probability, and the values of active coefficients are assigned an i.i.d. Gaussian distribution with zero mean and variance σ²_1. The unknown coefficients are mixed by a known matrix A and observed, in corrupted form, as y. FBMP navigates the tree of active/inactive configurations s with the goal of finding the configurations of dominant posterior probability; the search is controlled by a parameter D that trades off complexity against accuracy.
Numerical experiments suggest that FBMP achieves very good mean-squared error (MSE) compared with other popular algorithms (e.g., Sparse Bayes, OMP, l1-minimization with a wavelet basis, and BCS), by several dB in the most demanding situations.
A Gaussian mixture is chosen as the prior on the unknown parameter vector. The method returns both an approximate MMSE estimate of the parameter vector and a set of high-probability mixing parameters; the case of a sparse parameter vector is emphasized, and estimation performance is demonstrated by numerical simulations. The set of high-probability mixing parameters not only provides MAP basis selection, but also yields relative probabilities that reveal potential ambiguity in the sparse model.
Sparse linear regression is a long-standing interest in statistics and signal processing. The linear regression model is
y = Ax + w    (5.1)
with unknown parameter vector x, unit-norm columns in the regressor matrix A, and additive noise w. We provide a brief, necessarily incomplete survey of existing approaches, with an emphasis on themes relevant to the proposed estimator. Greedy algorithmic approaches have been proposed over several decades; examples include CLEAN [19], iteratively re-weighted least squares [19], and Orthogonal Matching Pursuit (OMP) [19]. Tropp and Gilbert [19] provide sufficient conditions on the sparsity of x and the correlation among columns of A such that greedy OMP achieves correct model selection, with high probability, in the noiseless measurement case.
In addition to greedy approaches, penalized least-squares solutions for x have been studied over the past four decades. In this class of approaches, the parameters are found via the optimization
x̂ = argmin_x ||y − Ax||_2^2 + λ||x||_1    (5.2)
or, equivalently for some ε or τ,
x̂ = argmin_x ||x||_1 subject to ||y − Ax||_2 ≤ ε    (5.3)
x̂ = argmin_x ||y − Ax||_2 subject to ||x||_1 ≤ τ    (5.4)
Such results have validated the widespread use of (5.2)-(5.4), providing deeper understanding, spurring a resurgence of interest, and promoting the interpretation as "compressive sensing." The large class of methods adopting (5.2) can be interpreted as implicitly seeking the Bayesian MAP estimate of x under a sparsity-inducing prior
p(x) ∝ exp(−λ||x||_1)    (5.5)
Sparse Bayesian learning is a method that explicitly adopts a Bayesian framework, with the x_i independent, zero-mean Gaussians of unknown variance σ²_i. The unknown variances are given a conjugate Gamma prior, and an expectation-maximization iteration computes a MAP estimate of x.
Model selection is the task of detecting the few significant entries of the sparse x, a task alternatively known as basis selection. In contrast, we adopt a minimum mean-squared error estimation formulation and focus on accurately inferring x from the noisy observations y.
As a byproduct of the proposed approximate mean-squared error (MSE) estimation algorithm, we obtain exact ratios of posterior probabilities for a set of high-probability solutions to the detection problem. These relative probabilities expose potential ambiguity among multiple models, caused by low signal-to-noise ratio and/or significant correlation among columns of the regressor matrix A.
Consider observing y ∈ R^M, a noisy linear combination of the parameters in x ∈ R^N:
y = Ax + w    (5.6)
where the noise w is assumed to be white Gaussian with variance σ². The parameters are modelled as conditionally Gaussian,
x|s ~ N(0, R(s))    (5.7)
where the covariance R(s) is determined by a discrete random vector s = [s_0, …, s_{N−1}]^T of mixture parameters. For simplicity, we take R(s) to be diagonal with [R(s)]_{n,n} = σ²_{s_n}, implying that the x_n are conditionally independent with x_n | s_n ~ N(0, σ²_{s_n}). Also for simplicity, we assume that the mixture parameters s_n are i.i.d. Bernoulli(p_1). To model a sparse x, we choose σ²_0 = 0 and p_1 ≪ 1. From the model assumptions it can be seen that
y|s ~ N(0, Φ(s))    (5.8)
where
Φ(s) = σ² I_M + A R(s) A^T    (5.9)
4.2 Estimation of Basis and Parameter
In this section, we propose an efficient search procedure to find the most probable basis configurations along with their respective posterior probabilities. These posteriors can then be used to compute an (approximate) MMSE estimate of the sparse parameters x.
4.2.1 Basis Selection Metric
As a consequence of the model described above, the nonzero locations in s specify which of the basis elements are "active"; basis selection thus reduces to the estimation of s, for which we have adopted a probabilistic model on {s, y}. The latter is accomplished through the estimation of the dominant posteriors p(s|y).
The posterior can be written, via Bayes' rule, as
p(s|y) = p(y|s) p(s) / Σ_{s′∈S} p(y|s′) p(s′)    (5.10)
where S = {0, 1}^N, which shows that estimating p(s|y) reduces to estimating p(y|s)p(s). The size of S makes it impractical to compute p(y|s)p(s) for all s ∈ S, but the set S⋆ responsible for the dominant posteriors can be quite small and therefore practical to compute. Working in the log domain, we find
μ(s) ≜ ln p(y, s) = ln p(y|s) + ln p(s)    (5.11)
ln p(y|s) = −(1/2) ln det(2π Φ(s)) − (1/2) y^T Φ(s)^(−1) y    (5.12)
ln p(s) = ||s||_0 ln p_1 + (N − ||s||_0) ln(1 − p_1)    (5.13)
μ(s) = −(1/2) ln det(2π Φ(s)) − (1/2) y^T Φ(s)^(−1) y + ||s||_0 ln(p_1/(1 − p_1)) + N ln(1 − p_1)    (5.14)
We will refer to μ(s) as the basis selection metric.
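A direct (unoptimized) evaluation of the metric, as defined by Eqs. 5.8-5.14, can be sketched as follows; s is a 0/1 activity vector and the parameter names follow this chapter.

import numpy as np

def mu(s, y, A, sigma2, sigma2_1, p1):
    # Basis-selection metric mu(s) = ln p(y|s) + ln p(s).
    M, N = len(y), len(s)
    active = np.flatnonzero(s)
    Phi = sigma2 * np.eye(M) + sigma2_1 * A[:, active] @ A[:, active].T
    sign, logdet = np.linalg.slogdet(2.0 * np.pi * Phi)
    loglik = -0.5 * logdet - 0.5 * y @ np.linalg.solve(Phi, y)
    logprior = active.size * np.log(p1) + (N - active.size) * np.log(1.0 - p1)
    return loglik + logprior

Each call costs O(M^3) as written; the fast update in the next subsection is what makes the tree search practical.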
4.2.2 Fast Metric Update
For use with the aforementioned Bayesian matching pursuit algorithm, we propose a fast metric update that computes the change in μ(·) resulting from the activation of a single mixture parameter. More precisely, if s_n denotes the vector identical to s except for its nth coefficient, which is active in s_n but inactive in s, we seek an efficient way of computing μ(s_n) − μ(s). Note that the metric at the root node (i.e., s = 0) is
μ(0) = −(1/2) ln det(2π σ² I_M) − ||y||_2^2 / (2σ²) + N ln(1 − p_1)    (5.15)
Activating the nth tap updates the covariance as
Φ(s_n) = Φ(s) + σ²_1 a_n a_n^T    (5.16)
where a_n is the nth column of A. Defining
b_n = Φ(s)^(−1) a_n,  β_n = σ²_1 / (1 + σ²_1 a_n^T b_n)    (5.17)
the matrix inversion lemma gives
Φ(s_n)^(−1) = Φ(s)^(−1) − β_n b_n b_n^T,  det Φ(s_n) = det Φ(s) · σ²_1/β_n    (5.18)
Equations (5.15)-(5.18), combined, yield
Δ_n(s) ≜ μ(s_n) − μ(s) = (1/2) ln(β_n/σ²_1) + (β_n/2)(y^T b_n)² + ln(p_1/(1 − p_1))    (5.19)
(One could also determine the stopping parameter P adaptively.)
Here, Δ_n(s) in (5.19) quantifies the change in the basis-selection metric μ(·) due to the activation of the nth tap of s. Note that computing these quantities via (5.17)-(5.18) costs O(NM²) if standard matrix multiplication is used. As we describe next, the complexity of this operation can be made linear in M by exploiting the structure of Φ(s)^(−1).
Say that t = [t_1, t_2, …, t_p]^T contains the indices of the active elements in s. Then, from (5.16)-(5.18),
Φ(s)^(−1) = σ^(−2) I − Σ_{i=1}^{p} β^(i) b^(i) b^(i)T
where b^(i) and β^(i) denote the values of b and β generated while activating index t_i in the mixture parameter vector defined by the active indices [t_1, …, t_{i−1}]. From (5.17), when activating the nth tap in s we are required to compute the inner products a_n^T b^(i). The key observation is that the coefficients b^(i) need only be computed once, i.e., when index t_i is activated, and only for the surviving indices t_i. These tricks form the foundation of the Fast Bayesian Matching Pursuit algorithm outlined in Table I, and it is straightforward to verify that the number of multiplications it requires is O(NMPD).
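The rank-one bookkeeping behind this fast update can be sketched as a Sherman-Morrison step, consistent with Eqs. 5.16-5.18 (a sketch of the idea only, not the full Table I algorithm):

import numpy as np

def activate_tap(Phi_inv, logdet_Phi, a_n, sigma2_1):
    # Update inverse and log-determinant when Phi(s_n) = Phi(s) + sigma2_1 a_n a_n^T.
    b = Phi_inv @ a_n
    denom = 1.0 + sigma2_1 * (a_n @ b)                  # = sigma2_1 / beta_n
    Phi_inv_new = Phi_inv - (sigma2_1 / denom) * np.outer(b, b)
    logdet_new = logdet_Phi + np.log(denom)             # determinant scales by denom
    return Phi_inv_new, logdet_new, b

This keeps each candidate activation at O(M^2) (or less with further caching of the b vectors) instead of refactoring Φ(s) from scratch.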
We presented a robust Bayesian matching pursuit algorithm based on a fast recursive method. Unlike other robust algorithms, it does not require the signals to be drawn from some known distribution, which is useful when the parameters of the signal distributions cannot be estimated. Application of the proposed method to several different signal types demonstrates its superiority and robustness.
