TCP - Transmission Control Protocol
UDP - User Datagram Protocol
USB - Universal Serial Bus
WiFi - Wireless Fidelity
CD - Compact Disc
Thursday, March 13, 2008
Deleting files from recent documents
1. Right-click on the taskbar and click Properties.
2. Click the Start Menu tab and click the Customize button.
3. Click the Advanced tab, where you will find a Clear List button.
4. Click that button to clear the files from Recent Documents.
5. You can also stop recent documents from being listed by unchecking 'List my most recently opened documents', which is to the left of the Clear List button.
6. If you have any trouble, drop your doubt in a comment.
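If you prefer a scripted route, Windows XP keeps the Recent Documents entries as shortcut (.lnk) files in the user's Recent folder, so deleting those shortcuts has the same effect as the Clear List button. The snippet below is a minimal Python sketch of that idea; the folder path assumes a default XP profile layout.

    import glob
    import os

    # Windows XP stores recent-document entries as .lnk shortcuts here.
    # The exact path assumes a default profile layout.
    recent_dir = os.path.join(os.environ["USERPROFILE"], "Recent")

    # Delete every shortcut, which clears the Recent Documents menu.
    for shortcut in glob.glob(os.path.join(recent_dir, "*.lnk")):
        os.remove(shortcut)
        print("Removed", shortcut)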
Software testing
o Unit Testing
o System Testing
o Integration Testing
o Acceptance Testing
UNIT TESTING
This is the first level of testing. The different modules are tested against the specifications produced during module design. This is done to test the internal logic of each module; errors resulting from the interaction between modules are initially left aside.
The input received and the output generated are also tested to see whether they fall in the expected range of values. Unit testing is performed bottom-up, starting with the smallest and lowest-level modules and proceeding one at a time.
The units in a system are the modules and routines that are assembled and integrated to perform a specific function. The programs are tested for correctness of the logic applied and for detection of errors in coding. Each of the modules was tested, errors were rectified, and the modules were then found to function properly.
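As a concrete (if hypothetical) illustration of testing a single module's internal logic and its input/output ranges, here is a minimal Python sketch using the standard unittest module; the normalize function is an invented stand-in for one of the project's units.

    import unittest

    # Hypothetical module-level routine, standing in for one unit of the system.
    def normalize(values):
        """Scale a list of numbers into the range [0, 1]."""
        lo, hi = min(values), max(values)
        if lo == hi:
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    class TestNormalize(unittest.TestCase):
        def test_output_in_expected_range(self):
            # Check that the generated output falls in the expected range of values.
            for v in normalize([3, 7, 10, -2]):
                self.assertGreaterEqual(v, 0.0)
                self.assertLessEqual(v, 1.0)

        def test_constant_input(self):
            # Internal logic: a constant list must not cause a division by zero.
            self.assertEqual(normalize([5, 5, 5]), [0.0, 0.0, 0.0])

    if __name__ == "__main__":
        unittest.main()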
SYSTEM TESTING
The integration of each module into the system is checked during this level of testing. The objective of system testing is to check whether the software meets its requirements.
System testing is done to uncover errors that were not found in earlier tests. This includes forced system failures and validation of the total system as the user will operate it in the operational environment. Under this testing, low volumes of transactions, generally based on live data, are used initially. This volume is increased until the maximum level for each transaction type is reached. The total system is also tested for recovery after various major failures to ensure that no data are lost during a breakdown.
INTEGRATION TESTING
In integration testing, the tested modules are combined into sub-systems, which are then tested. The goal of integration testing is to check whether the modules can be integrated properly, with emphasis on the interfaces between modules.
The different modules were linked together and integration testing was performed on them, as the sketch below illustrates.
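The sketch below illustrates the idea in Python with two invented modules, a parser and a grader; the integration test exercises only the interface between them rather than each module's internals.

    import unittest

    # Two hypothetical modules, shown inline so the sketch is self-contained.
    def parse_record(line):
        """'Parser' module: turn a 'name,score' string into (name, int score)."""
        name, score = line.split(",")
        return name.strip(), int(score)

    def grade(score):
        """'Grading' module: map a numeric score to a pass/fail label."""
        return "pass" if score >= 50 else "fail"

    class TestParserGraderInterface(unittest.TestCase):
        def test_modules_work_together(self):
            # The interface under test: the parser's output feeds the grader.
            name, score = parse_record("Anu, 72")
            self.assertEqual(name, "Anu")
            self.assertEqual(grade(score), "pass")

    if __name__ == "__main__":
        unittest.main()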
ACCEPTANCE TESTING
The objective of the acceptance test is to convince the user of the validity and reliability of the system. It verifies that the system operates as specified and that the integrity of important data is maintained. User motivation is very important for the successful performance of the system.
All the modules were tested individually using both test data and live data. After each module was ascertained to be working correctly, it was integrated with the system, and the system was again tested as a whole. The system was also tested with different types of users. The system design, data flow diagrams, procedures, etc. were well documented so that the system can be easily maintained and upgraded by any computer professional at a later stage.
Acceptance testing is done with live data provided by the client to ensure that the software works satisfactorily. This test focuses on the external behavior of the system. Data was entered and acceptance testing was performed.
ARTIFICIAL NEURAL NETWORKS
Without going into detail about the history, let's just say neural networks were invented in the sixties by mathematicians (back in those days, computer scientists were often mathematicians; my how things have changed) looking for ways to model processing in the human brain at the lowest level. They took what they knew about a human neuron as a model for creating an electronic neuron.
3.2.1 The Neuron
These researchers started by boiling down how they understood a neuron to work in light of existing knowledge. A neuron, they said, may have many inputs ("dendrites") that are hooked into the single output ("axon") of another neuron. Signals received by some dendrites would tend to activate the neuron, while signals on others would tend to suppress activation.
*Pattern Matching
Here is where a lot of introductions stop short. Let me explain what it means for one of these neurons to "know" something before explaining how they work in concert with other such neurons.
The goal of a neuron of this sort is to fire when it recognizes a known pattern of inputs. Let's say for example that the inputs come from a black and white image 10 x 10 pixels in size. That would mean we have 100 input values. For each pixel that is white, we'll say the input on its corresponding dendrite has a value of -1. Conversely, for each black pixel, we have a value of 1. Let's say our goal is to get this neuron to fire when it sees a letter "A" in the image, but not when it sees any other pattern. Figure 3 below illustrates this:
"Firing" occurs when the output is above some threshold. We could choose 0 as the threshold, but in my program, I found 0.5 is a good threshold. Remember; our only output (axon) for this neuron is always going to have a value between -1 and 1. The key to getting our neuron to recognize an "A" in the picture is to set the weights on each dendrite so that each of the input-times-weight values will be positive. So if one dendrite (for a pixel) is expecting to match white (-1), its weight should be -1. Then, when it sees white, it will contribute -1 * -1 = 1 to the sum of all inputs. Conversely, when a dendrite is expecting to match black (1), its weight should be 1. Then, its contribution when it sees black will be 1 * 1 = 1. So if the input image is exactly like the archetype our single neuron has of the letter "A" built into its dendrite weights, the sum will be exactly 100, the highest possible sum for 100 inputs. Using our "sigma" function, this reduces to an output of 1, the highest possible output value, and a sure indication that the image is of an A.
Now let’s say one of the input pixels was inverted from black to white or vice-versa. That pixel's dendrite would contribute a -1 to the sum, yielding 98 instead of 100. The resulting output would be just a little under 1. In fact, each additional "wrong" pixel color will reduce the sum and hence the output.
If 50% of the pixels were "wrong" - not matching what the dendrites say they are expecting as input - the sum would be exactly 0 and hence the output would be 0. Conversely, if the entire image of the "A" were inverted, 100% of them would be wrong, the sum would be -100, and the output would be -1. Since 0 and -1 are obviously below our threshold of 0.5, we would say this neuron does not "fire" in these cases.
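To make the arithmetic above concrete, here is a small Python sketch of such a pattern-matching neuron. The "A" archetype is replaced by a random placeholder pattern, and tanh of the normalized sum stands in for the unspecified "sigma" squashing function, so the exact output values differ slightly from the 1 / 0 / -1 figures quoted, but the ordering and the firing behavior are the same.

    import numpy as np

    # Placeholder archetype of the letter "A": +1 for black pixels, -1 for white.
    # (A random 10x10 pattern stands in for the real letter image here.)
    rng = np.random.default_rng(0)
    archetype = rng.choice([-1, 1], size=100)

    # The knowledge lives in the weights: copying the archetype makes every
    # input-times-weight term equal to +1 on a perfect match.
    weights = archetype.copy()

    def neuron_output(pixels, weights):
        """Weighted sum squashed into (-1, 1); tanh stands in for 'sigma'."""
        return np.tanh(np.dot(pixels, weights) / len(weights))

    THRESHOLD = 0.5  # the neuron "fires" above this value

    perfect   = archetype.copy()   # exact match: sum = 100
    one_wrong = perfect.copy()
    one_wrong[0] *= -1             # one inverted pixel: sum = 98
    inverted  = -perfect           # every pixel wrong: sum = -100

    for name, image in [("perfect", perfect), ("one wrong", one_wrong), ("inverted", inverted)]:
        out = neuron_output(image, weights)
        print(f"{name:9s} output = {out:+.3f}  fires = {out > THRESHOLD}")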
The most important lesson to take away from this simple illustration is that knowledge is encoded in the weights on the dendrites. The second most important lesson is that the output of this sort of thinking is "fuzzy". That is, our neuron compares input patterns and generates variable outputs that are higher the closer the inputs are to its archetypical pattern. So while there are definitive "match" (1) and "non-match" (-1) outputs, there is a whole range in between of somewhat matching.
Methodology for epilepsy detection
Entropy
o Measures signal complexity
o EEG with low entropy is due to a small number of dominating processes
o EEG with high entropy is due to a large number of processes
o Relatively simple measure of complexity and system regularity
o Quantifies the predictability of subsequent amplitude values of the EEG based on the knowledge of previous amplitude values
o As a relative measure, it depends on three parameters (a minimal computation sketch follows this list):
The length of the epoch
The length of the compared runs
The filtering level
o Approximate entropy and Shannon entropy are two entirely different measures
o Approximate entropy measures the predictability of future amplitude values of the EEG based on one or two previous amplitude values
o Increasing anesthetic concentrations are associated with increasing EEG pattern regularity
o EEG approximate entropy decreases with increasing anesthetic concentration
o At high doses of anesthetics, periods of EEG silence with intermittent bursts of high frequencies occur
o For example, the median EEG frequency method fails to characterize anesthetic concentrations because of these bursts
o The brain's EEG approximate entropy value is a good candidate for characterizing different extents of cerebral ischemic injury:
First, in the early stage of ischemia, the difference in approximate entropy between the ischemic region and the normal region increases.
Second, about 18 minutes after ischemia, the approximate entropy of the ischemic region becomes lower than its pre-ischemia (normal) value, which may indicate that an emergent injury is being induced.
Last, the approximate entropy of the ischemic region (left brain) is lower than that of the normal region (right brain).
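As a concrete reference for the parameters listed above (the epoch length, the length of the compared runs, and the filtering level), here is a minimal Python sketch of the standard ApEn(m, r) computation. The defaults m = 2 and r = 0.2 times the standard deviation are common choices in the literature, not values taken from this post.

    import numpy as np

    def approximate_entropy(signal, m=2, r_factor=0.2):
        """ApEn(m, r) of a 1-D signal (one EEG epoch).

        m        : length of the compared runs (embedding dimension)
        r_factor : filtering level, as a fraction of the epoch's standard deviation
        The epoch length is simply len(signal).
        """
        x = np.asarray(signal, dtype=float)
        N = len(x)
        r = r_factor * np.std(x)

        def phi(m):
            # All overlapping templates of length m.
            templates = np.array([x[i:i + m] for i in range(N - m + 1)])
            # Chebyshev distance between every pair of templates.
            dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
            # Fraction of templates within tolerance r (self-matches included).
            C = np.mean(dist <= r, axis=1)
            return np.mean(np.log(C))

        return phi(m) - phi(m + 1)

    # A regular signal is predictable (low ApEn); noise is not (high ApEn).
    t = np.linspace(0, 10 * np.pi, 1000)
    print(approximate_entropy(np.sin(t)))              # low value
    print(approximate_entropy(np.random.randn(1000)))  # noticeably higher value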
History of epilepsy detection
Approximately 1% of the people in the world suffer from epilepsy. The electroencephalogram (EEG) signal is used for epileptic detection because epilepsy is a condition related to the brain's electrical activity. Epilepsy is characterized by the occurrence of recurrent seizures in the EEG signal. Since in the majority of cases the onset of seizures cannot be predicted within a short period, continuous recording of the EEG is required to detect epilepsy. A common form of recording used for this purpose is an ambulatory recording that contains EEG data for a very long duration, even up to one week.
Detection then involves an expert's effort in analyzing the entire length of the EEG recordings for traces of epilepsy. As the traditional methods of analysis are tedious and time-consuming, many automated epileptic EEG detection systems have been developed in recent years.
With the advent of technology, it is possible to store and process the EEG data digitally. The digital EEG data can be fed to an automated seizure detection system in order to detect the seizures present in the EEG data. Hence, the neurologist can treat more patients in a given time as the time taken to review the EEG data is reduced considerably due to automation. Automated diagnostic systems for epilepsy have been developed using different approaches.
This paper discusses an automated epileptic EEG detection system using two different neural networks, namely, Elman network and probabilistic neural network using a time-domain feature of the EEG signal called approximate entropy that reflects the nonlinear dynamics of the brain activity. ApEn is a recently formulated statistical parameter to quantify the regularity of a time series data of physiological signals.
It was first proposed by Pincus in 1991 and has been predominantly used in the analysis of the heart rate variability and endocrine hormone release pulsatility, estimation of regularity in epileptic seizure time series data, and in the estimation of the depth of anesthesia.
Diambra et al. have shown that the value of the ApEn drops abruptly due to the synchronous discharge of large groups of neurons during an epileptic activity. Hence, it is a good feature to make use of in the automated detection of epilepsy. In this paper, this feature is applied, for the first time, in the automated detection of epilepsy using neural networks.
Epilepsy Detection Using Artificial Neural Networks
The electroencephalogram (EEG) signal plays an important role in the diagnosis of epilepsy. The EEG recordings of ambulatory recording systems generate very lengthy data, and the detection of epileptic activity requires a time-consuming analysis of the entire length of the EEG data by an expert. The traditional methods of analysis being tedious, many automated diagnostic systems for epilepsy have emerged in recent years. This paper proposes a neural-network-based automated epileptic EEG detection system that uses approximate entropy (ApEn) as the input feature. ApEn is a statistical parameter that measures the predictability of the current amplitude values of a physiological signal based on its previous amplitude values. It is known that the value of the ApEn drops sharply during an epileptic seizure, and this fact is used in the proposed system. Two different types of neural networks, namely, Elman and probabilistic neural networks, are considered. ApEn is used for the first time in the proposed system for the detection of epilepsy using neural networks. It is shown that overall accuracy values as high as 100% can be achieved using the proposed system.
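To illustrate how ApEn values could feed one of the classifiers named above, here is a minimal Python sketch of a probabilistic neural network (a Gaussian Parzen-window classifier). The ApEn numbers, class labels, and the smoothing parameter sigma are all invented for the example; they are not results or settings from the paper.

    import numpy as np

    def pnn_classify(train_X, train_y, test_X, sigma=0.1):
        """Minimal probabilistic neural network over ApEn feature vectors.

        train_X : (n, d) training features (here d = 1, a single ApEn value)
        train_y : (n,)   class labels, e.g. 0 = normal, 1 = epileptic
        sigma   : Gaussian smoothing parameter (a guess; needs tuning on real data)
        """
        train_X = np.atleast_2d(train_X)
        classes = np.unique(train_y)
        predictions = []
        for x in np.atleast_2d(test_X):
            # Pattern layer: Gaussian kernel response of every training example.
            sq_dist = np.sum((train_X - x) ** 2, axis=1)
            kernel = np.exp(-sq_dist / (2.0 * sigma ** 2))
            # Summation layer: average the kernels per class; pick the largest.
            scores = [kernel[train_y == c].mean() for c in classes]
            predictions.append(classes[np.argmax(scores)])
        return np.array(predictions)

    # Toy illustration: seizure epochs tend to have lower ApEn than normal epochs.
    train_apen = np.array([[0.90], [1.00], [1.10], [0.30], [0.40], [0.35]])
    labels     = np.array([0, 0, 0, 1, 1, 1])  # 0 = normal, 1 = seizure
    print(pnn_classify(train_apen, labels, [[0.38], [1.05]]))  # expected: [1 0]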