CHAPTER I
INTRODUCTION
Computer networks play a principal role in many areas. The increasing size and complexity of networks result in a corresponding growth in the complexity of their security analysis. The possible financial, political, and other benefits that can be obtained through cyber attacks lead to a tremendous increase in the number of potential malefactors. Despite these facts, security analysis is still a process that depends mostly on the expertise of security administrators. All these problems define the importance of research and development in the field of automated security analysis of computer networks. This work suggests a framework for designing the Cyber Attack Modeling and Impact Assessment component which implements attack classification. In contrast to existing works, it describes attack modeling and impact evaluation techniques directed at optimizing the attack classification and analysis process, with the goal of enabling their use in systems operating in near real time. The fundamental contributions of this work are: classification of the following attacks (Probe, DOS, U2R, R2L) based on the back propagation algorithm; the fundamental principles of real-time event analysis; a procedure to identify malicious traffic by inspecting the compliance between security events and attacks; and the application of a real-time approach to attack classification.
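Since back propagation is the core classification technique named above, a minimal sketch of it follows, written in Java (the project's implementation language). The network sizes, learning rate, and toy XOR-style data are illustrative assumptions; the actual classifier would use KDD Cup 99 feature vectors with one output per attack category.

    // Minimal back propagation sketch: one hidden layer, sigmoid units,
    // trained by gradient descent on squared error. Illustrative only.
    public class BackpropSketch {
        static final int IN = 2, HID = 4;
        static double[][] w1 = new double[HID][IN + 1]; // last column is the bias
        static double[] w2 = new double[HID + 1];       // last entry is the bias
        static double lr = 0.5;                         // learning rate (assumed)

        static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

        // Forward pass; fills h with hidden activations and returns the output.
        static double forward(double[] x, double[] h) {
            for (int j = 0; j < HID; j++) {
                double s = w1[j][IN];
                for (int i = 0; i < IN; i++) s += w1[j][i] * x[i];
                h[j] = sigmoid(s);
            }
            double s = w2[HID];
            for (int j = 0; j < HID; j++) s += w2[j] * h[j];
            return sigmoid(s);
        }

        // One back propagation step for a single training pattern.
        static void train(double[] x, double target) {
            double[] h = new double[HID];
            double y = forward(x, h);
            double dOut = (y - target) * y * (1 - y); // output-layer delta
            for (int j = 0; j < HID; j++) {
                double dHid = dOut * w2[j] * h[j] * (1 - h[j]); // hidden delta (old w2)
                w2[j] -= lr * dOut * h[j];
                for (int i = 0; i < IN; i++) w1[j][i] -= lr * dHid * x[i];
                w1[j][IN] -= lr * dHid;
            }
            w2[HID] -= lr * dOut;
        }

        public static void main(String[] args) {
            java.util.Random rnd = new java.util.Random(42);
            for (double[] row : w1)
                for (int i = 0; i < row.length; i++) row[i] = rnd.nextGaussian() * 0.5;
            for (int j = 0; j < w2.length; j++) w2[j] = rnd.nextGaussian() * 0.5;
            double[][] X = {{0, 0}, {0, 1}, {1, 0}, {1, 1}}; // toy patterns
            double[] T = {0, 1, 1, 0};                       // toy labels
            for (int epoch = 0; epoch < 20000; epoch++)
                for (int k = 0; k < X.length; k++) train(X[k], T[k]);
            double[] h = new double[HID];
            for (int k = 0; k < X.length; k++)
                System.out.printf("in=%s out=%.3f target=%.0f%n",
                        java.util.Arrays.toString(X[k]), forward(X[k], h), T[k]);
        }
    }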
An intrusion detection system (IDS) monitors network traffic, watches for suspicious activity, and alerts the system or network administrator. In some cases the IDS may also respond to anomalous or malicious traffic by taking action, such as blocking the user or the source IP address from accessing the network.
IDSs come in a variety of “flavors” and approach the goal of detecting suspicious traffic in different ways. There are network based (NIDS) and host based (HIDS) intrusion detection systems. Some IDSs detect threats by looking for specific signatures of known threats, much like the way antivirus software typically detects and protects against malware, while others detect threats by comparing traffic patterns against a baseline and looking for anomalies; an IDS may simply monitor and alert, or it may perform an action or actions in response to a detected threat. Network intrusion detection systems are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. Ideally one would scan all inbound and outbound traffic; however, doing so could create a bottleneck that would impair the overall speed of the network. Host intrusion detection systems run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and alerts the user or administrator when suspicious activity is detected.
A signature based IDS monitors packets on the network and compares them against a database of signatures or attributes of known malicious threats. This is much like the way most antivirus software detects malware. The limitation is that there may be a lag between a new threat being discovered in the wild and the signature for detecting that threat being applied to the IDS. During that lag time the IDS would be unable to detect the new threat.
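A minimal sketch of this signature matching idea in Java follows (the signature strings and the string representation of a packet payload are illustrative assumptions, not real IDS rules):

    import java.util.Arrays;
    import java.util.List;

    // Minimal sketch of signature-based detection: a packet payload is
    // compared against a small database of known malicious patterns.
    public class SignatureMatcher {
        // Hypothetical signature database; a real IDS loads thousands of rules.
        private static final List<String> SIGNATURES = Arrays.asList(
                "cmd.exe /c",    // command injection fragment
                "' OR '1'='1",   // classic SQL injection probe
                "<script>alert(" // naive XSS probe
        );

        // Returns the first matching signature, or null if the payload is clean.
        public static String match(String payload) {
            for (String sig : SIGNATURES) {
                if (payload.contains(sig)) return sig;
            }
            return null;
        }

        public static void main(String[] args) {
            String payload = "GET /login?user=admin' OR '1'='1 HTTP/1.1";
            String hit = match(payload);
            System.out.println(hit == null ? "no alert" : "ALERT: signature matched: " + hit);
        }
    }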
An anomaly based IDS monitors network traffic and compares it against an established baseline. The baseline identifies what is “normal” for that network: what kind of bandwidth is ordinarily used, what protocols are used, and which ports and devices typically connect to one another. The IDS alerts the administrator or user when traffic is detected that is anomalous, or significantly different, from the baseline. Information systems and networks are subject to digital attacks. Attempts to breach information security are rising daily, aided by vulnerability assessment tools that are widely available on the Internet, both for free and for commercial use.
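The baseline comparison can be sketched as follows (the window size, threshold factor, and the bytes-per-second metric are illustrative assumptions):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch of baseline anomaly detection: traffic volume samples
    // are compared against a rolling mean; large deviations raise an alert.
    public class BaselineDetector {
        private final Deque<Double> window = new ArrayDeque<>();
        private final int windowSize;
        private final double threshold; // allowed deviation factor

        public BaselineDetector(int windowSize, double threshold) {
            this.windowSize = windowSize;
            this.threshold = threshold;
        }

        // Returns true if the new sample deviates strongly from the baseline.
        public boolean isAnomalous(double bytesPerSecond) {
            double mean = window.stream().mapToDouble(Double::doubleValue)
                                .average().orElse(bytesPerSecond);
            boolean anomalous = window.size() == windowSize
                    && bytesPerSecond > mean * threshold;
            if (window.size() == windowSize) window.removeFirst();
            window.addLast(bytesPerSecond);
            return anomalous;
        }

        public static void main(String[] args) {
            BaselineDetector d = new BaselineDetector(5, 3.0);
            double[] samples = {100, 110, 95, 105, 100, 102, 900}; // last one spikes
            for (double s : samples)
                System.out.println(s + " -> " + (d.isAnomalous(s) ? "ALERT" : "ok"));
        }
    }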
What might happen to a network is much like a burglary: the thief may be inside the network for a very long time, and you might not even know it. Firewalls do a good job of guarding the front doors, but they have no way of warning you if there is a backdoor or a hole in the infrastructure.
Script kiddies constantly scan the Internet for known bugs in systems, including steady scans of entire subnets. More skilled crackers may also be hired by competitors to target a specific network in order to gain a competitive advantage.
CHAPTER II
LITERATURE SURVEY
2.1. Cyber-Attack Detection: Modelling the Effects of Similarity and Scenario
Authors : Jajodia, S., Liu, P., Swarup, V., & Wang, C.
This work investigates the role of similarity (an analyst's way of comparing network events with experiences in memory) and the role of attack strategy (the timing of cyber attacks by an attacker) in influencing timely and accurate cyber attack detection. It manipulates the attack strategy and similarity assumptions in the model and evaluates the effects of this manipulation on the model's accurate and timely detection of cyber attacks. An IBL model was defined with two similarity mechanisms for comparing experiences in memory with network events: geometric (the model uses geometric distance to evaluate similarity) and feature-based (the model uses common and uncommon features to evaluate similarity). Attack strategy was manipulated as patient (all threats occur at the end of a scenario) or impatient (all threats occur at the beginning of a scenario). Results reveal that although attack strategy plays a significant role in cyber attack detection, the role of similarity is non-existent.
METHOD
Instance-based learning theory, security analyst and cognitive modeling.
PROBLEM
• The similarity mechanisms do not seem to influence the accuracy and timeliness in the model.
• This model did not focus on features of attributes.
2.2. How to Use Experience in Cyber Analysis: An Analytical Reasoning Support System
Authors : Chen, P. C., Liu, P., Yen, J., & Mullen, T.
Cyber analysis is a difficult task for analysts due to huge amounts of noise, abundant monitoring data, and the increasing complexity of the reasoning tasks. Experience from experts can therefore guide analysts' analytical reasoning and contribute to training. Despite its great potential benefits, experience has not been effectively leveraged in existing reasoning support systems because of the difficulty of elicitation and reuse. To fill this gap, the authors propose an experience-aided reasoning support system which can automatically capture experts' experience and subsequently guide novices' reasoning in a step-by-step manner. Drawing on cognitive theory, the model represents experience as a reasoning process involving “actions”, “observations”, and “hypotheses”. Computability and adaptability are the comparative advantages of this model: the “hypotheses” capture analysts' internal mental reasoning as a black box, while the “actions” and “observations” formally represent the external context and analysts' evidence exploration activities. This project demonstrates how a system built on this experience model can capture and utilize experience effectively.
The work proposes an experience-aided reasoning support system for cyber analysis. The main motivations for such a system are:
(1) To capture and represent experience from experts.
(2) To provide novice analysts with step-by-step guidance using the captured experience.
(3) To enable analysts to communicate effectively with others and benefit from other analysts' experience.
The contribution of this work is mainly two-fold:
• Model experience as a reasoning process involving action, observation and hypothesis. The model makes experience capturing and reuse computational and well adapted to analysts' reasoning, which is highly uncertain due to the dynamic cyber environment.
• An experience-aided analytical reasoning support system is developed based on this model to capture experience and provide sequential guidance to analysts.
2.3. Game strategies in network security
Authors : Kong-wei Lye, Jeannette M. Wing
PROBLEM STATEMENT
How the network security problem can be modeled as a general-sum stochastic game between the attacker and the administrator.
WORK DONE
This work models the interactions between an attacker and the administrator as a two-player stochastic game and constructs a model for the game. Using a nonlinear program (NLP-1), it computes Nash equilibria, or best-response strategies, for the players (attacker and administrator); multiple Nash equilibria were computed, each denoting best strategies (best responses) for both players. In the example scenario, an attacker on the Internet attempts to deface the home page on the network's public Web server, launch an internal denial-of-service (DOS) attack, and capture some important data from a workstation on the network. With proper modeling, the game-theoretic analysis presented here can also be applied to other general heterogeneous networks.
METHOD
Stochastic games and nonlinear programming.
PROBLEM
• This model does not reduce the computation time.
• It could not derive a response for each player from the strategies of the individual components.
2.4. Intrusion and intrusion detection
Authors : John McHugh
WORK DONE
This work describes the two primary intrusion detection techniques, anomaly detection and signature-based misuse detection, in some detail, and surveys a number of contemporary research and commercial intrusion detection systems. It ends with a brief discussion of the problems associated with evaluating intrusion detection systems and of the difficulties associated with making further progress in the field. With respect to the latter, it notes that, like many fields, intrusion detection has been based on a combination of intuition and brute-force techniques. The author suspects that these have carried the field as far as they can, and that further significant progress will depend on the development of an underlying theoretical basis for the field.
METHOD
Intrusion detection, Intrusive anomalies and Intrusion detection systems (IDS).
PROBLEM
The paper looks at the problem of malicious users from both a historical and a practical standpoint. It traces the history of intrusion and intrusion detection from the early 1970s to the present day, beginning with a historical overview.
2.5. A Taxonomy of Cyber Awareness Questions for the User-Centered Design of Cyber Situation Awareness
Authors : Paul, C. L., & Whiteley, K.
Throughout the developed world, governments, defense organizations, and companies in finance, power, and telecommunications are increasingly targeted by overlapping surges of cyber attacks from criminals and nation-states seeking economic or military advantage. The number of attacks is now so large and their sophistication so great, that many organizations are having trouble determining which new threats and vulnerabilities pose the greatest risk and how resources should be allocated to ensure that the most probable and damaging attacks are dealt with first. This report summarizes vulnerability and attack trends, focusing on those threats that have the greatest potential to negatively impact your network and your business. It identifies key elements that enable these threats and associates these key elements with security controls that can mitigate your risk.
Disadvantages:
• Computing discrete logarithms is believed to be difficult.
• No efficient general method for computing discrete logarithms on conventional computers is known, and several important public-key cryptography algorithms base their security on the assumption that the discrete logarithm problem has no efficient solution.
2.6. Symantec Security Response online report, Symantec, Tech. Rep., February 2011.
Authors : N. Falliere, L. O. Murchu, and E. Chien
Desktop computers are often compromised by the interaction of untrusted data and buggy software. To address this problem, we present Apiary, a system that transparently contains application faults while retaining the usage metaphors of a traditional desktop environment. Apiary accomplishes this with three key mechanisms. It isolates applications in containers that integrate in a controlled manner at the display and file system. It introduces ephemeral containers that are quickly instantiated for single-application execution, to prevent any exploit that occurs from persisting and to protect user privacy. It introduces the Virtual Layered File System to make instantiating containers fast and space efficient, and to make managing many containers no more complex than managing a single traditional desktop.
DIS-ADVANTAGES
Traditional traffic classification techniques may rely on the port numbers specified by different applications or on the signature strings in the payload of IP packets; such techniques are challenged by dynamic port numbers and user privacy protection.
ADVANTAGES
Modern techniques normally utilize host/network behavior analysis or flow-level statistical features, taking emerging and encrypted applications into account.
2.7. Lessons Learned from the Maroochy Water Breach, ser. IFIP International Federation for Information Processing. Springer US, 2007, vol. 253, pp. 73-82.
Authors : J. Slay and M. Miller
Web applications are the most common way to make services and data available on the Internet. Unfortunately, with the increase in the number and complexity of these applications, there has also been an increase in the number and complexity of vulnerabilities. Current techniques to identify security problems in web applications have mostly focused on input validation flaws, such as cross-site scripting and SQL injection, with much less attention devoted to application logic vulnerabilities. Application logic vulnerabilities are an important class of defects that are the result of faulty application logic. These vulnerabilities are specific to the functionality of particular web applications, and, thus, they are extremely difficult to characterize and identify.
DIS-ADVANTAGES
In state-of-the-art traffic classification methods, Internet traffic is characterized by a set of flow statistical properties, and machine learning techniques are applied to automatically search for structural patterns.
ADVANTAGES
The main reason for the underperformance of a number of traditional classifiers, including NB, is the lack of a feature discretization process.
2.8. “Research challenges for the security of control systems, ” in Proceedings of the 3rd conference on Hot topics in security, ser. HOTSEC’08. Berkeley, CA, USA: USENIX Association, 2008, pp. 1-6.
Authors : A. A. Cardenas, S. Amin, and S. Sastry,
Cross-site scripting flaws have now surpassed buffer overflows as the world's most publicly reported security vulnerability. In recent years, browser vendors and researchers have tried to develop client-side filters to mitigate these attacks. We analyze the best existing filters and find them to be either unacceptably slow or easily circumvented. Worse, some of these filters could introduce vulnerabilities into sites that were previously bug-free. We propose a new filter design that achieves both high performance and high precision by blocking scripts after HTML parsing but before execution. Compared to previous approaches, this approach is faster, protects against more vulnerabilities, and is harder for attackers to abuse.
DIS-ADVANTAGES
The performance evaluation shows that the traffic classification using very few training samples can be significantly improved by our approach.
ADVANTAGES
A novel nonparametric approach, TCC, was proposed to investigate correlation information in real traffic data and incorporate it into traffic classification.
2.9. “Attack models and scenarios for networked control systems,” in Proceedings of the 1st international conference on High Confidence Networked Systems, ser. HiCoNS ’12. New York, NY, USA: ACM, 2012, pp. 55-64.
Authors : A. Teixeira, D. Perez, H. Sandberg, and K. H. Johansson
Learning-based anomaly detection has proven to be an effective black-box technique for detecting unknown attacks. However, the effectiveness of this technique crucially depends upon both the quality and the completeness of the training data. Unfortunately, in most cases, the traffic to the system (e.g., a web application or daemon process) protected by an anomaly detector is not uniformly distributed. Therefore, some components (e.g., authentication, payments, or content publishing) might not be exercised enough to train an anomaly detection system in a reasonable time frame. This is of particular importance in real-world settings, where anomaly detection systems are deployed with little or no manual configuration, and they are expected to automatically learn the normal behavior of a system to detect or block attacks. In this work, we first demonstrate that the features utilized to train a learning-based detector can be semantically grouped, and that features of the same group tend to induce similar models. We run our experiments on a real-world data set containing over 58 million HTTP requests to more than 36,000 distinct web application components. The results show that by using the proposed solution, it is possible to achieve effective attack detection even with scarce training data.
DIS-ADVANTAGES
• It assumes independent features.
• The NB classifier only requires a small amount of training data to estimate the parameters of a classification model.
ADVANTAGES
• NB with feature discretization demonstrates not only significantly higher accuracy but also much faster classification speed.
• NB effectively improves the accuracies of the support vector machine (SVM) at the price of lower classification speed.
2.10. “Distributed fault detection for interconnected second-order systems,” Automatica, vol. 47, no. 12, 2011.
Authors : I. Shames, A. M. Teixeira, H. Sandberg, and K. H. Johansson
Learning-based anomaly detection has proven to be an effective black-box technique for detecting unknown attacks. However, the effectiveness of this technique crucially depends upon both the quality and the completeness of the training data. Unfortunately, in most cases, the traffic to the system protected by an anomaly detector is not uniformly distributed. Therefore, some components (e.g., authentication, payments, or content publishing) might not be exercised enough to train an anomaly detection system in a reasonable time frame. This is of particular importance in real-world settings, where anomaly detection systems are deployed with little or no manual configuration, and they are expected to automatically learn the normal behavior of a system to detect or block attacks. In this work, we first demonstrate that the features utilized to train a learning-based detector can be semantically grouped, and that features of the same group tend to induce similar models.
DIS-ADVANTAGES
• In supervised traffic classification, sufficient supervised training data is a general assumption.
• It aims to address the problems suffered by payload-based traffic classification.
ADVANTAGES
• A novel non-parametric approach which incorporates correlation of traffic flows to improve the classification performance.
CHAPTER III
EXISTING SYSTEM
Traditional traffic classification techniques may rely on the port numbers specified by different applications or on the signature strings in the payload of IP packets; such techniques are challenged by dynamic port numbers and user privacy protection. Modern techniques normally utilize host/network behavior analysis or flow-level statistical features, taking emerging and encrypted applications into account. In state-of-the-art traffic classification methods, Internet traffic is characterized by a set of flow statistical properties, and machine learning techniques are applied to automatically search for structural patterns.
The main reason for the underperformance of a number of traditional classifiers, including NB, is the lack of a feature discretization process. A big challenge for current network management is to handle a large number of emerging applications, for which it is almost impossible to collect sufficient training samples in a limited time. Only very few samples can be manually labeled as supervised training data, since traffic labeling is time-consuming. The supervised traffic classification methods analyse the supervised training data and produce an inferred function which can predict the output class for any testing flow. The discretization step referred to above is sketched below.
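As a minimal sketch of feature discretization, equal-width binning can be applied to a continuous flow feature before NB training (the bin count and value range are illustrative assumptions):

    // Minimal sketch of equal-width feature discretization, the preprocessing
    // step whose absence is blamed above for the underperformance of NB.
    public class Discretizer {
        // Maps a continuous value into one of `bins` equal-width intervals.
        static int discretize(double value, double min, double max, int bins) {
            if (value <= min) return 0;
            if (value >= max) return bins - 1;
            return (int) ((value - min) / (max - min) * bins);
        }

        public static void main(String[] args) {
            double[] durations = {0.0, 0.3, 1.2, 4.9, 10.0}; // e.g., flow durations
            for (double d : durations)
                System.out.println(d + " -> bin " + discretize(d, 0.0, 10.0, 5));
        }
    }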
3.1 DIS-ADVANTAGES
• Many existing Intrusion Detection Systems (IDSs) examine network packets individually within both the web server and the database system.
• However, there is very little work being performed on multitier Anomaly Detection (AD) systems that generate models of network behavior for both web and database network interactions.
• In such multitier architectures, the back-end database server is often protected behind a firewall while the web servers are remotely accessible over the Internet.
• Web delivered services and applications have increased in both popularity and complexity over the past few years.
• Daily tasks, such as banking, travel, and social networking, are all done via the web. Such services typically employ a web server front end that runs the application user interface logic, as well as a back-end server that consists of a database or file server.
• Due to their ubiquitous use for personal and/or corporate data, web services have always been the target of attacks.
3.2 SYSTEM ARCHITECTURE
Figure : 3.2 System Architecture
CHAPTER IV
PROPOSED SYSTEM
4.1 PROBLEM ANALYSIS
• The system is used to detect attacks in multi-tiered web services. This approach can create normality models of isolated user sessions that include both the web front-end (HTTP) and back-end (file or SQL) network transactions.
• To achieve this, a lightweight virtualization technique is employed to assign each user's web session to a dedicated container, an isolated virtual computing environment.
• The container ID is used to accurately associate the web request with the subsequent DB queries (see the sketch after this list).
• A novel non-parametric approach which incorporates correlation of traffic flows to improve the classification performance.
• It provides a detailed analysis of the novel classification approach and its performance benefit from both theoretical and empirical aspects.
• The performance evaluation shows that the traffic classification using very few training samples can be significantly improved by our approach.
• A novel nonparametric approach, TCC, was proposed to investigate correlation information in real traffic data and incorporate it into traffic classification.
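A minimal sketch of the container-ID association mentioned in the list above follows (class, method, and identifier names are hypothetical; the real system assigns each web session to an isolated container):

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Minimal sketch: each user session runs in its own container, and the
    // container ID is used to associate a web request with the DB queries it
    // triggers, so front-end and back-end events form one normality model.
    public class SessionMapper {
        private final Map<String, List<String>> byContainer = new HashMap<>();

        public void recordHttpRequest(String containerId, String request) {
            byContainer.computeIfAbsent(containerId, k -> new ArrayList<>())
                       .add("HTTP: " + request);
        }

        public void recordDbQuery(String containerId, String query) {
            byContainer.computeIfAbsent(containerId, k -> new ArrayList<>())
                       .add("SQL: " + query);
        }

        // Events of one session, in order; mismatched pairs indicate anomalies.
        public List<String> sessionEvents(String containerId) {
            return byContainer.getOrDefault(containerId, Collections.emptyList());
        }

        public static void main(String[] args) {
            SessionMapper m = new SessionMapper();
            m.recordHttpRequest("c42", "GET /account");
            m.recordDbQuery("c42", "SELECT balance FROM accounts WHERE id=7");
            System.out.println(m.sessionEvents("c42"));
        }
    }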
4.2 ADVANTAGES
• It provides more efficient results compared to the existing techniques.
• It can trace the location of a host on the LAN.
• It can shut down the client system with the help of the server.
• It is able to identify a wide range of attacks.
• It uses a lightweight virtualization technique.
4.3 DATAFLOW DIAGRAM
Fig : 4.3.1 Data Flow Diagram
CHAPTER V
SYSTEM REQUIREMENTS
5.1 HARDWARE REQUIREMENTS
• System : Pentium III & above
• Hard Disk : 40 GB
• RAM : 256 MB
5.2 SOFTWARE REQUIREMENTS
• O/S : Windows XP.
• Front End : Java
• IDE : Net Beans IDE 6.4.1
• Server
CHAPTER VI
SYSTEM ANALYSIS
6.1 MODULES
• CONNECTION ESTABLISHMENT
• LOAD DATASET
• CLASSIFICATION OF ATTACKS
• SCANNING PROCESS
• SYSTEM CONTROL
6.2 MODULES DESCRIPTION
6.2.1 CONNECTION ESTABLISHMENT
In this module, the connection between the server and the client is established. For the connection establishment process, the server program is run first, and then the client's port number is specified. In this design, a connection can be established between one server and more than one client at a time. After the port number is specified, the connection is successfully established, and a confirmation message is shown on the server machine.
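A minimal sketch of this connection establishment in Java follows (the port number and messages are illustrative; the real application also exchanges scanning and control messages over these sockets):

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Minimal sketch: a server accepts multiple clients, one thread per
    // connection, and reports each established connection on the console.
    public class MultiClientServer {
        public static void main(String[] args) throws IOException {
            int port = 5555; // illustrative port; the client must use the same
            try (ServerSocket server = new ServerSocket(port)) {
                System.out.println("Server listening on port " + port);
                while (true) {
                    Socket client = server.accept(); // blocks until a client connects
                    System.out.println("Connection established with " + client.getInetAddress());
                    new Thread(() -> {
                        try (PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                            out.println("connection established");
                        } catch (IOException e) {
                            e.printStackTrace();
                        }
                    }).start();
                }
            }
        }
    }

On the client side, new Socket("server-host", 5555) opens the corresponding connection.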
6.2.2 LOAD DATA SET
The KDD Cup 99 data set is used as the input data set. To identify attacks in the network, the data set is first loaded into the system. This data set contains both normal and attack records. From this data set the attacks are found and classified into six groups based upon four different categories.
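A minimal sketch of loading the data set follows (the file name is an assumption; KDD Cup 99 records are comma-separated lines of 41 features followed by a label such as "normal." or "smurf."):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch: read KDD Cup 99 records; each line has 41
    // comma-separated features followed by a label field.
    public class KddLoader {
        public static List<String[]> load(String path) throws IOException {
            List<String[]> records = new ArrayList<>();
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    if (!line.isEmpty()) records.add(line.split(","));
                }
            }
            return records;
        }

        public static void main(String[] args) throws IOException {
            List<String[]> data = load("kddcup.data_10_percent"); // assumed local file
            System.out.println("Loaded " + data.size() + " records");
            String[] first = data.get(0);
            System.out.println("Label of first record: " + first[first.length - 1]);
        }
    }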
6.2.3 CLASSIFICATION OF ATTACKS
After the data set is loaded into the system, the next stage is classification. In this stage, the attacks are classified into six different groups based upon four different categories. The categories are:
• DOS (Denial of Service)
• U2R
• R2L and
• Probe attack.
These attacks are classified from the whole data set. The normal and threat data are separated from the list, each of which can be handled in a separate and efficient manner. The amounts of attack and normal data are also listed. A mapping from raw labels to these categories is sketched below.
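This sketch maps representative KDD Cup 99 labels onto the four categories (the label list covers common labels in the 10% data set, not every label):

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: KDD Cup 99 labels grouped into the four attack
    // categories used in this work, plus "normal".
    public class AttackCategories {
        static final Map<String, String> CATEGORY = new HashMap<>();
        static {
            for (String l : new String[]{"smurf", "neptune", "back", "teardrop", "pod", "land"})
                CATEGORY.put(l, "DOS");
            for (String l : new String[]{"portsweep", "ipsweep", "satan", "nmap"})
                CATEGORY.put(l, "Probe");
            for (String l : new String[]{"guess_passwd", "ftp_write", "imap", "phf",
                    "warezclient", "warezmaster", "multihop", "spy"})
                CATEGORY.put(l, "R2L");
            for (String l : new String[]{"buffer_overflow", "rootkit", "loadmodule", "perl"})
                CATEGORY.put(l, "U2R");
            CATEGORY.put("normal", "Normal");
        }

        static String categorize(String rawLabel) {
            // Labels in the file end with a trailing dot, e.g. "smurf."
            String key = rawLabel.endsWith(".")
                    ? rawLabel.substring(0, rawLabel.length() - 1) : rawLabel;
            return CATEGORY.getOrDefault(key, "Unknown");
        }

        public static void main(String[] args) {
            System.out.println(categorize("smurf."));           // DOS
            System.out.println(categorize("buffer_overflow.")); // U2R
        }
    }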
6.2.4 SCANNING PROCESS
In this stage, the real-time implementation is applied. First the client system is scanned with the help of different scanners such as the port scanner, traceroute tool, host locator tool and ping tool. For LAN scanning, the ping tool is used. This work also lists the processes that are running on the client system, much like a task manager. With the help of the SOM, the location of the host on the LAN is traced. The back propagation algorithm is used to compare the training and test data sets and find the attacks: the loaded data set is first classified, after which the attacks are detected.
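A minimal sketch of the ping and port scanning steps using standard Java networking APIs follows (the host address and port range are illustrative):

    import java.io.IOException;
    import java.net.InetAddress;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Minimal sketch of LAN scanning: a reachability check ("ping") followed
    // by a TCP connect scan over a small port range.
    public class LanScanner {
        public static void main(String[] args) throws IOException {
            String host = "192.168.1.10"; // illustrative LAN address
            InetAddress addr = InetAddress.getByName(host);

            // "Ping": isReachable uses ICMP if permitted, else a TCP echo probe.
            System.out.println(host + " reachable: " + addr.isReachable(1000));

            // Port scan: try a TCP connection with a short timeout.
            for (int port = 20; port <= 25; port++) {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress(addr, port), 200);
                    System.out.println("port " + port + " open");
                } catch (IOException e) {
                    System.out.println("port " + port + " closed/filtered");
                }
            }
        }
    }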
6.2.5 SYSTEM CONTROL
This module controls the client system from the server. The processes running on the client are listed. If the list contains an attack process, for example Wscript, it can be terminated using the end process option. The client system can also be shut down.
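A minimal sketch of these control actions, assuming the client runs Windows, follows (the process name is illustrative; tasklist, taskkill and shutdown are the standard Windows command-line tools):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;

    // Minimal sketch: list client processes, kill a suspicious one by name,
    // and shut the machine down, using the standard Windows command tools.
    public class SystemControl {
        static void run(String command) throws IOException {
            Process p = Runtime.getRuntime().exec(command);
            try (BufferedReader out = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = out.readLine()) != null) System.out.println(line);
            }
        }

        public static void main(String[] args) throws IOException {
            run("tasklist");                    // list running processes
            run("taskkill /F /IM wscript.exe"); // end a suspicious process
            // run("shutdown -s -t 0");         // shut the client down (commented for safety)
        }
    }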
6.3 UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. The standard is managed, and was created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer software. In its current form UML comprises two major components: a meta-model and a notation. The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
6.3.1 USE CASE DIAGRAMS
Use case diagrams model the functionality of a system using actors and use cases. Use cases are services or functions provided by the system to its users. The purpose of a use case diagram is to capture the dynamic aspect of a system. This definition alone is too generic, because four other diagrams (activity, sequence, collaboration and state chart) have the same purpose, so we look at the specific purposes that distinguish the use case diagram from the other four.
Figure : 6.3.1 Use Case Diagram
6.3.2 CLASS DIAGRAM
The class diagram is a static diagram. It represents the static view of an application. Class diagram is not only used for visualizing, describing and documenting different aspects of a system but also for constructing executable code of the software application. The class diagram describes the attributes and operations of a class and also the constraints imposed on the system. The class diagrams are widely used in the modeling of object oriented systems because they are the only UML diagrams which can be mapped directly with object oriented languages. The class diagram shows a collection of classes, interfaces, associations, collaborations and constraints. It is also known as a structural diagram.
Figure : 6.3.2 Class Diagram
6.3.3 SEQUENCE DIAGRAM
Sequence diagrams describe interactions among classes in terms of an exchange of messages over time. An important characteristic of a sequence diagram is that time passes from top to bottom: the interaction starts near the top of the diagram and ends at the bottom (i.e. lower equals later). A popular use for them is to document the dynamics in an object-oriented system. For each key collaboration, diagrams are created that show how objects interact in various representative scenarios for that collaboration.
Figure : 6.3.3 Sequence diagram
6.3.4 ACTIVITY DIAGRAM
Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. They are used to describe the business and operational step-by-step workflows of components in a system.
Figure 6.3.4 Activity Diagram
6.4 SCREENSHOTS
6.4.1 MAIN SERVER CONNECTED
6.4.2 IMPORT KDD CUP 99 DATASET
6.4.3 CLASSIFICATION OF U2R
6.4.4 CLASSIFICATION OF R2L
6.4.5 LOAD CALCULATION
CHAPTER VII
CONCLUSION
7.1 CONCLUSION
Security is the most important concern in an intrusion detection system. In the existing system, a number of methods and algorithms are used for detecting attacks. The proposed system uses two different learning schemes in the neural network: supervised learning and unsupervised learning. Supervised learning uses the MLP algorithm to find the attacks, and unsupervised learning uses the SOM to trace locations. For detection purposes, the KDD Cup 99 data sets are used. In the proposed work, the attacks can be classified into six different groups based upon four different categories: DOS, U2R, R2L, and Probe. The proposed work can also end an attack process on the client system and shut the system down. An unwanted shutdown is itself considered one of the attacks.
7.2 FUTURE WORK
In future, this system can be used to identify the tradeoff between tracing bits and parity bits, where the former identify the malicious relay nodes and discard (erase) the bits received from them, and the latter correct the errors caused by channel impairments such as fading and noise. It can also be shown that there exists an optimal allocation of redundancy between tracing bits and parity bits that minimizes the probability of decoding error or maximizes the throughput.
REFERENCES
1. N. Falliere, L. O. Murchu, and E. Chien, “W32.Stuxnet dossier,” Symantec Security Response online report, Symantec, Tech. Rep., February 2011.
2. J. Slay and M. Miller, Lessons Learned from the Maroochy Water Breach, ser. IFIP International Federation for Information Processing. Springer US, 2007, vol. 253, pp. 73-82.
3. E. Byres and J. Lowe, “The myths and facts behind cyber security risk for industrial control systems,” in ISA Process Control Conference, 2003.
4. A. A. Cardenas, S. Amin, and S. Sastry, “Research challenges for the security of control systems,” in Proceedings of the 3rd conference on Hot topics in security, ser. HOTSEC ’08. Berkeley, CA, USA: USENIX Association, 2008, pp. 1-6.
5. A. A. Cardenas, S. Amin, and S. Sastry, “Secure control: Towards survivable cyber-physical systems,” in First International Workshop on Cyber-Physical Systems, June 2008, pp. 495-500.
6. S. Amin, A. A. Cardenas, and S. S. Sastry, “Safe and secure networked control systems under denial-of-service attacks,” in Proceedings of the 12th International Conference on Hybrid Systems: Computation and Control, ser. HSCC ’09. Berlin, Heidelberg: Springer-Verlag, 2009.
7. Z.-H. Pang and G.-P. Liu, “Design and implementation of secure networked predictive control systems under deception attacks,” Control Systems Technology, IEEE Transactions on, vol. 20, no. 5, pp. 1334-1342, September 2012.
8. A. Teixeira, D. Perez, H. Sandberg, and K. H. Johansson, “Attack models and scenarios for networked control systems,” in Proceedings of the 1st international conference on High Confidence Networked Systems, ser. HiCoNS ’12. New York, NY, USA: ACM, 2012, pp. 55-64.
9. I. Shames, A. M. Teixeira, H. Sandberg, and K. H. Johansson, “Distributed fault detection for interconnected second-order systems,” Automatica, vol. 47, no. 12, pp. 2757-2764, 2011.
10. Y. Mo and B. Sinopoli, “False data injection attacks in control systems,” in First Workshop on Secure Control Systems, Cyber Physical Systems Week 2010, April 2010.
11. J. Gertler, Fault Detection and Diagnosis in Engineering Systems. New York: Marcel Dekker, Inc., 1998.
12. M. Blanke, M. Kinnaert, J. Lunze, and M. Staroswiecki, Diagnosis and Fault-Tolerant Control, 2nd ed. Springer, 2006.
13. S. X. Ding, Model-based Fault Diagnosis Techniques: Design Schemes, Algorithms, and Tools, 1st ed. Springer Publishing Company, Incorporated, 2008.