A mobile toolkit and customised location server for the creation of cross-referencing location-based services
- Authors: Ndakunda, Shange-Ishiwa Tangeni
- Date: 2013
- Subjects: Location-based services -- Security measures , Mobile communication systems -- Security measures , Digital communications , Java (Computer program language) , Application software -- Development -- Computer programs , User interfaces (Computer systems)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4703 , http://hdl.handle.net/10962/d1013604
- Description: Although there are several Software Development Kits and Application Programming Interfaces for client-side location-based services development, they mostly involve the creation of self-referencing location-based services. Self-referencing location-based services include services such as geocoding, reverse geocoding, route management and navigation, which focus on satisfying the location-based requirements of a single mobile device. There is a lack of open-source Software Development Kits for the development of client-side location-based services that are cross-referencing. Cross-referencing location-based services are designed for the sharing of location information amongst different entities on a given network. This project was undertaken to assemble, through incremental prototyping, a client-side Java Micro Edition location-based services Software Development Kit and a Mobicents location server, to aid mobile network operators and developers alike in quickly creating cross-referencing location-based applications, with transport and privacy protection, on Session Initiation Protocol bearer networks. The privacy of the location information is protected using geolocation policies. Developers do not need to have an understanding of Session Initiation Protocol event signaling specifications or of the XML Configuration Access Protocol to use the tools that we put together. The developed tools are later consolidated using two sample applications, the friend-finder and child-tracker services. Developer guidelines are also provided, to aid in using the provided tools.
- Full Text:
- Date Issued: 2013
An exploratory study of techniques in passive network telescope data analysis
- Authors: Cowie, Bradley
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Computer networks -- Monitoring , Computer networks -- Security measures
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4573 , http://hdl.handle.net/10962/d1002038
- Description: Careful examination of the composition and concentration of malicious traffic in transit on the channels of the Internet provides network administrators with a means of understanding and predicting damaging attacks directed towards their networks. This allows for action to be taken to mitigate the effect that these attacks have on the performance of their networks and the Internet as a whole, by readying network defences and providing early warning to Internet users. One approach to malicious traffic monitoring that has garnered some success in recent times, as exhibited by the study of fast-spreading Internet worms, involves analysing data obtained from network telescopes. While some research has considered using measures derived from network telescope datasets to study large-scale network incidents such as Code-Red, SQLSlammer and Conficker, there is very little documented discussion on the merits and weaknesses of approaches to analysing network telescope data. This thesis is an introductory study in network telescope analysis and aims to consider the variables associated with the data received by network telescopes and how these variables may be analysed. The core research of this thesis considers both novel and previously explored analysis techniques from the fields of security metrics, baseline analysis, statistical analysis and technical analysis as applied to analysing network telescope datasets. These techniques were evaluated as approaches to recognise unusual behaviour by observing the ability of these techniques to identify notable incidents in network telescope datasets.
- Full Text:
- Date Issued: 2013
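As a minimal sketch of the kind of baseline/statistical analysis this abstract refers to (not code from the thesis), a z-score test can flag days on which a telescope's packet count deviates sharply from its historical baseline. The counts and threshold below are invented for illustration:

```python
import statistics

def flag_anomalies(daily_counts, threshold=2.5):
    """Return indices of days whose packet count deviates from the
    baseline mean by more than `threshold` standard deviations.
    (A single large spike inflates the sample deviation, so a
    threshold somewhat below 3 is used in this toy example.)"""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts)
    return [i for i, c in enumerate(daily_counts)
            if stdev > 0 and abs(c - mean) / stdev > threshold]

# Synthetic telescope counts: a stable baseline with one worm-like spike.
counts = [100, 104, 98, 101, 99, 102, 100, 950, 103, 97]
print(flag_anomalies(counts))  # → [7]
```

Real telescope analysis would work over far larger datasets and more robust baselines (e.g. rolling medians), but the principle of comparing observations against a learned baseline is the same.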
An investigation of online threat awareness and behaviour patterns amongst secondary school learners
- Authors: Irwin, Michael Padric
- Date: 2013 , 2013-04-29
- Subjects: Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4576 , http://hdl.handle.net/10962/d1002965 , Computer security -- South Africa -- Grahamstown , Risk perception -- South Africa -- Grahamstown , High school students -- South Africa -- Grahamstown , Communication -- Sex differences -- South Africa -- Grahamstown , Internet and teenagers -- South Africa -- Grahamstown , Internet and teenagers -- Risk assessment -- South Africa -- Grahamstown , Internet -- Safety measures -- South Africa -- Grahamstown , Online social networks -- Social aspects -- South Africa -- Grahamstown
- Description: The research area of this work is online threat awareness within an information security context. The research was carried out on secondary school learners at boarding schools in Grahamstown. The participating learners were in Grades 8 to 12. The goals of the research included determining the actual levels of awareness, the difference between these and the self-perceived levels of the participants, the assessment of risk in terms of online behaviour, and the determination of any gender differences in the answers provided by the respondents. A review of relevant literature and similar studies was carried out, and data was collected from the participating schools via an online questionnaire. This data was analysed and discussed within the frameworks of awareness of threats, online privacy, social media, sexting, cyberbullying and password habits. The concepts of information security and online privacy are present throughout these discussion chapters, providing the themes for linking the discussion points together. The results of this research show that the respondents are exposed to a high level of risk. This is due to the gaps identified between actual awareness and perception, as well as the exhibition of online behaviour patterns that are considered high risk. A strong need for the construction and adoption of threat awareness programmes by these and other schools is identified, as are areas of particular need for inclusion in such programmes. Some gender differences are present, but not to the extent that there is a significant difference between male and female respondents in terms of overall awareness, knowledge and behaviour.
- Full Text:
- Date Issued: 2013
Deploying DNSSEC in islands of security
- Authors: Murisa, Wesley Vengayi
- Date: 2013 , 2013-03-31
- Subjects: Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4577 , http://hdl.handle.net/10962/d1003053 , Internet domain names , Computer security , Computer network protocols , Computer security -- Africa
- Description: The Domain Name System (DNS), a name resolution protocol, is one of the vulnerable network protocols that has been subjected to many security attacks, such as cache poisoning, denial of service and the 'Kaminsky' spoofing attack. When DNS was designed, security was not incorporated into its design. The DNS Security Extensions (DNSSEC) provide security to the name resolution process by using public key cryptosystems. Although DNSSEC has backward compatibility with unsecured zones, it only offers security to clients when communicating with security-aware zones. Widespread deployment of DNSSEC is therefore necessary to secure the name resolution process and provide security to the Internet. Only a few Top Level Domains (TLDs) have deployed DNSSEC; this inherently makes it difficult for their sub-domains to implement the security extensions to the DNS. This study analyses mechanisms that can be used by domains in islands of security to deploy DNSSEC so that the name resolution process can be secured in two specific cases: where the TLD is not signed, or where the domain registrar is not able to support signed domains. The DNS client-side mechanisms evaluated in this study include web browser plug-ins, local validating resolvers and domain look-aside validation. The results of the study show that web browser plug-ins cannot work on their own without local validating resolvers. The web browser validators, however, proved to be useful in indicating to the user whether a domain has been validated or not. Local resolvers present a more secure option for Internet users who cannot trust the communication channel between their stub resolvers and remote name servers. However, they do not provide a way of showing the user whether a domain name has been correctly validated or not. Based on the results of the tests conducted, it is recommended that local validators be used with browser validators for visibility and improved security.
On the DNS server side, Domain Look-aside Validation (DLV) presents a viable alternative for organisations in islands of security, like most countries in Africa, where only two country code Top Level Domains (ccTLDs) have deployed DNSSEC. This research recommends the use of DLV by corporates to provide DNS security to both internal and external users accessing their web-based services.
- Full Text:
- Date Issued: 2013
Information technology audits in South African higher education institutions
- Authors: Angus, Lynne
- Date: 2013 , 2013-09-11
- Subjects: Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4615 , http://hdl.handle.net/10962/d1006023 , Electronic data processing -- Auditing , Delphi method , Education, Higher -- Computer networks -- Security measures , Information technology -- Security measures , COBIT (Information technology management standard) , IT infrastructure library , International Organization for Standardization
- Description: The use of technology for competitive advantage has become a necessity, not only for corporate organisations, but for higher education institutions (HEIs) as well. Consequently, corporate organisations and HEIs alike must be equipped to protect against the pervasive nature of technology. To do this, they implement controls and undergo audits to ensure these controls are implemented correctly. Although HEIs are a different kind of entity to corporate organisations, HEI information technology (IT) audits are based on the same criteria as those for corporate organisations. The primary aim of this research, therefore, was to develop a set of IT control criteria that are relevant to be tested in IT audits for South African HEIs. The research method used was the Delphi technique. Data was collected, analysed, and used as feedback on which to progress to the next round of data collection. Two lists were obtained: a list of the top IT controls relevant to be tested at any organisation, and a list of the top IT controls relevant to be tested at a South African HEI. Comparison of the two lists shows that although there are some differences in the ranking of criteria used to audit corporate organisations as opposed to HEIs, the final two lists of criteria do not differ significantly. Therefore, it was shown that the same broad IT controls are required to be tested in an IT audit for a South African HEI. However, this research suggests that the risk weighting put on particular IT controls should possibly differ for HEIs, as HEIs face differing IT risks. If further studies can be established which cater for more specific controls, then the combined effect of this study and future ones will be a valuable contribution to knowledge for IT audits in a South African higher education context.
- Full Text:
- Date Issued: 2013
Log analysis aided by latent semantic mapping
- Authors: Buys, Stephanus
- Date: 2013 , 2013-04-14
- Subjects: Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4575 , http://hdl.handle.net/10962/d1002963 , Latent semantic indexing , Data mining , Computer networks -- Security measures , Computer hackers , Computer security
- Description: In an age of zero-day exploits and increased on-line attacks on computing infrastructure, operational security practitioners are becoming increasingly aware of the value of the information captured in log events. Analysis of these events is critical during incident response and forensic investigations related to network breaches, hacking attacks and data leaks. Such analysis has led to the discipline of Security Event Analysis, also known as Log Analysis. There are several challenges when dealing with events, foremost being the increased volumes at which events are often generated and stored. Furthermore, events are often captured as unstructured data, with very little consistency in the formats or contents of the events. In this environment, security analysts and implementers of Log Management (LM) or Security Information and Event Management (SIEM) systems face the daunting task of identifying, classifying and disambiguating massive volumes of events in order for security analysis and automation to proceed. Latent Semantic Mapping (LSM) is a proven paradigm shown to be an effective method of, among other things, enabling word clustering, document clustering, topic clustering and semantic inference. This research is an investigation into the practical application of LSM in the discipline of Security Event Analysis, showing the value of using LSM to assist practitioners in identifying types of events, classifying events as belonging to certain sources or technologies, and disambiguating different events from each other. The culmination of this research presents adaptations to traditional natural language processing techniques that resulted in improved efficacy of LSM when dealing with Security Event Analysis. This research provides strong evidence supporting the wider adoption and use of LSM, as well as further investigation into Security Event Analysis assisted by LSM and other natural language processing or machine learning techniques.
- Full Text:
- Date Issued: 2013
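To make the underlying representation concrete (this is a toy illustration, not the thesis's implementation): LSM starts from a bag-of-words term space in which each log event is a term-count vector, then uses a singular value decomposition to map those vectors into a lower-dimensional semantic space. The sketch below shows just the first stage, comparing invented log events by cosine similarity of their term vectors:

```python
from collections import Counter
from math import sqrt

def vec(event):
    """Term-count vector (bag of words) for one log event."""
    return Counter(event.split())

def cosine(a, b):
    """Cosine similarity of two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Invented log events: two related sshd failures and one httpd request.
v0 = vec("sshd failed password for root")
v1 = vec("sshd failed password for admin")
v2 = vec("httpd GET /index.html 200")

print(round(cosine(v0, v1), 2))  # sshd events share most terms → 0.8
print(round(cosine(v0, v2), 2))  # unrelated httpd event → 0.0
```

LSM's SVD step refines this picture by collapsing co-occurring terms into shared latent dimensions, so that events can be similar even without sharing exact tokens.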
Search engine poisoning and its prevalence in modern search engines
- Authors: Blaauw, Pieter
- Date: 2013
- Subjects: Web search engines , Internet searching , World Wide Web , Malware (Computer software) , Computer viruses , Rootkits (Computer software) , Spyware (Computer software)
- Language: English
- Type: Thesis , Masters , MSc
- Identifier: vital:4572 , http://hdl.handle.net/10962/d1002037
- Description: The prevalence of Search Engine Poisoning in trending topics and popular search terms on the web within search engines is investigated. Search Engine Poisoning is the act of manipulating search engines in order to display search results from websites infected with malware. Research done between February and August 2012, using both manual and automated techniques, shows how easily the criminal element manages to insert malicious content into web pages related to popular search terms within search engines. In order to provide the reader with a clear overview and understanding of the motives and methods of the operators of Search Engine Poisoning campaigns, an in-depth review of automated and semi-automated web exploit kits is done, as well as an examination of the motives for running these campaigns. Three high-profile case studies are examined, and the various Search Engine Poisoning campaigns associated with these case studies are discussed in detail. From February to August 2012, data was collected from the top trending topics on Google's search engine along with the top listed sites related to these topics, and then passed through various automated tools to discover whether these results have been infiltrated by the operators of Search Engine Poisoning campaigns; the results of these automated scans are then discussed in detail. During the research period, manual searching for Search Engine Poisoning campaigns was also done, using high-profile news events and popular search terms. These results are analysed in detail to determine the methods of attack, the purpose of the attack and the parties behind it.
- Full Text:
- Date Issued: 2013