Final Report 2010


Due Date

Executive Summary

Michael

Aims and Objectives

Project Background

Verification of Past Results

Michael

Random Letters

Michael

Verification of Past Algorithms

Methodology

The Text Matching Algorithm

The Web Crawler

What it does

The basic function of the web crawling portion of the project is to access text on the internet and pass it directly to the pattern matching algorithm. This provides reasonably fast access to large quantities of raw text that can then be processed thoroughly and used for statistical analysis.

How it was implemented

Several different approaches were used to implement the web crawler in order to find a method that was both effective and simple to use. After experimenting with available open source crawlers such as Arachnid and JSpider, we turned our attention to a simpler solution that could be operated directly from the command prompt. Such a program would ideally allow us to input a website or list of websites of interest, collect the relevant data and then retain some control over the pattern matching methods used to produce useful results. After much searching and experimenting we came across an open source crawler called HTTrack. HTTrack was used for the following reasons:

  • It is free
  • It is simple to use. Both a GUI version and a command line version come with the standard package, so it was easy to become familiar with the program visually and then translate that familiarity into scripted commands.
  • It allows full website mirroring. This means that the text from the websites is stored on the computer and can be used both offline and for multiple searches, without needing to access and search the internet every time.
  • It has a large number of customisation options. These allow control over such things as search depth (how far into a website the crawler goes), whether external websites are followed or only the one provided (avoiding jumps to websites that contain irrelevant data), and search criteria (only text is downloaded; no images, movies or other unwanted files that are of no use and waste downloads).
  • It abides by the Robots Exclusion Protocol (individual access rights that are customised by the owner of each website).
  • It has a command prompt option. This allows for a user friendly approach and integration with the pattern matching algorithm (an example command is given after this list).
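
As a rough indication of how these options are used from the command prompt, the invocation below mirrors a single site into a local folder, limits the search depth, keeps only HTML text and respects robots.txt. The URL, the output folder and the particular option values are placeholders for illustration rather than the exact settings used in the project:

  rem Example only: mirror one site into a local folder, keeping HTML text and skipping images/video
  rem -O   sets the local folder the mirrored site is stored in
  rem -r3  limits the mirror depth to three levels of links
  rem -s2  always follows the site's robots.txt rules
  httrack "http://www.example.com/" -O "C:\mirrors\example" "+*.html" "-*.jpg" "-*.gif" "-*.png" "-*.avi" -r3 -s2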



To keep the whole project user friendly, a batch file was created that carries out the following process (a minimal sketch is given after the list):

  1. Takes in a URL or list of URLs that are pre-saved in a text file at a known location on the computer.
  2. Prompts the user to enter a destination on the computer to store the data retrieved from the website.
  3. Accesses HTTrack and performs a predetermined search on the provided URL(s).
  4. Once the website mirroring is complete, the program moves to the predetermined location containing the pattern matching code.
  5. Compiles and runs the pattern matching code.
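
A minimal sketch of such a batch file, following the five steps above, is shown below. The file locations, the URL list name and the Java class name PatternMatcher are assumptions made for illustration, not the exact names used in the project:

  @echo off
  rem Step 1: the URL (or list of URLs) is pre-saved in a text file at a known location
  set URLLIST=C:\project\urls.txt

  rem Step 2: prompt the user for a destination folder for the mirrored websites
  set /p MIRRORDIR=Enter a destination folder for the downloaded websites: 

  rem Step 3: run a predetermined HTTrack search on the provided URL(s)
  rem (--list reads the URLs from the text file; the filters and depth match the earlier example)
  httrack --list "%URLLIST%" -O "%MIRRORDIR%" "+*.html" -r3 -s2

  rem Steps 4 and 5: move to the folder containing the pattern matching code, then compile and run it
  cd /d C:\project\patternmatcher
  javac PatternMatcher.java
  java PatternMatcher "%MIRRORDIR%"

Keeping the HTTrack options inside the batch file means the predetermined search can be changed in one place, without the user needing to learn the command line syntax.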

Results

Direct Initialism

Pattern Initialism

Risk                                  | Probability | Impact | Comments
Project member falls sick             | 2           | 9      | Could leave 1 member with a huge amount of work to complete
Unable to contact project supervisors | 2           | 6      | If it is at a critical time problems may arise
Crawlers not available/useful in Java | 4           | 6      | New programming language may need to be learnt and implemented in very little time
Run out of finance                    | 1           | 7      | Has the potential to be a problem if software needs to be purchased
Somebody else cracks the code         | 0.009       | 5      | The project may lose its "x" factor but the web crawler will still be useful

Further Research

Project Management

Michael

Conclusion

Appendix

References

See Also