Final Report 2010


Due Date

Executive Summary

Michael

Aims and Objectives

Project Background

Verification of Past Results

Michael

Random Letters

Michael

Verification of Past Algorithms

Methodology

The Text Matching Algorithm

The Web Crawler

What it does

The basic function of the web crawling portion of the project is to access text on the internet and pass it directly to the pattern matching algorithm. This provides reasonably fast access to large quantities of raw text that can be processed thoroughly and used for statistical analysis.

How it was implemented

Several different approaches to implementing the web crawler were tried in order to find a method that was both effective and simple to use. After experimenting with available open source crawlers such as Arachnid and JSpider, we turned our attention to a simpler solution that could be operated directly from the command prompt. Such a program would allow us to input a website or list of websites of interest, collect the relevant data, and retain some control over the pattern matching methods used to produce useful results. After much searching and experimenting we settled on an open source crawler called HTTrack, which was chosen for the following reasons:

  • It is free.
  • It is simple to use. A GUI version and a command line version come with the standard package, which made it easy to become familiar with the program visually and then translate that familiarity into coded commands.
  • It allows full website mirroring. The text from the websites is stored on the computer and can be used both offline and for multiple searches, without needing to access and search the internet every time.
  • It has a huge number of customisation options. These give control over search depth (how many links deep into a website the crawl goes), whether external websites are followed or only the one supplied (avoiding jumps to websites that contain irrelevant data), and search criteria (only text is downloaded; images, movies and other unwanted files that waste downloads are skipped).
  • It abides by the Robots Exclusion Protocol (the individual access rights customised by the owner of each website).
  • It has a command prompt option. This allows for a user friendly approach and integration with the pattern matching algorithm; an example command is shown below.
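For illustration, a typical HTTrack invocation of the kind described above might look like the following. The URL, output folder and search depth here are placeholder assumptions, not the project's exact settings:

  httrack "http://www.example.com/" -O "C:\mirrors\example" -r3 "-*.gif" "-*.jpg" "-*.png" "-*.avi" "-*.mpg"

Here -O sets the folder the mirror is stored in, -r3 limits the search depth to three links from the starting page, and the trailing filters exclude images and movies so that only text is downloaded.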



To keep the whole project user friendly, a batch file was created that follows this process (a sketch of the file appears after the list):

  1. Takes in a URL or a list of URLs that are pre-saved in a text file at a known location on the computer.
  2. Prompts the user to enter a destination on the computer in which to store the data retrieved from the website.
  3. Accesses HTTrack and performs a predetermined search on the provided URL(s).
  4. Once the website mirroring is complete, moves to the predetermined location containing the pattern matching code.
  5. Compiles and runs the pattern matching code.
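A minimal sketch of such a batch file is given below, assuming HTTrack is on the PATH, that the URL list lives in a file called urls.txt, and that the pattern matcher is a Java class called PatternMatcher; all of these names and locations are illustrative assumptions rather than the project's actual ones.

  @echo off
  REM Sketch only: the file and folder names below are assumed for illustration.

  REM 1. URLs are pre-saved, one per line, in a text file at a known location.
  set URLLIST=C:\crawler\urls.txt

  REM 2. Prompt the user for a destination folder for the retrieved data.
  set /p DEST=Enter a folder to store the mirrored website(s): 

  REM 3. Access HTTrack and run a predetermined search on each URL.
  for /F "usebackq delims=" %%u in ("%URLLIST%") do (
      httrack "%%u" -O "%DEST%" -r3 "-*.gif" "-*.jpg" "-*.avi"
  )

  REM 4. Move to the predetermined location containing the pattern matching code.
  cd /d C:\patternmatch

  REM 5. Compile and run the pattern matching code over the mirrored text.
  javac PatternMatcher.java
  java PatternMatcher "%DEST%"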

Results

Direct Initialism

Pattern Initialism

M R G O A B A B D
M T B I M P A N E T P
M L I A B O A I A Q C
I T T M T S A M S T G A B

Further Research

Project Management

Michael

Conclusion

Appendix

References

See Also