Cracking the Voynich code 2015


Supervisors

Honours students

Project guidelines

General project description

The Voynich Manuscript is a mysterious 15th century book; no one today knows what it says or who wrote it. The book is written in a strange alphabet. See details here.

Fortunately, the whole book has been converted into an electronic format, with each character mapped to a convenient ASCII character. We want you to write software that searches the text and performs statistical tests to get clues about the nature of the writing. Does the document bear the statistics of a natural language, or is it a fake?
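As a taste of what such a statistical test might look like, here is a minimal sketch that checks whether the word-frequency distribution is roughly Zipf-like, as natural languages tend to be (the slope of the log-log rank-frequency line is typically near -1). The file name voynich_eva.txt is hypothetical; point it at whichever transcription file you actually use.

 # Rough Zipf check: in natural languages, log(frequency) vs log(rank)
 # is approximately a straight line with slope near -1.
 from collections import Counter
 import math
 
 def zipf_slope(path):
     with open(path, encoding="utf-8") as f:
         words = f.read().split()
     counts = sorted(Counter(words).values(), reverse=True)
     xs = [math.log(rank) for rank in range(1, len(counts) + 1)]
     ys = [math.log(c) for c in counts]
     # Least-squares slope of the log-log rank-frequency plot
     n = len(xs)
     mx, my = sum(xs) / n, sum(ys) / n
     return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
 
 if __name__ == "__main__":
     print(zipf_slope("voynich_eva.txt"))  # hypothetical file name

A Zipf-like slope is only weak evidence of natural language (some generated texts show it too), so treat it as one clue among many, not a verdict.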

We already have Support Vector Machine (SVM) and Multiple Discriminant Analysis (MDA) software that you can adapt for your purposes. This software is set up to test whether two texts are written by the same author or not. The great thing about our software is that it is independent of language, so you could compare the Voynich against the existing writings of Roger Bacon, who is a suspected author.
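For orientation only, the sketch below shows the same general idea (language-independent authorship comparison) using an off-the-shelf SVM on character n-gram frequencies via scikit-learn. It is not the group's existing SVM/MDA software, just an illustration of the classification step; the file names are hypothetical.

 # Minimal authorship-classification sketch using character n-grams and an SVM.
 # Illustration only; not the group's existing SVM/MDA code.
 from sklearn.feature_extraction.text import TfidfVectorizer
 from sklearn.svm import LinearSVC
 
 def chunks(text, size=2000):
     # Split a long text into fixed-size chunks so each author gives many samples.
     return [text[i:i + size] for i in range(0, len(text) - size, size)]
 
 # Hypothetical file names: plain-text samples of two candidate authors.
 a_chunks = chunks(open("author_a.txt", encoding="utf-8").read())
 b_chunks = chunks(open("author_b.txt", encoding="utf-8").read())
 samples = a_chunks + b_chunks
 labels = ["A"] * len(a_chunks) + ["B"] * len(b_chunks)
 
 # Character n-grams are language-independent features.
 vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
 clf = LinearSVC()
 clf.fit(vectorizer.fit_transform(samples), labels)
 
 # Classify chunks of an unknown text (e.g. a transcription) the same way.
 unknown = chunks(open("unknown.txt", encoding="utf-8").read())
 print(clf.predict(vectorizer.transform(unknown)))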

Useful notes

  • Download the digital Voynich from here.
  • The UN Declaration of Human Rights has been translated into hundreds of languages, and in principle you can compare the Voynich against all of them for statistical proximity. Electronic access is here. (A minimal comparison sketch follows this list.)
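As a taste of what "statistical proximity" could mean in practice, here is a minimal sketch that compares word-length distributions, a feature that does not depend on which alphabet a text uses. The file names are hypothetical, and the L1 distance is just one simple choice of measure.

 # Compare word-length distributions between the Voynich transcription and
 # UDHR translations. A smaller distance suggests closer statistical proximity.
 from collections import Counter
 
 def length_distribution(path, max_len=15):
     with open(path, encoding="utf-8") as f:
         words = f.read().split()
     counts = Counter(min(len(w), max_len) for w in words)
     total = sum(counts.values())
     return [counts.get(n, 0) / total for n in range(1, max_len + 1)]
 
 def l1_distance(p, q):
     return sum(abs(a - b) for a, b in zip(p, q))
 
 voynich = length_distribution("voynich_eva.txt")          # hypothetical file name
 for lang_file in ["udhr_eng.txt", "udhr_lat.txt"]:        # hypothetical file names
     print(lang_file, l1_distance(voynich, length_distribution(lang_file)))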

Specific tasks

  • Phase 1: Characterize the text. Write scripts that count its features. How many words? How many tokens? What are the word frequencies? Compare these in a table against known languages by running the same code on translations of the Declaration of Human Rights. Don't forget to take a short paragraph of English, count everything manually, and then run it through your code to cross-check that it counts correctly. You must always validate your code or you will lose marks. Tabulate results for all 16 versions of the Voynich. Which Voynich tokens appear only at the start of words, and which only at the end? (A starting sketch for the counting script appears after this list.)
  • Phase 2: Using English text only, investigate how you could separate the alphabet from other ASCII tokens such as &, $, (, ), +, =, 3, etc. If you were an alien with no a priori knowledge, how would you do it? Characterize English text to see how token frequency, token recurrence interval, and the statistics of token pairs vary between alphabetic characters and the others. (See the recurrence-interval sketch after this list.)
  • Phase 3: Investigate Linguistic Morphology...
  • Phase 4: Investigate Stylometry...
  • Phase 5: Think up some of your own ideas to try out. Keep them very simple: simplicity is the key.
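For Phase 1, a starting point could look like the following minimal sketch (file name hypothetical). The manual cross-check on a short English paragraph is still essential before you trust any of its numbers.

 # Phase 1 starting point: count words, tokens (distinct characters), word
 # frequencies, and tokens that appear only word-initially or only word-finally.
 from collections import Counter
 
 def characterize(path):
     with open(path, encoding="utf-8") as f:
         words = f.read().split()
     word_freq = Counter(words)
     token_freq = Counter(ch for w in words for ch in w)
     initial = {w[0] for w in words}
     final = {w[-1] for w in words}
     interior = {ch for w in words for ch in w[1:-1]}
     return {
         "total_words": len(words),
         "distinct_words": len(word_freq),
         "distinct_tokens": len(token_freq),
         "top_words": word_freq.most_common(20),
         "initial_only": initial - final - interior,
         "final_only": final - initial - interior,
     }
 
 if __name__ == "__main__":
     for key, value in characterize("voynich_eva.txt").items():  # hypothetical file name
         print(key, value)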
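For Phase 2, one way to get a feel for token recurrence intervals is the sketch below, run over a plain English text first (file name hypothetical). Letters and punctuation tend to show quite different interval statistics, which is the kind of separation the phase asks you to explore.

 # Phase 2 exploration: mean recurrence interval (gap between successive
 # occurrences) for each character in a text. Compare the values for letters
 # against those for digits and punctuation.
 from collections import defaultdict
 from statistics import mean
 
 def mean_recurrence_intervals(path):
     with open(path, encoding="utf-8") as f:
         text = f.read()
     last_seen = {}
     intervals = defaultdict(list)
     for pos, ch in enumerate(text):
         if ch in last_seen:
             intervals[ch].append(pos - last_seen[ch])
         last_seen[ch] = pos
     return {ch: mean(gaps) for ch, gaps in intervals.items()}
 
 if __name__ == "__main__":
     results = mean_recurrence_intervals("english_sample.txt")  # hypothetical file name
     for ch, interval in sorted(results.items(), key=lambda item: item[1]):
         print(repr(ch), round(interval, 1))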

Deliverables

Re-check required deliverables and dates

Semester A

  • Proposal seminar ( Week 5 - 31/03/2015 - 3:05pm )
File:Proposal Seminar Slides.pdf
  • Research proposal and Progress report draft (Week 6a - First Week of Semester Break)
  • Research proposal ( Week 12 )
  • Progress report - only one report needed in wiki format ( Week 12 )

The exact dates are unconfirmed; they are currently taken from the Lecture 1 slides (the same applies to Semester B).

Semester B

  • Final seminar ( Week 10 )
  • Final thesis ( Week 11 )
  • Poster ( insert date )
  • Project exhibition 'expo' ( Week 12 )
  • Labelled CD or USB stick containing your whole project directories. Only one is needed, but it should contain two project directories, i.e. one for each group member ( insert date )
  • YouTube video summarizing project in exciting way - add the URL to this wiki - only one needed ( insert date )
  • Optional: any number of instructional how-to YouTube videos on running your software etc.

Code

As particular code bases become complete, or relatively close to complete, they will be listed below.

Phase 1

Characterization Code

Weekly progress and questions

This is where you record your progress and ask questions. Make sure you update this every week.

Code files, work logs, meeting minutes, etc. can also be found on the project team's Google Drive.

Approach and methodology

We expect you to take a structured approach to both the validation and the writing of the software. You should carefully design the big-picture high-level view of the software modules, and the relationships and interfaces between them. Think also about the data transformations needed.
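Purely as an illustration of the kind of module boundaries and data transformations we mean, one possible decomposition is sketched below; all names are made up, and your own design may well differ.

 # Illustrative module decomposition only; all names here are hypothetical.
 
 def load_words(path):
     # Loader: raw transcription file -> list of words.
     with open(path, encoding="utf-8") as f:
         return f.read().split()
 
 def compute_statistics(words):
     # Statistics: list of words -> dictionary of counts and frequencies.
     raise NotImplementedError  # Phase 1 code plugs in here
 
 def compare_statistics(stats_a, stats_b):
     # Comparison: two statistics dictionaries -> a distance or similarity score.
     raise NotImplementedError  # chosen distance measure plugs in here
 
 def report(results):
     # Reporting: comparison results -> tables for the wiki and thesis.
     raise NotImplementedError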

Expectations

  • We don't really expect you to crack the Voynich, though it would be cool if you did, and you would become very famous overnight.
  • To get good marks we expect you to show a logical approach to decisively eliminating some languages and authors, and finding some hints about the statistical nature of the words.
  • In your conclusion, you need to come up with a short list of possible hypotheses and a list of things you can definitely eliminate.
  • We expect you to critically look at the conclusions of the previous work and highlight to what extent your conclusions agree and where you disagree.
  • It is important to see your main supervisors regularly. Don't let more than 2 weeks go by without them seeing your face briefly.
  • You should have at least one formal progress meeting with your supervisors per month. It does not have to be exactly a month, but roughly each month you should be in a position to show some progress and to have some problems and difficulties to discuss. On the other hand, meetings can be much more frequent in periods when you have a lot of activity and progress to show.
  • The onus is on you to drive the meetings, make the appointments, and set them up.

Relationship to possible career path

Whilst the project is fascinating, as you'll learn about a specific high-profile mystery (and we do want you to have a lot of fun with it), it also has a hard-core serious engineering side. It will familiarize you with techniques in information theory, probability, statistics, encryption, decryption, signal classification, and data mining. It will also improve your software skills. The new software tools you develop may lead to new IP in the areas of data mining and automatic text language identification, and may also make you rich and famous. The types of jobs where these skills are useful are in computer security, communications, digital forensics, internet search companies, and language processing software companies. The types of industries that will need you include the software industry, e-finance, e-security, the IT industry, Google, the telecoms industry, ASIO, ASIS, and the defence industry (e.g. DSD). So go ahead and have fun with this, but keep your eye on the bigger engineering picture and try to build up an appreciation of why these techniques are useful to our industry. Now go crack the Voynich... this message will self-destruct in five seconds :-)

See also

Useful papers we wrote

[1] M. Ebrahimpour, T. J. Putniņš, M. J. Berryman, A. Allison, B. W.-H. Ng, and D. Abbott, "Automated authorship attribution using advanced signal classification techniques," PLoS ONE, Vol. 8, No. 2, Art. No. e54998, 2013, http://dx.doi.org/10.1371/journal.pone.0054998

[2] M. J. Berryman, A. Allison, and D. Abbott, "Statistical techniques for text classification based on word recurrence intervals," Fluctuation and Noise Letters, Vol. 3, No. 1, pp. L1–L12, 2003.

References and useful resources

If you find any useful external links, list them here:
