== Supervisors ==
 
*[[Derek Abbott|Prof Derek Abbott]]
 
*[[Brian W.-H. Ng|Dr Brian Ng]]
 
==Honours students==
 
*'''2014:''' [[Bryce Shi]] and [[Peter Roush]], see [[Cracking the Voynich code 2014]]
 
*'''2015:''' [[Lifei Wang]] and [[Andrew McInnes]], see [[Cracking the Voynich code 2015]]
*'''2016:''' [[Ruihang Feng]] and [[Yaxin Hu]], see [[Cracking the Voynich code 2016]]
*'''2022:''' [[Gregory Kontogonis]] and [[Reilly Heijkoop-Logan]], see [[Cracking the Voynich code 2022]]
  
 
==Project guidelines==
 
===Project description===

The Voynich Manuscript is a mysterious 15th-century book: no one today knows what it says or who wrote it. The book is written in a strange alphabet. See details [http://en.wikipedia.org/wiki/Voynich_manuscript here].
 
Fortunately the whole book has been converted into an electronic format, with each character changed to a convenient ASCII character. We want you to write software that will search the text and perform statistical tests to get clues as to the nature of the writing. Does the document bear the statistics of a natural language or is it a fake?
  
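As a concrete starting point, here is a minimal Python sketch of how you might load a plain-text transcription into a list of words. The file name voynich.txt and the line format (an optional locus tag in angle brackets, words separated by full stops or dashes, editorial comments in braces) are assumptions for illustration; adjust the parsing once you have inspected the transcription you actually download (see the links under "Useful notes" below).

<syntaxhighlight lang="python">
# load_voynich.py -- minimal loading sketch; the file name and line format are
# assumptions, so adapt the regular expressions to the real transcription file.
import re

def load_words(path="voynich.txt"):
    words = []
    with open(path, encoding="ascii", errors="ignore") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):       # skip blank and comment lines
                continue
            line = re.sub(r"^<[^>]*>", "", line)       # drop a leading locus tag such as <f1r.P1.1>
            line = re.sub(r"\{[^}]*\}", "", line)      # drop editorial comments in curly braces
            # words are assumed to be separated by '.', '-', or whitespace
            words.extend(w for w in re.split(r"[.\-\s]+", line) if w)
    return words

if __name__ == "__main__":
    words = load_words()
    print(len(words), "word tokens,", len(set(words)), "distinct word types")
    print("alphabet:", "".join(sorted(set("".join(words)))))
</syntaxhighlight>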
We already have Support Vector Machine (SVM) and Multiple Discriminant Analysis (MDA) software that you can adapt for your purposes. This software is set up to test whether two texts are written by the same author or not. The great thing about our software is that it is independent of language, so you could compare the Voynich against the existing writings of Roger Bacon, who is a suspected author.
  
 
==Useful notes==
 
* Download the digital Voynich from [http://www.ic.unicamp.br/~stolfi/voynich/99-01-16-concordance/ here].
* The UN Declaration of Human Rights is translated into every language in the world and in principle you can compare the Voynich to all the existing languages for statistical proximity.  Electronic access is [http://www.ohchr.org/EN/UDHR/Pages/Introduction.aspx here].
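As one way of acting on the note above, the sketch below compares the shape of the letter-frequency distribution of the Voynich transcription with that of UDHR translations. Because the Voynich alphabet cannot be matched letter-for-letter against real alphabets, it compares rank-ordered frequency profiles rather than individual letters; the file names and the simple L1 distance are illustrative assumptions, not a prescribed method.

<syntaxhighlight lang="python">
# letter_profile_distance.py -- rough sketch; file names and the distance measure
# (L1 distance between rank-ordered letter-frequency profiles) are assumptions.
from collections import Counter

def freq_profile(path):
    """Relative letter frequencies sorted in descending order (alphabet-independent)."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            counts.update(c.lower() for c in line if c.isalpha())
    total = sum(counts.values()) or 1
    return sorted((n / total for n in counts.values()), reverse=True)

def profile_distance(p, q):
    """Sum of absolute differences between two profiles, padded to equal length."""
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return sum(abs(a - b) for a, b in zip(p, q))

if __name__ == "__main__":
    voynich = freq_profile("voynich.txt")
    for lang, path in [("English", "udhr_eng.txt"), ("Latin", "udhr_lat.txt")]:
        print(lang, round(profile_distance(voynich, freq_profile(path)), 4))
</syntaxhighlight>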
  
 
==Approach and methodology==
 
* '''Phase 1:''' Characterize the text. Write scripts that count its features. How many words? How large is the alphabet? What are the word frequencies? What is the probability of one letter following another, and of two-letter pairs (2-grams) and n-letter groups (n-grams)? Compare these in a table with known languages, obtained by running the same code on the Declaration of Human Rights. Don't forget to take a short paragraph of English, manually count everything, and then run it through your code to cross-check that it is counting correctly. You must always validate your code or you will lose marks. (A minimal counting sketch appears after this list.)
  
* '''Phase 2:''' Write a general descriptor for each picture in the book, e.g. water, woman, tree, flower, vegetable, leaf, dancing, etc. Associate each descriptor with the appropriate page. Write some code to find which words are unique to the pages carrying those descriptors. Which words also suddenly increase in frequency on pages with shared descriptors? Tabulate the results. (See the descriptor sketch after this list.)
  
* '''Phase 3:''' Investigate the use of Word Recurrence Interval (WRI) versus rank plots. Plot WRI curves of the Voynich against other languages from the Declaration of Human Rights. (See the WRI sketch after this list.)
  
* '''Phase 4:''' Think up some of your own ideas to try out.
  
* '''Phase 5:''' As WRI is a language-independent metric, you can select classification features based on WRI. You can then run an SVM and an MDA classifier to compare the Voynich against the other languages of the Declaration of Human Rights, and then against the works of specific authors of interest such as Roger Bacon, John Dee, and Edward Kelley. (A classifier sketch appears after this list.)
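The Phase 1 counts can be prototyped in a few lines of Python. The sketch below assumes the manuscript has already been loaded as a list of word strings (for example with the loading sketch earlier on this page); the toy word list in the main block is only a placeholder.

<syntaxhighlight lang="python">
# phase1_counts.py -- Phase 1 sketch; assumes load_words() (or similar) supplies
# the manuscript as a list of word strings.
from collections import Counter

def basic_stats(words):
    letters = "".join(words)
    alphabet = sorted(set(letters))
    return {"word tokens": len(words),
            "word types": len(set(words)),
            "alphabet size": len(alphabet),
            "alphabet": "".join(alphabet)}

def ngram_counts(words, n=2):
    """Count letter n-grams within words (no counting across word boundaries)."""
    grams = Counter()
    for w in words:
        grams.update(w[i:i + n] for i in range(len(w) - n + 1))
    return grams

def transition_probs(words):
    """P(next letter | current letter), estimated from within-word bigrams."""
    bigrams = ngram_counts(words, 2)
    firsts = Counter()
    for g, c in bigrams.items():
        firsts[g[0]] += c
    return {g: c / firsts[g[0]] for g, c in bigrams.items()}

if __name__ == "__main__":
    words = ["daiin", "chedy", "qokeedy", "daiin"]   # placeholder; replace with load_words()
    print(basic_stats(words))
    print(ngram_counts(words, 2).most_common(5))
    print(transition_probs(words))
</syntaxhighlight>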
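For Phase 2, once the descriptors have been entered by hand, the bookkeeping is straightforward. In the sketch below the page-to-descriptor and page-to-word tables are made-up examples of the data you would assemble yourself.

<syntaxhighlight lang="python">
# phase2_descriptors.py -- Phase 2 sketch; the descriptor and word tables below
# are hypothetical hand-made inputs, not real annotations.
from collections import Counter

descriptors = {"f1r": {"plant", "leaf"}, "f78r": {"water", "woman"}, "f88r": {"plant", "root"}}
page_words = {"f1r": ["daiin", "chor"], "f78r": ["shedy", "qokain"], "f88r": ["daiin", "okeol"]}

def words_unique_to(descriptor):
    """Words that occur only on pages carrying the given descriptor."""
    inside, outside = set(), set()
    for page, words in page_words.items():
        (inside if descriptor in descriptors.get(page, set()) else outside).update(words)
    return inside - outside

def frequency_boost(descriptor):
    """Ratio of a word's count on descriptor pages to its count elsewhere (plus one)."""
    inside, outside = Counter(), Counter()
    for page, words in page_words.items():
        (inside if descriptor in descriptors.get(page, set()) else outside).update(words)
    return {w: inside[w] / (outside[w] + 1) for w in inside}

if __name__ == "__main__":
    print("unique to 'plant':", words_unique_to("plant"))
    print("frequency boost for 'plant':", frequency_boost("plant"))
</syntaxhighlight>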
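A minimal Word Recurrence Interval calculation for Phase 3 might look like the sketch below. The working definition used here (the gap, in word positions, between successive occurrences of the same word) and the rank-plot convention should be checked against the WRI paper in the "Useful papers we wrote" section.

<syntaxhighlight lang="python">
# phase3_wri.py -- Phase 3 sketch; the exact WRI definition is a working assumption
# to be checked against the Berryman et al. paper listed further down the page.
from collections import defaultdict

def recurrence_intervals(words):
    last_seen, intervals = {}, defaultdict(list)
    for pos, w in enumerate(words):
        if w in last_seen:
            intervals[w].append(pos - last_seen[w])
        last_seen[w] = pos
    return intervals

def wri_vs_rank(words):
    """Mean WRI for each recurring word, ordered by frequency rank (most frequent first)."""
    intervals = recurrence_intervals(words)
    freq = {w: words.count(w) for w in intervals}
    ranked = sorted(intervals, key=lambda w: -freq[w])
    return [(rank + 1, w, sum(intervals[w]) / len(intervals[w]))
            for rank, w in enumerate(ranked)]

if __name__ == "__main__":
    toy = "a b a c a b d a".split()     # placeholder text
    for rank, word, mean_wri in wri_vs_rank(toy):
        print(rank, word, mean_wri)
</syntaxhighlight>

The (rank, mean WRI) pairs are what you would plot, with one curve per language, to compare the Voynich against the UDHR translations.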
  
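For Phase 5, and for the SVM/MDA comparison mentioned in the project description, a rough sketch of the classification step is shown below. It uses scikit-learn's SVC and LinearDiscriminantAnalysis as stand-ins for the group's existing SVM and MDA code, and the particular WRI-based feature vector is only an illustrative assumption; swap in whatever features and classifier implementation you settle on.

<syntaxhighlight lang="python">
# phase5_classify.py -- Phase 5 sketch; scikit-learn classifiers are stand-ins for
# the existing SVM/MDA code, and the WRI feature vector is an illustrative choice.
import statistics
from collections import Counter, defaultdict

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

def wri_features(words, top_n=5):
    """Language-independent features: mean, spread, and maximum recurrence interval
    of the top_n most frequent words, padded to a fixed length."""
    last, gaps = {}, defaultdict(list)
    for pos, w in enumerate(words):
        if w in last:
            gaps[w].append(pos - last[w])
        last[w] = pos
    feats = []
    for w, _ in Counter(words).most_common(top_n):
        g = gaps.get(w, [0])
        feats += [statistics.mean(g), statistics.pstdev(g), max(g)]
    feats += [0.0] * (3 * top_n - len(feats))    # pad short texts
    return feats

def classify(train_texts, train_labels, query_text):
    """train_texts: list of word lists with known labels; query_text: word list to label."""
    X = [wri_features(t) for t in train_texts]
    q = [wri_features(query_text)]
    svm_label = SVC(kernel="linear").fit(X, train_labels).predict(q)[0]
    mda_label = LinearDiscriminantAnalysis().fit(X, train_labels).predict(q)[0]
    return svm_label, mda_label
</syntaxhighlight>

In practice each training sample would be a labelled text (a UDHR translation, or a work by Roger Bacon, John Dee, or Edward Kelley) and the query would be a section of the Voynich transcription.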
 
== Expectations ==  
 
* We don't really expect you to crack the Voynich, though that would be cool; if you do, you'll become very famous overnight.
  
* To get good marks we expect you to show a logical approach to decisively eliminating some languages and authors, and finding some hints about the statistical nature of the words.
  
* In your conclusion, you need to come up with a short list of possible hypotheses and a list of things you can definitely eliminate.
  
* We expect you to critically look at the conclusions of the previous work and highlight to what extent your conclusions agree and where you disagree.
  
* It is important to regularly see your main supervisors. Don't let more than two weeks go by without them seeing your face briefly.
  
* You should have at least one formal progress meeting with your supervisors per month. It does not strictly have to be exactly a month, but roughly each month you should be in a position to show some progress and have some problems and difficulties to discuss. On the other hand, the meetings can be very frequent in periods when you have a lot of activity and progress to show.
  
* The onus is on you to drive the meetings, make the appointments, and set them up.
  
==Project deliverables==
* Place all written work on this wiki. No paper reports are to be handed up. A Semester 1 proposal report and a Semester 2 final report are required.
  
* Fill out a short progress report on the wiki every Friday evening, briefly stating what you did that week and what the goals are for the following week.
  
* Hand up a labelled CD or USB stick with your complete project directory at the end, one for each group member.
 
* Make a fun YouTube presentation of your whole project (designed to attract lots of hits). You can also have supplementary instructional YouTube videos with details for future groups (these are not meant to be entertaining or to attract many hits).
 
* You carry out a project exhibition in Semester 2.
 
* You carry out two seminars, one in each semester.
 
* Any purchases you make on the project account (e.g. books) are the property of the university and should be handed in at the end.
  
 
== Relationship to possible career path==
 
Whilst the project is fascinating as you'll learn about a specific high-profile mystery—and we do want you to have a lot of fun with it—the project does have a hard-core serious engineering side. It will familiarize you with techniques in information theory, probability, statistics, encryption, decryption, signal classification, and datamining. It will also improve your software skills. The new software tools you develop may lead to new IP in the areas of datamining and automatic text language identification, and may also make you rich/famous. The types of jobs out there where these skills are useful are in computer security, comms, digital forensics, internet search companies, and language processing software companies. The types of industries that will need you are: the software industry, e-finance industry, e-security, IT industry, Google, telecoms industry, [http://www.asio.gov.au/ ASIO], [http://www.asis.gov.au/ ASIS], defence industry (e.g. [http://www.dsd.gov.au/ DSD]), etc. So go ahead and have fun with this, but keep your eye on the bigger engineering picture and try to build up an appreciation of why these techniques are useful to our industry. Now go crack the Voynich...this message will self-destruct in five seconds :-)
  
 
==See also==
 
* [[Cracking the Voynich code 2014]]
 
==Useful papers we wrote==
[http://www.eleceng.adelaide.edu.au/personal/dabbott/publications/PLO_ebrahimpour2013.pdf] [[Maryam Ebrahimpour|M. Ebrahimpour]], [[Tālis J. Putniņš|T. J. Putniņš]], [[Matthew J. Berryman|M. J. Berryman]], [[Andrew G. Allison|A. Allison]], [[Brian W.-H. Ng|B. W.-H. Ng]], and '''[[Derek Abbott|D. Abbott]]''', "Automated authorship attribution using advanced signal classification techniques," ''PLoS ONE'', '''Vol. 8''', No. 2, Art. No. e54998, 2013, http://dx.doi.org/10.1371/journal.pone.0054998

[http://www.eleceng.adelaide.edu.au/Personal/dabbott/publications/FNL_berryman2003.pdf] [[Matthew J. Berryman|M. J. Berryman]], [[Andrew G. Allison|A. Allison]], and '''[[Derek Abbott|D. Abbott]]''', "Statistical techniques for text classification based on word recurrence intervals," ''Fluctuation and Noise Letters'', '''Vol. 3''', No. 1, pp. L1–L12, 2003.
  
 
== References and useful resources==
 
If you find any useful external links, list them here:
 
* [https://archive.org/details/TheVoynichManuscript Scanned copy of the Voynich]
* [http://www.ic.unicamp.br/~stolfi/voynich/99-01-16-concordance/ Digital download for the Voynich]
* [http://ebooks.adelaide.edu.au/ Adelaide Uni Library e-book collection]
* [http://www.gutenberg.org/wiki/Main_Page Project Gutenberg e-books]
* [http://portal.acm.org/citation.cfm?id=1389095.1389425 Evolutionary algorithm for decryption of monoalphabetic homophonic substitution ciphers encoded as constraint satisfaction problems]
* [http://www.bckelk.ukfsn.org/words/etaoin.html List of letter rankings for different languages]
* [http://www.compellingpress.com/voynich/ The Curse of the Voynich]
* [http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0067310&representation=PDF Statistical properties of the Voynich]
* [http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0066344&representation=PDF Information-theoretic analysis of the Voynich]
* [http://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=voynich Books on the Voynich (Amazon)]
* [http://www.ic.unicamp.br/~stolfi/voynich/ Voynich resources]
* [http://ixoloxi.com/voynich/tools.html Voynich tools]
* [http://en.wikibooks.org/wiki/The_Voynich_Manuscript/Jargon Guide to Voynich jargon]
  
 
==Back==
 