2-12-2018 — Noise in Information —

In this meeting we discussed the theory that, in a strictly informational sense, what you personally interpret from a message is ultimately irrelevant to the message's actual informational content.

The example of Google was used: Google uses n-grams, combinations of letters, to process search queries and to compile results, something I did not know at all.  Google’s system apparently contains millions of these combinations, which it uses to translate a query into “technically understood language” to find the most relevant data.  So, in theory, it should not matter what language you form a query in; Google should come up with the same results relevant to the topic each time, because ultimately the perceived meanings of a message don’t matter to machines and logic systems, only the raw information.
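
To make the n-gram idea a little more concrete, here is a toy sketch in Python of how a query might be broken into character n-grams and matched against documents by raw overlap. The function names, the scoring, and the tiny document list are my own hypothetical illustration, not Google’s actual system; the point is just that matching happens on raw letter combinations, not on perceived meaning.

```python
def char_ngrams(text, n=3):
    """Break a string into overlapping character n-grams."""
    text = text.lower()
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def ngram_overlap(query, document, n=3):
    """Score a document by the fraction of the query's n-grams it shares."""
    q = set(char_ngrams(query, n))
    d = set(char_ngrams(document, n))
    return len(q & d) / len(q) if q else 0.0

if __name__ == "__main__":
    docs = [
        "information theory basics",
        "noise in communication systems",
        "cooking with noodles",
    ]
    # Note the typo in the query: most of its n-grams still match,
    # because the system cares about raw letter sequences, not meaning.
    query = "informatoin theory"
    for doc in docs:
        print(f"{ngram_overlap(query, doc):.2f}  {doc}")
```

Running this ranks the information-theory document highest even with the misspelled query, which is a small version of the point above: the machine never “understood” anything, it just compared raw information.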

In a way, this makes sense to me, because of the clichéd belief that “machines can’t feel or express inflections on words” like humans can.  However, in recent history advances have been made to combat this “deadpan communication system,” such as emoticons, changes to the styling of the text itself, and punctuation, all of which make the meaning of messages and the written word “easier” to understand.

Now, it would be very interesting to see what the machines do with this…


