- cross-posted to:
- hackernews@lemmy.bestiver.se
Interesting excerpt:
De Boer agrees that our brains are the bottleneck. But, he says, instead of being limited by how quickly we can process information by listening, we’re likely limited by how quickly we can gather our thoughts. That’s because, he says, the average person can listen to audio recordings sped up to about 120%—and still have no problems with comprehension. “It really seems that the bottleneck is in putting the ideas together.”
They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable.
That’s the part I don’t get. How do you determine the bits of information per syllable/word in different languages?
If I pick a random word such as ‘sandwich’ and encode it in ASCII, it takes 8 bytes, i.e. 64 bits. According to the scientists, a two-syllable word in English only holds about 14 bits of actual information. Does anyone understand what they did there, or have access to the underlying study?
You’ve stumbled upon the dark arts of information theory.
Sure, conveying “sandwich” in ASCII or UTF-8 takes 64 bits, but that’s an encoding that is inefficient by default.
For starters, ASCII has a lot of unprintable characters that we never use to write words. Even when we don’t use them, they still cost bits: reserving code points for them means every character we do send takes a full 8 bits.
Second, writing and speaking are two different things. If you think about it, asking a question isn’t actually a separate (“?”) character. In speech, a question is just a modification of tone and word order applied to a sentence. As literate people we might think of sentences as written, but speech has no question marks - or any other punctuation. A spoken English sentence therefore also encodes information about tone, including tones we don’t really know how to write down, and all of that is information.
This is the linguistic equivalent of Kolmogorov complexity, which asks for the absolute lowest amount of data required to represent something - in effect, the most efficient possible encoding scheme.
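As a rough illustration of how redundant a fixed-width text encoding is, a general-purpose compressor already squeezes ordinary English well below 8 bits per character. This is just a sketch using zlib’s DEFLATE on the article excerpt, not anything from the paper:

```python
import zlib

# A sample of ordinary English text (from the article excerpt above).
text = (
    "De Boer agrees that our brains are the bottleneck. But, he says, instead "
    "of being limited by how quickly we can process information by listening, "
    "we are likely limited by how quickly we can gather our thoughts. The "
    "average person can listen to audio recordings sped up to about 120 "
    "percent and still have no problems with comprehension."
)
raw = text.encode("ascii")
compressed = zlib.compress(raw, 9)

# Fewer than 8 bits per character survive compression.
bits_per_char = 8 * len(compressed) / len(raw)
print(f"{len(raw)} bytes -> {len(compressed)} bytes, "
      f"{bits_per_char:.2f} bits/char")
```

Dedicated language models compress far better than zlib; the point is only that 8 bits per character drastically overstates the real information content of English text.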
asking a question isn’t actually a separate (“?”) character.
The Japanese would like to have a word with you.
Good for them; unfortunately I can’t understand a thing they’re saying.
Thanks a lot for these insights, much appreciated!
asking a question isn’t actually a separate (“?”) character. In speech, asking a question is just a modification of tone
But if modification of tone encodes additional information wouldn’t we need to consider that as additional bits?
So if ‘You need a taxi.’ and ‘You need a taxi?’ are two different things, I don’t think we can just skip punctuation when measuring the bits of information in a sentence.
Theoretically speaking you do need to take phrasal tone into account, but in practice the difference is negligible, because most languages reinforce questions through syntactic and/or lexical means: particles, pronouns, subject-verb inversion, etc.
I’ll for sure dig a bit deeper into the links, but to me it’s still very counterintuitive to estimate the information density of spoken language just from syllable counts.
E.g. I can vary the sentence ‘I need help’ in so many ways: I can mumble it to a person sitting close by to imply secrecy, I can say it in a desperate voice to show psychological distress, I can raise my volume to indicate urgency, etc. And that doesn’t even consider body language, facial expressions etc., which are all part of the information flow. And I’d guess that body language varies a lot from country to country.
…ah. The rabbit hole of paralinguistic information - all those bits of info that aren’t part of the language itself, but still found alongside it. It’s a big deal as you noticed, but really hard to quantify, so I don’t blame the authors for leaving it off.
But a lot of very common ones, like Spanish and Portuguese, don’t. The difference between a statement and a question is exclusively the tone in both.
That’s only for yes/no questions. Open-ended questions start with a pronoun in both, as typical for Indo-European languages. Portuguese example:
- A cor do cavalo é cinza. // the colour of-the horse is grey.
- Qual é a cor do cavalo? // which is the colour of-the horse?
- Qual que é a cor do cavalo? // which that/what is the colour of-the horse?
#2 is the standard way to phrase a question, but #3 is really common in informal speech.
And colloquially sometimes you even see yes/no questions getting some “random” emphatic word, like:
- A cor do cavalo é cinza, né? // the colour of-the horse is grey, innit?
- Ma[s] a cor do cavalo é cinza? // but the colour of-the horse is grey?
They do change the nature of the question slightly (the first one sounds rhetorical, the second as if there were conflicting info), but the main reason they’re added is to reinforce the phrasal tone as a question marker.
Yes, exactly. This is information that’s encoded by tone, and it is accounted for in the 7 bits per syllable (or in the absence of a syllable - pauses at periods, for example). It was more of an example to show that if what you’re conveying is assumed to always be speech, the encoding can be much more efficient.
On that note, a thing I forgot to mention is that speech assumes what’s being said is pretty much always valid. Sure, ASCII has more information density at 8 bits per character, as you point out, but it’s also capable of encoding things like “hsuuia75hs”. If you tried communicating that to someone over speech, you’d find your transmission rate drops dramatically below the normal 7 bits/syllable, whereas ASCII encodes my comment’s text and that gibberish at the same constant speed. That’s one of the trade-offs.
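The flip side of the compression argument shows the same thing: data with no structure to exploit can’t be encoded below 8 bits per byte at all. A quick sketch, again just using zlib rather than anything from the paper:

```python
import os
import zlib

# 1000 bytes of random data: maximally dense, nothing to exploit.
noise = os.urandom(1000)
compressed = zlib.compress(noise, 9)

# DEFLATE falls back to stored blocks plus headers, so the "compressed"
# output ends up slightly larger than the input.
print(len(noise), "->", len(compressed))
```

Random data is the pathological case the 8-bit encoding is paying for; ordinary speech never needs that capacity, which is why it gets away with far fewer bits per symbol.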
I linked the paper in the OP. Check page 7 - it shows the formulae they’re using.
I’ll illustrate the simpler one. Let’s say your language allows five syllables, with the following probabilities:
- σ₁ - appears 40% of the time, so p(σ₁) = 0.4
- σ₂ - appears 30% of the time, so p(σ₂) = 0.3
- σ₃ - appears 20% of the time, so p(σ₃) = 0.2
- σ₄ - appears 8% of the time, so p(σ₄) = 0.08
- σ₅ - appears 2% of the time, so p(σ₅) = 0.02
If you apply the first formula, here’s what you get:
- E = -∑ [p(x)*log₂(p(x))]
- E = - { [0.4*log₂(0.4)] + [0.3*log₂(0.3)] + [0.2*log₂(0.2)] + [0.08*log₂(0.08)] + [0.02*log₂(0.02)] }
- E ≈ 1.92 bits
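The arithmetic above is easy to check with a few lines of Python (the five probabilities are the made-up toy inventory from this comment):

```python
import math

# Toy syllable inventory: probability of each of the five syllables.
p = [0.4, 0.3, 0.2, 0.08, 0.02]

# Shannon entropy: E = -sum over x of p(x) * log2(p(x))
entropy = -sum(px * math.log2(px) for px in p)
print(f"{entropy:.2f} bits per syllable")  # ~1.92
```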
Of course, natural languages allow way more than just five syllables, so the actual number will be way higher than that. Also, since some syllables are more likely to appear after other syllables, you need the second formula - for example if your first syllable is “sand” the second one might be “wich” or “ing”, but odds are it won’t be “dog” (a sanddog? Messiest of the puppies. Still a good boy.)
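That “wich after sand” effect is what conditional entropy captures: how many bits the next syllable carries once you already know the previous one. A hedged toy sketch of the idea - these two-syllable “words” and their probabilities are entirely invented, not from the paper:

```python
import math

# Invented joint probabilities for (first syllable, second syllable) pairs.
joint = {
    ("sand", "wich"): 0.3,
    ("sand", "ing"): 0.1,
    ("dog", "house"): 0.4,
    ("dog", "ing"): 0.2,
}

# Marginal probability of each first syllable.
first = {}
for (a, _), p in joint.items():
    first[a] = first.get(a, 0.0) + p

# Unconditional entropy of the second syllable.
second = {}
for (_, b), p in joint.items():
    second[b] = second.get(b, 0.0) + p
h_plain = -sum(p * math.log2(p) for p in second.values())

# Conditional entropy H(second | first): knowing "sand" came first rules
# most second syllables out, so fewer bits remain to be learned.
h_cond = -sum(p * math.log2(p / first[a]) for (a, _), p in joint.items())

print(f"H(second) = {h_plain:.2f} bits, H(second|first) = {h_cond:.2f} bits")
```

Conditioning can only lower the entropy, which is why a per-syllable figure from the first formula alone would overestimate the true information rate.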
If I pick a random word such as ‘sandwich’ and encode it in ASCII it takes 8 bytes / i.e. 64 bits. According to the scientists, a two-syllable word in English only holds 14 bits of actual information.
ASCII is extremely redundant - it uses 8 bits per letter, but if you’re handling up to 32 graphemes, 5 bits are enough. And some letters don’t even add information to the word: if I show you “d*gh*us*”, you can correctly guess it’s “doghouse”, even with the ⟨o⟩s and the ⟨e⟩ missing.
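To make the 5-bits-per-letter point concrete, here’s a minimal sketch that packs a lowercase word into 5 bits per letter (a toy code for the 26 letters, nothing standardized):

```python
# Map each lowercase letter to a 5-bit code: a=00000, b=00001, ...
def pack5(word: str) -> str:
    return "".join(format(ord(c) - ord("a"), "05b") for c in word)

bits = pack5("sandwich")
print(len(bits), "bits instead of", 8 * len("sandwich"), "in ASCII")  # 40 vs 64
```

And even those 40 bits are redundant, as the “d*gh*us*” example shows - a code exploiting letter frequencies and co-occurrence would shave off more.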