If you can, please explore more of the big bumps in that chart of character upticks: what were they adding at those points? What did the code look like before Han characters were added? How many characters did the Romance languages fill up? What languages got supported in which order? Discuss the emoji takeover in more detail. Fascinating topic; reminds me a bit of the Helvetica documentary my art class showed.
That reminds me of a Shorts video with an AO3 fun fact: someone is posting fic written in hieroglyphs. When I checked AO3, I found something way crazier. There is fic written in fkn cuneiform. Like, HOW DO YOU EVEN TYPE THAT?! 💀
I mean since sentences can be of infinite length in any language, I guess it comes down to whichever language uses the most Unicode characters. Assuming we're disallowing the sentence "I just typed "🎉" and "❤" and "ɞ" and "ɮ" and "Ꮝ" and "Ꮿ" and "L"..." in which case it's that one
why do you need so much justification for including hieroglyphics? I'm so glad it tries to preserve every historical language. it is a good thing in itself.
One day, while waiting in a parking lot, I checked to see if I could get any WiFi signal on my phone. Lots of boring SSID names (and a few clever ones), as you might find downtown. One of the WiFi names was 💩 Yes, you can use emojis & most Unicode characters. I haven't had this much fun since the DOS days, when I discovered you could use character 255, the "invisible character", in file names. People would try to enter it with the space bar. Another fun one was character 7, the "bell character": if you printed it, the printer would buzz or sound a beeper.
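Both of those old tricks can still be tried from Python, for readers who are curious; a tiny sketch (CP437 is the DOS code page in question):

```python
# CP437 byte 255 decodes to U+00A0 NO-BREAK SPACE: the "invisible character"
print(repr(bytes([255]).decode("cp437")))   # '\xa0'

# Character 7 is BEL; old printers buzzed, and many terminals still beep
print("\a", end="")
```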
i know whatcha mean! i guess it's almost even deeper because rather than some perfect A you're representing the idea of A-ness. Does that make sense? did I sniff too much glue at lunch?
I think that Windows should have adopted the *Compose* key from Unix, and let people _type_ emojis instead of having to browse for them. For instance *Compose* *
@@FindecanorNotGmail The Windows emoji menu actually works like this :D You just press "Win + :" and start typing the name of the emoji. The emoji list will be filtered, and when you select one, the text you typed for the search will be replaced with the selected emoji
6:52 This is not a valid analogy. Mapping all 'Á's to 'A' would result in a loss of semantic meaning, as both 'Á' and 'A' are present in many languages and the diacritic is necessary to distinguish between them. Unicode's original treatment of CJK characters is based on the observation that the three language families in question have a number of characters with an identical meaning and similar appearance, therefore they could conceivably be considered different representations of the same entity (i.e.: code point). This is similar to how Fraktur is not encoded in Unicode separately from Latin characters, but available to users through the use of an appropriate typeface.
Now I am curious as to their reasoning behind including long-dead languages. As an amateur linguist of sorts, I greatly appreciate it, but I am interested to hear who came up with the thought and why.
I think the putative reason is that it helps scholars type in these languages and/or make PDFs, etc. I think practically, the actual use of unicode for those purposes is still less common than you'd think, so it ends up pointing more to Unicode covering themselves.
Your take on the typical YouTube video is depressingly accurate
🔥
😨🔥😳😱💩😎
@@PhilEdwardsInc Woah! 💩
Idk what YouTube videos you're watching lol
@SuperMustache555 don't be like that
Has Unicode added a character for the "Cool S" yet? I feel like that's a crucial piece of historical iconography that definitely needs to be included!
Someday kids are gonna see that and be like, "wow I can't believe letters used to look like THAT."
THIS NEEDS TO BE HEARD UNICODE!
You can write a unicode proposal for them to add it as an emoji. The process takes a while though, and it may be hard to argue against its transience ⌚
The fact that we know what you mean is enough reason to add it
@philedwards I would love to see a video on the cool S. It would make a great follow-up to calculator games.
One of my favorite parts of Unicode and emoji is that there's a sort of "glue" character that can be used to alter or modify an already existing emoji. It allows people to specify a skin color for emojis of people, and the pride flag is basically "basic flag emoji + glue + rainbow". It's just such a neat trick that the Unicode people used to greatly increase the range of emoji that can be supported.
ZWJ: zero-width joiner
Yea, it's the ZWJ. So the pride flag is "flag + zwj + rainbow"
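For readers who want to poke at this mechanism, here is a minimal Python sketch (Python chosen purely for illustration); the code points are the real ones, but whether the result renders as a single glyph depends on the platform's emoji font:

```python
ZWJ = "\u200D"                    # ZERO WIDTH JOINER, the "glue" character
WHITE_FLAG = "\U0001F3F3\uFE0F"   # waving white flag + variation selector-16
RAINBOW = "\U0001F308"            # rainbow emoji

pride_flag = WHITE_FLAG + ZWJ + RAINBOW
print(pride_flag)                 # 🏳️‍🌈 where the font supports the sequence

# Skin tones use a different trick: a modifier character follows the base emoji
print("\U0001F44B\U0001F3FD")     # waving hand + medium skin tone modifier
```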
Similarly, there are no specific country flags saved in Unicode, just a set of letters. So if you want the US flag, you essentially enter flag+U+S. That way, they don't have to deal with the ever-changing list of countries.
That is pretty cool. I also didn't get into control characters which are pretty neat too.
@@JorWat25 It's actually "Country indicator U" + "Country indicator S", two characters.
It does mean that the apparent length of the text depends on whether the font supports the flag.
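A small sketch of how that works in practice, again in Python: each ASCII letter is offset into the Regional Indicator Symbol block (U+1F1E6 through U+1F1FF), and fonts that recognize a pair draw the flag:

```python
def flag(country_code: str) -> str:
    # Map each ASCII letter onto the Regional Indicator Symbols at U+1F1E6
    return "".join(chr(0x1F1E6 + ord(c) - ord("A")) for c in country_code.upper())

print(flag("US"))  # 🇺🇸 if the font knows the pair; two boxed letters otherwise
print(flag("JP"))  # 🇯🇵
```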
As a developer I’m familiar with ASCII and Unicode, but this was still a very informative and entertaining video. Keep up the good work, and thanks!
That Ron/Don/ducks bit was absolute gold. Great job with this and all your videos! It's been exciting to see your trajectory from Fake Science into Vox and now your solo adventures!
When I worked at a national wildlife refuge, we made maps in ArcGIS to hand out to people who wanted to bird watch. We'd mark spots where you could see cranes, eagles, ducks, etc. ArcGIS doesn't easily allow for images to be imported into a map, so we used a bird font. I forget exactly what letter corresponded to what bird but it made labelling bird watching locations super easy. 😂
that's a useful font!
this reminds me of the time I made a custom font to make a computer vision task much easier and more reliable, custom fonts are always a funny workaround
Yeah I kinda miss symbol fonts where everything was “really” a letter, they’re kinda unnecessary now that Unicode has so many unique symbols.
See also the mild controversy about whether to include stuff like the Prince symbol, since that was distributed as a vector font for publications to use, but isn’t public domain. But Unicode’s mission of being able to encode and support every possible symbol kind of suggests it should have it? But it can’t due to rights issues.
@@kaitlyn__L Ha great minds! I read an article about the Prince symbol while I was researching this and saw all the fans desperate to hack together the best possible comp.
Very nice! I've used ArcGIS for years and haven't seen a bird font, but then I haven't made maps with birds. Now, the base for our published maps starts in GIS, but maps are cleaned and finalized in Illustrator.
Honestly, the production quality of your videos makes me feel like i'm watching Netflix, it's pristine!
And i find it incredible that even with the fancy edits and topics you can still make it entertaining and funny!
that's nice and appreciated - thanks! still room to improve!
Learning Unicode's history was never this fun, and made me realize its importance.
Also, I genuinely love your style and take on video essays like this. Must've taken some time to reach that balance.
As a computer geek this video tickles me nicely. We deal with Unicode and its weirdness as software developers all the time. It's nice for the general public to understand why we sometimes have this weirdness.
I am a Chinese speaker and I am learning Japanese. I know the hiragana and katakana pretty well but sometimes I don't know the pronunciation of a kanji. I would just switch to Chinese and type the Chinese equivalent and I am always surprised how it knows it is supposed to be a Japanese kanji not a Chinese character. Turns out someone (probably a group of people) painstakingly found out every single Japanese kanji that is exactly the same as their Chinese counterpart and made them the same in Unicode!
That's reassuring!!
Though, cataloging all widely used kanji is a huge project to begin with, in any language.
Hello, can you share your experience, like, is it easier for a Chinese speaker to learn Japanese compared to rest of the people who don't know about kanji (Chinese characters)? Can it be easier for a Japanese speaker to learn Chinese?
@@dapuslearning3828 yeah, the difficulty of learning a language depends on your native one
@@dapuslearning3828 as far as I'm aware, yes: a large part of Japanese vocabulary is from Chinese, and since most kanji (used in Japanese) have hanzi (used in Chinese) counterparts with the same or similar meaning and pronunciation*, it helps a lot; it's best to confirm how the kanji is actually pronounced, though
that doesn't apply to native Japanese words or loanwords from other languages like English, and Japanese grammar and pitch accent in general can't be learned with the help of Chinese
I'm not sure how it'd be for a Japanese speaker learning Chinese, but again the grammar is different, and compared to Japanese there are fewer English loanwords in Chinese, which tend to be expressed entirely using the Chinese meaning of hanzi instead of transliteration (e.g. Japanese for computer is 'konpyuuta' while in Chinese it's 'dian nao', literally meaning electric brain)
*I'm talking about the onyomi pronunciation, which is derived from Chinese; kunyomi, the native Japanese pronunciation, can't be learned from Chinese
Repeatedly hearing the letter "Ä" being called "A" gave my swedish ass an aneurysm, even though i know he's not wrong and that the point being made is that you can add diacritics like an umlaut/diaeresis to letters using unicode.
If I'd pronounced the accented As aloud, your reaction would have been even worse!
Umlaut ≠ diaeresis, but Unicode does not distinguish between them
I guess it's the same as in Russian Е and Ё. For non-Russians out there, they might look like they sound similar, but for me they are completely different; I mean it's "ye" and "yo" respectively
@@PhilEdwardsInc should've said "A with acute", "A with grave", etc.
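A short Python illustration of the point in this thread: Unicode has both precomposed accented letters and combining marks, and normalization is what reconciles the two spellings:

```python
import unicodedata

precomposed = "\u00C4"   # Ä as a single code point (A WITH DIAERESIS)
combined = "A\u0308"     # plain A followed by COMBINING DIAERESIS

print(precomposed == combined)   # False: different code point sequences
print(precomposed, combined)     # yet both typically render as Ä
print(unicodedata.normalize("NFC", combined) == precomposed)  # True
```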
I'm a very technical guy who is not afraid of using the command line and its text-based applications. Having Unicode there is very useful if you want things to be more functional or just more beautiful. Also, I learn Japanese using the internet, which is now possible due to Unicode.
Well made video, ありがとうございます!
I know you didn't want to get too technical, but UTF-8 had a big effect on Unicode's adoption. ASCII used 7 or 8 bits of data, while the earliest Unicode standards used 16. But for ASCII text (which includes, say, the HTML that makes up every web page, even in non-Latin languages), half the data would just be a bunch of zeroes. If they wanted to include more than ~65,000 characters, they'd have to extend it to 32 bits, and a full three quarters of plain Latin text would be zeroes. This would suck even now, but in an age of dialup modems, it was a non-starter. UTF-8 allows us to vary the number of bits for each character: 8, 16, 24, or 32, and sequences can grow even longer with combining characters that, for example, change the hair color of an emoji.
UTF-16 is a whole sorry saga in itself. Unicode was originally going to be a fixed-length 16-bit code, a.k.a. “UCS-2”. With a bit of a struggle over the CJKV unification, this was deemed sufficient for all writing systems currently in use.
Certain vendors who were adopting Unicode at this time took the spec creators at their word, and began to hard-code the assumption of fixed-length 16-bit characters into their platforms: Microsoft with Windows NT, Sun with Java etc.
Then with Unicode version 2 the spec creators changed their mind, and decided they would need more characters to support historical writing systems, special symbols, emojis etc. So characters--or actually, “code points”--would need more than 16 bits.
So what happened to those products that had already been designed around the assumption of a fixed-length UCS-2 character set? What was originally “UCS-2” was now redefined as “UTF-16”, so a single Unicode code point could be represented by one or two 16-bit codes, using the newly-defined “upper surrogate” and “lower surrogate” ranges.
But the sad thing is, what was supposed to be fixed-length code is now no longer fixed-length. What happened to all the software written with the assumption that the code was fixed-length? Don’t ask ...
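The surrogate mechanism is easy to see from Python, which can show the UTF-16 bytes directly; a minimal sketch:

```python
grin = "\U0001F600"               # 😀 U+1F600, beyond the old 16-bit limit
units = grin.encode("utf-16-be")  # big-endian UTF-16, no byte-order mark
print(units.hex())                # d83dde00: high surrogate D83D + low surrogate DE00
print(len(grin))                  # 1 code point, even though it needs 2 UTF-16 units
```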
UTF-8 also very cleverly solved a subtle problem in string processing when dealing with multi-byte characters: going backwards. It's a bit complicated, but Shift-JIS, GB-2312 and nearly all of the other systems encoded second and later characters of multi-byte sequences in ways that clashed with first bytes. But UTF-8 encodes subsequent bytes very slightly differently than the initial byte so you can always tell which is which, whatever way you're moving through the text bytes.
With UTF-8, you know how many bytes make up the current code point just from looking at the first byte. ASCII characters represent themselves in a single byte.
@@static-san Yup.
- If a byte starts with 0, you _know_ it's a 1-byte ASCII character (Unicode range 0-127). No additional bytes are needed; any bytes to either side are part of other characters.
- If a byte starts with 10, you _know_ it's the second (or third, or fourth, etc) byte of a multi-byte character. Move backward one or more bytes to find the start of the character. Or move forward one or more bytes to find the start of the _next_ character.
- If a byte starts with 11, you _know_ it's the _first_ byte of a multi-byte character. The number of 1s before the first 0 (originally 110xxxxx - 1111110x) tells you how many bytes long the entire character is. Move forward to read those other bytes.
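Those bit patterns are simple enough to verify yourself; a sketch in Python that walks a UTF-8 byte string forward, deciding each character's length from its first byte alone, which is exactly the self-synchronizing property described above:

```python
def utf8_length(first_byte: int) -> int:
    if first_byte < 0x80:
        return 1   # 0xxxxxxx: ASCII
    if first_byte < 0xC0:
        return 0   # 10xxxxxx: a continuation byte, never the start of a character
    if first_byte < 0xE0:
        return 2   # 110xxxxx
    if first_byte < 0xF0:
        return 3   # 1110xxxx
    return 4       # 11110xxx

data = "Aé漢😀".encode("utf-8")
i = 0
while i < len(data):
    n = utf8_length(data[i])          # always nonzero at a character boundary
    print(data[i:i + n].decode("utf-8"), data[i:i + n].hex())
    i += n
```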
great editing as always Mr. Edwards. Thank you for your work, quite entertaining.
Thank you for making your videos so enjoyable beyond just the information you give us... it looks like you have fun making them, and that's fun to watch!
i feel so ꧁ special ꧂ as someone who has captions on hehe, i enjoyed that lil message ☆✪ൠ
I have a vague recollection of reading somewhere that the addition of Egyptian Hieroglyphs to Unicode was a pretty significant step in Egyptological research, as it helped simplify communication in research (referring to hieroglyphs in writing in a shared, standardized way)
I have a hard time finding the article I read, so perhaps my memory is off, but that’s what I remember 😅
That sounds entirely plausible. Also the addition of other ancient scripts, like the many varieties of cuneiform. Also runes!
I read the same though I also ran into some stuff saying scholars sometimes preferred pictures to give historical context.
@@PhilEdwardsInc Details of stylistic variations, say, over the centuries, are obviously best conveyed by pictures. But conveying the actual text, independent of such issues, would be most easily done by using an actual text encoding.
Okay I legitimately blurted out laughing when the B&W guy asked if he could leave after his segment was over. 😂 You keep coming up with great ways of playing with tropes of educational videos. I immediately imagined all talking head interviewees as being trapped in the video, waiting to be cut to again.
You’re killing it, Phil! Best video on this topic I have ever seen.
In the late 90s and early 2000s, when browsers did not fully support Unicode yet, non-English speakers had to adapt to writing without special characters. This was the beginning of a kind of "dialect", initially intended to circumvent those technical limitations, which quickly became youth slang (and the terror of many teachers and parents).
Chat-Arabic comes to mind, which incorporated numbers as letters. Pretty useful actually.
I remember Ron and Don's issue well. In the 1980s, I was using a Unix computer that used ASCII for its characters and I had to produce a tape of W-2 data for the IRS which had to be in EBCDIC, which is the code that was used by IBM. I had to use the dd command to convert the text and write it to tape and produce a file that the IRS would accept. For some reason, it took a lot of work to get it to work right.
The other similar issue was that Unix used linefeed to terminate text lines and both VAX/VMS and the new PC's used carriage return/linefeed combinations. Transferring files had to take that into account.
So did I. I had to write a program to convert the variable record length ASCII data of our employees on our Pick operating system-based computer onto a tape of EBCDIC data for the Employee's Retirement System's IBM computer. It was dictionary driven. It ran perfectly after the second compile.
VMS originally used record formats that did not have explicit line-ending characters. Then I think in version 3 they added 3 “stream” formats: STREAM_LF, STREAM_CR and STREAM_CRLF. One of these is equivalent to the CP/M/MS-DOS/Windows style, another to the Unix style, and the third to the old Macintosh style. So you see, it could handle them all natively.
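For what it's worth, modern Python ships EBCDIC codecs, so Ron and Don's conversion is now a couple of lines; a sketch assuming cp037 (the US EBCDIC variant) and ignoring the tape-format details that made the real job hard:

```python
text = "W-2 WAGE DATA\n"
ebcdic = text.encode("cp037")   # ASCII-range text -> EBCDIC bytes
print(ebcdic.hex())
print(ebcdic.decode("cp037"))   # round-trips cleanly

# The line-ending half of the problem: Unix LF vs CR/LF systems
unix_text = "line one\nline two\n"
crlf_text = unix_text.replace("\n", "\r\n")
```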
Crazy that this is a single character ﷽
I think the code is U+FDFA
@@PhilEdwardsInc It's U+FDFD
I remember on my Samsung phone it used to appear as a short sentence but then they released an update which compressed it into a character slightly wider than ―
I guess that’s a ligature? Technically?
@@kaitlyn__L It's a bit more than a ligature cause it's the whole phrase "In the name of Allah, the Compassionate, the Merciful" which opens every chapter of the Quran.
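The code point is easy to confirm from Python's bundled Unicode database; a tiny sketch settling the correction above:

```python
import unicodedata

print(unicodedata.name("\uFDFD"))  # ARABIC LIGATURE BISMILLAH AR-RAHMAN AR-RAHEEM
print(unicodedata.name("\uFDFA"))  # ARABIC LIGATURE SALLALLAHOU ALAYHE WASALLAM
print("\uFDFD")                    # ﷽ : the whole phrase as one code point
```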
Convo between Ron and Don made me sub!
LOL who remembers receiving text messages with a bunch of squares thinking what is wrong with my phone.
definitely
Those $5 emoji-enabling apps
I remember coming across the lovely phrase “the empty rectangle of Unicode disappointment”.
I think web developers used the term “tofu” for those squares. And so Google created its “Noto” fonts (“no tofu”) to try to cover all the code points in a meaningful way, to put an end to them.
This answered questions I never knew I had. I'm a computer tinkerer/novice programmer, so I was already somewhat familiar with what ASCII & Unicode are, but never knew the history behind them. Thank you for this entertaining educational video!
The spoofs of 90s VHS presentations and modern overhyped YouTubers were great. I can tell how much effort went into framing, scripting, and then the chroma key background creation (more involved for the former admittedly).
thanks for covering this :) I asked you in a tweet a while back and thanks so much for the video
oh dang i'm sorry i forgot to say thanks in the video! i put it in my giant brainstorm doc and lost track. thanks for the suggestion!
@Phil Edwards no worries at all :) thanks for making it answered alot of my questions!
Wow, the visuals and the storytelling are amazing
hell yeah !! i love unicode !!
finally a look into its history !!
I would kill for that 90s style presentation that takes three hours of monotone narration over the meaning of every single symbol. you have no idea how badly I want it
excellent quality as always Phil
I find it interesting that there is actually a common language (of sorts) that Unicode doesn't fully support: math expressions. A format called LaTeX is much more common in math equation typesetting, and while Unicode does have some stuff for this, it's not nearly as good.
Musical language is another example where pure Unicode doesn't really work, and you need formatting like TeX provides
See also Sutton SignWriting. Unicode supports all the symbols, but not their complex positioning.
I would argue that LaTeX is a programming language, not a character set.
Unicode is for natural speech, not math nerdspeak 😂
@@mfaizsyahmi I wouldn't exactly call hieroglyphs natural language, not since Roman times anyways
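To make the gap concrete: Unicode can supply individual symbols (∑, ∞, superscripts), but it has no standard way to encode two-dimensional layout, which is exactly what LaTeX markup adds; a small example:

```latex
% Nested fractions, sized operators and alignment have no plain-Unicode encoding:
\[
  f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}\,(x-a)^{n}
\]
```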
The humor in this video is on point! A super interesting video, yet entertaining as well. Great job with this one, Phil!
What I learned from this video: Unicode reaches back into the 80s, and they tried to cap it at 16 bits by limiting the CJK glyphs.
7:06 As a Chinese user, I think those are still very much the same Chinese characters. So I don't think anyone would oppose these characters from a practical standpoint. Han character unification also deals with more visible differences; good examples are listed on Wikipedia.
As for the hieroglyphics, I haven't checked, but I assume these are added for research purposes. How else would you do a text search on ancient Egyptian writings if there isn't a standard?
I remember reading most opposition came from the types of people who also opposed Simplified Chinese and the postwar Japanese hanji simplification.
always interested in your content, feels like curiosity at its peak
Love your videos. Not too in depth, long and dry videos but also not too superficial and surface level.
appreciate it!
Another solid video Phil!
Thank you for calling out how obnoxious some YouTube videos can be at the beginning
I can tell you’re finding your groove with every new video. In my opinion this topic isn’t all that interesting but your delivery was really funny and entertaining. And I still learned something so congrats!
I never loved history but the way you tell the stories make me like it ever so little more. Thank you!
Great vid. This made me look into what a Japanese typewriter looked like and Wow! The technical expertise it must have taken to use those things. Makes the QWERTY evolution seem tame.
yes indeed! i believe johnny harris made one about the chinese keyboard
ruclips.net/video/hBDwXipHykQ/видео.html
and i have made a qwerty vid!
ruclips.net/video/c8f6us-Sjlo/видео.html
@Phil Edwards Saw your QWERTY vid but hadn't heard of Johnny Harris before. Checking it out now, thanks for sending!
This video was good at showing social impacts which I hadn't seen featured before. Thank you.
Brilliant and well made as always Phil. Love your stuff at Vox and love your solo style and how it's your own. By the way, has anyone ever mentioned you look so much like a younger Gary Oldman? Gary Not-So-Oldman? Also, are these YouTube emoji also Unicode-based? Time to do my own research.
Cool stuff!
I'm glad I'm not the only one who find it fun to look at unicode characters
The humor in this video is on another level
Great video as always Phil, thanks!
I'm always down to listen to a boring 30 minute video by "The Computer Guru" while I make my spaghetti
9:34 ^me sipping my McD's Sprite in bed wearing my bathrobe^ Ahhh yes, *we* did good.
You've earned it
we are heroes.
i didn't expect to wake up to find a history of unicode video in my feed, let alone from someone i enjoy as much as you, Phil. let the breakfast, and history, begin
Thank you for posting!
This is a phenomenal video! Loved every second of it!
There's also precedent because some of the earliest word processors felt it was important to be able to write chess books
Enjoyed this video Phil
The level of ASCII usage will fall off a cliff now that Dwarf Fortress has been ported to Steam.
*Microsoft's ASCII extension. Dwarf Fortress uses CP-437, Microsoft's ASCII extension used in MS-DOS. So it's not the level of ASCII usage that will fall off a cliff, it's the level of Bill's ASCII usage that will.
I appreciate the details you put into your youtuber spoof at 1:30. Abrasive colors, lighting, faces and cuts; just needed some face swirling and zooming.
Insisting that every Han character gets its own code point by itself: *Han Solo* (Cue Wookiee sound.)
Thank you. I'll show myself out.
Don't show yourself out. I had a Han unification joke that I cut out from the video. We're only human.
@@PhilEdwardsInc 🙂
1:35 shots fired at Scishow! 💀
That HAI mention in the beginning! 🤩
wasn't surprised to find he'd done something unicodey
This video is 𓎼𓂋𓄿𓏏.
𓃀
I'm sad how far I had to scroll to find interesting unicode in the comments, but I'm glad I found it!
Great editing, fitting music for the atmosphere, and your watch facing the camera for some reason kept me fixated. 9/10
lol the watch has triggered so many people haha. i am going to start it as my retention strategy.
01:56
In graphic design, they are more precisely called typefaces. Fonts are flavours: bold, italics, narrow and so on.
This is amazing! I just attended the funeral of one Jon Jenkins, who was one of the key Unicode engineers who worked for Apple over many years, and was a key worker who helped with Chinese characters and other alphabets, idk this stuff just really intrigues me.
Some of these unicodes are so high-quality & interesting ngl.
The ever-growing inclusivity of Unicode makes me feel warm and fuzzy inside and hopeful for the future.
And here is another bit of unnecessary history: Baudot encoding was invented in the 1870s and was used in teleprinters. Letters were encoded in five bits, which only left room for capital letters. ASCII had seven bits, which allowed for lower case letters and more punctuation, but the first iteration only had upper case letters because that is what Baudot had. The Baudot ITA2 code, standardized in 1924, listed the letters in the order they appeared on the QWERTY keyboard. When the ASCII table was laid out they realized that the keyboard order no longer mattered, so the letters are in alphabetical order.
In 1974 I worked at a company in Freeport, Texas. The blueprint printer I used was right next to the telex machine used to communicate with the company office in Houston. They would send us the daily schedules and we would send our production data back to them. When the telex received a message, in addition to printing it out, it would punch it out on paper tape, which was stored and could be printed out again. It was sort of the early version of the disk drive.
The telex is pretty well gone now, but the Baudot code is still used in nautical radio messaging to conserve bandwidth.
The big competitor to ASCII was EBCDIC, but I don't want to go into that right now, since it doesn't really bear on the development of Unicode.
My favourite part of Baudot code was how you could still do lowercase letters, you just had to hope the receiving end wasn’t out of sync on the status of the carriage shift… (so of course most didn’t do it, and the early teleprinters didn’t even bother having compatibility)
Thanks for adding that Paul. And Kaitlyn you are gradually revealing you are a character code/Unicode genius.
There used to be standardised "ASCII keyboard" modules that produced ASCII codes in parallel when you pressed a key, or key combination. I think both the Apple I and Apple II were made for ASCII keyboards: for the Apple I you had to buy one separately.
@@PhilEdwardsInc just regurgitating stuff I read 15 years ago lol, a certain amount gets reinforced watching people repair mechanical 1930s teletypewriters too! But thank you very much nevertheless, I suppose it does all add extra background info to the subject of the video :)
IBM’s excuse for creating EBCDIC is that they were bringing out their brand-spanking new System/360 machines in 1964, and the ASCII spec wasn’t quite finished yet. (I think it was actually finalized that same year.)
So, for the sake of a few months, the computer industry was saddled with decades of compatibility headaches between the two character encodings.
Actually, nobody used EBCDIC apart from IBM and some plug-compatible mainframe vendors. The entire rest of the industry adopted ASCII, plus national variants thereof.
This was super interesting. All your videos are. They just should be a little longer. They always leave me wanting more.
I get the same feeling. Maybe it’s that these videos cover topics I already know, or maybe they could use an extra minute to go into a little more detail. An important point that was skipped from this video is the fact that the rendering of the glyph is not part of Unicode. Unicode is the text description of the glyph.
@@SimplyDudeFace I think it’s a Vox thing. Or it was a Cracked thing that evolved into the Vox thing of giving you the light information but not coming to a conclusion or opinion at the end. I just want him to dig one layer deeper. These videos spark my interest in a topic and then they’re over.
"Sometimes a solution looks like a foot" - likely the most confusing quote without context.
𓃀
𓃀
6:53 I wouldn’t say “A” and “Á” is a good example… Maybe “A” and “𝐀”? But in a world where the second 𝐀 is the only acceptable form of A in certain countries. (I know, weird, but there are no actual examples for this.)
Imagine being used to only seeing 𝐀 in your day-to-day life, and then suddenly it gets rendered as “A”. You’d still recognise it, but you’d think it looks distinctly foreign.
The same goes for Han characters such as 八. The font is the only thing that can determine a character’s “regional” look. Now imagine a Chinese text displayed with a Japanese font. All the characters would stay legible to a Chinese audience, but many of them will look Japanese.
Yeah I thought a comparison to similar looking letters in Cyrillic or Greek would have been more appropriate. Like “imagine if a Japanese computer giant decided A and alpha were the same”
yeah totally fair. I thought about that, but I wanted to get at the nationalistic/cultural element, and I felt the only way for a Western audience to vibe with that was to see a character that was clearly from another country (another being from my perspective, as an American).
@@PhilEdwardsInc A better example would be to have Latin A and Cyrillic А encoded as one single character! (I wonder why I didn’t think of that earlier haha!) Or Latin M and Greek Μ. And only a font would display lowercase M as either m or μ. So you’d need a Greek-specific font to display μ, or else it’d incorrectly be rendered as “m”.
This is quite crazy if you think about it, yet Chinese and Japanese users have to deal with it on a daily basis! Not only that, but Hong Kong and Taiwan also display characters differently (and Korea as well, although Chinese characters aren’t that commonly used in Korean anymore). 真, for instance, would be displayed with the central box “detached” from the lower horizontal stroke in Japanese, whereas in Chinese it connects directly to it. The same can be said for 直. Who knows what your device displays it as! But yeah, the Japanese way of writing the character would be considered wrong in Chinese, and I believe vice versa as well… Imagine seeing that in a printed text!
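A quick Python check of the mechanism this thread describes: the "Chinese" and "Japanese" versions of such a character really are a single code point, so only the font (or a language tag) can choose the regional shape:

```python
import unicodedata

for ch in "直真":
    print(f"U+{ord(ch):04X}", unicodedata.name(ch))
# U+76F4 CJK UNIFIED IDEOGRAPH-76F4
# U+771F CJK UNIFIED IDEOGRAPH-771F
# one code point each; the regional glyph shape is a font decision
```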
You're my new favorite channel
Newspapers have headlines: youtube videos have titles
it's my headline 🤷
@@PhilEdwardsInc :(
Some people opposed the Han unification back then but as a Japanese person I would say the majority of us, including me, think it was a sensible thing to do. But it was not a simple task, was not executed perfectly, and left some irreversible inconsistencies in the code chart - like the fact that the character 浅 is defined so it can be drawn with two or three horizontal bars in the right‐hand part while 桟 and 栈 are encoded separately. Overall it more or less worked.
It is hard to come up with a Latin alphabet equivalent but it is like an alternate universe where one half of the world would end up having a tradition of writing the letter A consistently without a horizontal bar, while the other half would exclusively use the A with two horizontal bars to write their own languages for centuries. Both evolved from the same letter A, but for each sides “the other version” would just look weird and alien. And it’s not just the letter A but many of the alphabet throughout would have such small differences, some because of natural evolution, some due to countries decided to purposefully modify their writing system. They speak different languages so texts written in one region using their version of the alphabet rarely meet the eyes of others, never normalizing wide variations within each regions. (So for developers supporting East Asian languages it is important to not just translate the text but also make sure to apply appropriate fonts designed for each regions to look acceptable.)
I think a comparison to the identical or nigh-identical letters in Greek or Cyrillic is an appropriate counterpart, since they have evolutionary divergences similar to the different versions of hanzi. There’s less overlap, but still enough to make the point imo. Epsilon isn’t the same as E, but to an outsider they’d look the same.
@@kaitlyn__L That can be regarded as similar in a broad sense, but then the analogy cannot explain why they are treated differently in Unicode (and in other older standards). Among CJK languages, some Han characters look a bit different (those are unified encoding-wise), some look completely dissimilar (encoded separately), but many others look identical. When they are similar enough, native speakers regard them as the same character. The 百 that appears in Mandarin text and the 百 in Japanese are seen as the same, so it is desirable to be able to type or search the “Chinese 百” and the “Japanese 百” the same way. That is unlike the E/Epsilon pair, and more like the relationship between “English E” and “French E”.
If we want a real‐world western analogy, I suppose it is how traditional cursive handwriting of the same alphabet differs from region to region in Latin‐ or Cyrillic‐using countries. That kind of divergence extends to block‐style glyphs in CJK, so to speak.
@@さゆぬ-x7i certainly, there’s no perfect analogy. But for explaining the specific backlash, I think it’s at least better than A and À, since the audience more quickly grasps both the core difference and why an outside observer might want to combine them. But certainly there’s nothing exactly the same in Europe.
@@kaitlyn__L Yeah, it’s better than A and Á, that’s for sure. With that analogy alone, the audience would not understand that the unification is not as far-fetched as the idea of consolidating Latin, Greek, and Cyrillic into one alphabet, so in the end we need to explain the situation specific to CJK either way.
@@kaitlyn__L What about writing Z or 7 with or without a middle bar?
Awesome videos, super informative
As someone who is pretty familiar with Unicode and has kind of gotten over its wonder, I enjoyed you introducing Unicode to Stranger Things music
The Calvin peeing video + the "Ron, Don, Ron, Don" joke got me subscribed :)
Just starting the video now- but I’m so surprised, my TV even recognizes the symbols!
The YouTube satire you made is as fire as fire.
🔥🔥🔥
@@PhilEdwardsInc what about 💩?
One reason the scripts of extinct languages like Egyptian hieroglyphs were added is science. Archaeologists, anthropologists, linguists, ethnologists, historians, etc. all need a way to digitally write whatever was written on a piece of clay, rock, or papyrus that they have found. This is still an ongoing problem for them. As with Han unification, Unicode also combined the different time periods of these ancient scripts. Ideally there would have been different sets of code points for hieroglyphs from different periods, but Unicode unified those code points too. So just as Han unification requires distinct Japanese, Korean, or Chinese fonts to distinguish what each code point truly represents, the same became the reality for hieroglyphs and other writing systems: they need different fonts for different time periods. What Unicode did is self-defeating. Our computers today are powerful enough to render millions of code points, not just the 140,000-plus currently assigned. Why not do away with these unifications and assign every character from every dialect and time period its own code point, so that none of us has to install distinct fonts just to display accurately what a code point represents?
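For what it’s worth, the hieroglyphs are easy to poke at programmatically; their block starts at U+13000. A minimal sketch, assuming Python 3:

    import unicodedata

    glyph = "\U00013000"  # first code point of the Egyptian Hieroglyphs block
    print(unicodedata.name(glyph))  # EGYPTIAN HIEROGLYPH A001

    # Which period-appropriate glyph shape you actually see still depends on
    # the installed hieroglyph font, exactly as described above.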
His typical 90s character revealed a lot about what he was like 30 years ago lol
If you can, please explore the big bumps in that chart of character upticks: what were they adding at those points? What did the code chart look like before Han characters were added? How many characters did the Romance languages fill up? Which languages got supported in which order? And discuss the emoji takeover in more detail. Fascinating topic; it reminds me a bit of the Helvetica documentary my art class showed.
Yeah definitely more there - I'd basically be doing this version history: unicode.org/history/publicationdates.html
@@PhilEdwardsInc I'll take my time to browse it. maybe I'll find some interesting things! Thank you.
In one of his live talks, Tom Scott says that the watershed moment was when a version of ASCII included a smiley face.
Alt + 3 (in numpad) = ♥
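If you’re curious what that Alt code actually lands on in Unicode terms, a one-liner in Python 3 tells you:

    import unicodedata
    print(unicodedata.name("♥"))  # BLACK HEART SUIT, U+2665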
Awesome vid! 🤗👍😁
╱╱┏╮
╱╱┃┃
▉━╯┗━╮
▉┈┈┈┈┃
▉╮┈┈┈┃
╱╰━━━╯
That reminds me of a Short about an AO3 fun fact: someone is posting a fic written in hieroglyphs. When I checked AO3, I found something way crazier.
There is a fic written in fkn cuneiform.
Like, HOW DO YOU EVEN TYPE THAT?! 💀
“The quick brown fox jumps over the lazy dog.”
I wonder what sentence captures the most Unicode characters.
I mean since sentences can be of infinite length in any language, I guess it comes down to whichever language uses the most Unicode characters.
Assuming we're disallowing the sentence "I just typed "🎉" and "❤" and "ɞ" and "ɮ" and "Ꮝ" and "Ꮿ" and "L"..." in which case it's that one
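If anyone wants to score candidate sentences, counting distinct code points is a one-liner in Python 3:

    sentence = "The quick brown fox jumps over the lazy dog."
    print(len(set(sentence)))  # number of distinct Unicode code points used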
Why do you need so much justification for including hieroglyphics? I'm so glad it tries to preserve every historical language. It is a good thing in itself.
Well... With the use of emoji we're back to the origin of writing, so the (re)introduction of hieroglyphs is a logical step.
Super cool video!
One day, while waiting in a parking lot, I checked to see if I could get any WiFi signal on my phone. Lots of boring SSID names (and a few clever ones), as you might find downtown. One of the WiFi names was 💩. Yes, you can use emojis and most Unicode characters. I haven't had this much fun since the DOS days, when I discovered you could use character 255, the "invisible character", in file names. People would try to enter it with the space bar. Another fun one was character #7, the "bell character": if you printed it, the printer would buzz or sound a beeper.
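The bell character still works in many terminal emulators today; a minimal Python 3 sketch:

    import sys

    sys.stdout.write("\a")  # "\a" is ASCII 7, the BEL control character
    sys.stdout.flush()      # many terminals beep or flash the window on BEL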
Phil’s alter egos are becoming the best part of this channel. Laughed my ass off at 1960s Phil 😹
1:59 Plato's theory of forms!?
i know whatcha mean! i guess it's almost even deeper because rather than some perfect A you're representing the idea of A-ness. Does that make sense? did I sniff too much glue at lunch?
New graphics are really shaping up dude. Every video is better and better
Pro tip: Press Win + : (that is, the Windows key and the colon key at the same time) to bring up the Windows emoji menu. Now you can Unicode anywhere!
I think that Windows should have adopted the *Compose* key from Unix, and let people _type_ emojis instead of having to browse for them.
For instance, *Compose* followed by a short sequence of ordinary characters.
@@FindecanorNotGmail The Windows emoji menu actually works like this :D
You just press Win + : and start typing the name of the emoji. The emoji list gets filtered, and when you select one, the text you typed for the search is replaced with the selected emoji.
Why is fire so fire indeed.
❤️🔥
phil has transcended
🕴️
for (some) context i forgot about your existence for a year
@@kornsuwin and yet here we are, both thriving
@@PhilEdwardsInc and now i officially return to being the +=1 out of countless others
The blue wall stripe is to the book spine as the yellow watch is to the figure in the background as the desk lamp is to the green leaf wall. Noticed.
i̠̱̞t̛̖̟'̨̡̪ṣ̢̤ ̧͍̦a͎̰̯l̤̝̪l͔̗̜ ͓̘̠c̛͙̚o̝̞̟n̡͕̠n̢̫̬ȩ̨̩c̤̥̪ț̩̘e̠̲̰ḏ̳̹
6:52 This is not a valid analogy. Mapping all 'Á's to 'A' would result in a loss of semantic meaning, as both 'Á' and 'A' are present in many languages and the diacritic is necessary to distinguish between them. Unicode's original treatment of CJK characters is based on the observation that the three writing traditions in question have a number of characters with identical meaning and similar appearance, so they could conceivably be considered different representations of the same entity (i.e., code point). This is similar to how Fraktur is not encoded separately from Latin characters in Unicode, but is available to users through an appropriate typeface.
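That semantic distinction is visible in Unicode's normalization forms: Á stays a character of its own, but NFD decomposes it into a plain A plus a combining accent, so the accent is never silently thrown away. A quick sketch in Python 3:

    import unicodedata

    a_acute = "Á"  # U+00C1, semantically distinct from plain A
    nfd = unicodedata.normalize("NFD", a_acute)
    print([hex(ord(c)) for c in nfd])  # ['0x41', '0x301'], A + combining acute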
Now I am curious as to their reasoning behind including long-dead languages. As an amateur linguist of sorts, I greatly appreciate it, but I am interested to hear who came up with the idea and why.
I think the putative reason is that it helps scholars type in these languages and/or make PDFs, etc. Practically, though, I think the actual use of Unicode for those purposes is still less common than you'd expect, so it ends up pointing more to Unicode covering themselves.
@@PhilEdwardsInc Awesome! Thank you!
I had no idea so much work went into the new emoji releases
May the universal language of Unicode bring you joy, wisdom, and enlightenment too.