RE: The Greater Than 256 Bit Alphabet Algorithm
In the beginning, there was ASCII, and it was good.
So much of C was built around ASCII.
You know, for (i = 0; string[i] != '\0'; i++) { }
So easy to iterate through strings.
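Spelled out as a complete little program (nothing fancy, just the classic NUL-terminated loop):

```cpp
#include <cstdio>

int main() {
    const char *s = "hello";
    // One byte per character: walk until the NUL terminator.
    for (int i = 0; s[i] != '\0'; i++) {
        putchar(s[i]);
    }
    putchar('\n');
    return 0;
}
```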
And then comes wchar, and it is just a mess. Basically every language was just added in, one after the other.
Some just added extra alphabet characters (â, ō), but others, like Chinese, have over 3,000 pictographs that roughly translate to words.
The thing about ASCII is that it was very well designed: 'A' and 'a' share the same lower bits (they differ only in bit 5, 0x20), and such.
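For instance, case folding becomes a one-liner bitmask (a tiny sketch; plain ASCII, where 'A' is 0x41 and 'a' is 0x61):

```cpp
#include <cstdio>

int main() {
    // Bit 5 (0x20) is the only difference between upper and lower case in ASCII.
    printf("%c\n", 'A' | 0x20);   // prints: a
    printf("%c\n", 'a' & ~0x20);  // prints: A
    return 0;
}
```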
I could easily do what you say with char, but not with wchar. I can't quite see how to make that happen easily.
Run a for loop the length of the wchar string.
Make the char array the same length.
Cast each wchar to char (rough sketch below).
Or std::wstring
And std::string
😂
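Something along those lines, anyway (a rough sketch of the loop-and-cast idea; it assumes every character actually fits in a byte, so anything past Latin-1 just gets mangled, which is the whole problem):

```cpp
#include <iostream>
#include <string>

int main() {
    std::wstring wide = L"hello";
    std::string narrow;
    narrow.reserve(wide.size());
    // Run a loop the length of the wchar string and cast each wchar_t to char.
    // Only code points below 256 survive; everything else is truncated.
    for (wchar_t wc : wide) {
        narrow.push_back(static_cast<char>(wc));
    }
    std::cout << narrow << "\n";
    return 0;
}
```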