c# - Why does a char take 2 bytes when it could be stored in one byte?


Can someone tell me why, in C#, a char takes 2 bytes even though it could be stored in 1 byte? Don't you think it's a waste of memory? If not, how is the extra byte used? In simple words, please make the use of those extra 8 bits clear to me!

although it could be stored in 1 byte

What makes you think that?

It takes 1 byte to represent every character in the English language, but other languages use other characters. Consider the number of different alphabets (Latin, Chinese, Arabic, Cyrillic...), and the number of symbols in each of these alphabets (not only letters and digits, but also punctuation marks and other special symbols)... There are tens of thousands of different symbols in use in the world! One byte is never going to be enough to represent them all, and that's why the Unicode standard was created.
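
To make this concrete, here is a small C# snippet (my addition, not part of the original answer) that prints the numeric code points of letters from a few alphabets; anything above 255 simply cannot fit in a single byte:

    using System;

    class CodePointDemo
    {
        static void Main()
        {
            // Characters from different alphabets and their Unicode code points.
            char latin    = 'A';   // U+0041
            char cyrillic = 'Ж';   // U+0416
            char arabic   = 'م';   // U+0645
            char chinese  = '中';  // U+4E2D

            Console.WriteLine((int)latin);    // 65    -- fits in one byte
            Console.WriteLine((int)cyrillic); // 1046  -- already needs two bytes
            Console.WriteLine((int)arabic);   // 1605
            Console.WriteLine((int)chinese);  // 20013
        }
    }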

Unicode has several representations (UTF-8, UTF-16, UTF-32...). .NET strings use UTF-16, which takes 2 bytes per character (per UTF-16 code unit, actually). Of course, even 2 bytes are still not enough to represent all the different symbols in the world; surrogate pairs are used to represent characters above U+FFFF.
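
Here is a minimal sketch (my addition) showing both points: sizeof(char) is 2 bytes, and a code point above U+FFFF, such as U+1D11E (the musical G clef), occupies two chars as a surrogate pair:

    using System;

    class SurrogateDemo
    {
        static void Main()
        {
            // A .NET char is a UTF-16 code unit: always 2 bytes.
            Console.WriteLine(sizeof(char)); // 2

            // U+1D11E is above U+FFFF, so it needs a surrogate pair.
            string clef = "\U0001D11E";
            Console.WriteLine(clef.Length);                   // 2 chars for one symbol
            Console.WriteLine(char.IsSurrogatePair(clef, 0)); // True
            Console.WriteLine(char.ConvertToUtf32(clef, 0));  // 119070 (0x1D11E)
        }
    }

Note that string.Length counts UTF-16 code units, not user-perceived characters, which is why the single clef symbol reports a length of 2.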

