Not sure I understand. You're saying that if I wanted a 32-bit data type, and my only available data type was unsigned char, I could build a 32-bit data type by defining a struct like the following?
Yup, but that would only get you like 1% of the way there (defining the struct, that is; the bulk of the work is in defining the operations).
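For concreteness, a minimal sketch of what such a struct might look like (the field names and byte order here are assumptions, since the asker's struct isn't shown):

    /* Hypothetical 32-bit unsigned type built from four unsigned chars,
       stored most-significant byte first (assumes CHAR_BIT == 8). */
    struct u32 {
        unsigned char b3;   /* most significant byte  */
        unsigned char b2;
        unsigned char b1;
        unsigned char b0;   /* least significant byte */
    };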
And so, if I wanted to store the integer 2^32-1, I'd fill in each of these fields with 255?
Exactly: (255*256**3) + (255*256**2) + (255*256**1) + (255*256**0) == (1*2**31) + (1*2**30) + ... + (1*2**0) == 2**32-1 (brackets for readability).
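A quick way to sanity-check that arithmetic, as a sketch (assumes the bytes are taken most-significant first):

    #include <stdio.h>

    int main(void)
    {
        /* Four bytes, all 255, interpreted most-significant byte first. */
        unsigned char b[4] = { 255, 255, 255, 255 };
        unsigned long v = 0;
        for (int i = 0; i < 4; i++)
            v = v * 256 + b[i];        /* same sum as above, in Horner form */
        printf("%lu\n", v);            /* prints 4294967295, i.e. 2^32 - 1 */
        return 0;
    }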
Theoretically, I can extend this to a data type of unlimited bit length (...)
Yep, but if you're looking for arbitrary precision, then it's worth studying something like GMP.
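To give a feel for what GMP offers, a minimal sketch (assumes libgmp is installed; link with -lgmp):

    #include <gmp.h>
    #include <stdio.h>

    int main(void)
    {
        mpz_t x;                      /* arbitrary-precision integer */
        mpz_init(x);
        mpz_ui_pow_ui(x, 2, 256);     /* x = 2^256 */
        mpz_sub_ui(x, x, 1);          /* x = 2^256 - 1 */
        gmp_printf("%Zd\n", x);       /* prints all 78 decimal digits */
        mpz_clear(x);
        return 0;
    }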
(...) but various functions from the standard library won't work, and I'll have to rewrite them myself (...)
That's right.
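For illustration, here's the kind of operation you'd end up writing yourself: addition with manual carry propagation, byte by byte. The struct layout and function name are hypothetical, and CHAR_BIT == 8 is assumed:

    struct u32 {
        unsigned char b3, b2, b1, b0;   /* b3 = most significant byte */
    };

    struct u32 u32_add(struct u32 x, struct u32 y)
    {
        struct u32 r;
        unsigned int carry = 0, t;

        t = x.b0 + y.b0 + carry;  r.b0 = (unsigned char)(t & 0xFF);  carry = t >> 8;
        t = x.b1 + y.b1 + carry;  r.b1 = (unsigned char)(t & 0xFF);  carry = t >> 8;
        t = x.b2 + y.b2 + carry;  r.b2 = (unsigned char)(t & 0xFF);  carry = t >> 8;
        t = x.b3 + y.b3 + carry;  r.b3 = (unsigned char)(t & 0xFF);  /* final carry dropped: wraps mod 2^32 */

        return r;
    }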
(...) whereas in C#, if I'm not mistaken, such capabilities already exist.
Yup, I'm not a big C# guy, but it has BigInteger (in the System.Numerics namespace, since .NET Framework 4.0, I think).
I'm just curious why I can't define a 256-bit or x-bit data type in C.
That's coming in C23. I'm not sure what value BITINT_MAXWIDTH will take on most compilers, but assuming it's large enough, you'll be able to write _BitInt(256), or unsigned _BitInt(256), etc.
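A sketch of how that might look, assuming a compiler with C23 _BitInt support (recent Clang, for example) and a BITINT_MAXWIDTH of at least 256:

    #include <stdio.h>

    int main(void)
    {
        unsigned _BitInt(256) x = 1;
        x <<= 200;                    /* well past 64 bits, no overflow */
        x -= 1;                       /* x = 2^200 - 1 */
        /* As far as I know there's no printf conversion for arbitrary
           _BitInt widths, so narrow the value for display. */
        printf("low 64 bits: %llu\n", (unsigned long long)x);
        return 0;
    }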
I can't really say that I'm a fan of this approach (especially in a systems programming language like C). Every time something is made easier for people, there's a corresponding drop in the skill level of the average practitioner. I'm not a sadist, but I do think there's harm in a situation where programmers can make heavier and heavier lifts, but have less and less idea of how things actually work.