Search Results - SC2 Mafia

  1. Forum: General Discussion

     Thread: Ultra nerdy question

     Thread Author: Helz

     Post Author: DJarJar

     Replies: 8 | Views: 1,063

    Re: Ultra nerdy question

    In programming you generally use binary (base 2), hexadecimal (base 16), and decimal (base 10), depending on the task.
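    For example, most languages let you write the same value in any of these bases. A quick sketch in Python (chosen here just for illustration):

        # One value, three notations -- they are all the same number.
        value_dec = 255          # decimal (base 10)
        value_hex = 0xFF         # hexadecimal (base 16)
        value_bin = 0b11111111   # binary (base 2)

        print(value_dec == value_hex == value_bin)  # True
        print(bin(255), hex(255))                   # 0b11111111 0xff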

    I haven't seen anybody using octal (base 8) at all. Maybe what you are referring to is simply that a byte is 8 bits.

    A bit is a single 0 or 1, which you can think of, in a primitive machine, as literally telling the machine that a certain switch should be flipped on or off.

    As oops said, a byte is the smallest unit of memory large enough to store an individual character. Bytes could have been made 10 bits instead of 8 so that calculations would be easier for us as humans, but that would be a huge waste of storage space.
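
    A minimal Python sketch of that idea (assuming plain ASCII characters):

        # The character 'A' fits in a single 8-bit byte.
        code = ord('A')                    # 65, the character's numeric code
        print(format(code, '08b'))         # 01000001 -- the 8 bits of that byte
        print(len('A'.encode('ascii')))    # 1 -- it occupies exactly one byte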



    Maybe the 8 vs. 10 thing you're talking about is the difference between the two conventions for memory-size prefixes. In most scientific fields, kilo means 1,000, mega 1 million, giga 1 billion, and so on. But in computing, KB can mean either 1,000 (10^3) bytes or 1,024 (2^10) bytes. Windows uses 1024, while the hard drive companies use 1000. That's why your 1 TB (10^12 bytes) hard drive ends up being about 931 GB.

    1,000,000,000,000 / 1024 / 1024 / 1024 ≈ 931.3
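
    The same arithmetic, as a quick Python sketch:

        # 1 TB as the drive makers count it, converted to "Windows GB" (GiB).
        tb = 10**12
        gib = tb / 1024**3    # divide by 2^30
        print(round(gib, 1))  # 931.3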

    A convention was introduced to fix this ambiguity: KB (kilobyte) would mean 1,000 bytes and KiB (kibibyte) would mean 1,024 bytes. Unfortunately, this convention is unlikely to be universally adopted anytime soon, because it would basically be suicidal for a hard drive company to start selling a 1 TiB drive (which is bigger, and hence should be more expensive, than a 1 TB drive) and try to convince the average consumer to buy their more expensive drive over the competitor's.
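
    To make the two conventions concrete, here is a small helper (a sketch of my own; the function name and unit lists are hypothetical, not from any library) that prints a byte count both ways:

        # Hypothetical helper: print a byte count under both conventions.
        def show_size(num_bytes):
            si_units  = ['B', 'KB', 'MB', 'GB', 'TB']      # powers of 1000
            iec_units = ['B', 'KiB', 'MiB', 'GiB', 'TiB']  # powers of 1024
            si, i = float(num_bytes), 0
            while si >= 1000 and i < len(si_units) - 1:
                si /= 1000
                i += 1
            iec, j = float(num_bytes), 0
            while iec >= 1024 and j < len(iec_units) - 1:
                iec /= 1024
                j += 1
            print(f"{si:.1f} {si_units[i]} == {iec:.1f} {iec_units[j]}")

        show_size(10**12)  # 1.0 TB == 931.3 GiB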