A bit is the basic unit of information in computing and digital communications. A bit can have only one of two values, and may therefore be physically implemented with a two-state device. The most common representations of these values are 0 and 1. The term bit is a contraction of binary digit.
The two values can also be interpreted as logical values (true/false, yes/no), algebraic signs (+/−), activation states (on/off), or any other two-valued attribute. The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. The length of a binary number may be referred to as its bit-length.
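Because n bits can distinguish 2^n values, the bit-length of a number is the minimum number of bits needed to write it in binary. A minimal Python sketch of this idea (the variable names here are illustrative, not from the original text):

```python
# A bit holds one of two values, so n bits can represent 2**n distinct values.
# Python's int.bit_length() returns the bit-length of an integer: the minimum
# number of bits needed to represent it in binary, excluding sign and leading zeros.
value = 255
print(bin(value))          # binary representation: 0b11111111
print(value.bit_length())  # bit-length: 8

# 5 in binary is 101, which needs 3 bits.
print((5).bit_length())    # 3
```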
The byte is a unit of digital information in computing and telecommunications that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer, and for this reason it is the smallest addressable unit of memory in many computer architectures. The size of the byte was historically hardware-dependent, and no definitive standard mandated its size. The de facto standard of eight bits is a convenient power of two, permitting the values 0 through 255 for one byte.
The unit octet was defined to explicitly denote a sequence of 8 bits because of the ambiguity associated at the time with the byte.
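The 0-through-255 range of an eight-bit byte follows from 2^8 = 256 possible bit patterns. A short sketch, using Python's built-in bytes type to illustrate the range:

```python
# An 8-bit byte has 2**8 = 256 possible bit patterns,
# giving the value range 0 through 255.
BITS_PER_BYTE = 8
num_values = 2 ** BITS_PER_BYTE
print(num_values)        # 256

# Python's bytes type enforces this range for each element:
b = bytes([0, 127, 255])  # all valid single-byte values
print(list(b))            # [0, 127, 255]

try:
    bytes([256])          # 256 does not fit in one byte
except ValueError as err:
    print("out of range:", err)
```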
The term kilobyte and the symbol KB have historically been used in the fields of computer science and information technology to refer to 1024 (2^10) bytes. The megabyte (symbol MB, sometimes abbreviated Mbyte) is a multiple of the unit byte for digital information storage or transmission, generally equal to 1,048,576 (2^20) bytes for computer memory.
1 Kilobyte = 1024 Bytes = 2^10 B
1 Megabyte = 1024 Kilobytes = 2^10 × 2^10 B = 2^20 B
1 Gigabyte = 1024 Megabytes = 2^10 × 2^10 × 2^10 B = 2^30 B
1 Terabyte = 1024 Gigabytes = 2^10 × 2^10 × 2^10 × 2^10 B = 2^40 B
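The unit ladder above can be verified with a short sketch; each binary unit is 1024 times the previous one:

```python
# Binary (power-of-two) byte units, as listed above.
KB = 2 ** 10   # 1 Kilobyte = 1024 bytes
MB = KB * 1024 # 1 Megabyte = 2**20 bytes
GB = MB * 1024 # 1 Gigabyte = 2**30 bytes
TB = GB * 1024 # 1 Terabyte = 2**40 bytes

print(KB)  # 1024
print(MB)  # 1048576
print(GB)  # 1073741824
print(TB)  # 1099511627776
```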