# Bits and data types
This section introduces the fundamental concept of the bit as the basic unit of information in computers, explaining how bits represent values and exploring various data types for integers [1](#page=1).
### 1.1 The bit as the unit of information
Computers operate by controlling the movement of electrons, which is detected through the presence or absence of voltages in electronic circuits. The presence of a voltage is symbolically represented as "1" and its absence as "0"; each "0" or "1" is called a "bit" (binary digit). In practice the distinction is not absolute: computer circuits differentiate voltages close to 0 from voltages far from 0, with a threshold determining whether a voltage signifies a "1" or a "0" [1](#page=1).
#### 1.1.1 Representing values with bits
A single bit can represent only two distinct values. To represent a larger number of distinct values, multiple bits are combined. With $k$ bits, at most $2^k$ distinct items can be distinguished, with each unique pattern of $k$ bits forming a code for a particular value. For example, eight bits can represent $2^8 = 256$ different things [2](#page=2).
### 1.2 Data types
A representation of information is considered a **data type** if there are operations in the computer that can manipulate information encoded in that representation. Each instruction set architecture (ISA) defines its own set of data types and corresponding operations. Common data types include 2's complement integers for arithmetic operations and ASCII codes for character representation. Other data types, such as floating-point numbers used in scientific notation, are also supported by specific computer architectures [2](#page=2).
### 1.3 Integer data types
#### 1.3.1 Unsigned integers
Unsigned integers represent magnitudes without any associated sign (plus or minus). They are used for tasks like counting occurrences or identifying memory locations. Unsigned integers are represented using a positional notation similar to the decimal system, but with a base of 2. For example, with five bits, the number 5 is represented as $00101$, which translates to $0 \cdot 2^4 + 0 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0$. With $k$ bits, $2^k$ unsigned integers can be represented, ranging from 0 to $2^k - 1$ [2](#page=2) [3](#page=3).
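A minimal C sketch (ours, not from the text) of the positional rule above; the helper name `unsigned_value` is hypothetical and the bit pattern is passed as a string of '0'/'1' characters:

```c
#include <stdio.h>

/* Apply the positional rule: each bit contributes bit * 2^position. */
unsigned unsigned_value(const char *bits) {
    unsigned value = 0;
    for (const char *p = bits; *p; p++)
        value = value * 2 + (unsigned)(*p - '0');   /* shift in the next bit */
    return value;
}

int main(void) {
    printf("%u\n", unsigned_value("00101"));   /* prints 5 */
    return 0;
}
```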
#### 1.3.2 Signed integers
To perform arithmetic involving negative numbers, signed integer representations are necessary. With $k$ bits, half of the $2^k$ bit patterns can be allocated to positive integers (and zero) and half to negative integers. A common convention is that positive integers and zero have a leading '0' in their binary representation. Several methods exist for representing negative integers, including signed magnitude, 1's complement, and 2's complement [3](#page=3).
##### 1.3.2.1 Signed magnitude
In the signed-magnitude representation, a leading '1' signifies a negative integer, while a leading '0' signifies a positive integer or zero. The remaining bits represent the magnitude of the number [4](#page=4).
##### 1.3.2.2 One's complement
The 1's complement representation for a negative number is obtained by taking the bitwise complement (flipping all the bits) of the representation of its corresponding positive number. For example, if the representation of +5 is $00101$, the representation of -5 in 1's complement would be $11010$ [4](#page=4).
##### 1.3.2.3 Two's complement integers
The 2's complement data type is the most widely used representation for signed integers in modern computers due to its efficiency in hardware implementation for addition. This representation simplifies the design of arithmetic logic units (ALUs) [5](#page=5).
**Key properties of 2's complement:**
* **Positive Integers:** Positive integers are represented using the standard positional binary scheme, with a leading '0'. For a $k$-bit system, this covers integers from 0 to $2^{k-1} - 1$ [5](#page=5).
* **Negative Integers:** The representation for a negative integer $-A$ is chosen so that adding it to the representation of its positive counterpart $A$ yields the representation of 0 (with the carry out of the leftmost bit discarded). Equivalently, the representations are assigned so that adding 1 to the representation of any value produces the representation of the next larger value [5](#page=5) [6](#page=6).
* **Shortcut for Negation:** A quick way to find the 2's complement representation of a non-zero integer $-A$ is to:
1. Flip all the bits of the representation of $A$ (this is called the bitwise complement).
2. Add 1 to the complemented value [6](#page=6).
> **Example:** To find the 2's complement representation for $-13$ (using 5 bits):
> 1. The representation for $+13$ is $01101$ ($8+4+1$) [6](#page=6).
> 2. The complement of $01101$ is $10010$ [6](#page=6).
> 3. Adding 1 to $10010$ gives $10011$, which is the 2's complement representation for $-13$ [6](#page=6).
> We can verify this by adding $01101$ and $10011$, which results in $00000$; the carry out of the leftmost bit is ignored in 2's complement arithmetic [6](#page=6). A C sketch of this negation shortcut follows this list of properties.
* **Range:** For a $k$-bit system, 2's complement can represent integers from $-2^{k-1}$ to $2^{k-1} - 1$. For example, with 5 bits ($k=5$), the range is from $-2^{5-1} = -16$ to $2^{5-1} - 1 = +15$. The pattern $10000$ in a 5-bit system is assigned the value $-16$ [5](#page=5) [7](#page=7).
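As an illustrative C sketch (assuming the 5-bit examples above; the code and variable names are not from the text), the negation shortcut is simply "flip the bits and add 1", with a mask keeping only five bits:

```c
#include <stdio.h>

int main(void) {
    unsigned mask    = 0x1F;                     /* keep only 5 bits */
    unsigned plus13  = 0x0D;                     /* 01101 = +13 */
    unsigned minus13 = (~plus13 + 1u) & mask;    /* flip all bits, then add 1 */

    /* Print the 5-bit pattern of -13: expected 10011. */
    for (int i = 4; i >= 0; i--)
        putchar(((minus13 >> i) & 1u) ? '1' : '0');
    putchar('\n');

    /* +13 plus -13 is 0 once the carry out of bit 4 is discarded. */
    printf("%u\n", (plus13 + minus13) & mask);   /* prints 0 */
    return 0;
}
```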
### 1.4 Conversion between binary and decimal
Converting between 2's complement binary representations and familiar decimal numbers is a common operation [7](#page=7).
#### 1.4.1 Binary to decimal conversion
To convert a 2's complement binary number (e.g., an 8-bit number $b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0$) to decimal [7](#page=7):
1. **Check the leading bit ($b_7$):**
* If $b_7$ is '0', the integer is positive.
* If $b_7$ is '1', the integer is negative. In this case, first recover the magnitude's bit pattern by taking the 2's complement of the number: flip all the bits and add 1 [7](#page=7).
2. **Calculate the magnitude:** The magnitude is calculated by summing the powers of 2 for each bit that is '1', starting from $b_0$ up to $b_{k-2}$ (where $k$ is the total number of bits). For an 8-bit system ($k=8$), this is $b_6 \cdot 2^6 + b_5 \cdot 2^5 + b_4 \cdot 2^4 + b_3 \cdot 2^3 + b_2 \cdot 2^2 + b_1 \cdot 2^1 + b_0 \cdot 2^0$ [7](#page=7).
3. **Apply the sign:** If the original number was determined to be negative (leading bit was '1'), affix a minus sign to the calculated decimal magnitude [7](#page=7).
---
# Operations on bits
This section details the arithmetic and logical operations that can be performed on bits and binary patterns, covering addition, subtraction, sign extension, overflow detection, and fundamental logical operations [10](#page=10).
### 2.1 Arithmetic operations
Arithmetic operations on binary patterns, particularly those represented in two's complement, mirror decimal arithmetic in their step-by-step, right-to-left processing [11](#page=11).
#### 2.1.1 Addition and subtraction
Binary addition proceeds digit by digit, generating a sum and a carry. A carry is generated when the sum of bits exceeds 1, analogous to generating a carry after 9 in decimal arithmetic [11](#page=11).
Subtraction is performed by adding the two's complement of the subtrahend to the minuend [11](#page=11).
> **Example:** Adding 11 and 3 in five-bit notation [11](#page=11):
> ```
>   01011   (11)
> + 00011   ( 3)
> -------
>   01110   (14)
> ```
> **Example:** Subtracting 9 from 14 (14 - 9):
> ```
>   01110   (14)
> + 10111   (-9 in 2's complement)
> -------
>   00101   ( 5)
> ```
> The carry-out is ignored [11](#page=11).
Adding a number to itself ($x + x$) is equivalent to a left bit shift of the number's representation, provided the result fits within the available bits [11](#page=11).
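A small C sketch (ours, not the book's) of these two observations, working in five bits: subtraction is performed by adding the 2's complement of the subtrahend, and doubling a value matches a one-bit left shift:

```c
#include <stdio.h>

int main(void) {
    unsigned mask = 0x1F;                         /* work in 5 bits */
    unsigned a = 14, b = 9;

    /* Subtraction: add the 2's complement of the subtrahend, drop the carry. */
    unsigned diff = (a + ((~b + 1u) & mask)) & mask;
    printf("14 - 9 = %u\n", diff);                /* prints 5 */

    /* Adding a number to itself is a one-bit left shift while the result fits. */
    unsigned x = 11;
    printf("%u %u\n", (x + x) & mask, (x << 1) & mask);   /* both print 22 */
    return 0;
}
```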
#### 2.1.2 Sign extension
Sign extension is the process of replicating the most significant bit (the sign bit) of a binary representation to the left to increase the number of bits without changing the value. This is crucial for performing arithmetic operations on numbers represented with different bit lengths. Leading zeros do not affect the value of positive numbers, and leading ones do not affect the value of negative numbers in two's complement representation [12](#page=12).
> **Example:** Adding 13 and -5, where 13 is represented with 16 bits as 0000000000001101 and -5 with only six bits as 111011.
> To add these, -5 must be sign-extended to match the bit length of 13:
> ```
>   0000000000001101   ( 13)
> + 1111111111111011   ( -5, sign-extended to 16 bits)
> ------------------
>   0000000000001000   (  8)
> ```
> [12](#page=12).
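A hedged C sketch of sign extension; the helper `sign_extend` is a name we introduce, and the example reproduces the 13 + (-5) addition above:

```c
#include <stdio.h>
#include <stdint.h>

/* Sign-extend the low 'width' bits of 'value' to a full 16-bit integer
   by replicating the sign bit, as described above. */
int16_t sign_extend(uint16_t value, int width) {
    uint16_t sign_bit = (uint16_t)(1u << (width - 1));
    if (value & sign_bit)
        value |= (uint16_t)(0xFFFFu << width);   /* fill the high bits with 1s */
    return (int16_t)value;
}

int main(void) {
    int16_t minus5 = sign_extend(0x3B, 6);       /* 111011 -> -5 */
    printf("%d\n", 13 + minus5);                 /* prints 8 */
    return 0;
}
```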
#### 2.1.3 Overflow
Overflow occurs when the result of an arithmetic operation exceeds the range of values that can be represented by the available bits [13](#page=13).
* **Unsigned arithmetic:** Overflow is a carry-out of the most significant bit, similar to an odometer rolling over [13](#page=13).
* **Signed (two's complement) arithmetic:** Overflow is more subtle.
* Adding two positive numbers whose true sum exceeds the largest representable positive value causes overflow; the sign bit flips and the result appears negative [13](#page=13).
* Adding two negative numbers whose true sum is below the most negative representable value also causes overflow; the result appears non-negative [13](#page=13).
* The sum of a negative number and a positive number can never overflow, because the result always lies between the two operands and is therefore within range [14](#page=14).
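A minimal C check for these cases, assuming 8-bit two's complement values; the helper name `add_overflows` is our own, and the wrap-around cast reflects the behavior of two's complement hardware:

```c
#include <stdio.h>
#include <stdint.h>

/* Signed overflow test for 8-bit addition: overflow occurs exactly when the
   operands have the same sign but the wrapped sum's sign differs. */
int add_overflows(int8_t a, int8_t b) {
    /* Sum modulo 2^8, reinterpreted as a signed 8-bit value. */
    int8_t sum = (int8_t)(uint8_t)((uint8_t)a + (uint8_t)b);
    return ((a < 0) == (b < 0)) && ((sum < 0) != (a < 0));
}

int main(void) {
    printf("%d\n", add_overflows(100, 50));     /* 1: 150 exceeds +127          */
    printf("%d\n", add_overflows(-100, -50));   /* 1: -150 is below -128        */
    printf("%d\n", add_overflows(100, -50));    /* 0: mixed signs never overflow */
    return 0;
}
```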
### 2.2 Logical operations
Logical operations work on logical variables, which can take one of two values: 0 or 1, often representing false and true, respectively. These operations are performed bit-wise on binary patterns [14](#page=14).
#### 2.2.1 The AND function
The AND function is a binary logical function where the output is 1 only if both input bits are 1. Otherwise, the output is 0. It can be thought of as an "ALL" operation [14](#page=14).
Truth table for AND:
| A | B | A AND B |
|---|---|---------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
The bit-wise AND operation is applied to corresponding bits of two operands. A **bit mask** is a binary pattern used with the AND operation to isolate specific bits of another pattern: positions where the mask has a 1 pass through unchanged, while positions where the mask has a 0 are cleared (set to 0) [15](#page=15).
> **Example:** Isolating the rightmost two bits of an eight-bit pattern `A` using the bit mask `00000011`:
> If `A = 01010110`, then `A AND 00000011 = 00000010`.
> The bit mask highlights the bits of interest by zeroing out the others [15](#page=15).
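The same masking step, written as a short C sketch (illustrative only, not from the text):

```c
#include <stdio.h>

int main(void) {
    unsigned char a    = 0x56;   /* 01010110 */
    unsigned char mask = 0x03;   /* 00000011: keep only the rightmost two bits */

    unsigned char isolated = a & mask;   /* 00000010 */
    printf("0x%02X\n", isolated);        /* prints 0x02 */
    return 0;
}
```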
#### 2.2.2 The OR function
The OR function is a binary logical function where the output is 1 if at least one of the input bits is 1. The output is 0 only if both input bits are 0. It can be thought of as an "ANY" operation. This is also known as the **inclusive-OR** [15](#page=15).
Truth table for OR:
| A | B | A OR B |
|---|---|--------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 1 |
The bit-wise OR operation can be used to set bits in a pattern.
> **Example:** Making unit 5 available in a BUSYNESS bit vector using the bit mask `00100000`:
> If the current BUSYNESS is `01000010`, then `01000010 OR 00100000 = 01100010`.
> This sets the bit corresponding to unit 5 to 1, indicating availability [19](#page=19).
#### 2.2.3 The NOT function
The NOT function (also called complement or inversion) is a unary logical function that operates on a single input bit. It outputs the opposite of the input: 1 becomes 0, and 0 becomes 1 [16](#page=16).
Truth table for NOT:
| A | NOT A |
|---|-------|
| 0 | 1 |
| 1 | 0 |
#### 2.2.4 The Exclusive-OR function
The Exclusive-OR (XOR) function is a binary logical function where the output is 1 only if the two input bits are different. If both input bits are the same (both 0 or both 1), the output is 0 [16](#page=16).
Truth table for XOR:
| A | B | A XOR B |
|---|---|---------|
| 0 | 0 | 0 |
| 0 | 1 | 1 |
| 1 | 0 | 1 |
| 1 | 1 | 0 |
The XOR operation can be used to check if two bit patterns are identical: if their XOR is all zeros, they are identical [17](#page=17).
#### 2.2.5 DeMorgan’s Laws
DeMorgan's Laws describe relationships between AND, OR, and NOT operations. One law states that the complement of an AND is the OR of the complements of the operands: $\overline{A \text{ AND } B} = \overline{A} \text{ OR } \overline{B}$. In words: "it is not the case that both A and B are true" is equivalent to "at least one of A and B is false" [17](#page=17) [18](#page=18).
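A quick C check of this law on arbitrary 8-bit patterns (an illustration we added, not part of the text):

```c
#include <stdio.h>

int main(void) {
    unsigned char a = 0x56, b = 0x03;   /* arbitrary 8-bit patterns */

    /* DeMorgan: NOT(a AND b) == (NOT a) OR (NOT b), checked bit-wise. */
    unsigned char lhs = (unsigned char)~(a & b);
    unsigned char rhs = (unsigned char)(~a | ~b);
    printf("%s\n", lhs == rhs ? "DeMorgan holds" : "mismatch");
    return 0;
}
```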
#### 2.2.6 The bit vector
A bit vector is an m-bit pattern where each bit has a logical value (0 or 1) and can be operated on independently [18](#page=18).
* **Bit masks:** As discussed, bit vectors are commonly used as bit masks to selectively isolate or modify parts of another bit vector using AND or OR operations [15](#page=15) [18](#page=18).
* **System management:** Bit vectors can represent the status of multiple independent units (e.g., machines in a plant, taxicabs) where a bit indicates availability or busy status [18](#page=18) [19](#page=19) [1](#page=1).
* **Updating status:**
* To mark a unit as busy, an AND operation with a bit mask that has a 0 at the unit's bit position and 1s elsewhere can be used to clear that bit [19](#page=19).
* To mark a unit as available (free), an OR operation with a bit mask that has a 1 at the unit's bit position and 0s elsewhere can be used to set that bit [19](#page=19).
> **Example:** A BUSYNESS bit vector `11000010` indicates units 7, 6, and 1 are free.
> To assign work to unit 7, we AND with `01111111` (clearing bit 7): `11000010 AND 01111111 = 01000010`. Unit 7 is now busy [19](#page=19).
> If unit 5 becomes idle, we OR with `00100000` (setting bit 5): `01000010 OR 00100000 = 01100010`. Unit 5 is now available [19](#page=19).
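A short C sketch (our own, mirroring the BUSYNESS example above, with bit i standing for unit i and 1 meaning free) that marks unit 7 busy with AND, marks unit 5 free with OR, and uses XOR to compare status vectors:

```c
#include <stdio.h>

int main(void) {
    unsigned char busyness = 0xC2;           /* 11000010: units 7, 6, 1 free */

    busyness &= (unsigned char)~(1u << 7);   /* AND with 01111111: unit 7 now busy */
    printf("0x%02X\n", busyness);            /* 0x42 = 01000010 */

    busyness |= (unsigned char)(1u << 5);    /* OR with 00100000: unit 5 now free  */
    printf("0x%02X\n", busyness);            /* 0x62 = 01100010 */

    /* XOR can test whether two status vectors are identical. */
    printf("%s\n", (busyness ^ 0x62) == 0 ? "identical" : "different");
    return 0;
}
```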
---
# Other data representations
This section delves into alternative methods for encoding information within computers, moving beyond simple integers to include floating-point numbers, character representations, and a human-friendly shorthand for binary.
### 2.7.1 Floating point data type (greater range, less precision)
The floating-point data type is crucial for representing numbers that have a much wider range than standard integers, albeit with a sacrifice in precision. This is analogous to scientific notation used in fields like chemistry, where extremely large or small numbers (like Avogadro's number, $6.022 \times 10^{23}$ ) need to be expressed. Standard 16-bit two's complement integers are limited to a range of -32,768 to +32,767 and offer 15 bits of precision. Floating-point types address this by allocating bits to both the range (exponent field) and precision (fraction field), in addition to a sign bit [19](#page=19).
The most common floating-point format is the 32-bit single-precision type, which allocates bits as follows:
* 1 bit for the sign (positive or negative) [20](#page=20).
* 8 bits for the exponent (range) [20](#page=20).
* 23 bits for the fraction (precision) [20](#page=20).
This format is often visualized as:
$$ \underbrace{S}_{\text{1 bit (sign)}} \quad \underbrace{\text{exponent}}_{\text{8 bits}} \quad \underbrace{\text{fraction}}_{\text{23 bits}} $$

#### 2.7.1.1 Normalized form

Floating-point numbers are typically represented in a normalized form, similar to scientific notation, where the value is expressed as:

$$N = (-1)^S \times 1.\text{fraction} \times 2^{\text{exponent} - 127}, \quad 1 \le \text{exponent} \le 254$$

* **Sign bit (S):** A single bit where 0 indicates a positive number and 1 indicates a negative number [21](#page=21).
* **Exponent field:** The 8 exponent bits are encoded using an "excess" or "bias" code. For the IEEE standard 32-bit format, the bias is 127: the field is read as an unsigned integer and 127 is subtracted to obtain the actual exponent. For example, an exponent field of `10000110` (decimal 134) corresponds to an actual exponent of $134 - 127 = +7$, and an exponent field of `00000111` (decimal 7) corresponds to $7 - 127 = -120$. Normalized numbers therefore cover actual exponents from $-126$ to $+127$, i.e. scale factors from $2^{-126}$ to $2^{127}$ [21](#page=21).
* **Fraction field:** The 23 bits store the fractional part of the number. In normalized form, there is always a '1' to the left of the binary point. This leading '1' is implicitly assumed and not stored, giving an effective 24 bits of precision. The complete significand is thus `1.fraction` [21](#page=21).

> **Tip:** The "normalized form" means there is exactly one non-zero binary digit to the left of the binary point. For floating-point numbers, this non-zero digit is always '1'.

**Example 2.12:** The floating-point representation `00111101100000000000000000000000` [21](#page=21):
* Sign bit (S): 0 (positive) [21](#page=21).
* Exponent field: `01111011` (decimal 123). Actual exponent = $123 - 127 = -4$ [21](#page=21).
* Fraction field: All zeros.
* Result: $+1.00000000000000000000000 \times 2^{-4}$, which is $\frac{1}{16}$ [21](#page=21).

**Example 2.13:** Representing the number $-6 \frac{5}{8}$ in floating-point format [21](#page=21):
1. Convert to binary: $-110.101_2$.
2. Normalize: $-1.10101 \times 2^2$.
3. Sign bit: 1 (negative).
4. Exponent field: The actual exponent is +2. To get the exponent field value, we add the bias: $2 + 127 = 129$. In binary, this is `10000001` [21](#page=21).
5. Fraction field: Take the digits after the binary point from the normalized form: `10101`. Pad with zeros to 23 bits: `10101000000000000000000`.
6. Result: `11000000110101000000000000000000` [21](#page=21).

**Example 2.14:** Further illustrations of floating-point interpretation [22](#page=22):
* `0 10000011 00101000000000000000000` is $1.00101 \times 2^4 = 18.5$. (Exponent 131, $131-127=4$) [22](#page=22).
* `1 10000010 00101000000000000000000` is $-1 \times 1.00101 \times 2^3 = -9.25$. (Sign 1, Exponent 130, $130-127=3$) [22](#page=22).
* `0 11111110 11111111111111111111111` is approximately $2^{128}$. (Sign 0, Exponent 254, $254-127=127$. Fraction is all 1s, approximating 2) [22](#page=22).
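As an added illustration (not from the text), the field extraction for a normalized 32-bit pattern can be sketched in C; `0x3D800000` is the bit pattern from Example 2.12, and `ldexp` from `<math.h>` scales the significand by the unbiased exponent:

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Decode a 32-bit pattern into sign, exponent, and fraction fields and
   rebuild the value for the normalized case (1 <= exponent field <= 254). */
int main(void) {
    uint32_t bits = 0x3D800000u;   /* Example 2.12: 0 01111011 000...0 */

    uint32_t sign      = bits >> 31;
    uint32_t exp_field = (bits >> 23) & 0xFFu;
    uint32_t fraction  = bits & 0x7FFFFFu;

    /* significand = 1.fraction; actual exponent = exp_field - 127 (bias) */
    double significand = 1.0 + fraction / 8388608.0;   /* 2^23 = 8388608 */
    double value = (sign ? -1.0 : 1.0) * ldexp(significand, (int)exp_field - 127);

    printf("%g\n", value);   /* prints 0.0625, i.e. 1/16 */
    return 0;
}
```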
#### 2.7.1.2 Infinities

When the exponent field is all 1s (`11111111`) and the fraction field is all 0s, the floating-point representation denotes infinity [22](#page=22).
* Positive infinity: Sign bit is 0.
* Negative infinity: Sign bit is 1.

#### 2.7.1.3 Subnormal numbers

Numbers smaller than the smallest representable normalized number ($2^{-126}$) but larger than zero are called subnormal numbers. These numbers cannot be represented in the standard normalized form [22](#page=22).
* The largest subnormal number is $0.11111111111111111111111 \times 2^{-126}$ [22](#page=22).
* The smallest subnormal number is $2^{-149}$ [22](#page=22).

Subnormal numbers are represented with an exponent field of all 0s (`00000000`). In this case, the exponent is taken to be $-126$, and the significand is formed by a leading 0 followed by the fraction field (i.e., $0.\text{fraction}$) [23](#page=23).

**Example 2.15:** The floating-point representation `0 00000000 00001000000000000000000` [23](#page=23):
* Sign bit: 0 (positive) [23](#page=23).
* Exponent field: `00000000` (zero exponent) indicates the actual exponent is -126, and the leading bit of the significand is 0 [23](#page=23).
* Fraction field: `00001000000000000000000`, which represents $2^{-5}$.
* Result: $0.00001 \times 2^{-126} = 2^{-5} \times 2^{-126} = 2^{-131}$ [23](#page=23).

> **Tip:** While a deep dive into IEEE floating-point arithmetic is advanced, understanding its existence is key. It allows for a vast range of numbers (both very large and very small) at the cost of reduced precision compared to integers.

### 2.7.2 ASCII codes

ASCII (American Standard Code for Information Interchange) is a standardized 8-bit code used for representing characters in computers and for transferring character information between input/output devices and the CPU. This standard simplifies compatibility between hardware from different manufacturers [23](#page=23).

Each character, including letters, digits, and control keys, has a unique ASCII code. For example:
* The digit '3' is `00110011` [23](#page=23).
* The digit '2' is `00110010` [23](#page=23).
* The lowercase 'e' is `01100101` [23](#page=23).
* The ENTER key is `00001101` [23](#page=23).

Most keys have multiple codes, for instance, differentiating between uppercase and lowercase letters (e.g., 'E' is `01000101`, while 'e' is `01100101`) [23](#page=23).

> **Tip:** When you type a character on a keyboard, its corresponding 8-bit ASCII code is generated and sent to the computer. When the computer needs to display a character, it sends the ASCII code to the monitor.

### 2.7.3 Hexadecimal notation

Hexadecimal notation, or base 16, is a human-friendly shorthand for representing long binary strings. It is particularly useful for dealing with 16-bit binary strings, common in systems like the LC-3 [24](#page=24).

The system assigns a unique symbol to each possible 4-bit binary combination (0 to 15):
* `0000` = 0
* `0001` = 1
* ...
* `1001` = 9
* `1010` = A
* `1011` = B
* `1100` = C
* `1101` = D
* `1110` = E
* `1111` = F

This allows a 16-bit binary string to be represented by just four hexadecimal digits, reducing the number of characters needed by a factor of four [25](#page=25).

**Example:** The binary string `0011110101101110` can be broken into 4-bit chunks: `0011 1101 0110 1110`. Converting each chunk to its hexadecimal equivalent gives `3D6E` [24](#page=24) [25](#page=25).

> **Tip:** Hexadecimal notation significantly reduces the likelihood of copying errors when transcribing or working with long binary sequences. It can represent integers, floating-point numbers, ASCII codes, or bit vectors, serving primarily as a convenience for humans rather than a data type for computer operations [25](#page=25).
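A tiny C sketch (ours) showing that each hexadecimal digit corresponds to one 4-bit chunk; `0x3D6E` is the pattern from the example above:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The 16-bit pattern 0011 1101 0110 1110 from the example above. */
    uint16_t word = 0x3D6E;

    /* Each hex digit covers exactly four bits, so %04X groups the
       binary string into 4-bit chunks automatically. */
    printf("%04X\n", word);   /* prints 3D6E */

    /* The same grouping done by hand, one 4-bit nibble at a time. */
    const char digits[] = "0123456789ABCDEF";
    for (int shift = 12; shift >= 0; shift -= 4)
        putchar(digits[(word >> shift) & 0xF]);
    putchar('\n');
    return 0;
}
```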
---
# Conversion between binary and decimal

This section details the methods for converting numbers between their binary (2's complement) and decimal representations, including handling fractional parts [7](#page=7).

### 4.1 Binary to decimal conversion

To convert a 2's complement binary integer to its decimal representation, follow these steps, assuming an eight-bit representation for illustration (which covers decimal integers from -128 to +127) [7](#page=7):
1. **Examine the leading bit ($b_7$):**
   * If the leading bit is a `0`, the integer is positive.
   * If the leading bit is a `1`, the integer is negative. In this case, first recover the magnitude's bit pattern by taking the 2's complement of the number: flip all the bits and add 1 [7](#page=7).
2. **Calculate the magnitude:** The magnitude is calculated by summing the powers of 2 for which the corresponding bit is `1`. For an eight-bit number $b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0$, the magnitude is given by:
   $$ \text{Magnitude} = b_6 \cdot 2^6 + b_5 \cdot 2^5 + b_4 \cdot 2^4 + b_3 \cdot 2^3 + b_2 \cdot 2^2 + b_1 \cdot 2^1 + b_0 \cdot 2^0 $$
3. **Apply the sign:** If the original binary number was determined to be negative in step 1, attach a minus sign to the calculated magnitude [7](#page=7).

> **Example:** Convert the 2's complement integer `11000111` to a decimal integer value [8](#page=8).
>
> 1. The leading bit is `1`, so the number is negative. To find the magnitude, we flip the bits (`00111000`) and add 1 (`00111001`).
> 2. The magnitude is calculated as:
>    $0 \cdot 2^6 + 1 \cdot 2^5 + 1 \cdot 2^4 + 1 \cdot 2^3 + 0 \cdot 2^2 + 0 \cdot 2^1 + 1 \cdot 2^0$
>    $= 0 + 32 + 16 + 8 + 0 + 0 + 1 = 57$ [8](#page=8).
> 3. Since the original number was negative, the decimal value is $-57$ [8](#page=8).

#### 4.1.1 Conversion of fractional parts

For numbers with fractional parts, the binary to decimal conversion is straightforward. Bits to the right of the binary point represent negative powers of 2. For a fractional part $0.b_{-1}b_{-2}b_{-3}b_{-4}\dots$, the decimal value is calculated by summing the values where the corresponding bit is 1 [9](#page=9):

$$ \text{Fractional Value} = b_{-1} \cdot 2^{-1} + b_{-2} \cdot 2^{-2} + b_{-3} \cdot 2^{-3} + b_{-4} \cdot 2^{-4} + \dots $$

> **Example:** Convert the binary fractional part `.1011` to decimal.
>
> The value is $1 \cdot 2^{-1} + 0 \cdot 2^{-2} + 1 \cdot 2^{-3} + 1 \cdot 2^{-4}$
> $= 0.5 + 0 + 0.125 + 0.0625 = 0.6875$ [9](#page=9).
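A compact C sketch of the three integer-conversion steps above (the helper name `twos_complement_to_decimal` is ours; the pattern is passed as a '0'/'1' string), reproducing the $-57$ example:

```c
#include <stdio.h>

/* Convert an 8-bit 2's complement pattern (given as a '0'/'1' string)
   to a decimal value, following the steps in Section 4.1. */
int twos_complement_to_decimal(const char *bits) {
    unsigned value = 0;
    for (const char *p = bits; *p; p++)
        value = value * 2 + (unsigned)(*p - '0');

    if (bits[0] == '1') {
        /* Negative: take the 2's complement to recover the magnitude. */
        unsigned magnitude = (~value + 1u) & 0xFFu;
        return -(int)magnitude;
    }
    return (int)value;
}

int main(void) {
    printf("%d\n", twos_complement_to_decimal("11000111"));   /* prints -57 */
    printf("%d\n", twos_complement_to_decimal("01101001"));   /* prints 105 */
    return 0;
}
```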
### 4.2 Decimal to binary conversion

Converting from decimal to 2's complement binary is more involved. The method relies on the parity of the number (whether it is odd or even) to determine the bits [8](#page=8). For a generic eight-bit representation $b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0$, the value is represented as:

$$ b_7 \cdot 2^7 + b_6 \cdot 2^6 + b_5 \cdot 2^5 + b_4 \cdot 2^4 + b_3 \cdot 2^3 + b_2 \cdot 2^2 + b_1 \cdot 2^1 + b_0 \cdot 2^0 $$

The process to determine the bits ($b_i$) for a decimal integer $N$ is as follows [9](#page=9):
1. **Form the equation for the magnitude:** Start by forming the equation $N = b_k \cdot 2^k + \dots + b_1 \cdot 2^1 + b_0 \cdot 2^0$, where $k$ is the highest power of 2 less than or equal to $N$.
2. **Determine bits iteratively:** Repeat the following steps until the left side of the equation becomes 0:
   * If $N$ is odd, the rightmost remaining bit is `1`. If $N$ is even, the rightmost remaining bit is `0` [8](#page=8).
   * Subtract the determined bit (0 or 1) from $N$.
   * Remove the least significant term (e.g., $b_0 \cdot 2^0$) from the right side of the equation.
   * Divide both sides of the equation by 2. This effectively shifts the powers of 2, and the next iteration determines the next bit to the left [9](#page=9).
3. **Handle positive numbers:** If the original decimal number was positive, attach a leading `0` as the sign bit to complete the 2's complement representation [9](#page=9).
4. **Handle negative numbers:** If the original decimal number was negative, attach a leading `0` to the binary representation of its magnitude, and then form the 2's complement of this result to obtain the final representation [9](#page=9).

> **Example:** Convert the decimal value +105 to a 2's complement binary code (using 8 bits) [8](#page=8).
>
> We want to find $b_i$ such that $105 = b_6 \cdot 2^6 + b_5 \cdot 2^5 + b_4 \cdot 2^4 + b_3 \cdot 2^3 + b_2 \cdot 2^2 + b_1 \cdot 2^1 + b_0 \cdot 2^0$ [8](#page=8).
>
> * 105 is odd, so $b_0 = 1$. Subtract 1: $104 = b_6 \cdot 2^6 + \dots + b_1 \cdot 2^1$. Divide by 2: $52 = b_6 \cdot 2^5 + \dots + b_1 \cdot 2^0$ [8](#page=8).
> * 52 is even, so $b_1 = 0$. Subtract 0: $52 = b_6 \cdot 2^5 + \dots + b_2 \cdot 2^1$. Divide by 2: $26 = b_6 \cdot 2^4 + \dots + b_2 \cdot 2^0$ [8](#page=8).
> * 26 is even, so $b_2 = 0$. Subtract 0: $26 = b_6 \cdot 2^4 + \dots + b_3 \cdot 2^1$. Divide by 2: $13 = b_6 \cdot 2^3 + \dots + b_3 \cdot 2^0$ [9](#page=9).
> * 13 is odd, so $b_3 = 1$. Subtract 1: $12 = b_6 \cdot 2^3 + b_5 \cdot 2^2 + b_4 \cdot 2^1$. Divide by 2: $6 = b_6 \cdot 2^2 + b_5 \cdot 2^1 + b_4 \cdot 2^0$ [9](#page=9).
> * 6 is even, so $b_4 = 0$. Subtract 0: $6 = b_6 \cdot 2^2 + b_5 \cdot 2^1$. Divide by 2: $3 = b_6 \cdot 2^1 + b_5 \cdot 2^0$ [9](#page=9).
> * 3 is odd, so $b_5 = 1$. Subtract 1: $2 = b_6 \cdot 2^1$. Divide by 2: $1 = b_6 \cdot 2^0$ [9](#page=9).
> * 1 is odd, so $b_6 = 1$. After the final subtraction the left side reaches 0 and the process stops [9](#page=9).
>
> The binary representation of the magnitude is `1101001`. Since the original number was positive, we attach a leading `0` for the sign bit: `01101001` [9](#page=9).
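The odd/even procedure above, sketched in C for 8-bit values (the helper name `decimal_to_twos_complement` is ours); it reproduces the +105 example and, for negative inputs, applies step 4 by negating the magnitude's pattern:

```c
#include <stdio.h>

/* Produce the 8-bit 2's complement pattern of a decimal value in [-128, 127]:
   the parity of N gives the rightmost bit, then N is halved and the process
   repeats, as in Section 4.2. */
void decimal_to_twos_complement(int n, char out[9]) {
    unsigned magnitude = (n < 0) ? (0u - (unsigned)n) : (unsigned)n;
    unsigned bits = 0;
    for (int i = 0; i < 8; i++) {
        bits |= (magnitude % 2) << i;   /* odd -> 1, even -> 0 */
        magnitude /= 2;                 /* shift the powers of 2 */
    }
    if (n < 0)
        bits = (~bits + 1u) & 0xFFu;    /* negate: flip all bits and add 1 */
    for (int i = 0; i < 8; i++)
        out[i] = ((bits >> (7 - i)) & 1u) ? '1' : '0';
    out[8] = '\0';
}

int main(void) {
    char buf[9];
    decimal_to_twos_complement(105, buf);
    printf("%s\n", buf);   /* prints 01101001 */
    decimal_to_twos_complement(-57, buf);
    printf("%s\n", buf);   /* prints 11000111 */
    return 0;
}
```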
#### 4.2.1 Conversion of fractional parts

To convert a decimal fraction to binary, a different iterative process is used [9](#page=9). Suppose we want to convert a decimal fraction $F$ to binary. We form the equation:

$$ F = b_{-1} \cdot 2^{-1} + b_{-2} \cdot 2^{-2} + b_{-3} \cdot 2^{-3} + \dots $$

The process is as follows:
1. **Multiply by 2:** Multiply both sides of the equation by 2.
2. **Determine the bit:**
   * If the left side of the equation is greater than or equal to 1, the next bit to the right of the binary point is `1`.
   * If the left side of the equation is less than 1, the bit is `0` [9](#page=9).
3. **Adjust the equation:** If a `1` was assigned, subtract 1 from the left side of the equation to isolate the remaining fractional part.
4. **Continue:** Repeat steps 1-3 until the left side becomes 0, or until the desired precision is reached [10](#page=10).

> **Example:** Convert the decimal fraction 0.421 to binary [10](#page=10).
>
> We start with the equation $0.421 = b_{-1} \cdot 2^{-1} + b_{-2} \cdot 2^{-2} + \dots$ [10](#page=10).
>
> * Multiply by 2: $0.842 = b_{-1} \cdot 2^0 + b_{-2} \cdot 2^{-1} + \dots$. Since $0.842 < 1$, $b_{-1} = 0$. The remaining equation is $0.842 = b_{-2} \cdot 2^{-1} + b_{-3} \cdot 2^{-2} + \dots$ [10](#page=10).
> * Multiply by 2: $1.684 = b_{-2} \cdot 2^0 + b_{-3} \cdot 2^{-1} + \dots$. Since $1.684 \geq 1$, $b_{-2} = 1$. Subtract 1: $0.684 = b_{-3} \cdot 2^{-1} + b_{-4} \cdot 2^{-2} + \dots$ [10](#page=10).
> * Multiply by 2: $1.368 = b_{-3} \cdot 2^0 + b_{-4} \cdot 2^{-1} + \dots$. Since $1.368 \geq 1$, $b_{-3} = 1$. Subtract 1: $0.368 = b_{-4} \cdot 2^{-1} + b_{-5} \cdot 2^{-2} + \dots$ [10](#page=10).
> * Multiply by 2: $0.736 = b_{-4} \cdot 2^0 + \dots$. Since $0.736 < 1$, $b_{-4} = 0$ [10](#page=10).
>
> Stopping at four bits, the binary representation of 0.421 decimal is approximately 0.0110 [10](#page=10).

---

## Common mistakes to avoid

- Review all topics thoroughly before exams
- Pay attention to formulas and key definitions
- Practice with examples provided in each section
- Don't memorize without understanding the underlying concepts
# Glossary
| Term | Definition |
|------|------------|
| Bit | The fundamental unit of information in computing, representing a binary digit which can have one of two values: 0 or 1. |
| Data Type | A classification of data that tells the compiler or interpreter how the programmer intends to use the data. It defines the set of values that an object can hold and the set of operations that can be performed on it. |
| Unsigned Integer | A numerical data type that can only represent non-negative integers. It does not have a sign bit, allowing for a larger range of positive values compared to signed integers of the same bit length. |
| Signed Magnitude | A representation for signed integers where one bit (typically the most significant bit) indicates the sign (0 for positive, 1 for negative), and the remaining bits represent the magnitude of the number. |
| 1's Complement | A representation for signed integers where negative numbers are formed by inverting (complementing) all the bits of their positive counterpart. This method has two representations for zero. |
| 2's Complement | The most common representation for signed integers in computers. Negative numbers are formed by inverting all bits of the positive number and adding 1. This method has a unique representation for zero and simplifies arithmetic operations. |
| Positional Notation | A system of representing numbers where the value of a digit depends on its position within the number. Examples include decimal (base-10) and binary (base-2) systems. |
| Floating Point Data Type | A data type used to represent real numbers that can have a fractional part. It is typically composed of a sign bit, an exponent field, and a fraction field, allowing for a wide range of values but with limited precision. |
| ASCII Codes | A character encoding standard where characters (letters, numbers, symbols) are mapped to unique 8-bit binary codes. It facilitates the exchange of information between different computer systems and devices. |
| Hexadecimal Notation | A base-16 number system that uses digits 0-9 and letters A-F to represent values. It is often used as a more human-readable shorthand for binary numbers, as each hexadecimal digit corresponds to exactly four binary digits. |
| Arithmetic and Logic Unit (ALU) | A fundamental digital circuit within a computer processor that performs arithmetic and bitwise logical operations on integer binary numbers. |
| Sign Extension | The process of increasing the number of bits used to represent a signed number without changing its value. For positive numbers, leading zeros are added, while for negative numbers (in 2's complement), leading ones are replicated. |
| Overflow | An error condition that occurs when the result of an arithmetic operation falls outside the range of values that can be represented by the available number of bits for that data type. |
| Bit Mask | A binary pattern used in logical operations (like AND) to select or isolate specific bits within another binary pattern, effectively masking out or ignoring certain bits. |
| Bit Vector | A sequence of bits where each bit represents a distinct property or state. It is often used for efficient representation and manipulation of sets of flags or status indicators. |
| Logical Operations | Operations that manipulate binary values (bits) based on Boolean logic. Common examples include AND, OR, NOT, and XOR. |
| Normalized Form | A standard way of representing floating-point numbers where the most significant digit before the decimal point is always non-zero (in binary, this is always 1). This ensures a unique representation for each number and maximizes precision. |
| Subnormal Numbers | Floating-point numbers that are too small to be represented in normalized form. They are represented with a leading zero before the binary point and a special exponent value, allowing for representation of values closer to zero at the cost of reduced precision. |