In a computer, an integer is represented as a binary number, a sequence of bits (digits which are either zero or one). A bitwise operation acts on the individual bits of such a sequence. For example, shifting moves the whole sequence left or right one or more places, reproducing the same pattern "moved over".
The bitwise operations in Emacs Lisp apply only to integers.
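The bit patterns shown in the examples below can also be typed directly, using Emacs Lisp's binary read syntax #b.... As a minimal illustration of a pattern and the same pattern "moved over" one place:

#b00000101       ; the bit pattern 0000 0101, written out in binary
     => 5
#b00001010       ; the same pattern moved one place to the left
     => 10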
lsh, which is an abbreviation for logical shift, shifts the bits in integer1 to the left count places, or to the right if count is negative, bringing zeros into the vacated bits. If count is negative, lsh shifts zeros into the leftmost (most-significant) bit, producing a positive result even if integer1 is negative. Contrast this with ash, below.
Here are two examples of lsh, shifting a pattern of bits one place to the left. We show only the low-order eight bits of the binary pattern; the rest are all zero.
(lsh 5 1)
     => 10
;; Decimal 5 becomes decimal 10.
00000101 => 00001010

(lsh 7 1)
     => 14
;; Decimal 7 becomes decimal 14.
00000111 => 00001110
As the examples illustrate, shifting the pattern of bits one place to the left produces a number that is twice the value of the previous number.
Shifting a pattern of bits two places to the left produces results like this (with 8-bit binary numbers):
(lsh 3 2)
     => 12
;; Decimal 3 becomes decimal 12.
00000011 => 00001100
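More generally, so long as no significant bits are shifted out (see the discussion of overflow below), shifting count places to the left multiplies the value by 2 raised to the count power. Here is a quick sketch of that equivalence, using expt to compute the power of two:

(lsh 3 4)
     => 48         ; = 3 * 2^4
(= (lsh 3 4) (* 3 (expt 2 4)))
     => t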
On the other hand, shifting one place to the right looks like this:
(lsh 6 -1)
     => 3
;; Decimal 6 becomes decimal 3.
00000110 => 00000011

(lsh 5 -1)
     => 2
;; Decimal 5 becomes decimal 2.
00000101 => 00000010
As the examples illustrate, shifting one place to the right divides the value of a positive integer by two, rounding downward.
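For an odd positive integer, the low-order bit is simply discarded, which is what "rounding downward" means here; the result matches ordinary integer division by two, as this small sketch shows:

(lsh 7 -1)
     => 3          ; 00000111 => 00000011
(= (lsh 7 -1) (/ 7 2))
     => t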
The function lsh, like all Emacs Lisp arithmetic functions, does not check for overflow, so shifting left can discard significant bits and change the sign of the number. For example, left shifting 134,217,727 produces -2 on a 28-bit machine:
(lsh 134217727 1)          ; left shift
     => -2
In binary, in the 28-bit implementation, the argument looks like this:
;; Decimal 134,217,727
0111 1111 1111 1111 1111 1111 1111
which becomes the following when left shifted:
;; Decimal -2
1111 1111 1111 1111 1111 1111 1110
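Put another way, 134,217,727 is the largest fixnum on such a 28-bit implementation, so the following would hold there; on other platforms most-positive-fixnum is a different (larger) value, and the point at which overflow occurs moves accordingly:

most-positive-fixnum               ; assuming the 28-bit implementation
     => 134217727
(lsh most-positive-fixnum 1)
     => -2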
ash (arithmetic shift) shifts the bits in integer1 to the left count places, or to the right if count is negative.

ash gives the same results as lsh except when integer1 and count are both negative. In that case, ash puts ones in the empty bit positions on the left, while lsh puts zeros in those bit positions.
Thus, with ash, shifting the pattern of bits one place to the right looks like this:
(ash -6 -1) => -3
;; Decimal -6 becomes decimal -3.
1111 1111 1111 1111 1111 1111 1010
     =>
1111 1111 1111 1111 1111 1111 1101
In contrast, shifting the pattern of bits one place to the right with lsh looks like this:
(lsh -6 -1) => 134217725
;; Decimal -6 becomes decimal 134,217,725.
1111 1111 1111 1111 1111 1111 1010
     =>
0111 1111 1111 1111 1111 1111 1101
Here are other examples:
                   ;  28-bit binary values

(lsh 5 2)          ;   5  =  0000 0000 0000 0000 0000 0000 0101
     => 20         ;      =  0000 0000 0000 0000 0000 0001 0100
(ash 5 2)
     => 20
(lsh -5 2)         ;  -5  =  1111 1111 1111 1111 1111 1111 1011
     => -20        ;      =  1111 1111 1111 1111 1111 1110 1100
(ash -5 2)
     => -20
(lsh 5 -2)         ;   5  =  0000 0000 0000 0000 0000 0000 0101
     => 1          ;      =  0000 0000 0000 0000 0000 0000 0001
(ash 5 -2)
     => 1
(lsh -5 -2)        ;  -5  =  1111 1111 1111 1111 1111 1111 1011
     => 67108862   ;      =  0011 1111 1111 1111 1111 1111 1110
(ash -5 -2)        ;  -5  =  1111 1111 1111 1111 1111 1111 1011
     => -2         ;      =  1111 1111 1111 1111 1111 1111 1110
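One way to read the last two results: an arithmetic right shift of a negative number behaves like division by a power of two that rounds toward negative infinity, whereas lsh treats the bit pattern as an unsigned quantity. A small sketch of that correspondence, using the two-argument form of floor:

(ash -5 -2)
     => -2         ; floor of -5/4
(floor -5 4)
     => -2         ; the same rounding
(ash -5 -1)
     => -3         ; floor of -5/2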
The function logand returns the bitwise "and" of its arguments: a bit of the result is one only if the corresponding bit is one in every argument. For example, using 4-bit binary numbers, the "logical and" of 13 and 12 is 12: 1101 combined with 1100 produces 1100. In both binary numbers, the leftmost two bits are set (i.e., they are 1's), so the leftmost two bits of the returned value are set. However, for the rightmost two bits, each is zero in at least one of the arguments, so the rightmost two bits of the returned value are 0's.
Therefore,
(logand 13 12)
     => 12
If logand is not passed any argument, it returns a value of -1. This number is an identity element for logand because its binary representation consists entirely of ones. If logand is passed just one argument, it returns that argument.
                   ;  28-bit binary values

(logand 14 13)     ;  14  =  0000 0000 0000 0000 0000 0000 1110
                   ;  13  =  0000 0000 0000 0000 0000 0000 1101
     => 12         ;  12  =  0000 0000 0000 0000 0000 0000 1100

(logand 14 13 4)   ;  14  =  0000 0000 0000 0000 0000 0000 1110
                   ;  13  =  0000 0000 0000 0000 0000 0000 1101
                   ;   4  =  0000 0000 0000 0000 0000 0000 0100
     => 4          ;   4  =  0000 0000 0000 0000 0000 0000 0100

(logand)
     => -1         ;  -1  =  1111 1111 1111 1111 1111 1111 1111
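A common use of logand is masking: keeping only the bits selected by another argument and clearing the rest. The sketch below uses the hexadecimal constant #xf (binary 1111), chosen here purely for illustration, as a mask for the low four bits:

(logand 255 #xf)   ; 1111 1111 masked down to 0000 1111
     => 15
(logand 14 #xf)    ; the low four bits of 14 are already 1110
     => 14
(logand 14)        ; with a single argument, logand returns it unchanged
     => 14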
The function logior returns the bitwise inclusive "or" of its arguments: a bit of the result is one if it is one in at least one of the arguments. If logior is not passed any argument, it returns zero, which is an identity element for this operation. If logior is passed just one argument, it returns that argument.
                   ;  28-bit binary values

(logior 12 5)      ;  12  =  0000 0000 0000 0000 0000 0000 1100
                   ;   5  =  0000 0000 0000 0000 0000 0000 0101
     => 13         ;  13  =  0000 0000 0000 0000 0000 0000 1101

(logior 12 5 7)    ;  12  =  0000 0000 0000 0000 0000 0000 1100
                   ;   5  =  0000 0000 0000 0000 0000 0000 0101
                   ;   7  =  0000 0000 0000 0000 0000 0000 0111
     => 15         ;  15  =  0000 0000 0000 0000 0000 0000 1111
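Because each argument can contribute its own one bits to the result, logior is handy for combining independent single-bit flags into one integer. The flag names in this sketch are made up purely for illustration:

(let ((read-flag  #b001)
      (write-flag #b010))
  (logior read-flag write-flag))
     => 3          ; 001 combined with 010 gives 011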
The function logxor returns the bitwise exclusive "or" of its arguments: a bit of the result is one if it is one in an odd number of the arguments. If logxor is not passed any argument, it returns zero, which is an identity element for this operation. If logxor is passed just one argument, it returns that argument.
                   ;  28-bit binary values

(logxor 12 5)      ;  12  =  0000 0000 0000 0000 0000 0000 1100
                   ;   5  =  0000 0000 0000 0000 0000 0000 0101
     => 9          ;   9  =  0000 0000 0000 0000 0000 0000 1001

(logxor 12 5 7)    ;  12  =  0000 0000 0000 0000 0000 0000 1100
                   ;   5  =  0000 0000 0000 0000 0000 0000 0101
                   ;   7  =  0000 0000 0000 0000 0000 0000 0111
     => 14         ;  14  =  0000 0000 0000 0000 0000 0000 1110
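Two consequences of the definition are worth noting: any value combined with itself by logxor gives zero, and applying the same logxor a second time restores the original value, which is why exclusive or is often used to toggle bits. A quick sketch:

(logxor 12 12)
     => 0
(logxor (logxor 12 5) 5)
     => 12         ; xor with 5 twice gives back 12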
The function lognot returns the bitwise complement of its argument: each bit that is one in the argument is zero in the result, and vice versa.

(lognot 5)
     => -6
;;  5  =  0000 0000 0000 0000 0000 0000 0101
;; becomes
;; -6  =  1111 1111 1111 1111 1111 1111 1010
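A useful identity, easy to confirm from the two's-complement representation above, is that the complement of an integer equals -1 minus that integer, and that complementing twice gives back the original value:

(- -1 5)
     => -6         ; same as (lognot 5)
(lognot (lognot 5))
     => 5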