Values that are too large, or too small, to be represented as an integer become MAX_VALUE or MIN_VALUE for the types int and long. For casts to byte, short, or char, the floating-point value is first converted to an int or long (depending on its magnitude), and then to the smaller integer type by chopping off the upper bits as described below.
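As a quick check of both rules (a sketch; the class name is arbitrary), casting an out-of-range double to int saturates at the int extremes, while casting it to byte first produces the saturated int and then drops the upper bits:

```java
public class FloatToIntCast {
    public static void main(String[] args) {
        // A double far beyond the int range clamps to MAX_VALUE / MIN_VALUE
        System.out.println((int) 1e20);   // 2147483647 (Integer.MAX_VALUE)
        System.out.println((int) -1e20);  // -2147483648 (Integer.MIN_VALUE)
        // Cast to byte: first to int (0x7FFFFFFF), then the low byte 0xFF is kept
        System.out.println((byte) 1e20);  // -1
    }
}
```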
A double can also be explicitly cast to a float, or an integer type can be explicitly cast to a smaller integer type. When you cast from a double to a float, three things can go wrong: you can lose precision, you can get a zero, or you can get an infinity where you originally had a finite value outside the range of a float.
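All three outcomes can be demonstrated directly (a minimal sketch; the class name and sample values are chosen only for illustration):

```java
public class DoubleToFloatCast {
    public static void main(String[] args) {
        // 1. Lost precision: a float holds fewer significant digits than a double
        double precise = 0.1234567890123456789;
        System.out.println((float) precise);   // rounded to float precision

        // 2. Zero: a value below the smallest positive float underflows to 0.0
        double tiny = 1e-50;
        System.out.println((float) tiny);      // 0.0

        // 3. Infinity: a finite value above Float.MAX_VALUE overflows
        double huge = 1e300;
        System.out.println((float) huge);      // Infinity
    }
}
```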
Integer types are converted by chopping off the upper bits. If the value in the larger integer fits in the smaller type to which it is cast, no harm is done. But if the larger integer has a value outside the range of the smaller type, dropping the upper bits changes the value, including possibly changing the sign. The code

short s = -134;
byte b = (byte) s;
System.out.println("s = " + s + ", b = " + b);

produces the following output because the upper bits of s are lost when the value is stored in b:

s = -134, b = 122
A char can be cast to any integer type and vice versa. When an integer is cast to a char, only the bottom 16 bits of data are used; the rest are discarded. When a char is cast to an integer type, any additional upper bits are filled with zeros.
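Both directions can be verified with a short sketch (class name and sample values are illustrative): the int-to-char cast keeps only the low 16 bits, and the char-to-int cast zero-extends, so the result is never negative:

```java
public class CharCasts {
    public static void main(String[] args) {
        // int -> char: only the bottom 16 bits survive (0x12345 -> 0x2345)
        int big = 0x12345;
        char c = (char) big;
        System.out.println((int) c);     // 9029, i.e. 0x2345

        // char -> int: upper bits are zero-filled, so even '\uFFFF' stays positive
        char high = '\uFFFF';
        System.out.println((int) high);  // 65535, not -1
    }
}
```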
Once those bits are assigned, they are treated as they would be in any
other value. Here is some code that casts a large Unicode character to