            chunkCount = 0;
            byte[] data = new byte[bytesPerChunk];
            int bytesRead = 0;
            // Read the decoded stream into memory, one chunk at a time.
            while ((bytesRead = decodedInputStream.read(data, 0, data.length)) != -1) {
                chunkCount++;
                baos.write(data, 0, bytesRead);
            }
            decodedInputStream.close();
            decodedAudio = new ByteArrayInputStream(baos.toByteArray());
            // Open an audio output line that accepts the decoded format.
            DataLine.Info info = new DataLine.Info(SourceDataLine.class, decodedFormat);
            line = (SourceDataLine) AudioSystem.getLine(info);
            line.open(decodedFormat);
            line.start();
            // Hand the line to the consumer that feeds it audio data.
            audioConsumer = new AudioDataConsumer(bytesPerChunk, 10);
            audioConsumer.start(line);
            audioConsumer.add(this);
            isPlaying = false;
            thread = new Thread(new SoundRunnable());
            thread.start();
        } catch (Exception ex) {
            throw new RuntimeException(ex);
        }
    }
In Listing 9-2 we can see that a SoundHelper class is created by calling a constructor and providing a URL. If the provided URL starts with the word jar, we know we must copy the sound file out of the JAR and into the local file system; the method createLocalFile is used to do this. Looking at the implementation of createLocalFile, we can see that a suitable location is identified in a subdirectory created in the user's home directory. If this file exists, the code assumes that it was copied over during a previous run, and the URL to this file is returned. If the file does not exist, the createLocalFile method opens an input stream from the copy in the JAR and an output stream to the new file. The contents of the input stream are then written to the output stream, creating a copy of the sound file on the local disk.
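A minimal sketch of the copy-out-of-the-JAR step described above might look like the following. This is not the book's exact implementation; the cache directory name and method signature are assumptions for illustration.

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class LocalFileCopy {

    // Hypothetical sketch of a createLocalFile-style helper: copy a
    // resource (such as a sound file inside a JAR) to a subdirectory of
    // the user's home, reusing the copy if it already exists.
    static File createLocalFile(URL source, String fileName) throws IOException {
        File dir = new File(System.getProperty("user.home"), ".soundcache");
        dir.mkdirs();
        File target = new File(dir, fileName);
        if (target.exists()) {
            // Assume the file was copied over during a previous run.
            return target;
        }
        try (InputStream in = source.openStream();
             OutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[4096];
            int n;
            // Write the contents of the input stream to the output stream.
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
        }
        return target;
    }
}
```

The existence check matters: it turns the copy into a one-time cost, so subsequent runs go straight to the local file.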
Once the SoundHelper class has a URL pointing to a valid sound file, it is time to decode the sound file so we can play it. The method init uses the static method getAudioInputStream from the Java Sound class AudioSystem. The AudioInputStream returned by getAudioInputStream may or may not be in a format we want to work with. Since we are going to do some digital signal processing (DSP) on the contents of this stream, we want to normalize the format so we only have to write one class for doing the DSP. Using the original format of the AudioInputStream, as stored in the variable baseFormat, a new AudioFormat called decodedFormat is created. The variable decodedFormat is set to PCM_SIGNED, which is how our DSP code expects it to be formatted.
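The normalization step can be sketched as follows. This is an illustrative version, not the book's exact code: the method name toSignedPcm and the choice of 16-bit little-endian samples are assumptions, but the pattern of deriving decodedFormat from baseFormat and asking AudioSystem for a converted stream is the standard Java Sound idiom.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;

public class DecodeSketch {

    // Sketch: normalize an arbitrary AudioInputStream to signed PCM so
    // downstream DSP code only has to handle one sample format.
    static AudioInputStream toSignedPcm(AudioInputStream in) {
        AudioFormat baseFormat = in.getFormat();
        AudioFormat decodedFormat = new AudioFormat(
                AudioFormat.Encoding.PCM_SIGNED,
                baseFormat.getSampleRate(),
                16,                            // bits per sample (assumed)
                baseFormat.getChannels(),
                baseFormat.getChannels() * 2,  // frame size in bytes
                baseFormat.getSampleRate(),
                false);                        // little-endian (assumed)
        // AudioSystem supplies a stream that converts on the fly.
        return AudioSystem.getAudioInputStream(decodedFormat, in);
    }
}
```

Keeping the sample rate and channel count from baseFormat while forcing the encoding to PCM_SIGNED means the converter only changes how samples are represented, not the audio itself.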
So, now that we know what format we want our audio data in, it is time to actually get the audio data. The audio data will ultimately be stored as a byte array inside the variable decodedAudio. The variable decodedAudio is a ByteArrayInputStream, which provides a convenient API for working with a byte array as a stream.
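One concrete convenience of ByteArrayInputStream (illustrative, not from the book's listing): it supports reset, so the same decoded audio can be read from the start again without re-decoding the file.

```java
import java.io.ByteArrayInputStream;

public class ReplaySketch {
    public static void main(String[] args) {
        byte[] decoded = {10, 20, 30};  // stand-in for decoded audio bytes
        ByteArrayInputStream decodedAudio = new ByteArrayInputStream(decoded);
        int first = decodedAudio.read();  // consume one byte
        decodedAudio.reset();             // rewind to the beginning
        int again = decodedAudio.read();  // same byte is read again
        System.out.println(first == again);  // prints "true"
    }
}
```

This replay-without-rework property is why buffering the whole decoded stream into memory, as the listing does, pays off for short sound effects.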