String compressedStr = out.toString("ISO-8859-1");
return compressedStr;
}
The previous function accepts the input string to be compressed and returns the gzip-compressed string. Similarly, the decompress function accepts the compressed string and returns the original clear-text string. Note that it is very important to use the same character encoding for both compression and decompression.
public static String decompress(String inputString) throws IOException {
    if (inputString == null || inputString.isEmpty()) {
        return inputString;
    }
    // Wrap a gzip stream around the compressed bytes, using the same
    // ISO-8859-1 encoding that was used during compression.
    GZIPInputStream gis = new GZIPInputStream(
            new ByteArrayInputStream(inputString.getBytes("ISO-8859-1")));
    BufferedReader bf = new BufferedReader(new InputStreamReader(gis, "ISO-8859-1"));
    StringBuilder decompressedString = new StringBuilder();
    String line;
    while ((line = bf.readLine()) != null) {
        decompressedString.append(line);
    }
    bf.close();
    return decompressedString.toString();
}
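For reference, here is a minimal round-trip sketch using the two methods above. The wrapper class name CompressionUtils and the sample attribute value are assumptions made for illustration only; in practice, the compressed string is what you would store in the DynamoDB item attribute.
// Hypothetical round trip: compress an attribute value before writing it
// to DynamoDB and decompress it after reading it back.
String original = "A long product description that exceeds the item size limit";
String compressed = CompressionUtils.compress(original);   // store this in the item
String restored = CompressionUtils.decompress(compressed); // equals the original text
System.out.println(original.equals(restored));             // prints true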
Using AWS S3
As we discussed the size constraints Amazon puts on items, it is very important to have a solid solution for handling large items. One good solution is to store large attributes in AWS S3 buckets. Here, we can simply store the large attribute as an object in an AWS S3 bucket and keep its object identifier in an item attribute. Here is an example to illustrate this. Suppose we want to store information about a research paper in a particular journal, which also contains some images. Obviously, the images will be much larger than the text. So, here we can store the other textual information about the paper in DynamoDB and store the images on AWS S3. To link the images to the item in DynamoDB, we store the S3 object identifiers of the images as attributes of the item.
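The following is a minimal sketch of this pattern using the AWS SDK for Java. The bucket name, object key, table name, key schema, and attribute names (research-paper-images, ResearchPaper, PaperId, ImageS3Key, and so on) are assumptions chosen for illustration, not fixed names required by DynamoDB or S3.
import java.io.File;

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.s3.AmazonS3Client;

public class ResearchPaperStore {
    public static void main(String[] args) {
        // Upload the large image to S3 and remember its object key.
        AmazonS3Client s3 = new AmazonS3Client();
        String bucketName = "research-paper-images";   // assumed bucket name
        String objectKey = "papers/1234/figure-1.png"; // assumed object key
        s3.putObject(bucketName, objectKey, new File("figure-1.png"));

        // Store the textual attributes in DynamoDB along with the S3 key
        // that points to the image, instead of the image bytes themselves.
        DynamoDB dynamoDB = new DynamoDB(new AmazonDynamoDBClient());
        Table table = dynamoDB.getTable("ResearchPaper"); // assumed table name
        table.putItem(new Item()
                .withPrimaryKey("PaperId", "1234")
                .withString("Title", "Sample paper title")
                .withString("Journal", "Sample journal")
                .withString("ImageS3Key", bucketName + "/" + objectKey));
    }
}
When the paper is read back from DynamoDB, the application uses the ImageS3Key attribute to fetch the image from S3, so the item itself stays well within the DynamoDB item size limit.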