Eyeing the growing market for big data analysis, Amazon Web Services (AWS) has introduced a storage package, called High Storage, that can offer fast access to large amounts of data.
High Storage, an Amazon Elastic Compute Cloud (EC2) package, is designed to run data-intensive analysis jobs, such as seismic analysis, log processing and data warehousing, according to the company. It is built on a parallel file system architecture that allows data to be moved on and off multiple disks at once, boosting throughput.
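The throughput gain from striping across many disks is roughly additive: a back-of-the-envelope sketch in Python, using an assumed drive count and an assumed per-drive sequential rate (illustrative figures, not AWS-published specs), shows why parallel access to many spindles matters for sequential workloads.

```python
# Back-of-the-envelope aggregate sequential throughput when data is
# striped across many local drives and read in parallel.
# Both figures below are illustrative assumptions, not AWS specs.
drives = 24                # assumed number of local drives in one instance
per_drive_mb_s = 110       # assumed sequential rate of a single spinning disk

# With a parallel file system, the instance can drive all disks at once,
# so aggregate throughput scales with the number of drives.
aggregate_mb_s = drives * per_drive_mb_s
print(aggregate_mb_s)      # 2640 MB/s, i.e. roughly 2.6 GB/s across the stripe
```

A single disk at the same rate would deliver only about 110 MB/s, which is the gap the parallel architecture is meant to close for large sequential scans.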
"Instances of this family provide proportionally higher storage density per instance, and are ideally suited for applications that benefit from high sequential I/O performance across very large data sets," AWS states in the online marketing literature for this service. The company is pitching the service as a complement to its Elastic MapReduce service, which provides a platform for Hadoop big data analysis. AWS itself is using the High Storage instances to power its Redshift data warehouse service.
An AWS instance is a bundle of compute units, memory, storage and other services configured to the characteristics of a particular type of workload. High Storage is the ninth type of compute instance that AWS has introduced. It joins other instance types customized for particular workloads, such as instances optimized for using GPUs (graphics processing units) or for HPC (high performance computing) jobs.