Hear Sanjay Lulla speak at the EMC Forum 2011, November 17-18, Mumbai, India. Register today!


Which storage efficiency technologies can CIOs consider?

CIOs should consider technologies that let them improve the utilization of their assets and enhance productivity.

Data is growing faster than budgets can keep up with; therefore, relentless efficiency is mandatory. Moreover, inefficient thick-provisioned configurations are passé. By incorporating fully automated storage tiering and virtual provisioning, customers can achieve 2 times the transactions per rack U and 3 times the transactions per pound of CO2 per year. That means more transactions (and thereby more business), a smaller footprint, a smaller carbon footprint and data centre optimization.

Fully automated storage tiering (FAST) utilizes a portfolio of Flash, FC and SATA drives in self-optimizing pools and runs storage-based middleware to get the right data to the right place at the right time.
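The tiering idea can be sketched as: track how often each extent is accessed and keep the hottest extents on the fastest media. The class below is a toy illustration, not any vendor's implementation; the tier names, slot counts and ranking policy are all assumptions for the sake of the example.

```python
from collections import Counter

class TieringPool:
    """Toy model of automated storage tiering: count accesses per
    extent and place the hottest extents on the fastest tier
    (flash), the next hottest on FC, and the rest on SATA."""

    def __init__(self, flash_slots=2, fc_slots=4):
        self.access_counts = Counter()
        self.flash_slots = flash_slots  # illustrative capacities
        self.fc_slots = fc_slots

    def record_io(self, extent):
        """Record one I/O against an extent."""
        self.access_counts[extent] += 1

    def placement(self):
        """Return a mapping extent -> tier, hottest extents first."""
        ranked = [e for e, _ in self.access_counts.most_common()]
        tiers = {}
        for i, extent in enumerate(ranked):
            if i < self.flash_slots:
                tiers[extent] = "flash"
            elif i < self.flash_slots + self.fc_slots:
                tiers[extent] = "fc"
            else:
                tiers[extent] = "sata"
        return tiers
```

A real array would re-evaluate placement periodically and migrate extents in the background; this sketch only shows the ranking decision.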

Oversubscription can be incorporated into storage design with Virtual Provisioning, which enables on-demand provisioning rather than static volume allocations. Essentially, it means you can store more data for the business without having to buy more storage, yielding more capacity per IT dollar spent. In addition, Block Compression is known to reduce storage footprint by half.
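The oversubscription idea is that volumes advertise a large virtual size while physical blocks are allocated only on first write. A minimal sketch, assuming a hypothetical block-level pool (class and method names are illustrative, not a real product API):

```python
class ThinPool:
    """Toy thin-provisioning pool: volumes can be oversubscribed
    because physical blocks are consumed only when written."""

    def __init__(self, physical_blocks):
        self.physical_blocks = physical_blocks
        self.allocated = 0
        self.volumes = {}  # name -> {"virtual_size": int, "mapped": set}

    def create_volume(self, name, virtual_size):
        # Oversubscription: virtual_size may exceed free physical space;
        # creating a volume consumes no physical capacity.
        self.volumes[name] = {"virtual_size": virtual_size, "mapped": set()}

    def write(self, name, block):
        vol = self.volumes[name]
        if block >= vol["virtual_size"]:
            raise ValueError("write beyond virtual size")
        if block not in vol["mapped"]:
            if self.allocated >= self.physical_blocks:
                raise RuntimeError("pool exhausted: add physical capacity")
            vol["mapped"].add(block)  # allocate on first write only
            self.allocated += 1
```

The trade-off this sketch makes visible: the pool can run out of physical capacity while volumes still look half-empty, which is why thin-provisioned pools need capacity alerting.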

Further, the capacity required for backup is growing 10 times faster than production capacity. We end up backing up the same information again and again because of the technique used (incremental every day and a full backup once a week).

De-duplicated backups on specialized disk-based appliances provide huge efficiency benefits. They also improve network efficiency, shrink the backup window (10 times faster daily backups), speed up production, and enable efficient replication of backed-up data (a 500 times reduction in network bandwidth for replication).
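The efficiency gain comes from storing each unique chunk of data exactly once and keeping only a list of chunk references per backup. A minimal sketch of hash-indexed de-duplication over fixed-size chunks (the tiny chunk size is for illustration; real appliances chunk at kilobyte scale):

```python
import hashlib

class DedupStore:
    """Toy de-duplicating backup store: split data into fixed-size
    chunks, index each chunk by its SHA-256 digest, and keep only
    one physical copy of each unique chunk."""

    CHUNK_SIZE = 4  # bytes; illustrative only

    def __init__(self):
        self.chunks = {}  # digest -> chunk bytes (unique copies)

    def backup(self, data):
        """Store data; return its 'recipe' (list of chunk digests)."""
        recipe = []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # store only if new
            recipe.append(digest)
        return recipe

    def restore(self, recipe):
        """Reassemble the original data from its recipe."""
        return b"".join(self.chunks[d] for d in recipe)
```

Backing up the same data a second time adds no new chunks, which is why repeated full backups of mostly unchanged data become cheap on such appliances.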

How important is storage technology integration with server virtualization technology? What aspects should be considered while making buying decisions?

The number of virtual servers has overtaken the number of physical servers over time. Virtualization has enabled IT flexibility, consolidation and enhanced asset utilization. Therefore, deep integration with the virtualization stack is no longer optional. Some important points to consider:

1. Deep certification with the virtual server technology. This should not be restricted to the storage I/O connection; the entire ecosystem of backup, de-duplication and replication should be considered.

2. Virtual environments introduce a new complexity: mapping physical to virtual. The storage system should identify the virtual machines running on the physical host. It should also be able to isolate issues and provide a virtual-machine-centric view of storage.

3. Virtualization technology comes with its own management platform, so why should the customer invest in two different skill sets, one for virtual machines and one for storage? The virtualization admin should be empowered to view the underlying storage and provision it through the native virtualization console.

4. Virtualization technologies also offer a lot of flexibility in moving virtual machines and workloads across servers based on resource policies, power policies, etc. There is no point in moving a virtual machine if the storage doesn't move with it. Therefore, both technologies should work in concert, and tools should be available to enhance virtual machine mobility across servers and storage.

5. There is a lot of heavy lifting being done by virtualized servers today. Much of it is storage related and is unfortunately handled by the servers because the industry's integration points still need maturity. With offloading, virtual servers can issue storage commands to the underlying array and hand these activities over to storage. This reduces overall server load and improves scalability.

6. Post virtualization, there is hardly any headroom left to run backups. Therefore, de-duplication integration should be available for efficient backups at both the virtual machine level and the hypervisor level.

7. Virtualization-consistent failover and failback in DR environments is very important.

8. Different parts of a virtualized stack need different I/O connectivity. That means the ability to connect over multiple protocols, and therefore a unified platform, is mandatory.

9. Virtualization has shown us the way to federated environments. Federation at the server level should work together with federation across heterogeneous storage and across data centres. This enables access anywhere, instant failover and virtual storage.

10. Finally, IT as a Service (infrastructure catalogs) and the invisibility of the underlying infrastructure when deploying applications in virtual environments are very important.

What should be the approach to backup in a virtualized environment?

Virtualization allows users to run 20 or more virtual machines on a single server, increasing server utilization to 60 to 80 percent. Maximizing server utilization reduces the amount of hardware along with power, cooling and space costs. However, there is hardly any headroom left to run traditional backup software, and running backups could starve production environments of resources.

Backup processes must evolve. The best method is to deploy source-based, global de-duplication. If the de-duplication uses variable-length segments, it will significantly reduce the amount of backup content that has to leave the server during backups. Recovery is a single step and the backup window is heavily optimized. This provides not only efficiency in backup storage but also network efficiency, an improved backup window (10 times faster daily backups), faster production and, above all, efficient replication of backed-up data (a 500 times reduction in network bandwidth for replication).
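Variable-length segmentation means chunk boundaries are chosen by the content itself rather than at fixed offsets, so inserting a few bytes early in a file shifts only nearby chunk boundaries instead of all of them. A toy sketch of content-defined chunking (the windowed sum stands in for a real rolling hash such as Rabin fingerprinting; all sizes and masks are illustrative):

```python
def content_defined_chunks(data, window=4, mask=0x0F, min_size=2, max_size=16):
    """Toy content-defined chunking: slide a window over the bytes and
    cut a chunk whenever a simple hash of the window matches a mask,
    subject to minimum and maximum chunk sizes."""
    chunks, start = [], 0
    for i in range(len(data)):
        size = i - start + 1
        # Cheap stand-in for a rolling hash over the last `window` bytes.
        h = sum(data[max(0, i - window + 1):i + 1]) & 0xFF
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])  # trailing partial chunk
    return chunks
```

Feeding these content-defined chunks into a digest-indexed store is what lets source-based de-duplication send only genuinely new segments over the network.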
