Symantec Vision day 3 & (something different)

And now for something different. Instead of attending the marginally useful sessions available in the morning, I’ve been talking to some of the exhibitors. Here are some of them and my initial thoughts:

Index Engines

This is something very new. I have not had much time to dig in, but we have agreed to explore it much further. What these folks have is an appliance that sits in-band between the backup server and the back-end tape device. They pass the data through to the device without changing it, but in the process they tear the backup stream apart and index the content. In other words, they crack the packets open, index them, and put them back together again before sending them downstream to the tape device. They claim to work with NetBackup, Legato, and TSM (maybe others, but I don’t recall and won’t know for a little while). Once they have all of this indexed, it becomes searchable and “auditable” through their appliance. It’s an interesting concept, so I’ll make sure to explore it further with them. I’m concerned about scalability, index sizes (although they claim huge savings here), and versioning issues (e.g. Legato changes OpenTape and they now become the gateway for an upgrade).
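If I understand the architecture right, the pass-through-and-index mechanics look something like the following. This is a minimal Python sketch of my own; the naive tokenizer and the `Tape`/`index_and_forward` names are my inventions, not anything from their product.

```python
import re
from collections import defaultdict

def index_and_forward(backup_stream, downstream, index):
    """Pass each chunk through unchanged while building a token -> offset index.

    backup_stream: iterable of (offset, chunk_bytes) pairs
    downstream:    anything with a write(bytes) method (the "tape device")
    index:         dict-like mapping token -> set of stream offsets
    """
    for offset, chunk in backup_stream:
        # Crack the chunk open and index its content (naive word tokenizer).
        for token in re.findall(rb"[A-Za-z0-9]+", chunk):
            index[token.lower()].add(offset)
        # Forward the bytes untouched: the tape sees the original stream.
        downstream.write(chunk)

class Tape:
    """Stand-in for the downstream tape device."""
    def __init__(self):
        self.data = b""
    def write(self, chunk):
        self.data += chunk

index = defaultdict(set)
tape = Tape()
stream = [(0, b"quarterly report Q3"), (19, b"invoice ACME 4711")]
index_and_forward(stream, tape, index)
# tape.data is byte-identical to the input; the index is now searchable
```

The point of the sketch is the invariant: the downstream device receives exactly the bytes that went in, while the appliance keeps a searchable index on the side.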


This is a new CDP product that has just come out of stealth mode. They work only in a Windows world, but they appear to have a pretty comprehensive solution for that space. The way they work is by inserting a small driver (a splitter driver) into the kernel that splits I/Os between the “real” storage and their device. The I/Os that come into their device are time-stamped and cataloged. What’s really interesting is that they have agents that work with Exchange, SQL Server, and the file system. They claim that with these agents it is not necessary to bring the database to a consistent point in time to do full recoveries. They also have the ability to do single-message or mailbox restores in Exchange from these continuous captures. In other words, there is no data loss. Interesting to say the least, but, again, I am interested in seeing the scalability and their roadmap. More to follow on these guys.
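Conceptually, the splitter-plus-journal approach can be modeled like this. A toy Python sketch of my own: a real splitter driver works on raw block I/O in the kernel, and the class and method names here are assumptions for illustration only.

```python
import time

class SplitterDriver:
    """Toy model of a CDP splitter: every write goes both to the "real"
    storage and to a timestamped journal, so the volume can be rebuilt
    as of any point in time."""

    def __init__(self):
        self.primary = {}   # block number -> current data
        self.journal = []   # (timestamp, block number, data), in arrival order

    def write(self, block, data, ts=None):
        ts = time.time() if ts is None else ts
        self.primary[block] = data              # the normal I/O path
        self.journal.append((ts, block, data))  # the split, cataloged copy

    def restore_as_of(self, ts):
        """Replay journaled writes up to ts to rebuild the volume as it was."""
        volume = {}
        for t, block, data in self.journal:
            if t <= ts:
                volume[block] = data
        return volume
```

Because every write is kept with its timestamp, recovery is a replay to any instant rather than a roll-forward from the last backup, which is where the “no data loss” claim comes from.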


Vormetric

So we all know now about encryption at the host level (filesystem, application level, column level in databases, etc.). Most of us also know about the new encryption appliances that work at the block- or file-level protocols (SAN/NAS/iSCSI). The big players here are Decru and NeoScale. What all of these fail to do is set a finer level of granularity of control over who sees the data. What these tools do, in essence, is protect against unauthorized access from users that are not authenticated by the system. For example, if we are using a Decru appliance to encrypt disk data (block level), users on the SAN that gain the ability to map LUNs will not be able to gain access to the data even if they remap the LUN to another host. The only access is through the host that has the encryption policy permissions to see the LUN in cleartext. But that’s where the problem lies: anyone with root-level access on that server can see ALL of the data on that device. So, the way people protect against that today is by implementing a software layer of encryption. In essence, they do dual-layer encryption: one to bulk protect against LUN-level access, and the other at something like the column level within the database, so that key information is not visible to users with root/administrator-level access to the system or database.

This is where Vormetric comes in. Their offering is a combination of a software driver and an appliance that gains a finer level of granularity of access while also encrypting the information on these systems. The best way to think about this tool set is as a way to give root and administrator-level users access to only the data they need in order to do their job: things like /etc directories in UNIX or the registry in Windows. The sensitive application data, however, is completely encrypted from them, while the right users, and even the right application, would have full access to the information.
So, the questions now are: how does this scale, how does it tie into the bulk encryption guys, and how does it work in DR/backup/etc. environments? Once again, a meeting has been set with these guys to figure this one out.
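The core idea, decryption gated by who is asking and from what process rather than by host access alone, can be sketched as follows. This is a toy model of my own, not Vormetric’s actual design: the XOR “cipher” is a stand-in for real encryption, and the (user, process) policy shape is an assumption.

```python
import hashlib

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """Stand-in for real encryption: XOR with a hash-derived keystream.
    Symmetric, so applying it twice with the same key round-trips."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

class PolicyGate:
    """Returns cleartext only when the (user, process) pair is on the
    policy for that path; everyone else, root included, gets ciphertext."""

    def __init__(self, key):
        self.key = key
        self.policy = {}   # path -> set of (user, process) allowed cleartext
        self.store = {}    # path -> ciphertext at rest

    def put(self, path, data, allowed):
        self.policy[path] = set(allowed)
        self.store[path] = toy_cipher(data, self.key)

    def read(self, path, user, process):
        if (user, process) in self.policy[path]:
            return toy_cipher(self.store[path], self.key)  # decrypt
        return self.store[path]  # still encrypted, even for root
```

The design point the sketch illustrates: root keeps enough access to administer the box, but the application data is only in cleartext for the principals the policy names.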


Speaking of security and encryption, here’s a thought: how do you prove that what was originally stored and what is stored now is the same content? Sure, you can encrypt it the way Decru and Vormetric do, but a sufficiently skilled or authorized user could change the content of the data. All that has happened is that the data is in an encrypted format that keeps out non-authorized or unskilled attackers. How do you prove in litigation that you really are presenting the data that was originally there? Well, this little company thinks they have an answer. They were not a presenter or even an exhibitor at the conference, but I happened to sit in a spot they conveniently migrated to. They were making their pitch to a Symantec person to see if they could include this in their technology. I was certainly intrigued, and I suspect that this is going to become much, much more important shortly. Something to watch for.
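One plausible shape for such an answer is cryptographic sealing: chain each stored record to its predecessor with a keyed digest, so any later modification is detectable. A minimal sketch, my own illustration and not this company’s actual technique:

```python
import hashlib
import hmac

GENESIS = b"\x00" * 32  # digest "before" the first record

def seal(record: bytes, prev_digest: bytes, key: bytes) -> bytes:
    """Chain each record to its predecessor with a keyed HMAC: altering
    any earlier record invalidates every digest after it."""
    return hmac.new(key, prev_digest + record, hashlib.sha256).digest()

def verify(records, digests, key):
    """Recompute the chain and compare it against the stored digests."""
    prev = GENESIS
    for record, digest in zip(records, digests):
        if not hmac.compare_digest(seal(record, prev, key), digest):
            return False
        prev = digest
    return True
```

For the litigation scenario, the digests (or the key) would be held by a party other than whoever can write the data; otherwise the same skilled insider could re-seal after tampering.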

Data Domain

These guys clearly had great visibility at this conference. They hosted an end-user session, and those end users clearly articulated the message: simple, simple, simple. The toaster approach is working.

These folks are spending a whole lot of time re-doing their strategy. Their basic entry into the market was with their introduction of the MAID (Massive Array of Idle Disks) technology. Their basic concept is that tier 3 (archival) storage calls for very low-cost devices, but with near-instantaneous access. So, what they developed was a way to house a huge number of SATA disk drives (900+) in a single frame. With current disk drive sizes, they have 3/4 of a petabyte of storage in a single rack! Their key insight was that most of this data will not be accessed, so there is little need to keep all of the drives spinning at the same time. They have some very sophisticated technology to figure out which drives are required to spin and which ones are not. Additionally, they have some disk management and exercise technology that allows them to spin up and verify disks and their long-term viability. Their measured (and claimed) result from this is that the lifetime of SATA drives is extended by a factor of four. That puts the drive technology in the ballpark of the reliability of the much more expensive SCSI drives, while the cost of the drives, the power and cooling, and the management is much lower.

Their initial introduction of this technology was as a VTL tape device. This didn’t work so well. The MAID stuff is cool, but so what? What’s really interesting is that they are now re-positioning themselves as a platform for long-term storage technologies. They have divided their system into levels of access. In my terms: 1) presentation/personality – SCSI/FCP, iSCSI, NFS/CIFS, VTL, etc.; 2) API/intelligence – a set of API tools that allow greater access (i.e. indexing, content aging, migration, protocol/API emulations). If and when this platform approach is deployed and a reality, this system becomes much more interesting. 750GB drives are out, 1TB drives are close, and soon even bigger drives will be available.
So, if their platform is upgradable to take advantage of these higher densities, and it’s also an open platform for storage, then this becomes a much more realistic thing. As with all of these, more questions remain and further investigation will be needed.
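The spin-up/spin-down bookkeeping at the heart of MAID can be modeled as a simple LRU cache of powered drives. A toy sketch of my own; the real drive-selection and disk-exercise logic is far more sophisticated than this:

```python
class MaidArray:
    """Keeps at most max_spinning drives powered at once. A read of a
    parked drive forces a spin-up, and the least-recently-used powered
    drive is parked to make room."""

    def __init__(self, max_spinning):
        self.max_spinning = max_spinning
        self.spinning = []   # LRU order: coldest drive first
        self.spinups = 0     # how many on-demand spin-ups we paid for

    def read(self, drive):
        if drive in self.spinning:
            self.spinning.remove(drive)       # refresh its LRU position
        else:
            self.spinups += 1                 # parked drive: spin it up
            if len(self.spinning) >= self.max_spinning:
                self.spinning.pop(0)          # park the coldest drive
        self.spinning.append(drive)           # hottest drive goes last
```

The economics follow directly: if access patterns are cold enough, most of the 900+ drives stay parked, which is where the power, cooling, and drive-lifetime claims come from.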

Enough of that stuff. I did manage to attend some of the sessions today:

Enrique Salem Keynote

  • Consumer level threats
  • Consumer-level technology has historically moved to the enterprise (Gartner says that between 2007 and 2012 the majority of technologies that enterprises adopt will come from consumer technologies – 0.8 probability in their words)
  • Consumers are losing confidence in the online business model. Symantec is going to focus on increasing the level of confidence
  • Project Voyager: proactive protection against phishing attacks
  • Big to-do about Project Genesis – the integration of the security and optimization tools on the desktop (Norton tools)
  • Security 2.0 vision: (search protection, system security, interaction protection, transaction security, convenience, identity protection, data protection)

Intel Virtualization

  • look at Virtual Iron (Intel says they are very excited about these guys), Cassatt, SWsoft, Platform (all VM vendors). On a personal note, they did talk about Parallels – a Mac VM company.
  • their built-in VT technology allows hypervisors like Xen to run without guest OS modifications (previously needed to virtualize Windows with Xen)
  • Evident is a chargeback software package for virtualized hardware
  • Itanium is coming out with a dual-core, VT-enabled processor this summer. It shows “awesome performance in the lab with Xen”

NetBackup Future Directions

  • cheap disk is leveraged in data protection
  • system hardware is being commoditized (migration to lower-cost Intel/AMD/etc. platforms, plus server virtualization)
  • Pervasive (cheap) networks making apps and data more accessible (changing shape of DR, critical data in remote offices, web-based management)
  • four key areas (unified platform, best in disk-based backup, complete application protection & recovery, intelligent data management platform)
  • NetBackup 6.5 1H2007, disk focus
  • what drives unified protection – geo-distribution of data, multiple architectures/technologies/paradigms, multiple tools for similar tasks (prolific management and monitoring/troubleshooting tools) + search and restore
  • integrated protection (remote office/desktop + laptop, files and apps, heterogeneous mix of hardware)
  • Leverage PureDisk integration
  • unified CDP management (instant restore and APIT restore, both software- and hardware-based “CDP engines”, file- and block-based solutions)
  • web-based user and management interfaces (unified backup management & reporting, end-user search & restore – federated)
  • NetBackup 6.5 will have a PureDisk gateway concept (NetBackup will now have a PureDisk storage unit): stage NBU backups from disk to PureDisk for SIS/replication, use NBU to write recovery tapes for PureDisk clients
  • unification of reporting (NOM will handle management of data protection, CC-Service will handle the business of data protection) (CC-Service is NBAR on steroids – optimized for trending, planning, analytics; designed for outbound reporting (NOM is for administrator reporting), measuring costs, assessing risk and exposure, verifying compliance)
  • (traditional tape, disk staging, virtual tape, snapshot, data-reduction, CDP)
  • traditional tape – SSO, Vault
  • disk – SSO for disk, SAN client, advanced client
  • virtual tape – hardware, VTO
  • snapshot – advanced client
  • data reduction – hardware partners, PureDisk
  • CDP – CDP for files, CDP for blocks/apps
  • stage 1 (disk improves tape backup), stage 2 (disk is the backup & recovery – data reduction, D2D, backup sets, tape for archive, SSO for disk), stage 3 (online recovery environment – snapshots, CDP, application DR, live data replication, zero RPO/RTO)
  • SSO for disk (shared disk pool for all platforms) will also do de-dup and replication (allocation happens on a volume-by-volume basis – think of each volume like we think of tape in SSO today) (will allow restores from a volume while others are writing to it – will leverage snapshots to do this)
  • SAN clients with an SSO agent will move data through a media server to SSO volumes
  • CDP – at the application level you have to get a snapshot of the transaction logs; at the FS level you need to get a snapshot at a consistent point; at the volume level, a block-level index store must be mapped back to the FS or application level in order to get a consistent state
  • CDP (federated management – block-based CDP, file-based CDP) = the best of backup + replication
  • application protection & recovery:
  • application support for all protection methods (must know app level stuff regardless of data movement transport)
  • granular backup and restore for key apps (beyond hot backup)
  • Application stack recovery (from BMR to Oracle & Exchange) (system -> data -> application)
  • support for virtual machine architectures (protection for VMware and other VM solutions, leverage VM architectures for advanced solutions)
  • (cross-server snapshots for horizontal application consistency are going to be considered – was a question from the audience)
  • in ESX 3.0 (snapshot of a VM is mounted on another host, backup occurs on the alternate host)
  • BMR is going to be integrated with Windows PE (Preinstallation Environment) (gives BMR the “LiveState touch”) (boot WinPE from CD & run in RAM for additional speed, no multiple reboots). BMR with WinPE available in summer with 6.0
  • intelligent data management platform:
  • federated search and restore (across PureDisk, NetBackup – all protection methods)
  • storage resource management, data grouping (understand data utilization – leverage backup catalogs) (“data collections” concept – much less “server-centric” view)
  • Integrated backup, archiving, and HSM (collect data once, use it for multiple purposes)
  • Data security and compliance (data encryption, compliance and data litigation support tools)
  • audit logging and LDAP in the 7.0 timeframe
  • claiming that because all data is visible, cataloging and indexing turn it into “agentless SRM”
  • will consider content based searching in addition to file-system meta-data index searching
  • 6.5 Bighorn 1H2007, NetBackup 7.0 1H2008 = complete web-based UI, complete CDP, unified search, much more
  • PureDisk 6.1 agents for DB; PD 6.5 Enterprise Edition (at the same time as NBU 6.5); PureDisk 6.6 desktop/laptop = 2H2007; NBU 7.0 integrated management reporting, search/restore agents (NBU 7.0 timeframe)
  • 6.5 = the disk backup release (SSO for disk, advanced disk staging (create SLAs for backups, delete from disk pool by data importance – not age, retention guarantees), NetBackup PureDisk integration, SAN client, snapshot client (advanced client – for all tier 1 disk), VTL-to-tape (API for VTL to duplicate to tape – VTL does the data movement))
  • Unified backup management (NOM + CC-Service), lots more agent & platform support (all in 6.5)
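The SIS/de-dup idea that runs through the PureDisk and SSO-for-disk bullets is, at bottom, content-addressed storage: chunk the data, fingerprint each chunk, and store each unique chunk only once. A minimal sketch of my own; the fixed chunk size and hash choice are arbitrary illustrations, not anything from the product:

```python
import hashlib

class DedupPool:
    """Single-instance store: each unique chunk is kept once, keyed by its
    SHA-256 fingerprint; a backup is just a list of fingerprints."""

    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes

    def store(self, data, chunk_size=4):
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(fp, chunk)   # duplicates stored once
            recipe.append(fp)
        return recipe

    def restore(self, recipe):
        return b"".join(self.chunks[fp] for fp in recipe)
```

The same property drives the replication story: once a chunk’s fingerprint is known to exist at the other site, only the recipe, not the data, has to cross the wire.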

The presenter, Rick Huebsch, did not talk about NBU and Microsoft, so I approached him. His comment was “Vista and all that will be in 6.5.” He acknowledged that the absence of MS material was conspicuous, but said it reflected what he chose to cover, not a lack of a future for it. I fully believe that all the “right” MS things will be done with NetBackup. But we’ll keep an eye on that.

Some more thoughts:

The theme of the show was clearly focused around the Symantec core strengths. They did not minimize the importance of the Veritas enterprise products, but they sure did emphasize the end-user and mid-range products (think Norton product lines, think Backup Exec). I’m not sure that this indicates a shift in priorities, but it is clearly something to watch. The “feel” of this Vision was much different than last year’s. Last year’s keynotes were much more enterprise focused. This year’s spoke of enterprise, but from the aspect of Windows and security. The storage elements of the Veritas product lines were not the centerpiece. I wonder if Symantec should not have different days or a different session where they speak of this technology. It’s probably me being old-fashioned (in the way a 10-year-old industry can be old-fashioned), but the storage stuff is just as hard and it’s getting harder. The bulk of the customers I saw were coming to see this space, not the Super Norton 3000++. The partner show was almost exclusively storage-centric. There were a few policy-engine type of people, Intel, and Dell – but that’s it. Weird.

Remember that my thoughts are phrenological in accuracy.
