this is the end

Posted in Commentary, Fanboy on June 16th, 2006 by juan

The Mac is back! I’m working on it, and the screen is beautiful again. The SuSE laptop is going into standby mode. Total elapsed time: one calendar week with no Mac access. Painful, but livable. Apple did this right. They want to keep me as a customer. Well done.

experiment (post 2)

Posted in Commentary on June 15th, 2006 by juan

So, things are moving along. Apple now says that the laptop is fixed and should be shipping back in the next 24 hours. Interesting that it would happen this way because I just:

  • figured out how to get the VPN working on Linux
  • started feeling comfortable with KDE/SuSE
  • started getting my work mail via IMAP in both Evolution and Pine (a minimal sketch of the idea follows this list)
  • figured out that the exchange connector for Evolution uses WebDAV (which makes it shit)
  • did my first presentation to a customer with OpenOffice (it imported a PowerPoint presentation perfectly)
  • got my winblowz vmware image running perfectly
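For the curious, the IMAP piece is the simplest part of all this. Here is a minimal Python sketch of what Evolution and Pine are doing when they poll the work mailbox – the server name and credentials are placeholders, not my real setup:

```python
# Minimal IMAP poll, assuming a hypothetical server and account.
import imaplib

IMAP_HOST = "mail.example.com"        # placeholder, not my real server
USER, PASSWORD = "juan", "not-my-real-password"

conn = imaplib.IMAP4_SSL(IMAP_HOST)   # IMAP over SSL, port 993
conn.login(USER, PASSWORD)
conn.select("INBOX", readonly=True)   # read-only: don't touch message flags

# Count unread messages, which is all I usually care about mid-meeting.
status, data = conn.search(None, "UNSEEN")
print(f"{len(data[0].split())} unread messages")

conn.logout()
```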

The good news is that Apple treats its customers well, and that I now have a backup plan if I ever need it.

More to come once the PowerBook is in my hands.

experiment (status update)

Posted in Commentary, Fanboy on June 8th, 2006 by juan

The Mac is gone as of yesterday. DHL came by and picked it up. It was kinda scary. I had no receipt from the guy that he took it. He was even scarier. His uniform looked like it had been through a wet, muddy jungle. However, I was able to log in to the DHL web site today and see that it was on its way overnight first class. The site allows you to sign up for confirmation of delivery. I was pretty happy with that and expected an update later today. However, I just got this:

Dear Juan,

The repair of your POWERBOOK G4 (17-INCH 1.67GHZ), Repair ID D8285XXX, is currently on hold, pending receipt of a needed part. We will notify you by email when the repair is complete.

Your repair status is available online.

Apple
——–

Ahhhhhhhhhhh!!!!!!!!!!! (Written via web interface on a blackberry)

experiment step 2

Posted in Commentary, Fanboy on June 2nd, 2006 by juan

The box for the shipment to Apple Care has arrived. At this point, I’m getting some warm and fuzzies. The packing is very professional and includes all the necessary items: the box (huh?), properly sized pads for above, below, and the sides of the laptop, a wrapper for the laptop itself, and even tape to seal the box with. The shipping label it arrived with has two layers. For the return, all I have to do is peel the top layer off, and the bottom layer gets the box back to Apple. I will perform the final backup of my data on Sunday and ship it out on Monday. Oh god.

the (temporary) loss and a new experiment

Posted in Commentary, Fanboy, Geekfest, Musings on May 31st, 2006 by juan

The other day, with no warning, I was dumped into a nightmare. My PowerBook‘s screen developed a nasty, pixel-wide, always-on purple line. A call to Apple Care confirmed it – the laptop needs repair (no duh). They suggested that they send me a box to pack my laptop into and ship back to them, and within 5-10 business days they would have it back to me. The kicker – they recommend that I back it up before I send it because “sometimes the depot finds that the hard drive is bad and they will replace it out of courtesy.” Crap. Next step – go visit the closest Apple store. Seems to me that they would be able to figure this out, order me a new display, let me go home with the computer, call me when the display comes in, another quick dash, slap the new display in, run back home in joy. Nope. Apparently fixing computers requires centralization (either Houston or Memphis). Apparently, screwdrivers and Apple stores are not allowed to co-exist in the same spatial coordinates.

So, I am now faced with a dilemma: what do I do for 5-10 business days without my laptop? Fortunately, I have a work laptop I can use. However, I refuse, refuse I tell you, to use Windows as the primary OS. The easy choice would have been Fedora – I already have three other machines at home running it. But looking around, I have a zillion choices of Linux and BSD distros to use. Without much scientific effort (read: a complete rectal extraction), I chose SuSE 10.1 (new shiny) as the base. The installation was awesomely easy. Linux has truly come a long way. The only thing not detected was my wireless. That I’m working on. Next was to try to use Evolution to connect to corporate email. Quickly, I got stymied – no Cisco VPN client available (at least to me). So, install VMware – install winblows + SP2 + all the other crap + Office + Cisco VPN for Windows. That gives me working access to the work stuff I need to do to pay for this computer habit of mine.

The box from Apple Care is on its way. The SuSE box is ready, with VMware giving me a back line to the office. With this comes my great experiment: how do you survive post-Windows, post-Mac, on Linux in the corporate world?

Stay tuned.

superduper!

Posted in Commentary, Fanboy, OOTT on May 25th, 2006 by juan

As computers are wont to do, my PB 17″ has developed a glitch – a single-pixel-wide, consistently-on column about .5 inches from the left of the screen. Unfortunately, this one means that I have to send it back to Apple to fix. That means 5-10 business days without my main access to work, entertainment, and creative outlet. Crap. So, first step is to back up the computer, right? Well, being a UNIX geek, my first thought was “I’ll just rsync the whole thing and be done with it.” Then I started to look at / and realized that this is not so easy a thing. Sure, I could copy the whole computer, but then what? How do I get it back if/when I have to recover from it? Single files would be easy, even directories would be easy, but what about the whole thing? I mean the entire drive? How do I make a bootable copy? Well – SuperDuper! is the answer. Not only does this tool do a superb job of copying everything, it knows how to do it without breaking the Mac interface. It makes even old UNIX storage guys like me smile. I can’t recommend it highly enough.
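For the record, here is roughly the naive rsync approach I almost took, wrapped in Python. The target volume and exclusion list are illustrative, and this is exactly where it falls short: it copies files, but it does not bless the target as bootable or guarantee every piece of Mac metadata survives. That gap is what SuperDuper! fills.

```python
# A sketch of the naive whole-disk rsync idea. Paths and exclusions are
# illustrative; -E asks Tiger's rsync to carry extended attributes and
# resource forks, and -x keeps it from crossing filesystem boundaries.
import subprocess

EXCLUDES = ["/dev/*", "/Volumes/*", "/Network/*", "/tmp/*",
            "/private/var/vm/*"]   # device nodes, other mounts, swap

cmd = ["rsync", "-aEx", "--delete"]
cmd += [f"--exclude={e}" for e in EXCLUDES]
cmd += ["/", "/Volumes/Backup/"]   # hypothetical target volume

subprocess.run(cmd, check=True)    # copies the files, but the result
                                   # still isn't a blessed, bootable clone
```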

strange happenings

Posted in Commentary, OOTT on May 19th, 2006 by juan

Waving Flag
Some of you already know this, but as of this morning, the odyssey is complete. I am now a Citizen!

Symantec Vision day 3 & (something different)

Posted in Commentary, Geekfest, Musings on May 10th, 2006 by juan

And now for something different. Instead of attending the marginally useful sessions available in the morning, I’ve been talking to some of the exhibitors. Here are some of them and my initial thoughts:

Index Engines

This is something very new. I have not had much time to dig in, but we have agreed to explore this much further. What these folks have is an appliance that sits in-band between the backup server and the back-end tape device. They pass the data through to the device without changing it, but in the process they tear the backup stream apart and index the content of the data. In other words, they crack the packets open, index them, and put them back together again before sending them downstream to the tape device. They claim to work with NetBackup, Legato, and TSM (maybe others, but I don’t recall and won’t know for a little while). Once they have all of this indexed, it becomes searchable and “auditable” through their appliance. It’s an interesting concept, so I’ll make sure to explore this further with them. I’m concerned about scalability, index sizes (although they claim huge savings here), and versioning issues (e.g. Legato changes OpenTape and the appliance now becomes the gateway for an upgrade).
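To make the concept concrete, here is my toy mental model of an in-band indexer – pass the stream through byte-for-byte while building a content index on the side. This is my guess at the general idea in Python, not anything resembling their implementation:

```python
# Toy in-band indexer: downstream sees identical bytes, while a word
# index is built as a side effect. Purely conceptual.
import re
from collections import defaultdict

index = defaultdict(set)   # word -> set of stream offsets

def pass_through(chunks):
    """Yield chunks unmodified, indexing their content on the way by."""
    offset = 0
    for chunk in chunks:
        for word in re.findall(rb"[A-Za-z]{3,}", chunk):
            index[word.lower()].add(offset)
        offset += len(chunk)
        yield chunk        # untouched bytes continue to the tape device

stream = [b"quarterly report: revenue up", b"backup of /home/juan"]
assert list(pass_through(stream)) == stream   # stream integrity preserved
print(sorted(index))       # the searchable side index
```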

TimeSpring

This is a new CDP product that has just come out of stealth mode. They work only in a Windows world, but they appear to have a pretty comprehensive solution for that space. The way they work is by inserting a small splitter driver into the kernel that splits I/Os between the “real” storage and their device. The I/Os that come into their device are time-stamped and cataloged. What’s really interesting is that they have agents that work with Exchange, SQL Server, and the file system. They claim that with these agents it is not necessary to bring the database to a consistent point in time to do full recoveries. They also have the ability to do single message or mailbox restores in Exchange from these continuous captures. In other words, there is no data loss. Interesting to say the least, but, again, I am interested in seeing the scalability and their roadmap. More to follow on these guys.
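Boiled down, the CDP idea looks something like this toy journal in Python: every write is split off and recorded in order, so the volume can be reconstructed as of any point in time. A sequence number stands in for the real timestamps, and none of this pretends to be TimeSpring’s actual driver:

```python
# Toy CDP journal: the splitter records every write, and any past state
# of the volume can be rebuilt by replaying the journal up to a point.
journal = []          # (seq, block_number, data) in arrival order
seq = 0               # stands in for a wall-clock timestamp

def write(block, data):
    global seq
    seq += 1
    journal.append((seq, block, data))   # the splitter's copy
    # ...the "real" write to primary storage would also happen here

def view_as_of(t):
    """Replay journaled writes up to time t for that point-in-time image."""
    image = {}
    for ts, block, data in journal:
        if ts > t:
            break
        image[block] = data
    return image

write(0, b"v1")
write(0, b"v2")
assert view_as_of(1)[0] == b"v1"   # the pre-overwrite state is recoverable
assert view_as_of(2)[0] == b"v2"
```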

Vormetric

So we all know now about encryption at the host level (filesystem, application level, column level in databases, etc.). Most of us also know about the new encryption appliances that work at the block or file level protocols (SAN/NAS/iSCSI). The big players here are Decru and NeoScale. What all of these fail to do is set a finer level of granularity of control over who sees the data. What these tools do is, in essence, protect against unauthorized access from users that are not authenticated by the system. For example, if we are using a Decru appliance to encrypt disk data (block level), users on the SAN that gain the ability to map LUNs will not be able to gain access to the data even if they remap the LUN to another host. The only access is through the host that has the encryption policy permissions to see the LUN in cleartext. But that’s where the problem lies. Anyone with root level access on that server can see ALL of the data on that device. So, the way people protect against that today is by implementing a software layer of encryption. In essence, they do dual layer encryption: one to bulk protect against LUN level access and the other at something like the column level within the database, so that key information is not visible to users with root/administrator level access to the system or database. This is where Vormetric comes in. Their offering is a combination of a software driver and an appliance that gains a finer level of granularity of access while also encrypting the information on these systems. The best way to think about this tool set is as a way to give root and administrator level users access to only the data they need in order to do their job – things like /etc directories in UNIX or the registry in Windows – while the sensitive application data is completely encrypted from them. The right users, and even the right application, would have full access to the information. So, the questions now are: how does this scale, how does this tie into the bulk encryption guys, and how does this work in DR/backup/etc. environments? Once again, a meeting has been set with these guys to figure this one out.
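Here is the access model reduced to a toy in Python: data at rest stays encrypted, and a per-user/per-process policy decides who gets cleartext back, so root can run the box without reading the application data. XOR stands in for real crypto purely to keep the sketch short, and the policy shape is my invention, not Vormetric’s:

```python
# Toy policy-gated encryption: root administers the host but still sees
# ciphertext for application data. XOR is a stand-in for real encryption.
POLICY = {("oracle", "dbwriter"): True,   # the app gets cleartext
          ("root", "any"): False}         # root does not

KEY = 0x5A

def xor_crypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)   # XOR is its own inverse

def read(stored: bytes, user: str, proc: str) -> bytes:
    if POLICY.get((user, proc), False):
        return xor_crypt(stored)          # decrypt for authorized access
    return stored                         # everyone else gets ciphertext

stored = xor_crypt(b"ssn=123-45-6789")
print(read(stored, "oracle", "dbwriter"))  # b'ssn=123-45-6789'
print(read(stored, "root", "any"))         # gibberish
```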

ProofSpace

Speaking of security and encryption, here’s a thought: how do you prove that what was originally stored and what is stored now is the same content? Sure, you can encrypt it the way Decru and Vormetric do, but a sufficiently skilled or authorized user could change the content of the data. All that has happened is that the data is in an encrypted format against non-authorized or unskilled attackers. How do you prove in litigation that you really are presenting the data that was originally there? Well, this little company thinks they have an answer. They were not a presenter or even an exhibitor at the conference, but I happened to sit in a spot that they conveniently happened to migrate to. They were making their pitch to a Symantec person to see if they could include this in their technology. I was certainly intrigued, and I suspect that this is going to become much, much more important shortly. Something to watch for.
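The underlying trick, as I understand this class of solution, is to commit to a digest of the content at store time and chain each digest onto the last, so silently editing old data breaks every later link. A toy Python version with hashlib; ProofSpace’s real scheme is surely fancier:

```python
# Toy integrity chain: each stored item's digest depends on everything
# stored before it, so tampering with old content is detectable.
import hashlib

chain = [b"genesis"]   # digests would be witnessed/published externally

def store(content: bytes) -> bytes:
    digest = hashlib.sha256(chain[-1] + content).digest()
    chain.append(digest)
    return digest

def verify(contents) -> bool:
    prev = b"genesis"
    for content, recorded in zip(contents, chain[1:]):
        prev = hashlib.sha256(prev + content).digest()
        if prev != recorded:
            return False
    return True

receipts = [store(b"contract v1"), store(b"invoice 42")]
assert verify([b"contract v1", b"invoice 42"])
assert not verify([b"contract v1 (edited)", b"invoice 42"])  # caught
```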

Data Domain

These guys clearly had great visibility at this conference. They hosted an end-user session and had those end users clearly articulate that their message is loud and clear: simple, simple, simple. The toaster approach is working.

Copan
These folks are spending a whole lot of time re-doing their strategy. Their basic entry into the market was the introduction of MAID (Massive Array of Idle Disks) technology. Their basic concept is that for tier 3 storage (archival storage), there is a need for very low-cost devices but with near-instantaneous access. So, what they developed was a way to house a huge number of SATA disk drives (900+) in a single frame. With current disk drive sizes, they have 3/4 of a petabyte of storage in a single rack! Their key insight was that most of this data will not be accessed, so there is little need to keep all of the drives spinning at the same time. They have some very sophisticated technology to figure out which drives are required to spin and which ones are not. Additionally, they have some disk management and exercise technology that allows them to spin up and verify disks and their long-term viability. Their measured (and claimed) result from this is that the lifetime of SATA drives is extended by a factor of four. This puts that drive technology in the ballpark of the reliability of the much more expensive SCSI drives. However, the cost of the drives, the cost of the power and cooling, and the cost of the management are much lower. Their initial introduction of this technology was as a VTL tape device. This didn’t work so well. The MAID stuff is cool, but so what? What’s really interesting is that they are now re-positioning themselves as a platform for long-term storage technologies. They have divided their system into levels of access. In my terms: 1) presentation/personality – SCSI/FCP, iSCSI, NFS/CIFS, VTL, etc.; 2) API/intelligence – a set of API tools that allow greater access (i.e. indexing, content aging, migration, protocol/API emulations). If and when this platform approach is deployed and a reality, this system becomes much more interesting. 750GB drives are out, 1TB drives are close, and soon even bigger drives will be available. So, if their platform is upgradable to take advantage of these higher densities and it’s also an open platform for storage, then this becomes a much more realistic thing. As with all of these, more questions remain and further investigation will be needed.
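The MAID idea in miniature, as a toy Python scheduler: keep a hard cap on how many drives may spin at once, spin up only what a request needs, and spin down the least recently used. My illustration of the concept, not Copan’s algorithm:

```python
# Toy MAID: at most MAX_SPINNING drives powered at a time, LRU spin-down.
from collections import OrderedDict

MAX_SPINNING = 4            # the power/cooling budget
spinning = OrderedDict()    # drive_id -> True, oldest access first

def read(drive_id):
    if drive_id in spinning:
        spinning.move_to_end(drive_id)        # already spun up
        return
    if len(spinning) >= MAX_SPINNING:
        idle, _ = spinning.popitem(last=False)
        print(f"spinning down drive {idle}")  # reclaim the power budget
    spinning[drive_id] = True
    print(f"spinning up drive {drive_id}")

for d in [7, 7, 12, 3, 99, 512]:
    read(d)
print("currently spinning:", list(spinning))  # [12, 3, 99, 512]
```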

Enough of that stuff. I did manage to attend some of the sessions today:

Enrique Salem Keynote

  • Consumer level threats
  • Consumer level technology has historically moved to the enterprise (Gartner says that between 2007 and 2012 the majority of technologies that enterprises adopt will come from consumer technologies – .8 probability in their words)
  • Consumers are losing confidence in the online business model. Symantec is going to focus on increasing the level of confidence
  • project voyager: proactive protection against phishing attacks
  • Big to-do about project Genesis – the integration of the security and optimization tools on the desktop (norton tools)
  • Security 2.0 vision: (search protection, system security, interaction protection, transaction security, convenience, identity protection, data protection)

Intel Virtualization

  • look at virtualiron (intel says they are very excited about these guys), cassatt, swsoft, platform (all vm vendors). On a personal note they did talk about Parallels – a Mac VM company.
  • their built-in VT technology allows hypervisors like Xen to run without guest OS modifications (previously needed to virtualize windows with Xen)
  • Evident is a chargeback software package for virtualized hardware
  • Itanium is coming out with a dual core VT enabled processor this summer. It shows “awesome performance in the lab with Xen”
  • Paul.Barr@intel.com

NetBackup Future Directions

  • cheap disk is leveraged in data protection
  • system hardware is being commoditized (migration to lower cost intel/amd/etc platforms, plus server virtualization)
  • Pervasive (cheap) networks making apps and data more accessible (changing shape of dr, critical data in remote office, web based management)
  • four key areas (unified platform, best in disk based backup, complete application protection & recovery, intelligent data management platform)
  • Netbackup 6.5 1h2007, disk focus
  • what drives unified protection – geo distribution of data, multiple architectures/technologies/paradigms, multiple tools for similar tasks (prolific management and monitoring/troubleshooting) + search and restore
  • integrated protection (remote office/desktop+laptop, files and apps, hetero mix of hw)
  • Leverage puredisk integration
  • unified CDP management (instant restore and APIT restore, both software and hardware based “cdp engines”, file and block based solutions)
  • web-based user and management interfaces (unified backup management & reporting, end-user search & restore – federated)
  • netbackup 6.5 will have a puredisk gateway concept (netbackup will now have a puredisk storage unit): stage NBU backups from disk to puredisk for SIS/replication, use NBU to write recovery tapes for puredisk clients (a toy sketch of the SIS idea follows this list)
  • unification of reporting (NOM will handle management of data protection, CC-Service will handle business of data protection) (cc-service is NBAR on steroids – optimized for trending, planning, analytics, designed for outbound reporting (NOM is for administrator reporting), measuring costs, assess risk and exposure, verify compliance)
  • DISK DISK DISK
  • (traditional tape, disk staging, virtual tape, snapshot, data-reduction, CDP)
  • traditional tape – sso, vault
  • disk – sso for disk, san client, advanced client
  • vt – hardware, vto
  • snapshot, advanced client
  • data reduction, hw partners, puredisk
  • cdp – cdp for files, cdp for blocks/apps
  • stage 1 (disk improves tape backup), stage 2 (disk is the backup & recovery – data reduction, d2d, backup set, tape for archive, sso for disk), stage 3 (online recovery environment – snapshots, cdp, application dr, live data replication, zero RPO/RTO)
  • sso for disk (shared disk pool for all platforms) will also do de-dup and replicate – (allocation happens on a volume/volume basis – think of each volume like we think of tape on SSO today) (will allow restores from a volume while others are writing to it – will leverage snapshots to do this)
  • SAN clients with an sso agent will move data through a media server to sso volumes
  • cdp – (application level you have to get a snapshot of the transaction logs) (fs you need to get a snapshot at a consistent point) (volume level – block level index store – must be mapped back to fs or application level in order to get a consistent state)
  • CDP (federated management – block based cdp, file based CDP) = the best of backup + replication
  • application protection & recovery:
  • application support for all protection methods (must know app level stuff regardless of data movement transport)
  • granular backup and restore for key apps (beyond hot backup)
  • Application stack recovery (from bmr to oracle & exchange) (system-> data -> application)
  • support for virtual machine architectures (protection for vmware and other VM solutions, leverage VM arch for advanced solutions)
  • (cross server snapshot for horizontal application consistency is going to be considered – was a question from the audience)
  • in esx 3.0 (snapshot of a vm is mounted on another host, backup occurs on alternate host)
  • bmr is going to be integrated with Windows PE (preinstallation environment) (gives bmr the “livestate touch”) (boot winpe from cd & run in RAM for additional speed, no multiple reboot) BMR with WinPE available in Summer with 6.0
  • intelligent data management platform:
  • federated search and restore (across puredisk, netbackup – all protection methods)
  • storage resource management, data grouping (understand data utilization – leverage backup catalogs) (“data collections” concept – much less “server centric” view)
  • Integrated backup, archiving, and HSM (collect data once, use it for multiple purposes)
  • Data security and compliance (data encryption, compliance and data litigation support tools)
  • audit logging and LDAP in the 7.0 timeframe
  • claiming that because all data is visible, cataloging and indexing turns it into “agent less SRM”
  • will consider content based searching in addition to file-system meta-data index searching
  • 6.5 bighorn 1h2007, netbackup 7.0 1h2008 = complete web based UI, complete CDP, unified search, much more
  • puredisk 6.1 agents for db, pd 6.5 enterprise edition (at same time as nbu 6.5), puredisk 6.6 desktop/laptop = 2h 2007, 7.0 nbu integrated management reporting, search/restore agents (nbu 7.0 time frame)
  • 6.5 = the disk backup release (sso for disk, advanced disk staging (create sla’s for backups, delete from disk pool by data importance -not age, retention guarantees), netbackup puredisk integration, san client, snapshot client (adv client – for all tier 1 disk), vtl-to-tape (api for VTL to dup to tape – VTL does data movement)
  • Unified backup management (nom + cc-service), lots more agent & platform support (all in 6.5)
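Since SIS/de-dup shows up all over these plans (puredisk, “data reduction”), here is the core trick in toy Python form: chunk the data, key each chunk by a content hash, and store duplicates exactly once. A sketch of the concept only, not how puredisk actually does it:

```python
# Toy single-instance store: files become lists of chunk digests, and
# identical chunks are stored once no matter how many files share them.
import hashlib

CHUNK = 4096
store = {}                              # digest -> chunk bytes

def ingest(data: bytes):
    """Return the file's recipe: an ordered list of chunk digests."""
    recipe = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk) # duplicates cost nothing new
        recipe.append(digest)
    return recipe

def restore(recipe) -> bytes:
    return b"".join(store[d] for d in recipe)

a = ingest(b"x" * 10000)
b = ingest(b"x" * 10000)                # a second copy adds no new chunks
assert restore(a) == b"x" * 10000
print(f"{len(store)} unique chunks stored for two identical 10KB files")
```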

The presenter, Rick Huebsch, did not talk about NBU and Microsoft, so I approached him. His comment was “Vista and all that will be in 6.5.” He acknowledged that the lack of MS material was conspicuous, but only because he didn’t talk about it, not because there isn’t a future in it. I fully believe that all the “right” MS things will be done with NetBackup. But we’ll keep an eye on that.

Some more thoughts:

The theme of the show was clearly focused around the Symantec core strengths. They did not minimize the importance of the Veritas enterprise products, but they sure did emphasize the end-user and mid-range products (think Norton product lines, think Backup Exec). I’m not sure that this indicates a shift in priorities, but it is clearly something to watch. The “feel” of this Vision was much different than last year’s. Last year’s keynotes were much more enterprise focused. This year’s spoke of enterprise, but from the aspect of Windows and security. The storage elements of the Veritas product lines were not the centerpiece. I wonder if Symantec should not have different days or a different session where they speak of this technology. It’s probably me being old fashioned (in the way a 10-year-old industry can be old fashioned), but the storage stuff is just as hard and it’s getting harder. The bulk of the customers I saw were coming to see this space, not the Super Norton 3000++. The partner show was almost exclusively storage centric. There were a few policy-engine type of people, Intel, and Dell – but that’s it. Weird.

Remember that my thoughts are phrenological in accuracy.

Symantec Vision – day 2.

Posted in Commentary, Geekfest on May 9th, 2006 by juan

How does Microsoft’s roadmap affect backup?

Vista

  • Much of the same – just 147 different flavors of Vista (just kidding – 12 or so). Nothing new as far as backup, other than tighter integration with the VSS services.
  • MS is coming out with DPM (Data Protection Manager) – note that it’s going to be co-opetition with Symantec products

Exchange 2007

  • Exchange 2007 will be 64bit only
  • 50 databases per server and 50 storage groups (up to 5 dbs per storage group)
  • No MAPI support
  • Preferred backup method will be VSS + replication
  • exchange will have two replication options (LCR and CCR) (lcr = local continuous replication – local replica to distinct disk) (ccr = cluster continuous replication = mscs clustered replication between nodes)
  • LCR is a log based replication to local server. Backup can happen from the replica
  • CCR is based on MSCS – log shipping to replica server distinct from primary server
  • VSS backups of exchange: the backup program will require an Exchange Server 2007 aware VSS requestor, eseutil is no longer required, Windows Server 2003 NTBackup does not have a VSS 2007 requestor, Exchange Server 2007 db and transaction logs must be backed up via the VSS requestor
  • vss backup methods (full: .edb, .log, and .chk are backed up, logs are truncated after backup completes), (copy: the same as full but no truncation), (incremental: only *.log, truncates logs), (differential: *.log all the way back to the full backup, but no truncating) – encoded in the sketch after this list
  • Recovery (vss now allows restores directly to a recovery storage group), (restores to alternate location, different path, different server, different storage group), (log files can be replayed without mounting the database first).
  • recovery will be much more granular: mailbox or even message level (presentation is geared around backupexec) **note must ask NetBackup team how much they are going to tie into this infrastructure.
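To keep the four VSS backup methods straight, here they are encoded as a little table in Python: which files each type copies and whether the transaction logs get truncated. The structure is mine; the semantics come straight from the notes above:

```python
# The session's four VSS backup types, as (files copied, truncates logs?).
BACKUP_TYPES = {
    "full":         ({"*.edb", "*.log", "*.chk"},          True),
    "copy":         ({"*.edb", "*.log", "*.chk"},          False),
    "incremental":  ({"*.log"},                            True),
    "differential": ({"*.log back to the last full"},      False),
}

def describe(kind: str) -> str:
    files, truncates = BACKUP_TYPES[kind]
    action = "truncates" if truncates else "keeps"
    return f"{kind}: copies {sorted(files)}, {action} the logs"

for kind in BACKUP_TYPES:
    print(describe(kind))
```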

Scott McNealy Key-Note

  • we are adding 3M users to the internet per week
  • 390GB are created every second
  • 80% is archived
  • the cause: eliminate the digital divide
  • 2 watts/thread today, next gen will be 1 watt/thread, thin client is 4 watts/desktop
  • open sourced everything
  • best of breed is the driver of complexity (analogy with Frankenstein)
  • asking for companies to buy the pre-integrated stuff from people like sun – (made analogy to CIO’s buying 10 speed bikes, but paying extra for it being unassembled)
  • it’s not about storage, it’s about retrieval.
  • a = cost, b = NPV, c = not being used – how do I migrate out of what I just bought – what’s the migration path?
  • c is why you want something that is open source (removes the barrier of entry and the barrier of exit)
  • sun’s approach: (servers/storage/software/services)=security
  • sun has deployed a 50K-user EV installation that is going to 350K users over this year
  • 37% of the world’s data sits on sun platform
  • sun public grid is on = $1/CPU-hr (@ network.com)
  • will be rolling out $1/GB-day storage utility soon
  • trying to convince the crowd that not running java is like not having the right tool for the job (i.e. our utility runs on solaris 10, so if your app is not written to support it, the problem is your app, not the utility. Said “think about the hair dryer – if it’s not 110V and you are in the US, is the problem with the power grid or with the hair dryer?”)

**general notes

Storage Foundation Basic is free: www.symantec.com/sfbasic
It’s a free edition of Storage Foundation for Linux (RedHat ES 4 and SuSE 9) and Solaris (9, 10). Limited to 4 volumes, 4 file systems, and/or 2 processor sockets.

Windows Solutions for Storage & Server Virtualization

  • VM offers – Consolidation, Flexibility
  • TCO is better (lower datacenter costs – heating, cooling, ac, power, etc), reduced management & operational costs, increased agility
  • VM is growing to about 40% of server shipments (audience poll showed that all of them were running ESX – no one was running MS VM)
  • Challenges (storage becomes harder, increases the requirements for availability)
  • Storage concerns (must shut down the VM server to expand VMDK files and then you have to run PartitionMagic to expand the partition) -> that leads to reduced storage utilization as a result
  • Limited snapshot support – no support for VSS off-host backup, no split-mirror snapshot within a VM for quick recovery
  • Limited software raid capabilities in a HA/DR cluster (simple spanned volume support only, no ability for physical-to-virtual campus clustering, no ability for virtual-to-virtual campus clustering)
  • no multipath load balancing policies
  • optimization (recommendations: use VxVM for windows, use raw device mapping for enterprise applications, use VMFS for non-critical workloads)
  • VMWare recommends raw device mappings for snapshots, off host backups, clustering, etc
  • HP (vpars, npars, vm, secure resource partitions), aix (lpars, micropartitions), solaris (zones, containers), MS virtual server (win, linux), xen (solaris, mswin, linux), vmware (solaris, mswin, linux)
  • symantec key caps (availability – VCS, DR – replication, VCS, storage availability – mirroring, snapshots, etc, P->V and V->P physical-to-virtual migrations, provisioning across all platforms)
  • advanced storage management within a virtual server (common physical and virtual server storage management, drag and drop storage migration)
  • increased storage availability (overcome ESX limitation of shutting down server for storage resizing)
  • Proactive storage monitoring & notification (ms MOM, SFW management pack, Storage alerts – SNMP+Email+Pager, auto grow policies)
  • HA/DR recovery using software mirroring (software mirroring across array enclosures within a VM using MSCS or VCS enables p2v and v2v ha/dr solutions)
  • veritas flashsnap provides builtin VSS provider and VSS requestor support to allow creation of MS supported and approved snapshots
  • hardware independent (only solution that supports DAS, FC SAN, or iSCSI based storage. Works seamlessly within ESX)
  • Fully integrated with Netbackup 6.0 and BackupExec 10d
  • centralized storage management (storage management server – free app in sf 5.0, gives a central view of all storage services across VM and physical machines) (this is the only feature so far in 5.0 – all else is available before 5.0)
  • VCS runs and is supported in VM to VM and P to VM configurations
  • VCS can be run on the console OS (the ESX OS itself) – so vcs runs in one place and not the individual VM nodes – so single instance HA (one cluster license, standby nodes are truly offline, application no longer needs to be “clusterized”).
  • You can run VCS agent to monitor individual apps within VM when ESX is VCS clustered. (this will be in VCS 5.0)
  • recommend that for campus cluster to use software mirroring, for geo clustering to use async array based replication
  • VVR only runs in the VM.
  • VCS understands VMotion so that the cluster won’t freak out if you move it.
  • vcs 5.0 for ESX – august (2.5.3 & 3.0 support, VMotion initiated from the VirtualCenter UI works with vcs, application monitoring within virtual machines using a lightweight agent, monitoring of virtual switches and virtual disks, will work with VMotion for stateful failover)

Ingesting PST files and old backup tapes.

**note use this as a tool to help customers understand the value of moving to EV sooner rather than later. Only one of these events will more than pay for this whole process.

  • Create a historical vault (recover them into EV)
  • Migrate PST/NSF files into EV
  • Use discovery accelerator
  • Initial investment is high, but ROI is quick (customer quote “What used to take three months to never is now completed in a 24-hour window”)
  • how to do this (planning, many resources (people, money, machines), consultants should be engaged) + (outsource restore from backup tapes) + (outsource PST/NSF recovery) + (use PST migrator in EV 6.0)
  • (set up multiple exchange servers to host restores of the exchange server) -> (configure ev to archive each of these servers) -> (restore oldest tape) -> (archive all data) -> (continue restoring daily/weekly/monthly tapes) [key is that EV is configured to only suck in stuff newer than the last tape import – see the watermark sketch after this list]
  • Use xmerge to get a copy of each mailbox in a pst file for each backup tape -> use xmerge to restore each pst to a new exchange server -> xmerge of the first will take longer, each additional will be quick and automatically de-duplicate [xmerge is an MS tool to create a PST file from a mailbox in exchange – and vice versa]
  • Backup tapes should be for DR, not discovery
  • For non-regulated uses, keep journaled messages for same amount of time as you keep backups
  • For “records” have users file to folders with assigned retention periods
  • Recommending use of (renewdata, IBIS consulting, national data conversions, procedo, avanade)
  • Symantec developing certification process (web site will post these partners)
  • Oliver Group has a de-duping process before EV ingestion (200TB of email data in about 120 days)
  • Walt Kashya – from Avanade (a Microsoft and Accenture joint venture) – 4000 employees and 39 locations worldwide – has 10 of the 80 worldwide exchange rangers
  • Did a project for the State of NM (20K users on Exchange 5.5 and 2000, GroupWise, and Domino; 11K users with PSTs; created a single AD directory; EV was used with OWA to replace PSTs)
  • migration steps (provisioning, redirect smtp traffic, export data and rename pst’s, upload pst’s into staging area, identify user’s vault store (they had six of them), ingest into ev)
  • locate and migrate process: pst locator task, pst collector task (look via file and/or registry search) and then hands that to PST migrator
  • pst locator runs in three stages (1- identify domains, 2- identify computers, 3-locate pst) meant so that you can limit scope and verify before running
  • pst collector runs continuously during the window, can send a message to users that the data is being collected and to inform them of changes, can control whether to wait for backup before migration (migration status changed to “ready to migrate”, archive bit is set)
  • pst migrator moves data into EV
  • client-driven pst migration drives the process from the client side (this is a step-wise migration): a client is installed on each client machine, mail is sent to the user, and data is then sent back to the staging area. (reasons to do this: disconnected users, pre-fill offline vault, where administrators do not have access to client workstations, when continuous access to PST data is essential)
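That “only ingest what’s newer than the last tape” trick is just a high-water mark. A toy Python version – the message shape and field names are hypothetical:

```python
# Toy watermark ingest: each tape may overlap the previous one, but only
# messages newer than the high-water mark get archived.
from datetime import datetime

last_imported = datetime.min          # persisted between tape restores

def ingest_tape(messages) -> int:
    """Archive only messages newer than anything already ingested."""
    global last_imported
    new = [m for m in messages if m["sent"] > last_imported]
    # ...each message in `new` would be handed to EV here...
    if new:
        last_imported = max(m["sent"] for m in new)
    return len(new)

tape1 = [{"sent": datetime(2005, 1, 5)}, {"sent": datetime(2005, 2, 1)}]
tape2 = [{"sent": datetime(2005, 2, 1)}, {"sent": datetime(2005, 3, 9)}]
print(ingest_tape(tape1))             # 2 – everything is new
print(ingest_tape(tape2))             # 1 – the overlapping message is skipped
```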

Jeremy Burton Keynote

  • 75% of fortune 500 litigation involves discovery of email communications, 75% of a company’s IP is in email
  • email is just the beginning – instant messaging is the next thing – also skype (voip)
  • step 1 (keep systems up and running), step 2 (keep bad things out), step 3 (keep important information in), step 4 (keep things as long as needed), step 5 (find things and mine them)
  • Backupexec has a “google like recovery interface”
  • Eagle project is: backupexec tied into google desktop!, has its own retrieval search tool – ties into the CDP product
  • instant messaging threats grew 1693% in 2005 (use IMLogic to protect against this)
  • three kinds of bad guys (hackers, spies, thieves)
  • internal users get security of internal inclusion
  • smtp gateway does outbound email filtering as well as inbound filtering
  • look up “billion-dollar data storage error” in google
  • nasdaq has notified that IM must be treated the same way that email is
  • message is that archiving is the key to managing all of this information (secure, rationalized, retained, expired, future proofed, indexed, and categorized)
  • the archive will have an API so that it can be expanded by 3rd parties (creating the next business objects or cognos, but on unstructured data)
  • analytics of email is going to be the final step (which email is forwarded most, who uses profanity the most, etc…)
  • showing a lot of integration between regulation management tools (very future stuff)

Symantec Vision (née Veritas Vision), day 1

Posted in Commentary, Geekfest on May 8th, 2006 by juan

Today is the first day of Symantec’s Vision end user conference. Last year’s conference was still under the Veritas name, so it was interesting to see how the dynamics have changed since that last one. It is very clear who bought who on this one. I have yet to see any of the traditional Veritas guys take charge. This conference is well put together, but there is just too much to cover. The notes below come from my experience during the few sessions I was able to attend. They are very much stream of consciousness, so disregard any formatting, spelling, or even semantic errors.

Kick-off Keynote

  • Vision is to secure information, compliance, and availability
  • System failures and user failure are as important to consider as natural or externally driven events
  • 130 large-scale data breaches last year. Most of the “hackers” are gone and are now being replaced by criminal organizations that are more insidious as well as more organized.
  • 15% of gross annual revenues go to R&D
  • focusing more and more on the integration of classic and veritas products
  • messaging is about inbound and outbound traffic
  • managed security software model is being delivered (will move to managed backup and servers)

Futures in Enterprise Vault

  • EV is now part of Enterprise Message Management (Brightmail, IMLogic)
  • Tripled the engineering staff since the KVS acquisition
  • Did not show AXS/One in the Forrester or Gartner thing
  • 75% of a typical company’s IP is in email
  • 75% of fortune 500 litigation involves discovery of email communications
  • 79% of companies accept email as confirmation of a transaction
  • Instant messenger traffic is being considered in the same class as email (this is something that AXS/One had a couple of years ago).
  • Movement is towards pro-active compliance management (keeping things in before they go out and vice-versa). Today most things are handled after the fact
  • Analytics will be the step after that (who’s talking to whom)
  • Messaging is more than email: email (exchange, notes, smtp) (90% of the problem – but the rest is growing even faster), messaging (IM: Yahoo, AOL, Gmail, MSN, etc), Collaboration (files, web, sharepoint), digital communications (voice mail, voip, etc).
  • VoIP is being considered as archive data
  • blogs?
  • water cooler conversations (large trade houses are looking at recording common meeting grounds)
  • email as an asset (user value, business value, business history)
  • email as a liability (smoking guns, cost of review)
  • Risk classes: 1) No archiving – high risk, high cost, 2) expire archive after x days, delete too soon?, 3) retain items forever, litigation?, 4) retain records by category, nirvana
  • customer objective, (reduce storage cost, reduce litigation risk, reduce review costs)
  • records management does work because of (email volume, informality, universality)
  • challenges (end-user push back to policy, trusting automated systems, can’t go back – no undo on a delete)
  • Archiving needs to be intelligent (intelligent classification, intelligent retention, intelligent management)
  • email encryption? (note: the hands up response from the crowd seemed to show moderate interest in encryption)
  • databases, sap, bloomberg, and ECM are future data sources (IM is done now through the IMlogic acquisition)
  • EV 6.0 sp2 now works with VCS (if email is mission critical, archiving is mission critical!) (later in the year MS cluster server will also work)
  • legal hold utility – (separate utility – pick a DA case, place items on hold, no expiry until lifted, track multiple holds) (requires DA 5 sp4)
  • sharepoint shortcuts (analogue to exchange shortcuts, adds to version archiving in ev6, age driven threshold, click on shortcut to retrieve, restore back to sharepoint)
  • ECM/RM system (enterprise content, records management) API. (user drags “file plan” in ecm outlook plugin to classify item, ECM system now can define retention and metadata, item stays in ev but managed by ECM)
  • ROADMAP
  • July time frame DA 6.0 – (legal hold integrated in DA, federated search across Directories (multiple EV), more flexible boolean search criteria)
  • EO CY 2006 EV 7.0 – (enterprise scale admin : roles based admin, granular policies (move evpn to gui), mscs support, improved backup), best user experience (desktop search integration), advanced reporting, improved supportability
  • ++ (user file share archiving – emc celerra), E-discovery – API to export to third party case management tools (tenex, concordance, etc.), lotus mailbox archiving, simpler exchange install
  • innovation: intelligent archiving – (content based classification, ecm connectors), EMM (tighter IM integration, tighter AV integration, encryption and RMS), file lifecycle mgmt (storage exec (building storage exec into EV!) + FSA)
  • working to add Mac support! (they are hearing more and more often that the mac is part of the enterprise)
  • 7.0 is going to be a significant upgrade – so plan, plan, plan carefully
  • next gen (7.0+):
  • enhanced single instance storage (block level)
  • xml storage format
  • advanced retention and disposition (e.g. reclassification)
  • microsoft 2007 support (exchange 12, office 12, goal to ga with ms)
  • next gen scalability (enhanced indexing technology), distributed deployments, scale-out model
  • backup and archiving integration (unified search, integrated data capture from exchange)

DataDomain Customer Panel

  • Overriding comment from all of them was “simple to install, simple to use, simple to justify”
  • No surprises as to where it fits (don’t use tape, disk becomes primary recovery tool, tape becomes primary archive tool)
  • Customer perception was skeptical at first, but then turned to “prove it to me”. Very much what we see in the field.

CTO Keynote (Ajei Gopal)

  • Big thing is the open source movement
  • Innovation over the next 10 years is going to go from America/Europe to China/India and will become a global pool of innovation
  • Data is now spread everywhere (from data center to distributed to internet 1.0)
  • Identity information will become key data to protect
  • Fraud protection
  • Bill protection
  • Futures:
  • Active policy management framework (policy central console)
  • Business services are unit of measure (application with one or more servers/storage/etc)
  • Has service templates (comes with best practices) (defines SLA’s that map down to backup rules, dr rules, volume rules, etc).
  • Autodiscovers topology of services (servers, san, storage).
  • Based on what things need to be done they will either a) generate automatic actions, b) guide an administrator through a process, or c) generate a ticket to be serviced by one of the team (backup, san, etc.)
  • Monitors applications (like netbackup) for compliance to policy settings (best practices stuff). Does this on the fly and in real time.
  • SDSA (symantec database security auditor) monitors and manages access and output from a database. operates from a paranoid worldview.
  • easily monitors for sql injection attacks (learns from the user or self-learns at first and then monitors after that.)
  • today it is detection mode only, but could be a blocking technology down the road

That’s it for today. More will come tomorrow.