Symantec Vision – day 2.
How does Microsoft’s roadmap affect backup?
Vista
- Much the same – joked that there are 147 different flavors of Vista (actually 12 or so). Nothing new for backup other than tighter integration with the VSS services.
- MS is coming out with DPM (Data Protection Manager) – note that it will be coopetition with Symantec products
Exchange 2007
- Exchange 2007 will be 64bit only
- 50 databases per server with 50 different storage groups (5 DBs per storage group)
- No MAPI support
- Preferred backup method will be VSS + replication
- Exchange will have two replication options, LCR and CCR (LCR = local continuous replication – a local replica on a distinct disk; CCR = cluster continuous replication – MSCS-clustered replication between nodes)
- LCR is a log based replication to local server. Backup can happen from the replica
- CCR is based on MSCS – log shipping to replica server distinct from primary server
- VSS backups of Exchange: the backup program will require an Exchange Server 2007-aware VSS requestor; eseutil is no longer required; Windows Server 2003 NTBackup does not have an Exchange 2007 VSS requestor; the Exchange Server 2007 DB and transaction logs must be backed up via the VSS requestor
- VSS backup methods: full (.edb, .log, and .chk files are backed up; logs are truncated after the backup completes), copy (same as full but no truncation), incremental (only *.log; truncates logs), differential (*.log all the way back to the full backup, but no truncation)
- Recovery: VSS now allows restores directly to a recovery storage group; restores to an alternate location (different path, different server, different storage group); log files can be replayed without mounting first
- Recovery will be much more granular: mailbox or even message level (presentation is geared around Backup Exec). **Note: must ask the NetBackup team how much they are going to tie into this infrastructure.
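The four VSS backup methods above differ only in which file types they select and whether the transaction logs are truncated afterward. A minimal sketch of that decision table, with illustrative file names (not a real storage group):

```python
# Sketch of the four Exchange 2007 VSS backup types described above:
# which file extensions each type selects, and whether transaction logs
# are truncated after the backup completes.

BACKUP_TYPES = {
    # type:         (extensions backed up,     truncate logs after?)
    "full":         ({".edb", ".log", ".chk"}, True),
    "copy":         ({".edb", ".log", ".chk"}, False),
    "incremental":  ({".log"},                 True),
    "differential": ({".log"},                 False),
}

def plan_backup(backup_type, files):
    """Return (files to back up, whether logs get truncated)."""
    extensions, truncate = BACKUP_TYPES[backup_type]
    selected = [f for f in files if any(f.endswith(e) for e in extensions)]
    return selected, truncate

# Hypothetical storage-group contents, just for illustration.
storage_group = ["priv1.edb", "E0000000001.log", "E0000000002.log", "E00.chk"]

for kind in BACKUP_TYPES:
    selected, truncate = plan_backup(kind, storage_group)
    print(f"{kind:12s} -> {len(selected)} files, truncate logs: {truncate}")
```

This is why a differential grows over time (it carries every log back to the last full) while an incremental stays small but depends on the whole chain since the last truncation.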
Scott McNealy Key-Note
- we are adding 3M users to the internet per week
- 390GB are created every second
- 80% is archived
- the cause: eliminate the digital divide
- 2 watts/thread today; next gen will be 1 watt/thread; thin client is 4 watts/desktop
- open sourced everything
- best of breed is the driver of complexity (analogy with Frankenstein)
- asking companies to buy the pre-integrated stuff from people like Sun (made an analogy to CIOs buying 10-speed bikes but paying extra for them to arrive unassembled)
- it’s not about storage, it’s about retrieval.
- a = cost, b = NPV, c = what's not being used – how do I migrate out of what I just bought – what's the migration path?
- c is why you want something that is open source (removes the barrier of entry and the barrier of exit)
- sun’s approach: (servers/storage/software/services)=security
- Sun has deployed a 50K-user EV installation that is going to 350K users over this year
- 37% of the world’s data sits on sun platform
- Sun's public grid is live – $1/CPU-hr (at network.com)
- will be rolling out a $1/GB-day storage utility soon
- trying to convince the crowd that not running Java is like not having the right tool for the job (i.e., their utility runs on Solaris 10, so if your app isn't written to support it, the problem is your app, not the utility; said, "Think about the hair dryer – if it's not 110V and you're in the US, is the problem with the power grid or with the hair dryer?")
**general notes
Storage Foundation Basic is free: www.symantec.com/sfbasic
It’s a free edition of Storage Foundation for Linux (Red Hat ES 4 and SuSE 9) and Solaris (9, 10). Limited to 4 volumes, 4 file systems, and/or 2 processor sockets
Windows Solutions for Storage & Server Virtualization
- VM offers – Consolidation, Flexibility
- TCO is better (lower datacenter costs – heating, cooling, AC, power, etc.), reduced management & operational costs, increased agility
- VM is growing to about 40% of server shipments (an audience poll showed that all of them were running ESX – no one was running MS Virtual Server)
- Challenges (storage becomes harder, increases the requirements for availability)
- Storage concerns (must shut down the VM server to expand VMDK files, and then you have to run PartitionMagic to expand the partition) -> that leads to reduced storage utilization
- Limited snapshot support – no VSS support for off-host backup, no split-mirror snapshot within a VM for quick recovery
- Limited software RAID capabilities in an HA/DR cluster (simple spanned volume support only, no ability for physical-to-virtual campus clustering, no ability for virtual-to-virtual campus clustering)
- no multipath load balancing policies
- optimization recommendations: use VxVM for Windows, use raw device mapping for enterprise applications, use VMFS for non-critical workloads
- VMWare recommends raw device mappings for snapshots, off host backups, clustering, etc
- HP (vPars, nPars, VM, Secure Resource Partitions), AIX (LPARs, micro-partitions), Solaris (zones, containers), MS Virtual Server (Win, Linux), Xen (Solaris, MS Win, Linux), VMware (Solaris, MS Win, Linux)
- Symantec key capabilities (availability – VCS; DR – replication, VCS; storage availability – mirroring, snapshots, etc.; P->V and V->P physical/virtual migrations; provisioning across all platforms)
- advanced storage management within a virtual server (common physical and virtual server storage management, drag-and-drop storage migration)
- increased storage availability (overcome ESX limitation of shutting down server for storage resizing)
- Proactive storage monitoring & notification (MS MOM, SFW management pack, storage alerts – SNMP + email + pager, auto-grow policies)
- HA/DR recovery using software mirroring (software mirroring across array enclosures within a VM using MSCS or VCS enables p2v and v2v ha/dr solutions)
- Veritas FlashSnap provides built-in VSS provider and VSS requestor support to allow creation of MS-supported and approved snapshots
- hardware independent (only solution that supports DAS, FC SAN, or iSCSI based storage. Works seamlessly within ESX)
- Fully integrated with Netbackup 6.0 and BackupExec 10d
- centralized storage management (Storage Management Server – a free app in SF 5.0 that gives a central view of all storage services across VMs and physical machines) (this is the only 5.0-only feature so far – everything else is available before 5.0)
- VCS runs and is supported in VM to VM and P to VM configurations
- VCS can be run on the console OS (the ESX OS itself) – so VCS runs in one place and not in the individual VM nodes – single-instance HA (one cluster license, standby nodes are truly offline, the application no longer needs to be “clusterized”)
- You can run a VCS agent to monitor individual apps within a VM when ESX is VCS-clustered (this will be in VCS 5.0)
- recommendation: for campus clusters use software mirroring; for geo clustering use async array-based replication
- VVR only runs in the VM.
- VCS understands VMotion, so the cluster won’t freak out if you move a VM
- VCS 5.0 for ESX – August (ESX 2.5.3 & 3.0 support; VMotion initiated from the VirtualCenter UI works with VCS; application monitoring within virtual machines using a lightweight agent; monitoring of virtual switches and virtual disks; will work with VMotion for stateful failover)
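The "auto-grow policies" item above is essentially a monitor-and-extend loop: watch a volume's utilization and extend it when a threshold is crossed, instead of shutting the VM server down to expand a VMDK. A hedged sketch of that idea – the Volume class, 80% threshold, and 25% growth increment are illustrative assumptions, not SFW's actual API:

```python
# Illustrative auto-grow policy: grow a volume when utilization crosses a
# threshold, and record a notification (stand-in for SNMP/email/pager).

class Volume:
    def __init__(self, name, capacity_gb, used_gb):
        self.name, self.capacity_gb, self.used_gb = name, capacity_gb, used_gb

    @property
    def utilization(self):
        return self.used_gb / self.capacity_gb

def apply_autogrow(volume, threshold=0.80, grow_factor=1.25, alerts=None):
    """Grow the volume by grow_factor when utilization >= threshold."""
    if volume.utilization >= threshold:
        old = volume.capacity_gb
        volume.capacity_gb = round(old * grow_factor, 1)
        if alerts is not None:
            alerts.append(f"{volume.name}: grew {old}GB -> {volume.capacity_gb}GB")
        return True
    return False

alerts = []
vol = Volume("vm_data", capacity_gb=100.0, used_gb=85.0)
apply_autogrow(vol, alerts=alerts)
print(alerts)
```

The real feature presumably runs this check on a schedule and grows the underlying volume online; the point is that the policy fires before the VM hits a full disk.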
Ingesting PST files and old backup tapes.
**note: use this as a tool to help customers understand the value of moving to EV sooner rather than later. Only one of these events will more than pay for the whole process.
- Create a historical vault (recover them into EV)
- Migrate PST/NSF files into EV
- Use discovery accelerator
- Initial investment is high, but ROI is quick (customer quote “What used to take three months to never is now completed in a 24-hour window”)
- how to do this: (planning; lots of resources – people, money, machines; consultants should be engaged) + (outsource restore from backup tapes) + (outsource PST/NSF recovery) + (use the PST migrator in EV 6.0)
- (set up multiple Exchange servers to host the restores) -> (configure EV to archive each of these servers) -> (restore the oldest tape) -> (archive all data) -> (continue restoring daily/weekly/monthly tapes) [the key is that EV is configured to only ingest items newer than the last tape import]
- Use ExMerge to get a copy of each mailbox as a PST file for each backup tape -> use ExMerge to restore each PST to a new Exchange server -> the first tape will take longer; each additional one will be quick and automatically de-duplicates [ExMerge is an MS tool to create a PST file from a mailbox in Exchange – also vice versa]
- Backup tapes should be for DR, not discovery
- For non-regulated uses, keep journaled messages for same amount of time as you keep backups
- For “records” have users file to folders with assigned retention periods
- Recommending use of (Renew Data, IBIS Consulting, National Data Conversions, Procedo, Avanade)
- Symantec developing certification process (web site will post these partners)
- Oliver Group has a de-duping process before EV ingestion (200TB of email data in about 120 days)
- Walt Kashya – from Avanade (a Microsoft and Accenture joint venture) – 4,000 employees and 39 locations worldwide – employs 10 of the 80 worldwide Exchange Rangers
- Did a project for the State of NM (20K users on Exchange 5.5 and 2000, GroupWise, and Domino; 11K users with PSTs; created a single AD directory; EV was used with OWA to replace PSTs)
- migration steps (provisioning, redirect SMTP traffic, export data and rename the PSTs, upload PSTs into a staging area, identify each user’s vault store (they had six of them), ingest into EV)
- locate-and-migrate process: PST Locator task, then PST Collector task (looks via file and/or registry search), which hands off to the PST Migrator
- the PST Locator runs in three stages (1 – identify domains, 2 – identify computers, 3 – locate PSTs), meant so that you can limit scope and verify before running
- the PST Collector runs continuously during the window; it can send a message to users that the data is being collected and inform them of changes, and can control whether to wait for a backup before migration (migration status changes to “ready to migrate”, archive bit is set)
- pst migrator moves data into EV
- client-driven PST migration can drive the process from the client side (a step-wise migration): a client is installed on each machine, mail is sent to the user, and data is sent back to the staging area (reasons to do this: disconnected users, pre-filling the offline vault, where administrators do not have access to client workstations, when continuous access to PST data is essential)
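The tape-ingest sequence above hinges on one detail: EV only ingests items newer than what the previous tape already contributed, so restoring oldest-to-newest means each pass archives only the delta. A sketch under that assumption – the (date, id) message tuples are illustrative, and EV's actual configuration is not shown:

```python
# Sketch of the oldest-first tape ingest with a high-water mark: each
# restored tape contributes only messages newer than everything already
# archived, so overlapping tape contents are not ingested twice.

def ingest_tapes(tapes):
    """tapes: list of restored mailboxes, oldest first; each is a list of
    (date, message_id) tuples. Returns archived message ids in order."""
    archived = []
    watermark = None  # date of the newest message ingested so far
    for tape in tapes:
        for date, msg_id in sorted(tape):
            if watermark is None or date > watermark:
                archived.append(msg_id)
        tape_newest = max((d for d, _ in tape), default=None)
        if tape_newest is not None and (watermark is None or tape_newest > watermark):
            watermark = tape_newest
    return archived

tapes = [
    [("2004-01", "a"), ("2004-06", "b")],                      # oldest tape
    [("2004-01", "a"), ("2004-06", "b"), ("2004-09", "c")],    # newer tape
]
print(ingest_tapes(tapes))  # only "c" is new on the second tape
```

This is why each tape after the first is quick: the overlap with earlier tapes is skipped, and EV's single-instance storage de-duplicates anything that slips through.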
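The locate/collect/migrate bullets above describe a three-role pipeline. A hedged sketch of how the roles hand off to each other – all class and field names here are illustrative, not EV's real API:

```python
# Illustrative PST locate -> collect -> migrate pipeline: Locator finds
# PST paths, Collector stages files whose backup requirement is met
# (marking them "ready to migrate"), Migrator ingests the staged files.

from dataclasses import dataclass

@dataclass
class PstFile:
    path: str
    backed_up: bool = False       # set once the file has been backed up
    status: str = "located"

def locate(candidate_paths):
    """Locator (stages 1-3 collapsed): return records for PST paths."""
    return [PstFile(p) for p in candidate_paths if p.endswith(".pst")]

def collect(pst_files, wait_for_backup=True):
    """Collector: mark files ready to migrate, optionally only if backed up."""
    staged = []
    for f in pst_files:
        if wait_for_backup and not f.backed_up:
            continue  # leave for the next collection window
        f.status = "ready to migrate"
        staged.append(f)
    return staged

def migrate(staged):
    """Migrator: ingest staged files into the archive (simulated)."""
    for f in staged:
        f.status = "migrated"
    return [f.path for f in staged]
```

Running `collect` repeatedly during the migration window models the note above that the Collector runs continuously and only promotes a file once its backup precondition is satisfied.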
Jeremy Burton Keynote
- 75% of Fortune 500 litigation involves discovery of email communications; 75% of a company’s IP is in email
- email is just the beginning – instant messaging is the next thing – also skype (voip)
- step 1 (keep systems up and running), step 2 (keep bad things out), step 3 (keep important information in), step 4 (keep things as long as needed), step 5 (find things and mine them)
- Backupexec has a “google like recovery interface”
- the Eagle project is: Backup Exec tied into Google Desktop! Has its own retrieval search tool – ties into the CDP product
- instant messaging threats grew 1693% in 2005 (use IMLogic to protect against this)
- three kinds of bad guys (hackers, spies, thieves)
- internal users get security of internal inclusion
- smtp gateway does outbound email filtering as well as inbound filtering
- look up “billion-dollar data storage error” on Google
- NASDAQ has notified that IM must be treated the same way email is
- message is that archiving is the key to managing all of this information (secure, rationalized, retained, expired, future proofed, indexed, and categorized)
- the archive will have an API so that it can be extended by third parties (creating the next Business Objects or Cognos, but for unstructured data)
- analytics on email will be the final step (which email is forwarded most, who uses profanity the most, etc.)
- showing a lot of integration between regulation management tools (very future stuff)
Excellent. Thanks for this.
Storage Foundation Basic is long overdue. More later. rds