and the question to the answer is
Posted in Commentary, Fanboy, Geekfest, Humor on April 6th, 2006 by juan
In this post I pose the right answer. The question is: why not run Windows all of the time?
here’s a hint
So, the first person to announce the right answer is… Parallels. Real virtualization for the Intel Macs. Boot Camp is an answer, but it doesn’t let you do the real thing: keep the right OS running while you jump over to play with the not-so-good one. Now all I need is for the 17″ dual-chip/quad-core MacBook Pro to come out. Then me, my bank account, and my mouse will fly to apple.com as fast as possible.
Come on, MacBook Pro 17″ dual chip/quad core/200GB HD… Come on!!!!
This is cool (and it hurt like a mother).
This is an amazing video of Jobs demoing NeXTSTEP 3.0 in the early nineties.
The apps and many of the features are cool even today. It’s truly amazing how far ahead they were.
I want to know who’s doing this kind of crap now. The crap that we are going to look back on 10 years from now and say “It’s truly amazing how far ahead they were.” Any ideas?
The much-vaunted revamp of the Microsoft Office system includes a ton of changes. One of the most important (as far as I can tell so far) is the complete overhaul of the user interface. This link goes to a video where MS walks us through a high-level overview of this change.
I’m excited about this, not for personal use, but because I might finally stop getting calls from everyone I know. Many of the features that make Word, Excel, and PowerPoint presentations look good are very difficult to figure out. The learning curve for all of these products is extreme, to say the least. To illustrate this, look at the size of this book. This 1,172-page tome attempts to cover the features of this set of products. BUT, the Word-only version is 912 pages by itself. Excel is 936 pages. No need to go on. What Office is missing is not features, but accessibility.
I hope that once we finally get our hands on this, the calls will stop (well, actually, I expect a slew of calls when it first comes out because it has changed).
I ranted and raved before on Dvorak’s prediction. One of his big arguments was that Microsoft agreed to “only” a five-year Office extension. Well, I found this:
Listen to the RDF on this one. Not so much distortion.
One of the most interesting things about this is how Steve acted like a patient parent explaining to children (the audience) that we need to coexist in order to survive. I wonder how much of that feeling is still there. I’d imagine it’s quite a bit.
Extremetech just published this article on why Vista won’t suck. It’s a good read because:
And that’s not to mention that all of these cool things won’t be available to all Vista users. There are going to be SIX different flavors: extreme rookie, rookie, usable, cool, overpriced, and why-not-just-this-version-at-a-reasonable-price. All of this makes the article all the more relevant. Why would Apple drop an ahead-of-the-curve OS for something that is so obviously playing catch-up? Call me a fanboy, but I’m in for OS X for a while to come.
Recently, I had a customer ask for further clarification on a proposed storage assessment. They, wisely, had asked a third party (Gartner) to give them perspective on the value of doing a storage assessment. The expensive third-party consultancy came back with four major areas that should be addressed:
The customer, again wisely, asked us and the two other bidders to explain how our proposals would address the above. My response was very targeted, but it had some insights that I think should be thrown to the aether. I’m also expanding it a bit, since the original response did not address all of the points (they were out of scope for what we were trying to do).
So, without further ado, here are my thoughts on this:
1) Proper provisioning of storage
Gartner identifies this as an issue because most organizations do not have a good understanding of what storage they have and how it is allocated. In addition, most organizations allocate storage as a “knee-jerk” reaction to demand. By that, I mean that most allocation is done either by satisfying the customer’s request (“I need 400GB of disk for my SQL database”) or by including storage in the acquisition of servers. These types of allocations do not consider the true cost of data management or even the true storage requirements. Provisioning is also typically looked at as a one-way function: storage allocation. However, there is a flip side to this: storage reclamation. As you well know, most users will over-request storage because it’s easier to go to the well once. Very rarely, if ever, will they tell you “I asked for too much – you can take back 200GB.”
So, the first step in establishing a provisioning strategy is to understand what storage you have, how it’s allocated, and how well it’s being utilized. Once you have that understanding, you can start making more informed strategic decisions about how your business should operate the storage infrastructure. With that in hand, you can then start creating policies and procedures for storage allocation and de-allocation. Only then will you be able to design a technology architecture to support your business requirements.
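To make that first step concrete, here’s a minimal Python sketch of the kind of allocation-versus-utilization inventory I mean. The mount points and the 30% reclamation threshold are made-up examples, not anything from the actual engagement:

```python
# Minimal sketch: report allocation vs. utilization per mount point.
# The mount list and the 30% threshold are illustrative assumptions.
import shutil

MOUNTS = ["/", "/var", "/home"]  # hypothetical mount points to survey

for mount in MOUNTS:
    usage = shutil.disk_usage(mount)
    pct_used = usage.used / usage.total * 100
    print(f"{mount}: {usage.total / 2**30:.1f} GiB allocated, "
          f"{pct_used:.0f}% utilized")
    # Heavily under-utilized volumes are the "take back 200GB" candidates.
    if pct_used < 30:
        print("  -> candidate for reclamation review")
```

Even something this crude, run across the whole environment, starts answering “what do we have and how well is it used” with data instead of knee-jerk guesses.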
A good start for an assessment, internal or external, should give you an understanding of your current policies, procedures, and infrastructure. Additionally, it should make some broad recommendations as to the direction to take for your next step. However, determining a complete storage provisioning and management policy should be a project in its own right.
2) Maximizing ROI by devising a data life-cycle tiering strategy
Similar to point #1, the first step in understanding your data life cycle is to map your current storage. Any strategy needs to build on the results of #1 and do exactly that for both your unstructured and semi-structured data (file systems and email). An analysis of the data should give you the ammunition necessary to determine what tiering structure makes sense for you. Careful consideration should be given to matching the results to industry best practices. However, those best practices should only be a guide, as each business is different. The ultimate strategy will be a blend of best practices and targeted, site-specific practices.
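For what it’s worth, here’s a rough sketch of the simplest possible tiering pass over a file share. The tier boundaries (30 and 180 days) and the /data path are assumptions for illustration; the real boundaries should fall out of the analysis above, not out of a blog post:

```python
# Minimal sketch: bucket files into tiers by last-access age.
# The 30/180-day boundaries and the /data path are invented examples.
import os
import time

def tier_for(path: str, now: float = None) -> str:
    now = now or time.time()
    # Note: st_atime can be stale on noatime mounts; st_mtime is a
    # safer proxy in many environments.
    age_days = (now - os.stat(path).st_atime) / 86400
    if age_days < 30:
        return "tier-1 (primary)"
    elif age_days < 180:
        return "tier-2 (nearline)"
    return "tier-3 (archive)"

for root, _dirs, files in os.walk("/data"):  # hypothetical file share
    for name in files:
        full = os.path.join(root, name)
        print(tier_for(full), full)
```

Summing the bytes per bucket tells you how much of your expensive tier-1 spindle is holding data nobody has touched in six months.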
3) Capacity planning for future purchases
This, again, ties to point #1. Capacity planning is part and parcel of a provisioning strategy. Because storage, systems, and growth vary drastically in most companies, a plan should be developed for the projected requirements for the subsequent 18 months. This will assist you in planning for the current, expected growth. However, as is the nature of any assessment-like engagement, the recommendations are created only from data identified during the engagement. If your business changes unexpectedly or grows faster than the projections created during the engagement, the recommendations will probably not be accurate. This is where you need a capacity planning process that accommodates change. This process would, by its very nature, need to be on-going and self-monitoring. Typically, it is outside the scope of an assessment to devise this capacity planning process. However, it is something that you should be able to devise, albeit with some minor help, after this type of engagement.
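As a back-of-the-envelope example of that 18-month projection, assuming simple compound monthly growth (both input numbers below are invented; an assessment would measure them):

```python
# Minimal sketch: project 18 months of capacity from a monthly growth rate.
current_tb = 40.0  # capacity in use today (assumed, not measured)
growth = 0.04      # 4% compound monthly growth (assumed)

for month in range(1, 19):
    projected = current_tb * (1 + growth) ** month
    print(f"month {month:2d}: {projected:6.1f} TB")
# If the monthly actuals drift off this curve, the plan is stale --
# which is exactly why the process has to be on-going and self-monitoring.
```

The arithmetic is trivial; the hard part is feeding it measured utilization (point #1) and re-running it every month instead of filing the projection in a drawer.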
4) Validate disaster recovery strategy and intra-company SLAs
Storage provisioning, allocation, and capacity planning are all part of a properly maintained DR strategy. However, many companies fall into the trap of believing that a data protection or data replication plan is the DR plan. They neglect to consider the people and non-IT processes that are required to implement disaster recovery. While it’s true that these data-based protection mechanisms can help in the case of minor or even major disasters, a DR plan should be primarily based on managing the business processes in the case of an “event.” A good storage protection strategy should be used to accelerate the recovery process, not be the recovery process. Any assessment engagement that addresses this element should focus on either how to implement a data protection methodology or how the current or proposed protection systems map to the larger DR plan. The only way to drive these results is to create or validate SLAs amongst all of the business units and stakeholders.
Speaking of which, that is the other common failure amongst many of my customers. Data protection mechanisms are created based on perceived needs rather than on any measured or clearly defined business requirements. As an example, it’s very common to encounter sites that use backup technologies to capture nightly incremental backups and weekly full backups. These are typically implemented across the board without considering that some applications require more frequent, or even less frequent, backups. Often, secondary protection mechanisms are implemented by application groups, DBAs, or even non-storage systems administrators. These secondary schemes are in place because the system-wide protection mechanisms are perceived as either inadequate or unrealistic for their needs. These are clear indications that the overall DR strategy is flawed and needs to be addressed.
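A trivial sanity check along these lines: compare each application’s agreed recovery point objective (RPO) against how often it’s actually backed up. Both tables below are invented for illustration; the point is that the comparison comes from SLAs, not from whatever the backup software defaulted to:

```python
# Minimal sketch: flag applications whose backup interval cannot meet
# the RPO agreed in their SLA. All app names and numbers are made up.
rpo_hours = {"payroll": 4, "crm": 24, "fileshare": 72}  # agreed SLAs

# "Nightly incrementals across the board" -- the one-size-fits-all trap:
backup_interval_hours = {"payroll": 24, "crm": 24, "fileshare": 24}

for app, rpo in rpo_hours.items():
    interval = backup_interval_hours[app]
    if interval > rpo:
        print(f"{app}: backup every {interval}h cannot meet its {rpo}h RPO")
    elif interval < rpo:
        print(f"{app}: backed up more often than the SLA requires")
```

When payroll shows up as unprotectable and the file share shows up as over-protected, you’ve found exactly the mismatch that drives DBAs to build their own secondary backup schemes.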
The first I heard of this was from John C. Dvorak on TWiT. Dvorak basically said that he has a theory that Apple is going to go all-hardware and quit using OS X. So, earlier in the week came the article he talked about on the podcast. Now, Dvorak is quite the guy and has some deep thoughts from time to time, but I have to take exception to this. Let’s enumerate the points he makes:
Well – my counterpoint:
As you can tell, this Dvorak thing has me fired up. I don’t really know why. Is this an example of that rabid addiction he talks about? Probably. Is this an excuse to post on my blog? Yep. Will this be read by anybody? Probably not. But, I feel better.
-Juan