Tuesday, March 26, 2013
Thursday, January 24, 2013
I've been using Windows 8 in various forms since the early previews, through the release candidates and now have it as my Windows desktop of choice. Leaving aside the controversy of the new 'Start Screen', Windows 8 works well on old and new machines alike - even Windows XP era machines from the early 2000s.
After the launch of Windows 8, I got the opportunity to test out Windows 8 Enterprise in its 'To Go' form, installed on a Kingston DataTraveler Workspace 32GB USB stick.
What is Windows 8 To Go? The simple answer is that it is a complete installation of Windows on a USB stick. The system boots and runs from the USB stick, bypassing the internal hard disks of the PC it is plugged into.
I'll be upfront about the usefulness of Windows 8 To Go. Firstly, for most people who have been provided with a notebook running Windows 8, it is unlikely to be of much use. The notebook will likely have all the required personalisation and synchronisation already, and with modern thin and light machines, portability is not much of an issue. Where a secure work desktop is required, desktop virtualisation may well be a better overall approach.
But there are a number of situations where Windows 8 To Go can help. A couple of examples include a contractor with their own PC being able to run a secure system provided by their client, or for travelling to a risky location with only a USB stick rather than an expensive notebook to damage or tempt thieves - so long as there is a PC to plug into at the destination.
My experience of Windows 8 To Go has been pretty uneventful - I plugged the stick into a number of PCs, and it booted easily. The only noticeable difference is the first time a particular PC is used, as it takes time to set up hardware, and driver installation can be fiddly with non-standard peripherals. Other than that, it is pretty straightforward.
When it came to trying this out on a 2010 Apple MacBook Pro though, Windows 8 was not able to complete the boot process via the Boot Camp boot selector. This might be solvable with some more time and effort, but for now is something to be aware of.
I used Windows 8 To Go across three different machines - a Lenovo X220T, a Fujitsu Lifebook and a Dell XPS 420. They had quite different specs and capabilities and all pretty much worked seamlessly. I also tried it on an Intel Atom based netbook, but the slow speed of initial configuration made me give up on it as I had limited time.
Overall, once set up and configured properly, Windows 8 To Go was simple to use and performed well. Despite the lack of USB 3, performance running Windows 8 from the Kingston DataTraveler USB stick was good, and certainly as snappy as or better than the internal notebook hard disks, which were not SSDs.
In terms of size, the 32GB stick I used is about the minimum needed to work with Windows To Go, and I would recommend 64GB wherever possible. This would allow the OS and all required data and applications to live on a single stick. I found that with 32GB I had to split my data, either into the cloud, which then had to be uploaded and downloaded, or onto another device such as a USB HDD. This complicates the solution considerably and dulls the attractiveness of a small but self-contained solution that can be taken anywhere.
The biggest drawback I've found in day-to-day use has been the physical size of the stick. The 32GB stick is quite long and bulky. This is fine when booting a desktop machine, but with the stick protruding it makes handling a notebook quite difficult. A short cable would let the stick move around a little; without one, it risks being dislodged or damaged by small or accidental movements of the notebook. The other aspect of the bulk is that it tends to block access to adjacent USB ports.
The ease of use of Windows 8 To Go also brings home another point though, which is that many people who use it are also likely to have a main machine with applications, data and customisations they are used to. In day to day use, they will most likely use that machine and only if necessary use the Windows 8 To Go stick.
As it stands, the two are separate. If Microsoft could come up with a way to sync or update the Windows 8 To Go installation directly from the main PC rather than relying on the user to boot and sync up separately, it would be a lot more relevant and valuable to many PC users.
Wednesday, December 19, 2012
Five things that will not happen
It’s that time again when the PR world revolves around everyone and their dog making predictions for the coming year. As an analyst who likes to fight against marketing hype I feel compelled to add my thoughts to the mix. And being, by nature, a character who takes ‘Scrooge’ as a role model, here are my ‘thoughts’ of five things that will NOT take place in the year ahead.
IT departments will continue to deliver services in the most appropriate way that meets the needs of their users - some internal, some from external resources. In fact, if anything, 2013 will be the year in which many organisations start to figure out that scaling out the use of cloud services across more than a handful of service providers is actually really difficult. Integration, security management, data governance and, not least, supplier and contract management will act as a natural brake on cloud adoption until organisations figure out how to manage the complexity.
The concepts behind big data are interesting, but the solutions are yet to be ‘industrialised’ enough for many organisations to use them in anger. In fact, organisations would be better advised to focus first on more fundamental data management and integration challenges, which remain prevalent, and on improving their general capabilities around ‘analytics’ to help the organisation make better decisions more rapidly. And without better internal and B2B process integration, advanced analytics solutions often have little to act on. Enhancing capabilities here can have more easily justified payoffs and deliver benefits more quickly.
Nope – it’s a world of device plurality. Users will utilise a range of different devices to suit the work at hand and where they are working. This includes Windows PCs, which aren’t going away in a hurry. During 2013, more organisations will wake up to the challenges of dealing with a world in which end-point devices are the most volatile part of the IT equation, but few will start to address those challenges effectively.
With business ever more dependent on IT, and at a time when delivery options are proliferating, the role of the CIO will become more important, not less. I anticipate a shift in emphasis beginning to take place, in which internal IT infrastructure and operational processes are used to create a ‘service hub’ for coordinating the use of internal and external resources. And going hand in hand with this, the role of the CIO will move towards service delivery management and helping the business ‘do more’.
See the answer above.
Businesses have always been attacked, and evolving working practices have always created new and interesting ways for users to cause problems. There is nothing new about the range and scope of threats changing over time, but ‘defensive’ measures will also continue to develop, and most organisations will just about keep up, as they have done in the past. But this is an area in which organisations need to become more proactive, and I do see security analytics starting to play more of a role in the mainstream during 2013.
Business users who travel and need to create content rather than just consume information need a thin device that can provide the same functionality as a traditional laptop. There are still many product / solution niches to be filled. As stated above, it’s a multiple device world.
Wednesday, November 14, 2012
A couple of years before terms such as “Cloud”, “Big Data” and “Bring Your Own Device” (BYOD) occupied the lexicon of IT publications and vendor marketing machines, the discipline of “Business Service Management” (BSM) captured many column inches. The concept at the heart of BSM was to ensure that the IT services provided to users were delivered in line with business requirements. Despite the idea being sound, real world adoption was quite low, partly due to the then low maturity of solutions and also because few organisations saw immediate requirements. In many ways BSM has now evolved into the concept of service delivery management. Is this an approach whose time has come?
Organisations are now beginning to move beyond the first stage of ‘virtualisation’, where much of the focus has been the creation of resource-efficient systems with better availability and recovery options. The next steps will take the relatively static systems implemented so far and look at making them far more dynamic. Such solutions are now often referred to as “Private Cloud”.
The challenge is how to create policies that will best fit these dynamic Private Cloud IT resources to business needs. Indeed, looking even further ahead the question may become how to utilise these dynamic internal systems alongside resources running outside of the organisation’s data centres? Both of these solution architectures will necessitate the adoption of service delivery management approaches to ensure users get the services they require with the service qualities needed at optimum cost.
In addition, it should be borne in mind that service-oriented architecture (SOA) has now taken firm root in many enterprises, almost by stealth. By their very nature, SOA-based systems also create complexity that could further encourage the adoption of ‘service delivery’ models.
Alongside these technical developments, another factor is likely to encourage CIOs to modify the way IT functions: from one dedicated to the operational management and administration of hardware and software to one based on the idea that it is the management of service delivery that matters. This concerns the escalating pressure from all stakeholders (users, regulators and shareholders) that IT be able to clearly demonstrate that it is delivering maximum business value at minimum expense and risk.
Clearly for IT to be able to adopt such service management centric operations it will be necessary for suitable monitoring and management solutions to be available. In fact several software vendors with a history in BSM such as CA, IBM, HP and BMC have made considerable strides forwards whilst Microsoft, VMware, Quest and a number of others are also developing useful tools.
It is worth noting that few organisations have a clear idea of exactly which IT services are being delivered to users, the precise quality of service levels needed, or the relative business importance of each service compared with others. Effective tools to provide such information, in the guise of service catalogues, asset management repositories and SLA monitoring solutions, have been available for some time, but knowledge of the value they can deliver is still remarkably rare.
But before organisations rush out to acquire such management tools, CIOs will need to be ready to change the way they look at IT. It is my opinion that the time to move to a service delivery management philosophy has arrived.
Tuesday, October 23, 2012
When computers were first created, all attention was centred on getting answers to complex calculations that could not be solved directly by mathematics. Calculus was everything, and getting data into the systems was achieved via cards punched with holes. We have moved on some way since then, but there have been only limited developments in how computers are controlled and the ways in which users can enter data.
Certainly the IT industry has brought forward a few more options during the last two decades, but none displaced the keyboard from the majority of mainstream usage. For example, back in the early nineties I remember very clearly investigating programs that sought to allow a person to dictate to their PC via a microphone. At the time the voice recognition capabilities were quite simplistic and required the user to spend a number of hours to “train” them to their accent. More importantly, the vast majority of desktop PCs of the time lacked the compute power to allow the voice recognition software to function adequately.
I also recall that many ‘influential’ staff members were not prepared to work with the new tools, even if they had functioned acceptably. Business managers did not want to give up their secretaries and, even more unforgettably, a number of senior secretaries proactively advised me that the introduction of such systems was unlikely to deliver business benefits. There are times, few though they may be, when the advice of users should not be ignored, and that was one of them.
But today we can see evidence of new options that hold the potential to change the way data is entered at a fundamental level, and for large communities of users. Already you can see people using devices without physical keyboards. On some machines it is now possible for simple voice commands to be given to a machine to ask for information or to add data. The Siri software found on Apple’s devices is a good example, whilst the Dragon dictation software is now very effective at converting audio to text, as long as you remember to speak the desired punctuation.
Beyond this the widespread use of Tablets and Smartphones has made entire generations comfortable ‘typing’ onto virtual keyboards that only exist on the device’s screen. I must also point out how effective handwriting recognition software has become on certain platforms. As an example, this blog has been handwritten onto my Fujitsu T901 Windows 7 Tablet PC using only the stylus. It has also been written in my normal handwriting; long gone are the days when using a stylus meant learning a new ‘way’ of writing, however simple Graffiti may have been.
It will be interesting to see how much effort manufacturers such as Fujitsu with its range of tablets, Samsung with its Windows slates and Note smartphone, Lenovo, HP, Apple and others put into promoting handwriting recognition, the use of a stylus and voice control compared with the almost universal acceptance of touch screen keyboards.
I believe that the range of ways in which users can control devices and enter data has yet to be fully appreciated. What is clear is that there are now multiple options and, over time, I fully expect individuals will use several of them for different tasks. I, for one, find it much faster to write with a stylus (or indeed a fountain pen) than to type on a keyboard or dictate to a voice transcription program. It also better fits my way of working.
Have you looked at different ways of getting data into your devices?
Thursday, October 11, 2012
Many breach attempts are now using multiple vectors together - such as a Denial of Service attack combined with the activation of an Advanced Persistent Threat (APT). The aim is to create an environment of panic and uncertainty through a visible primary attack in order to hide the true nature and intent of the secondary attack.
The net effect is to hide the subtle attack in plain sight while the security operations team is tied up dealing with the diversion. The serial nature in which teams respond to events highlighted by many Security Information and Event Management (SIEM) solutions means that the most visible threat is often prioritised for remediation, leaving the secondary attack to operate undetected for longer.
While the true nature of the attack may eventually be detected, it is often too late to stop the valuables leaving the organisation. The message from RSA was that it is time to start moving beyond looking at various systems and attacks in isolation. Instead, as an industry, we should start to seek out a more intelligent and analytical approach to monitoring activities on the network and between various systems and clients.
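To make the cross-vector idea concrete, here is a minimal, hypothetical sketch (not any vendor's product) of what such correlation might look like: rather than triaging each alert in isolation, group alerts into short time windows and flag any window in which more than one attack vector is active at once. The event names, window size and threshold below are all illustrative assumptions.

```python
from datetime import datetime, timedelta

def correlated_windows(events, window=timedelta(minutes=5), min_vectors=2):
    """Flag time windows in which multiple attack vectors are active.

    events: list of (timestamp, vector) tuples.
    Returns a list of (window_start, set_of_vectors) for flagged windows.
    """
    events = sorted(events)
    flagged = []
    for start, _ in events:
        # Collect every vector seen within `window` of this event.
        vectors = {v for t, v in events if start <= t < start + window}
        if len(vectors) >= min_vectors:
            flagged.append((start, vectors))
    return flagged

# Illustrative alert stream: a noisy DDoS masking a quiet APT beacon.
alerts = [
    (datetime(2012, 10, 11, 9, 0), "ddos"),
    (datetime(2012, 10, 11, 9, 2), "ddos"),
    (datetime(2012, 10, 11, 9, 3), "apt-beacon"),
    (datetime(2012, 10, 11, 10, 0), "ddos"),
]
suspicious = correlated_windows(alerts)
```

In the stream above, the DDoS alerts at 09:00 and 09:02 overlap the APT beacon at 09:03, so those windows are flagged for combined investigation, while the lone DDoS alert at 10:00 is not. Real security analytics platforms work over vastly larger and faster data sets, which is precisely where the Big Data argument comes in.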
RSA argues that we should be taking advantage of some of the technologies and skills that have been developed in other areas of IT, particularly in managing fast moving data sets and extracting patterns of activity from this data through advanced analytics. This is commonly called Big Data, a term that is felt by many to be one of those over-hyped buzzwords.
With RSA being part of EMC - which has Big Data as part of its core marketing message - it is natural to be somewhat sceptical of RSA’s use of the term with regard to security analytics. But the sheer volume, breadth and variety of information needed to be collected and stored, together with the need for fast – sometimes real time – insight based on multiple sources of information puts security analytics right in the frame of Big Data.
The viewpoint of RSA is that the move to security analytics is too important to wait for Big Data as a whole to become more widely adopted. Therefore – at least for the near to mid term – the most practical way to achieve this will be an integrated, appliance-like solution that can be bought and switched on with a minimum of integration services and other activities. In the longer term, the data acquisition and analytics may migrate to more general purpose Big Data systems.
Looking at this practically, it is likely that this type of security analytics platform will require a substantial investment in both equipment and skills. Usually this is something only the largest or most heavily regulated and sensitive industries are prepared to stump up for and have the teams to make it work properly. But what about the rest of the market?
For many other companies, security analytics may seem like overkill. After all, this is not a core competency of their business. If you’ll excuse an analogy: private citizens or small businesses don’t usually provide their own physical security – this is left to publicly funded police forces, army or intelligence operatives. Gated communities or industrial parks enable them to tap in to a shared private security resource.
When it comes to Security Analytics, most of these customers will not be willing or able to run a full, stand-alone capability on site. This is where the ecosystem of security vendors and partners that can offer a variety of managed, hosted or cloud based services will become vital if we are to see a change to more intelligent security.
The challenge is that these solutions are in their infancy and lack many features, such as multi-tenancy, that are required if they are to be offered as a shared service. A substantial amount of development, integration and maturation is going to be required to make these services ready and cost effective to offer, while demand will naturally be low due to a lack of awareness in the market.
IT vendors are notorious for investing where the money is in the short term, rather than taking the longer-term view. If RSA, and others like them, are serious about shifting the security market from product centric protection to intelligent detection and remediation, they need to start investing now in making shared service Security Analytics a reality.
I have to say that a lot of the complaints I have seen have left me scratching my head. It’s almost as if the critics are talking about a totally different piece of software to the one I have been using. My own experience of Windows 8 has been generally pretty positive, and I wonder whether a lot of the negative judgements made are based on either hearsay or very limited hands on time, rather than any level of in-depth use.
In this post I am therefore going to provide a couple of different perspectives based on continuous, real world use of Windows 8 over a reasonable length of time. I’ll start with my personal experience in a hard-core business context, but for a completely different view, I’ll also provide some feedback gathered from one of my teenage kids, who has also been using Windows 8 for some time.
The Power User Perspective
As an industry analyst, I do a lot of multi-tasking in the average day, juggle many different information sources, and create a lot of content. This often involves working with survey data and making use of relatively complex analytical models. I also do a bit of web development and multimedia work on the side, so all things considered, I would probably fit squarely into the category of ‘power user’.
In terms of equipment, I routinely use a dual monitor desktop machine and a separate laptop/tablet hybrid (Lenovo X220T). Both of these have been running Windows 8 for a couple of months now, and compared to Windows 7, I have seen productivity benefits in both environments.
Surprisingly, given everything you read about Windows 8 supposedly having been crippled for serious multi-tasking use, it’s the dual monitor setup that has highlighted some of the improvements the most. Put simply, Windows 8 is ergonomically superior to Windows 7, especially when working with multiple applications and documents simultaneously across two large screens.
The first and most obvious advantage is being able to access the start screen and system shortcuts from any monitor. Another important feature is the option of having independent task bars on each screen. The idea here is that the task bar on any given monitor reflects the application windows placed on that monitor.
Such changes might seem trivial, but they translate to a lot less mouse movement and head swivelling, which is both faster and physically more comfortable. Once you get used to the new way of working, going back to the old Windows 7 approach of all menus and task management being driven from one ‘main monitor’ seems very awkward and inefficient.
The usability benefit on the laptop when used in keyboard/mouse mode is not as great, but is still worthwhile. The combination of the new start screen and various shortcut mechanisms, e.g. right clicking in the bottom left-hand corner to bring up all system functions, means that you can do pretty much everything on Windows 8 with fewer mouse clicks and less mouse movement than you need with Windows 7. I did find it took me a little while to get used to the corner/edge activated menus, but after a few hours of just getting on with work, it all became very natural.
On a controversial aside, I personally think Microsoft was right to do away with the old start menu, which to me now seems cramped, clumsy and inefficient when I go back to a Windows 7 machine. Being a typical lazy human being that gravitates to the familiar when given a chance, if the start menu was there I probably would have continued using it and failed to take advantage of the more efficient navigation mechanisms designed into the Windows 8 desktop. Now I wouldn’t want the start menu back, even if I could have it, as it would be totally redundant, arguably even counterproductive.
So far, pretty much everything I have mentioned is concerned with using Windows 8 in desktop mode with a mouse and keyboard. I have also tried the new operating system in tablet mode, and my impression as a long-time iPad user is that it’s pretty good in comparison, but I am reluctant to comment further based on my own experience, as I haven’t spent enough time with it to form a robust view. However, I can provide some interesting second-hand feedback.
The Teenager Perspective
A few months ago at Tech Ed, Microsoft provided everyone at a press/analyst gathering with a slate pre-loaded with Windows 8, so I came away with a Samsung device and various accessories to play with. When I got it home, my teenage daughter (14 years old) asked to have a look, and about 15 minutes later she declared “This is SOOO much better than my iPad”. I haven’t seen much of the device since, because she has been practically living on it, while the iPad has sat there with a flat battery, gathering dust.
So what’s the appeal to a socially-oriented teenager who, like all her friends, is an obsessive multi-tasking online communicator?
My daughter calls out a few things about the Samsung that she really likes. Firstly, there’s the versatility. The Samsung came with a docking station into which you can plug a monitor, network cable, keyboard, mouse, and any USB storage device or other peripheral you want. Windows 8 is then very slick in the way it handles docking and undocking – you simply drop the slate into the dock or remove it at will, and within a few seconds the machine sorts itself out. Great if you are doing homework at your desk one minute, then rushing out of the door to a sleepover the next.
Mentioning homework, the other thing my daughter likes is that the machine runs Microsoft Office, so she can do all of her writing and creative stuff as usual. From a leisure perspective, while she likes the Windows Store and some of the early Windows 8 apps, she has not surprisingly highlighted the relative lack of software available compared to the iPad. However, this seems to be more than made up for by the fact that she can access all the websites that she and her friends visit habitually and “they all work as they are supposed to”, which is an indirect reference to the constraints of Mobile Safari on iOS.
Thoughts on the Learning Curve
The interesting thing in all of this is that never once has my daughter commented on the Windows 8 user interface. Looking over her shoulder, she happily flits between desktop and touch mode, and just gets on with it. This further confirms to me that UI related concerns commonly expressed by reviewers are more to do with lack of familiarity (perhaps sometimes accompanied by an unwillingness to make the effort) rather than inherent usability issues.
Having said this, familiarity among existing users is obviously an important consideration in a business context. Hitting a mixed ability workforce with replacement tools that are new and unfamiliar can lead to friction, productivity issues and a spike in calls to the help desk if users are not prepared for the change.
Given the usual lag between consumer and enterprise adoption of new Windows releases, the good news is that there are likely to be at least some members of the average workforce familiar with Windows 8 by the time it is rolled out, and the availability of co-worker support is not to be underestimated. There’s then always the option of end user training, even though this is something that often gets overlooked.
Beyond the User Interface
While I think popular opinion on the Windows 8 user interface is generally misguided and has more to do with natural resistance to change than anything else, that’s not to say that everything is perfect. I personally have some questions around dockable tablets and laptop/tablet convertibles that are used in tablet mode one minute and desktop mode the next. The OS handles the switch very well, as I have said, but my concern is whether we’ll end up needing two versions of important applications, optimised for each mode of interaction.
Related to this, there are obviously questions for developers, who essentially have to choose which mode/runtime environment to design and build for. There are then questions about Windows RT and how desirable/viable this reduced-spec version will be for business deployment.
So, let’s shake off this unhelpful obsession with the UI and missing start menus, and turn our attention to some of the other issues that matter a lot more in the long run. Businesses aren’t going to rush to upgrade Windows 7 machines (indeed some are only just moving to Windows 7 from XP), but there’s a good chance that Windows 8 tablets and convertibles will start creeping into organisations soon after they hit the market. With that in mind, it’s probably worth IT departments thinking about the real implications of Windows 8 from a development, management and support perspective sooner rather than later.
Friday, September 28, 2012
When I speak with people directly, the more usual tale from a so-called ‘Enterprise 2.0’ initiative aimed at ‘transforming the way the organisation works’ is of disappointing or inconclusive initial results, with further efforts put on the back burner as other priorities take precedence.
The exception is where efforts have been targeted at specific groups and processes, e.g. when Salesforce.com customers have focused on the activities of sales teams, or IBM clients have set their sights on driving improvements in the customer service function.
The point is that success generally comes from objectivity, i.e. knowing what you are trying to achieve in quite precise terms. With this in mind, it’s far easier to set objectives and execute against them when your focus is narrow. You are able to pay adequate attention to how the new social systems and working practices will improve things in the context of specific tasks, functions and processes.
Some of the more mature vendors such as IBM have acknowledged this, and developed services-led engagement models that help customers analyse requirements, create sensible expectations, and deliver against them. However, recent discussions with a software vendor called Mindjet got me thinking about the middle ground between the non-specific ‘faith-based’ approach that commonly leads to stalled projects, and the highly targeted activity that has created most success in the past.
If you’re not familiar with Mindjet, it started out as the author of MindManager, arguably the market leading commercial product in the traditionally niche area of mind mapping. I have used MindManager as a personal productivity enhancer for over a decade, and have to admit to being an advocate of both the tool and the visual mapping technique upon which it is based. These have proved incredibly useful for things like organising information while conducting desk research, designing research studies, interpreting research results and outlining reports. When used in conjunction with a web conferencing solution, MindManager has also been great for capturing and structuring key points from collaborative team and client meetings.
Picking up on this last point, one of the common use-cases for MindManager across its user base has been to deal with the up-front collaborative/discussion phase of projects. This in turn has led to a range of different activity templates, a comprehensive interface with Microsoft Project and Office, and the addition of native collaboration capabilities within the solution set itself. I have to say that I struggled to see the value of the web based collaboration environment over generic web conferencing when the facility was first introduced, but a recent announcement from Mindjet has made me think again.
Mindjet has now pulled together the desktop and web based capability already discussed, with some other functionality it had developed to allow interaction with Microsoft SharePoint through a highly visual interface. It has then thrown in more of what I would describe as ‘lightweight multi-user project management functionality’, with a sprinkling of social stuff like status updates from team members and the ability to ‘follow’ activity streams (similar to Salesforce Chatter). The end result is a single collaboration framework that, in a nutshell, simply allows teams to get stuff done more efficiently and effectively than relying on ad hoc social communication.
In order to appreciate the significance of this, think about how many times you go through the process of discussing, planning and executing activities as part of your own job. Whether it’s a full blown project, or something more modest such as preparing for a client meeting or dealing with an operational problem, you still go through essentially the same steps, even though you might not consciously think about them. To begin with, you get some people together, either physically or virtually, to discuss requirements, dependencies and constraints, and exchange relevant information and ideas. You then kick around some options, crystallise them out and prioritise tasks to produce some kind of action plan, before finally getting stuck into making things happen with the appropriate level of sharing, collaboration, feedback and visibility along the way.
What we have here is a simple, repeating pattern that basically keeps the world of business turning, and coming back to where we started, it captures the essence of objective collaboration.
Having tracked the progress of Mindjet for quite a few years, I am well aware that the company didn’t set out with a grand vision to produce the collaborative activity management solution it has ended up with. Through incremental development, however, based on a degree of trial and error and a lot of customer feedback, its solution acknowledges important business realities. It therefore stands a better chance of delivering tangible benefits than the myriad of products out there founded on romantic social idealism.
I also like the way Mindjet respects existing investments. If you are big SharePoint user, Mindjet can bring all of that underutilised collaboration capability to life by making it much more accessible and extending it across the whole of the activity lifecycle. If you don’t have a SharePoint backend (or don’t want to use it), Mindjet will provide one as a cloud service.
The proof of the pudding will, of course, be in the eating as Mindjet and others taking a more business-led approach to social collaboration find their place in the market against the backdrop of so much evangelistic hype in this space. I sincerely hope that pragmatism and business sense eventually win through, and am looking forward to seeing more solutions like this that bridge the gap between targeted bespoke deployments and the fluffy concept of Enterprise 2.0.