
If you are using SCVMM 2012 R2 (the latest version as of this post) and open one of its many wizard dialogs, you will be confronted with this if you cancel a dialog:



I know what the product team was intending, but this is still clear as mud. Continue with what? Cancelling, or filling out the dialog? It would not take much effort to remove the ambiguity: simply change the last sentence to “Do you want to cancel and leave this wizard?”

I could launch into a diatribe about how the managers favor feature count over fit and finish (alliteration unintentional). However, applying Hanlon’s razor, this is likely just the result of laziness.

SCVMM is Microsoft System Center Virtual Machine Manager, a data center virtualization management solution. Like much of the software produced by Microsoft, it is tremendously capable while being over-the-top complicated and completely unintuitive. There is an old computer programming expression that any problem can be solved by adding another layer of indirection. SCVMM has multiple layers of indirection/abstraction, which require that you understand all of them before you can use any of them. This completely violates one of the primary principles of software usability: progressive disclosure. You should be able to use basic features without having to dig into and understand the advanced features. You can’t do that with SCVMM; it is all or nothing. My colleagues call that “job security.” I call it poor design.


I don’t think that TV remote control technology has changed much in decades, which leaves lots of room for improvement. Something as simple as turning a TV on and off has become a quagmire, and the reason is simple: remote control design has not kept pace with living room technology. In the early days, a remote operated just one device: the TV. Thus it made sense for the on/off function to be what we engineers like to call a toggle. If the TV is off, the power button turns it on, and vice versa. Things started to get complicated with the introduction of cable tuner boxes, and ever more gadgets now use IR (infrared) remote control. It seems that all of these devices use the same toggling on/off logic. That’s fine if you are using the remote that came with a device to operate that device. A problem arises when one remote is designed to control several devices; typically it is the cable box remote. It will have an “all on” button that appears to be a nice shortcut. Until it isn’t. Remember, the power signal is a toggle. If one of the devices is already on for some reason (say, the cable box is a DVR that has turned itself on to record something), then pressing the “all on” button will backfire: the TV will go on, but the DVR will complain that continuing will interrupt the recording. A similar situation can arise if the devices are some distance from one another. If you don’t hold the remote just right, the IR beam won’t hit all of them, so the attempt to toggle the power can miss a device. Things quickly go downhill and frustration mounts.

There is a better way that would solve this problem: two separate buttons on the remote, “all on” and “all off,” each sending a unique IR command. That way there is no ambiguity. If you want to turn everything off, you can press “all off” repeatedly until everything is actually off, and vice versa.
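The difference between the two schemes can be sketched in a few lines of code. This is a toy simulation, not real remote firmware; the device names and the already-recording DVR scenario are illustrative.

```python
# Why toggled power commands desynchronize a multi-device setup while
# discrete "all on"/"all off" commands are idempotent and always converge.

class Device:
    def __init__(self, name, powered=False):
        self.name = name
        self.powered = powered

    def toggle(self):        # classic single-button remote behavior
        self.powered = not self.powered

    def power_on(self):      # discrete command: safe to repeat
        self.powered = True

    def power_off(self):     # discrete command: safe to repeat
        self.powered = False

# The DVR has already turned itself on to record something.
devices = [Device("TV"), Device("DVR", powered=True), Device("Receiver")]

# "All on" implemented as a broadcast toggle: the DVR gets turned OFF
# while everything else turns on -- exactly the backfire described above.
for d in devices:
    d.toggle()
print({d.name: d.powered for d in devices})  # {'TV': True, 'DVR': False, 'Receiver': True}

# "All on" implemented as a discrete command: every press converges on the
# intended state no matter what state each device started in.
for d in devices:
    d.power_on()
print({d.name: d.powered for d in devices})  # {'TV': True, 'DVR': True, 'Receiver': True}
```

Because the discrete commands are idempotent, a missed IR beam is also harmless: just press the button again.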

Yes, there is another technical solution to this dilemma: an IR repeater. This device is often found in high-end home theater installations where much of the electronics sits behind the doors of a fancy cabinet. The IR receiver is placed right under the TV, and IR emitters are attached to the IR port of every device to be controlled. If you understood everything I just said about IR repeaters, or if you have extremely deep pockets, then this remote control shortcoming becomes a non-issue. However, there are a lot of folks who probably spend too much time scratching their heads and cursing their remote controls because half of their devices are off, the other half are on, and that isn’t what they intended.

I don’t hold much hope for this changing. Usability, that is, designing things to be easy to use, does not seem to command much attention from electronics manufacturers and that is a shame.

I have a personal Office 365 subscription and use it for my email amongst other things. I use the OWA client on a couple of different computers and find its behavior vexing. If I leave the computer on and the OWA window open I get automatically logged out after a period of inactivity. This is annoying because I lock the computer so I know I am safe. What makes it even more annoying is the fact that there is a check box that says “Keep me signed in.”


It doesn’t matter if the check box is checked. I get logged out anyway. Then I usually get a timeout message to add insult to injury:


Yeah, I am sure the Office 365 team is sorry. Except that I know a dirty little secret about Office 365: it is not just one team at Microsoft. Office 365 is bolted together from components built by many teams and even different divisions. That is presumably why OWA ignores the “Keep me signed in” check box; it probably doesn’t even know the setting exists. The back end email service, Exchange Online, uses its own copy of the MSODS (now known as Windows Azure Active Directory, or WAAD) user database. In fact, there are probably a half dozen copies of a tenant’s user datastores, with background processes to synchronize information between them. Either the preference to remain logged in is not synchronized from the main tenant WAAD database to the Exchange Online copy, or the OWA team decided to ignore the setting, or the preference is not stored in WAAD at all but is instead part of the main login cookie in the browser and OWA isn’t reading that part of the cookie. I wouldn’t be surprised if it is way more complex behind the scenes. This isn’t to say that there aren’t a lot of conscientious people at Microsoft working to improve Office 365. Rather, it is just an enormously complex system, and it is really hard to get everything right in a complex system. Add to that management’s preference for adding new features rather than refining existing ones, and I am left with little hope that this will be fixed any time soon.

You can’t get there from here. That seems to be the theme with too many programming interfaces. In particular, the .Net System.DirectoryServices classes are too general purpose such that specific tasks become maddeningly difficult or impossible. I know AD and I know LDAP, but trying to do simple things with .Net is just too difficult.

For example, AD is based on LDAP and X.500, which describe a hierarchical object system. Conceptually it is very similar to a file system: AD containers are like directories, and AD objects are like files. Yes, this is an oversimplification, but you should be able to do something like CD (change directory) from an AD parent container into a child container. However, there is no CD equivalent in DirectoryServices. Instead, you have two choices: specify the full path of a container to connect to it, or enumerate all of the children of a container and choose one to connect to. The enumeration returns the full path of each child; there is no navigation by relative paths. Worse yet is the complexity of LDAP paths themselves (thanks to X.500). It doesn’t need to be this complicated.
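To show how little machinery the missing CD-style navigation would actually need: an LDAP distinguished name is just a child-to-root path, so relative moves are string operations. The helper names and example DNs below are illustrative, not part of any real API.

```python
# Minimal sketch of relative navigation over LDAP distinguished names (DNs).
# A DN lists components child-first, e.g. "OU=West,OU=Sales,DC=contoso,DC=com".

def child_dn(parent, rdn):
    """Descend into a child container: prepend its RDN to the parent DN."""
    return f"{rdn},{parent}"

def parent_dn(dn):
    """Go up one level. Naive split: real DNs can contain escaped commas."""
    return dn.split(",", 1)[1]

domain = "DC=contoso,DC=com"
sales = child_dn(domain, "OU=Sales")   # like "cd Sales" from the domain root
west = child_dn(sales, "OU=West")      # like "cd West" from Sales

print(west)             # OU=West,OU=Sales,DC=contoso,DC=com
print(parent_dn(west))  # OU=Sales,DC=contoso,DC=com
```

Nothing here is hard; it simply isn’t surfaced by DirectoryServices, which forces you to carry full paths everywhere.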

This complexity is compounded by the way the .Net System.DirectoryServices classes are implemented. They are just wrappers around the ADSI COM interfaces. ADSI uses IDispatch, which made some sense back in 1998 when ADSI was being designed: at that time Visual Basic was the predominant Microsoft scripting language, and IDispatch provided a convenient dynamic typing system. Another decision made at that time was to have ADSI support multiple directory protocols, including LDAP and WinNT (and later IIS). Unfortunately this adds a layer of unnecessary complexity to the .Net wrappers because there are many subtle differences among the supported protocols.

It could have been done differently. A namespace and set of classes could have been designed specifically for common AD operations. Those classes could have talked directly to AD using the LDAP wire protocol while abstracting and simplifying the complexities of LDAP and AD. Instead we are left with multiple layers of interfaces and really no simplification at all. Microsoft did add the System.DirectoryServices.ActiveDirectory namespace classes to address some of the underlying complexities, but only for specific scenarios like managing forests and trusts. Ditto for System.DirectoryServices.Protocols. A simple operation like finding all users that match a set of conditions still requires an understanding of many complex details. How many people understand prefix notation and its application to LDAP query filters? Why should people even be subjected to such difficulties?
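For readers who haven’t met it: LDAP filters use prefix (Polish) notation, where the operator comes before its operands, so “users in Seattle or Portland” becomes `(&(objectClass=user)(|(l=Seattle)(l=Portland)))`. A few tiny helper functions (hypothetical, not part of any Microsoft API) show how mechanical the translation is:

```python
# Build LDAP prefix-notation filter strings from readable building blocks.

def EQ(attr, value):
    return f"({attr}={value})"

def AND(*parts):
    return "(&" + "".join(parts) + ")"

def OR(*parts):
    return "(|" + "".join(parts) + ")"

def NOT(part):
    return "(!" + part + ")"

# "all user objects located in Seattle or Portland"
flt = AND(EQ("objectClass", "user"),
          OR(EQ("l", "Seattle"), EQ("l", "Portland")))

print(flt)  # (&(objectClass=user)(|(l=Seattle)(l=Portland)))
```

A helper layer this small would have spared a lot of developers from hand-assembling parentheses, which is exactly the kind of simplification the official classes never offered.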

Fortunately there are promising solutions. One is the Active Directory module for PowerShell. The authors did a good job of translating many of the AD/LDAP peculiarities into standard PowerShell syntax. The other is a new API that has been added to Windows Azure Active Directory, called the Graph API. It is a RESTful web service in which every AD object becomes a web resource with a standard URI. The attributes on objects that describe relationships to other objects (e.g. group membership, OU containment) are also expressed as URIs. Hence the “graphness” that allows simple traversal based on object relationships. See Kim Cameron’s blog for more Graph API info.
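The “graphness” is easy to illustrate with a toy in-memory directory. In the real service the resources are JSON documents fetched over HTTPS; here the URIs, object names, and attribute names are all made up, but the traversal idea is the same: a relationship attribute holds URIs, and following a relationship is just dereferencing them.

```python
# Toy "graph" of directory objects keyed by URI. Relationship attributes
# (memberOf, members) contain URIs of other resources.
directory = {
    "/users/alice": {
        "displayName": "Alice Smith",
        "memberOf": ["/groups/admins", "/groups/sales"],
    },
    "/groups/admins": {"displayName": "Admins"},
    "/groups/sales": {"displayName": "Sales"},
}

def traverse(uri, relation):
    """Follow a relationship attribute by dereferencing each URI it holds."""
    return [directory[ref] for ref in directory[uri].get(relation, [])]

groups = traverse("/users/alice", "memberOf")
print([g["displayName"] for g in groups])  # ['Admins', 'Sales']
```

Compare that with the LDAP equivalent, which would require a connection, a search base, a prefix-notation filter, and attribute marshaling just to answer “what groups is Alice in?”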

In the meantime, accessing AD from .Net remains challenging without resorting to writing lots of code or buying third party libraries.

Software should follow the KISS principle: keep it simple, Sherwood. Yes, I know this is old news, but it is so important it bears repeating. It applies to all aspects of the software development process. The functional design should focus on the most pressing user needs. The user interface should be clean and consistent in its layout and operation. Last, and certainly not least, the underlying code should not try to do more than the functionality and UI require.

One thing that I have seen repeatedly is designers trying to solve non-existent problems. There is pressure from marketing for feature bullet points. They also want to entice people to upgrade, so there is yet more pressure to add features. API designers and coders try to future-proof by adding the capability to handle anticipated needs; these are often called “extensibility points.” The problem is that almost all such efforts miss the mark, because technology changes so quickly and in often unpredictable ways.

Complexity in code invites bugs and security issues. The easiest way to minimize these issues is to write absolutely no more code than necessary. This also helps to keep projects on track. Engineers and technologists by their nature want to invent stuff, so the tendency to do more than necessary is understandable. Resist the urge and take pride in elegantly delivering what is needed and no more.

Psychologists and kindergarten teachers like to tell us that we are all special. Well, when it comes to Microsoft online services, I am a bit too special. I have two accounts using the same email address: a Hotmail/Live account and an Office 365 account. I was using the Office 365 account for email (I own the domain name of the email address) and the Live account for Zune, Windows Phone 7 syncing (SkyDrive), and PC folder syncing (Mesh). This used to work fine but is now broken. The problems started at about the same time that Microsoft announced the upgrade of Hotmail to; I am now consistently redirected to the wrong logon server. So, to cut to the chase, I tried to change the email address associated with my Live account. The email-change page gave me the ominous warning that changing the email address would break syncing on my phone. This was not surprising. However, it went on to state that I would have to wipe my phone and reset it to its factory state to re-enable syncing under the new account name. Yikes! Whose idea of user-friendly is this? To be fair, at least I got a warning.

So you might ask why I would be crazy enough to have two accounts with the same email address. Technology is an incremental thing; you try new stuff as it becomes available. I’ve owned my domain name for a long time. I wanted a constant in the shifting sands of the Internet. I created the Live account using that email address several years ago, though I never actually used it for email; my domain registrar provided POP email service for me. I then decided to switch to Office 365 for email and the other services it offers. I signed up with the same email address. I had to go to my domain registrar and update the MX records. Fortunately I understood the concepts around DNS records; doing the Office 365 sign-up without knowing anything about DNS would be daunting. At any rate, things worked fine for half a year until MS unveiled as the successor to Hotmail. The back end authentication code must have changed the cookie format so that both logins create similar cookies. There must also be something stored on the back end, though, because even deleting cookies and starting in safe mode (so the Office sign-in assistant isn’t running) does not solve the problem.

The biggest issue is when I try to open a SharePoint Online document locally. The web service login prompts me for credentials, apparently goes to the wrong account, and I get an access-denied error. I also have to log into Office 365 in a different browser from SkyDrive (which is understandable; each browser has its own cookie cache). I can’t log into Hotmail at all. No big loss, since I didn’t get email there, but it is part of the symptoms.

I will try limping along as things are until my phone contract expires. The chances of my getting another Windows phone are pretty slim. WP7 is a vast improvement over the prior Windows Mobile but it is far from perfect. Maybe it will be time to get an iPhone?

I have lots of ideas about how software should be written, but I’m not particularly adept at expressing them. Jeffrey Snover’s writing, OTOH, is very eloquent and persuasive. His latest blog post discusses a topic that is near and dear to my heart. He uses the term technical debt. I don’t know if he originated the term, but it succinctly captures the issue. In a nutshell, the problem is product managers who favor investing in shiny new features at the expense of basic functionality. This has been a problem for as long as I’ve been involved in the computer software industry (25 years). The marketeers want impressive bullet lists of features for a new product release. They give lip service to customer satisfaction but would much rather sell something first and worry about how well it works afterwards. This is a big problem for desktop software, but it becomes critical for customers who pay for online services.

Paid-for online services come with a service level agreement (SLA) guaranteeing the availability and reliability of the service. It is a major problem (and a breach of contract) if the service is unavailable for any extended period of time. There is a more insidious problem that can occur when transient loads cause service resources to be momentarily unavailable. In the aggregate everything appears fine, but individual users can see disconnections or other service failures. The only way to detect these situations is with very fine-grained performance monitoring of all of the service components. This is a difficult problem, but it can be solved with sufficient development resources. Unfortunately it is not very sexy. Will management have the wisdom to make this investment, or will it be just more technical debt? Time will tell.
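A tiny sketch of why aggregate statistics hide these transient failures: the same set of latency samples looks healthy as a mean and alarming as a 99th percentile. The numbers below are synthetic, not from any real service.

```python
# One request in a hundred stalls for 5 seconds; the other 99 take 20 ms.
import statistics

latencies_ms = [20] * 990 + [5000] * 10

mean_ms = statistics.mean(latencies_ms)                      # aggregate view
p99_ms = sorted(latencies_ms)[int(len(latencies_ms) * 0.99)]  # per-user view

print(f"mean={mean_ms:.1f} ms  p99={p99_ms} ms")  # mean=69.8 ms  p99=5000 ms
```

The mean says the service is fast; the p99 says one user in a hundred is staring at a spinner. Fine-grained monitoring means tracking the tail, per component and per short time window, not just the average.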