Great Offers from Amazon

Amazon have just made me this great offer:

Product Promotions

Save £0.02 when you spend £100,000.00 or more on Qualifying Items offered by Amazon.co.uk. Enter code M7575XH9 at checkout. Here’s how (restrictions apply).
Someone at Amazon is not paying attention. If only that offer were the other way round.

Generics in Serviced Component interfaces.

Here’s a thing that’s bugging me right now. I’m refactoring a client-server application to use Enterprise Services. It’s already split into UI, Business Logic and Data Access on the client with direct calls to the SQL Server stored procedures, so we’re just moving some of those layers to the server and turning them into serviced components. Simple and effective… OK, so it’s not THAT simple, but in broad strokes that’s the strategy.
 
So I’m looking at the newly formed COM+ API with its interfaces and methods using the Component Services explorer, and I notice that lots of the methods are missing. I mean, I know they’re there, right, because I can call them and they work, but COM+ is telling me that they don’t exist.
 
It turns out that the thing all of the “missing” methods have in common is that they have a generic somewhere in the method signature, like 
int foo(Nullable<int> bar);
 or 
List<string> foo(string bar);
So I ask around and I get a number of explanations/guesses, a common one being “you can’t use generics across a COM+ boundary”. But sure I can, because these methods are working. So what’s the deal here?
I know that if I have a simple signature like:
int foo(string bar);
then COM+ will have no problem with it, and I am wondering if this is similar to the issue with the serializer. There are some CLR types (mainly the value types) that COM+ understands, so if a method signature contains only these types the COM+ serializer will be used when the method is called. However, if the method signature contains a type that COM+ does not understand and cannot serialize, it will defer to the .NET remoting serializer when the method is called. Similarly, when I query the interface, COM+ simply doesn’t have the language to communicate the signatures of the methods that contain generics because it’s stuck with IDL, so it doesn’t bother trying. But then if I look at method signatures that have DataSets, or strongly typed derivatives thereof, in them, those appear just fine, so IDL is clearly coping with them.
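To make the symptom concrete, here is a minimal sketch (hypothetical names, and minus the usual assembly-level COM+ attributes and registration plumbing) of the kind of serviced component I am describing. The methods with a generic anywhere in their signatures are the ones that go missing from the Component Services explorer, even though all of them can be called successfully:

using System;
using System.Collections.Generic;
using System.Data;
using System.EnterpriseServices;

public interface ICustomerData
{
    int Foo(string bar);                  // visible in the COM+ catalog
    int Foo2(Nullable<int> bar);          // "missing" from the catalog
    List<string> GetNames(string filter); // "missing" from the catalog
    DataSet GetData(Nullable<int> id);    // "missing"; a DataSet on its own is fine
}

public class CustomerData : ServicedComponent, ICustomerData
{
    public int Foo(string bar) { return bar == null ? 0 : bar.Length; }
    public int Foo2(int? bar) { return bar ?? -1; }
    public List<string> GetNames(string filter) { return new List<string>(); }
    public DataSet GetData(int? id) { return new DataSet("Customers"); }
}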
 
So maybe I shouldn’t be worrying about this because it works, and if it ain’t broke… right? But now I’m looking at profiling this application, and I’m looking at various tools that will give me some metrics on calls to the methods in the COM+ API, and I have a sneaking suspicion that the metrics for all my missing methods are going to get lumped together under calls to IRemoteDispatch or some such horror.
 
Anyway, I banged my head on this for a while and in the end I took a flyer and emailed Juval Lowy. Yes, me. I did that. So over the weekend I exchanged a few emails with Juval and you know what? He didn’t tell me the answer. I think he kind of hinted at it, but in the end what he actually did was make me feel a whole lot better about learning not to care. I mean, if you look at this method:
DataSet GetData(Nullable<int> id);
why would I spend hours agonising over the performance issues that might be associated with the difference between Nullable<int> and just int? I’ve already made my bed in performance terms by deciding to pass a DataSet. At the end of the day I have a working application, the client is happy, and if performance is adequate why should I care? The answer is, I don’t know; I just know that I do, and I find it hard not to.

Between a Rocky and a hard place.

So I was blogging a couple of days back about the ClickOnce COM+ proxy issue we were having, and in conclusion I wanted to add that we finally decided that the best approach in that particular situation was to work out a security profile for the workstation users that would allow them to install both the app and its proxy, and to overwrite a current installation of the proxy if necessary.
However, this doesn’t really solve the problem that if the proxy is already installed it will not reinstall, and therefore if the proxy is a prerequisite the whole installation fails when the proxy install bombs out. I think I suffered from a bit of “solution fix” on this issue. It seemed that the prerequisite solution was the right one, but it isn’t. I guess I just wanted to take as big a bite out of ClickOnce as I could and make use of its features.
In the end it was easier to distribute the proxy installer with the application and have the application check for its own proxy on startup and install it if it’s not present. This was achieved with a combination of “in code” ClickOnce using the System.Deployment and System.Deployment.Application namespaces and a managed wrapper round COMAdmin.dll to allow the installation and configuration of the proxy components. This is actually very neat and makes the ClickOnce install a little quicker and a little slicker, i.e. more like ClickOnce than Click-Five-Or-Six-Times.
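Until that post appears, here is a rough sketch of the startup check. It assumes an interop assembly generated from COMAdmin.dll, the proxy application name and .msi path are hypothetical, and the install step just shells out to msiexec rather than going through our full COMAdmin wrapper:

using System;
using System.Diagnostics;
using COMAdmin; // interop assembly generated from the COM+ Admin type library

static class ProxyBootstrapper
{
    // Hypothetical names: the COM+ application name of the exported proxy
    // and the proxy installer we ship alongside the ClickOnce application.
    const string ProxyAppName = "MyServer.Proxy";
    const string ProxyMsiPath = @"Proxy\MyServer.Proxy.msi";

    public static void EnsureProxyInstalled()
    {
        if (IsComPlusApplicationPresent(ProxyAppName))
            return;

        // Install the exported proxy .msi; this still needs sufficient rights on the workstation.
        Process install = Process.Start("msiexec.exe", "/i \"" + ProxyMsiPath + "\" /qn");
        install.WaitForExit();
    }

    static bool IsComPlusApplicationPresent(string applicationName)
    {
        COMAdminCatalog catalog = new COMAdminCatalog();
        COMAdminCatalogCollection applications =
            (COMAdminCatalogCollection)catalog.GetCollection("Applications");
        applications.Populate();

        foreach (COMAdminCatalogObject application in applications)
        {
            if (string.Equals(application.Name.ToString(), applicationName,
                StringComparison.OrdinalIgnoreCase))
                return true;
        }
        return false;
    }
}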
I will do a full example post sometime soon demonstrating the technique.

What would J**** do?

I have never been a religious person or a “believer” but the world is getting more complex. In these days when we all seem beset by confusion, when we are presented with difficult decisions and a plethora of choices it seems increasingly important to have something or someone by which to set one’s compass. Someone to guide us in our moments of doubt. To set us back on the true path and to remind us of what is really important and good.
In my darkest moments I have found that I can ask myself a simple question to help me find the light. What would Juval do?
I don’t think I am alone in this. I believe that there are many other followers and potential followers out there and this knowledge gives me strength.
So I ask you, have you accepted Juval as your personal saviour?

Essex Wildlife…carpeting the A12 one fox at a time

I am continually amazed by the sheer volume of roadkill on the A12 between the M25 and Colchester. Some days I think I could walk the entire route without ever stepping on tarmac…just hopping from carcass to carcass. I’d need a good pair of wellies obviously 🙂

Enterprise Services, COM+ Proxies & ClickOnce

Although WCF and .NET 3.0 seem to have taken over my life, I am still doing a fair amount of .NET Enterprise Services stuff. Hey, it’s good technology and it got us through some tough times, right? There are actually still a lot of people out there with a big installed client base of Windows 2000. No, really, there are. While we’re all getting excited about Vista there are a lot of people who never made it to XP. Sure, they have moved their server platforms to 2003 for better stability, performance, scalability etc., but what’s the business case for the workstations, especially when you add the cost of a roll-out and updating the PC hardware?
So these people don’t get to play with our new .NET 3.0 toys, at least not “client-side” anyway. In some environments I have been building hybrids where the client-server communication within an application works over .NET Enterprise Services and DCOM, but between the application servers WCF is used to create SOA-type messaging with Pub/Sub events and so on. You know, the cool stuff Juval Lowy likes. This is actually a nice compromise. The move to WCF was always going to be gradual anyway, so you can phase it in at the server level and roll it down to the clients as they catch up.
This has thrown up some interesting problems though, notably with ClickOnce, because my .NET 2.0 SmartClient UI application needs to be deployed with its COM+ proxy. When I first came across this it quickly became apparent that I would have to create a prerequisite package containing my proxy and then use the ClickOnce bootstrapper to install it. Not a problem per se. I already had Brian Noyes’ book on ClickOnce, plus Michelle Leroux Bustamante’s downloadable examples, and I had found the Bootstrap Manifest Generator on GotDotNet, so I was well armed in that respect.
However, I was not happy with the whole approach, and furthermore it raised some problems about the rights of the user running the ClickOnce install to perform COM registrations on their workstation. So I did what any responsible architect would do and I Googled the issue… without much success. I found a post on CSLA.net by Rocky Lhotka in which he described the issue as “problematic”, so I emailed him to see if it was a problem he’d solved or if “problematic” was simply a euphemism for “more trouble than it’s worth”. He replied almost immediately (which was cool because he’s a legend and all) but suggested that I read Brian Noyes’ book (which was not, because I already had it open in front of me).
An alternative approach, since the prerequisite is an .MSI package, is to use Active Directory and Group Policy to deploy the proxy. AD allows you to either publish or assign an application. The main difference is that you can only publish to a user, whereas you can assign to either a user or a computer. Applications that are published or assigned to a user will only have their icon(s) installed next time the user logs in; the application will not install until the user first tries to launch it. Applications that are assigned to a computer will be fully installed next time the computer is rebooted.
Assign to computer: complete application installed on next boot.
Assign to user: application icon(s) only installed on next login; complete application installed on first use.
Publish to user: application icon(s) only installed on next login; complete application installed on first use.
Publish to computer: n/a.
OK, so now we can deploy our proxy without the security issue, but we have separated its deployment from the deployment of the client application that uses it and we have introduced a reboot into the process. I suppose we could go the whole hog and package the entire client application along with its proxy as an MSI and AD-deploy it that way, but I really like the idea of ClickOnce and I’m loath to abandon it completely at this stage.
So now ClickOnce throws us another curve ball. Let’s say we bump up the permissions of the users on these workstations so they can install the proxy. The first user logs in and installs the app and its prerequisites via ClickOnce. It all works fine, we have beer. They log off and the next user logs on and has to install the app as well, because ClickOnce installs apps in the user’s local settings cache. But the proxy is already installed and ClickOnce tries to install it again. The install of the proxy fails, which causes the whole installation to fail. We put down our beers and are sad.
It turns out that subsequent users can actually install and run the application without the prerequisites by clicking on the launch link on the install page. Our proxy is installed “globally” but our app is installed per user. So it “kinda works”. We go back to our beers, agreeing with Rocky that this is indeed “problematic”.

Service Orienteering

There are bushmen wandering the Kalahari who have heard the term SOA. It has been an industry buzzword for the last three or four years. However, definitions of SOA are as varied as they are widespread. To try to untangle these definitions we need to start from the ground up.
Services are simply the next evolution in how we write code. They are the next level of abstraction, the next unit of re-use and encapsulation and the next software paradigm. 10 years ago we talked about objects, 5 years ago it was components and now it is services.
Service Orientation is to services as object orientation was to objects. When we build applications in a service-oriented manner we are thinking about our application in terms of the interactions between services. Where we had APIs we have service contracts, and where we had shared types we have schemas or data contracts. Instead of clients and servers we have service consumers and service providers.
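To put that in WCF terms with a purely illustrative sketch (the names here are hypothetical, not a prescription): the service contract takes the place of the API, and the data contract takes the place of the shared type, describing what crosses the boundary as a schema rather than as a shared implementation:

using System.Runtime.Serialization;
using System.ServiceModel;

// The data contract is the schema for what crosses the service boundary,
// where we would once have shared a type between client and server.
[DataContract]
public class OrderSummary
{
    [DataMember] public string OrderNumber;
    [DataMember] public decimal Total;
}

// The service contract plays the role the API used to play.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderSummary GetOrder(string orderNumber);
}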
This explains some of the confusion that surrounds the acronym SOA. When some people talk about SOA they mean Service Oriented Applications and Service Orientation in general, as opposed to Service Oriented Architecture. Another factor that adds to the confusion is that SOA and Web Services have been used interchangeably to mean the same thing, when actually it is quite possible to develop service oriented applications and design service oriented architecture without using web services at all.
Finally we come to SOA where it stands for Service Oriented Architecture. This has almost nothing to do with the software development concepts we have just discussed, because the “services” are not the same. In Service Oriented Architecture the services are business services, and SOA in this context means “architecting” your IT infrastructure so that it is both strategically aligned with and decoupled from the business services it supports. While Service Orientation is a technology that can facilitate this architectural decoupling from a software perspective, they are not the same thing.
The reason we are all obsessed with decoupling these days has to do with complexity. Both the business and IT challenges we face are inherently and increasingly complex. Complexity is here to stay. The people who handle complexity most efficiently and robustly will win, whether it is because they are fastest to market, incur the lowest costs, offer the greatest value or achieve the greatest flexibility.
Efficiency
Decoupling and modularization are efficient. They create agility and facilitate reuse. A group of climbers ascending a mountain who are all roped together get some benefits in terms of security and communication; however, they must all take the same route, normally the slowest, easiest route that is accessible to the least skilled of their team. They must all travel at the same pace, normally the slowest pace, the one at which their weakest, least fit member travels. They must all encounter the same risks at the same time. An avalanche that takes out one member is likely to get the rest, either because they are all bunched up or because they are all roped together and get dragged along. They are not resource efficient. Before they embark on the first step of the journey they must have sufficient resources in terms of food and equipment to get the whole team to the summit and back. They must always attempt each challenge en route in the same order, i.e. the order in which they are roped together, or they must stop and go through a reordering process. If one member is injured or falls sick the whole team must stop or slow down, or the ascent may have to be cancelled. These factors make them slow and place some, if not many, mountain tops completely out of their reach.
The same group of climbers ascending by different routes, at different paces and at different times incurs none of the above disadvantages and gets all of the benefits of being a team if they use radio, GPS and thermal imaging to provide the security and communication the rope used to provide.
Robustness
Complex, non-linear, tightly coupled systems tend to fail in unpredictable and catastrophic ways. The point at which they fail tends to coincide with, or result from, a change in the state or behaviour of one of the components. This has been described as the domino or ripple effect, which in reality doesn’t even begin to describe the problems faced by modern complex systems, because both falling dominoes and ripples are simple and linear. Non-linear systems have feedback loops and echoes which can exacerbate, magnify or accelerate failures. In complex systems a single event or action can cause a multiplicity of other events or actions which are not the same, or in the same order, each time.
Complexity and non-linearity are here to stay. They are inherent in the problems we are trying to solve, so they are non-negotiable. We cannot do something complex in a simple way except by abstracting the complexity, and that simply moves the complexity somewhere else; it does not reduce it. In fact it can add new complexity. Even simply hiding the complexity of a system behind a simple interface can cause problems in terms of its use; for instance, it can make resource-intensive processes seem otherwise. The only factor we can mitigate to any great extent is the coupling, and this is why so much of our effort is directed there.
Benefits
How do we benefit from strategic alignment? Really, when we say strategic we mean “from the strategic level on down”, and the alignment means that IT infrastructure is modeled on the business services it supports. The most obvious benefit is that the IT infrastructure is designed to support the business not only as it is today but as it will be tomorrow; it has the business’s strategy engineered into it. It is proactive and not reactive. This makes the business agile, because when it needs to change it doesn’t have to wait for the IT infrastructure to be re-engineered to support that change, either because the systems have been engineered to allow for change or because their decoupling allows for change in one without necessarily requiring change in the other.
When it comes to capacity management (one of the most arcane and yet most critical arts in both business and IT management)** the clear mapping of business processes and services onto IT services and processes is of enormous benefit. Key business volume indicators (BVIs) can be identified and profiled to find their resource footprints. What does a BVI cost the business in network bandwidth, system memory, processor cycles, disk space etc.? Which systems are involved or affected? If the business plan involves increasing a BVI (e.g. doubling sales), knowledge of that BVI’s system footprint enables IT to quickly establish whether current capacity will support the increased business and, if not, which systems will have to be scaled, by how much, to meet increased demand, and how much that up-scaling or out-scaling will cost. If IT cannot answer these questions in a timely and accurate manner when business decisions are being made, this is a constraint on the agility of the business and a risk to the success of business strategy.
**If you really want to get a handle on this talk to Capacity Management specialists Capacitas or arrange to attend one of their excellent seminars.
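As a purely back-of-an-envelope illustration (every figure and name below is made up for the example, not a real measurement), the kind of sum this mapping makes possible looks something like this:

// Illustrative only: hypothetical per-unit footprint figures for one BVI (e.g. one sale).
class BviCapacitySketch
{
    static void Main()
    {
        double cpuSecondsPerSale = 0.8;    // processor cost of a single sale
        double bandwidthMbPerSale = 0.25;  // network cost of a single sale
        double diskMbPerSale = 0.05;       // storage cost of a single sale

        double currentSalesPerDay = 10000;
        double plannedSalesPerDay = currentSalesPerDay * 2; // the "doubling sales" scenario

        // Projected daily demand under the plan, to compare against measured headroom.
        System.Console.WriteLine("CPU: {0} seconds/day", plannedSalesPerDay * cpuSecondsPerSale);
        System.Console.WriteLine("Bandwidth: {0} MB/day", plannedSalesPerDay * bandwidthMbPerSale);
        System.Console.WriteLine("Disk: {0} MB/day", plannedSalesPerDay * diskMbPerSale);
    }
}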

Sound of Mind

So, for those of you who don’t know, I am currently working as a .NET architect on a contract in Colchester (Essex). This means I am driving 140 miles a day, which takes around 2.5 to 3 hours.
I was particularly apprehensive about this commute going into the contract. In fact I only agreed to a 6 week contract for this reason. It’s not just the time and the distance and the inconvenience, it’s the fact that time in the car is “dead” time and I hate that. I hate being unproductive for such a big part of my day.
However, I am not finding it as bad as I thought. The thing that is keeping me sane at the moment is podcasting. I always liked the radio in the car, especially stuff like In Our Time (Radio 4) which you can really get your teeth into, but the scarcity of broadcast radio that really interests me is a problem. There just isn’t 15 hours of broadcasting in a week that a) I want to hear, b) I regard as a productive use of my time and c) is broadcast during the hours I’m on the road.
My phone (a Nokia E61) has wifi and a fairly good podcast client, so now I’m spending a few minutes each weekend updating my podcast list. This means that when I hit the road each day it is loaded up with the equivalent of a personally tailored radio station. Because of podcasters like DotNetRocks (who now do two shows a week), TWiT, Security Now, ITConversations and others, listening to podcasts in the car has now replaced reading on the train, and my commute is a productive and interesting part of my day.
And, hey…I just extended my contract for another six weeks 🙂