Enterprise Initiatives

This blog focuses on Enterprise IT topics such as Enterprise Architecture, Portfolio Management, Change Management, Business Process Management, and recaps various technology events and news.

I just read Joe McKendrick's article IT departments don’t have time for SOA, which states that

IT may actually like SOA, but they are simply too consumed with day-to-day tasks and ongoing maintenance.
The statement "we don't have time" is one of my biggest pet peeves. The reason most IT shops don't have time to be proactive and do the right thing is that they never take the time to truly architect anything. IT shops love to dive into code with weak requirements, non-value-add business processes, and quick-and-dirty designs. With each release of software they add more maintenance-heavy code into production, thus leaving even less time to do anything the right way.

If IT shops would take the time to architect systems that improved speed to market, decreased maintenance costs, and optimized business processes, maybe we wouldn't have to send so much of our dirty work offshore!

I found this piece of research from Duke University that discusses outsourcing, labor shortages, and graduation rates of engineers. This is a very long article so I will try to summarize it for you.

First, the researchers tackled the issue of graduation rates. They questioned the data reported by the popular media and policy makers, which states that the....

United States graduates roughly 70,000 undergraduate engineers annually, whereas China graduates 600,000 and India 350,000....China and India collectively graduate 12 times more engineers than does the United States....

The researchers found major flaws in the above-mentioned numbers. They found that the definition of an engineer in China and India varies widely from the definition in the US. Comparing graduation numbers using source data from the different governments does not give us an apples-to-apples comparison. The researchers shared this example....
We were told that reports sent to the MoE from Chinese provinces did not count degrees in a consistent way. A motor mechanic or a technician could be considered an engineer.

What they found is that the US is graduating nearly as many engineers as India. China does have more graduates, but not as a percentage of its overall population. So the theory that there is a shortage of engineers coming out of US colleges appears to be false. Based on this finding, they tried to answer the following questions:
What skills would give U.S. graduates a greater advantage, and would offshoring continue even if they had these skills?
44% of the respondents said that US engineering jobs were more technical than outsourced technical jobs, versus 1% who said the outsourced jobs were more technical. They also found that over 80% of the companies surveyed were able to fill onshore engineering jobs in four months or less, which shows that there is no labor shortage.

So if we are graduating enough engineers and there is no labor shortage, what is the driver? What they found is that the driver is pure cost reduction. High salary demands and rising health care costs are causing employers to look offshore for cheaper alternatives. Other benefits are the 24x7 work days accomplished by having teams work in the US by day and offshore at night. Work ethic and long hours also contributed to outsourcing.

I have read a few articles recently claiming that US companies are seeking offshore development due to superior technical capabilities and innovation. What this report tells me is that it is still all about cost. I would love to see another report done on the total cost of ownership of outsourcing. I agree that you can get the job done cheaper offshore, but I wonder if executives understand the true costs. It is much more complicated than taking an offshore resource's hours and multiplying them by $25 to compute the cost. There are many other costs: infrastructure and software licenses, the cost of your own onshore resources who must manage the offshore team and review their designs and code, and the overhead of dealing with communication and cultural issues.
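To make the point concrete, here is a back-of-the-envelope sketch in Python. The $25/hour offshore rate comes from the paragraph above; every other figure (oversight ratio, onshore rate, overhead percentage) is an invented assumption for illustration only:

```python
# Rough sketch of naive offshore cost vs. a fuller total cost of ownership.
# All figures except the $25/hr rate are invented assumptions.

def naive_offshore_cost(hours, rate=25):
    """The naive number: billed hours times the offshore rate."""
    return hours * rate

def loaded_offshore_cost(hours, rate=25, oversight_ratio=0.25,
                         onshore_rate=75, overhead_pct=0.15):
    """Adds onshore oversight (management, design and code review) plus a
    blanket overhead for infrastructure, licenses, and communication."""
    offshore = hours * rate
    oversight = hours * oversight_ratio * onshore_rate
    overhead = (offshore + oversight) * overhead_pct
    return offshore + oversight + overhead

hours = 1000
print(naive_offshore_cost(hours))   # 25000
print(loaded_offshore_cost(hours))  # ~50312, roughly double the naive figure
```

Even with modest assumptions, the loaded cost comes out roughly double the naive hours-times-rate calculation, which is exactly the gap I suspect many executives never see.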

I am not here to criticize outsourcing or to discourage people from engaging in it. I understand why companies pursue outsourcing, and there are many success stories. I just found this research interesting because it disputes some of the myths about labor shortages, talent levels, and graduation rates. It clearly shows me what I already knew: it's all about cheap labor.

I was on Microsoft's website trying to understand the business benefits of upgrading to Vista. I found this article called Key reasons to upgrade to Windows Vista. To sum it up, here are the four reasons from Microsoft that justify this expensive upgrade:

  1. Empowering users to find and use information
  2. Enabling mobile workers to stay connected and productive in and out of office
  3. Helping companies to make corporate systems and information more secure
  4. Making it easier to deploy and manage company PCs
That's it. I would think that corporations would want more business value to justify the cost of the hardware upgrades and PC purchases that will be required to run Vista. I can only think of two reasons to upgrade to Vista.
  1. XP will eventually not be supported by Microsoft (currently targeted for 2014).
  2. As other businesses upgrade, your company will not be able to open Office 2007 documents unless you install the Office Compatibility Pack.
Now let's discuss Microsoft's four reasons to upgrade. First, the improvements in searching for data. I use Google Desktop, which gives me Google's world-class search functionality across all my files and emails. The cost...zero! It runs great on XP. No need to upgrade for this.

Second, enabling the mobile workforce. A lot of nice-to-haves here. Most companies already have tools that address security and collaboration. These features do not justify upgrading all of the hardware in your enterprise, especially when most of your hardware is not mobile.

Third, security. Wasn't this one of the selling points for going to XP? I feel like Bill Murray in Groundhog Day. Security is one of the main reasons people are looking at alternative operating systems. Yes, we need better security. Do I think that Vista is the answer? The jury is still out.

And finally, easier desktop management. These are commendable features. Many companies already have enterprise tools for managing their desktops and others are already moving to a desktop server approach that centrally manages desktop images and automatically updates client PCs when they connect to the network. I think the advancements in desktop management in Vista are great, but it does not justify the upgrade by itself.

So of the four reasons to upgrade to Vista given to us by Microsoft, only security is compelling enough for me to even consider it. The shelf life of XP is the real business driver. We can argue all day long whether Vista is more secure than XP, or any other OS for that matter, but other than a few reports from Microsoft's own Jeff Jones, I haven't seen any compelling facts to make me want to start an upgrade tomorrow.

I did stumble across a good article from Kelly Martin from SecurityFocus called The New Vista Waiting Game. He predicts that XP will be the corporate standard for years to come. Here is a quote from the article:
“Despite all the coming advertising and sales pitches about early Vista installations, most businesses would be foolish to upgrade to Vista in the coming year. Businesses want stable, reliable environments. They want to see service packs that address problems even before they encounter them. They want secure environments as well, but to senior executives and other decision makers, this is still a function of Security Risk Management that can be mitigated in various different ways.”
The good news is that XP will be supported for several more years. That gives corporations time to wait for Vista to become more stable and mature. It also gives corporations time to test different distributions of Linux and have an alternative to Vista once XP gets put out to pasture.

My good friend James McGovern asks, "I wonder if Mike Kavis understands why SOA shouldn't be sold" to the business and references the article Most enterprise SOA deployments fail to deliver ROI. This article makes some good points about the total cost of ownership (TCO) which includes extensive training and expensive tools like registries and repositories. The article continues with this quote:

The survey found that companies using SOA did experience an improvement in developer productivity by an average of 28 percent; however, the productivity savings do not warrant broad SOA deployment.
A little further down in the article, a very important point is made.
Despite these obstacles, Nucleus Research’s survey found that SOA is assisting companies in the areas of business process improvement and portals, followed by master data management and partner integration.
I think this sentence is where many of us EA bloggers start to disagree on whether you should sell SOA to the business or whether an ROI is even achievable. In a previous article I described my view on selling SOA to the business. In my case, I was not trying to implement SOA by itself. Instead, SOA was being implemented in conjunction with BPM and Master Data Management (MDM). If I were only trying to implement SOA, I would have to agree with James's stance. But since SOA is part of a larger project, which includes business process reengineering, and the funds are coming from the business, I had to sell SOA to the business.

For James's sake, no, I did not PowerPoint them to death or talk about web services and JMS queues. Instead I explained SOA in business terms and as a major contributor to the overall ROI. In an earlier post James states that if the business trusts IT, then IT shouldn't have to sell SOA to the business. Once we convinced the business that SOA was the key to maximizing their BPMS investment, they trusted us to go figure out what tools we needed. Not once did we have to sell the concept of the ESB, MDM, repositories, training, etc.

Back to the ROI. The ROI will be achieved through huge operational efficiencies that lead to increased sales, improved customer support, better quality, and improved speed to market. One could argue that the ROI is a result of the process reengineering and not SOA. I am fine with that argument although SOA does allow us to leverage our legacy systems without causing major disruptions to our business and current projects.

So hopefully we can put an end to the "Selling SOA" discussions and move on to implementing SOA. As far as SOA and the ROI goes, SOA by itself is a cost of doing business. SOA in conjunction with BPM can pay for itself if done right.

James McGovern put out a call for more EA blogs a few weeks back. I'll take that one step further and call for more EA collaboration. There are a lot of different opinions about how to approach SOA, process, and many other topics. James has some strong opinions and calls out his peers when he disagrees with them. I love constructive criticism, and I sure get it from James. The problem I have is that he doesn't post my comments and allow a discussion to take place. That is not collaboration; that is being closed-minded.

I have been having many healthy debates on how or if to sell SOA to the business with fellow EA bloggers Nick Malik, Jack van Hoof, and Alastair Bathgate. I think it benefits the readers to see all of the different solutions to a problem. James just flat out says that I am wrong, end of story. So here is my challenge to James McGovern. Let's collaborate and post your readers' comments so we can see everyone's point of view. Let's have more collaboration. If we want more EAs to blog, let's provide a platform for them to share ideas and give and receive constructive feedback.

I just read two really interesting articles (Giving proprietary vendors a run for their money & Could Linux become the dominant OS?). These articles, along with a discussion I had yesterday about budget constraints for the next calendar year, make me think that Open Source Software (OSS) is on the verge of becoming mainstream over the next few years. I have already seen the statistics where 51% of companies are using OSS in mission critical applications. This is starting to look very similar to the days when everyone was fleeing the mainframe for client server technology. The client server craze was driven by lower cost and greater flexibility. Does that sound familiar?

Back to my budget discussion. I was having a discussion with a peer about budget constraints for the upcoming year. Our budgets typically remain flat or slightly increase each year. But each year the cost of doing business rises, so we really have less to work with. We have been leveraging newer technologies, like virtualization, disk consolidation and compression, and others that have been driving costs down. Over the past few years we have been dealing with our budget constraints through technology improvements in the hardware area. Now it's time to look at software.

As I look at the back-end servers, I can't see how we can continue to justify spending the money on licenses and maintenance for proprietary operating systems like AIX, SCO, or Windows 2003 unless the applications we are serving up mandate them. For example, we obviously need a Windows server to run Exchange, but many third-party packages we buy give us the option of Windows or Linux. For those worried about support for OSS, read this article about open source service providers. With the advancements in virtualization, I should be able to create as many test and development environments as I need, as long as I don't have to keep paying for OS licenses. Linux gives me that flexibility. I think a good strategy this year is to look at all of your software assets to see if there are candidates to move off of proprietary solutions to open source solutions. Once you have identified the candidates, put a plan together for replacing these systems over time.
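Here is a small sketch of why per-instance licensing matters once you virtualize. The license and support prices are invented placeholders, not real vendor quotes:

```python
# Sketch: how per-instance OS licensing scales as you spin up virtualized
# test/dev environments. Prices are invented for illustration.

def env_os_cost(n_envs, license_price=700, annual_support=140):
    """First-year OS cost when every virtual instance needs its own
    license plus an annual support subscription."""
    return n_envs * (license_price + annual_support)

# A distribution with no per-instance fee keeps this line flat at zero.
for n in (5, 20, 50):
    print(n, env_os_cost(n))   # 4200, 16800, 42000
```

The proprietary line grows with every environment you clone, while a license-free distribution stays flat, which is exactly the flexibility I am after.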

Then I started looking further down the road. I have written many articles about my concerns with Vista and how this might be the right time to start a Linux-on-the-desktop pilot. With the potential of Linux on the desktop being introduced to the enterprise over the next few years, coupled with applications moving towards SaaS models and rich AJAX-enabled interfaces, does it still make sense to leverage .Net technologies and force the .Net framework and ActiveX controls on clients? If it makes sense to reduce licensing costs at the middle tier, Java, Ruby, or LAMP technologies sure look like better solutions.

So as I look down the road and see a continuous push to reduce costs while increasing value, I wonder how much proprietary software companies will be purchasing 5-10 years from now. Will it be like the mainframe, where the only systems left standing are the ones that have no cost justification to replace? Will the norm be that new applications move to OSS? I know we are still a few years away from this, but OSS is becoming more mainstream and widely accepted in corporate IT whether we want to admit it or not.

In Part 1 of this series, I explained my reasoning behind creating an open source strategy. In Part 2, I will discuss our progress. But before I start, here are some predictions from Gartner:

  • By 2010, 75 percent of mainstream IT shops will have a formal open source acquisition policy in place.
  • By 2008, open source will compete with closed source in every infrastructure market.
  • By 2010, mainstream IT shops will consider open source for 80 percent of their infrastructure software needs.
  • By 2010, mainstream IT shops will consider open source for 25 percent of their business software needs.
Our first step was to create an inventory of the open source products we use at my IT shop. We have a few areas within the organization that were early adopters of OSS and have a variety of products in use. When polling the staff for OSS products, I expected to find 20 to 30 in active use. I was shocked to find that we have around 100 different OSS products in our inventory (not including the ones packaged within proprietary closed-source products). What an eye opener!
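A minimal sketch of that consolidation step, using invented team and product data, shows how merging per-team lists immediately surfaces duplication and version sprawl:

```python
# Sketch of the inventory consolidation. The teams, products, and versions
# below are invented examples, not our actual inventory.
from collections import defaultdict

team_inventories = {
    "web":  [("Tomcat", "5.5"), ("Ant", "1.6"), ("Log4j", "1.2")],
    "data": [("Tomcat", "6.0"), ("Ant", "1.7"), ("MySQL", "5.0")],
    "qa":   [("Ant", "1.6"), ("JUnit", "4.1")],
}

def consolidate(inventories):
    """Map each product name to the set of versions in use across teams."""
    catalog = defaultdict(set)
    for products in inventories.values():
        for name, version in products:
            catalog[name].add(version)
    return catalog

catalog = consolidate(team_inventories)
duplicated = sorted(name for name, vers in catalog.items() if len(vers) > 1)
print(duplicated)  # ['Ant', 'Tomcat'] -- candidates for standardization
```

Even this toy example flags two products running in multiple versions, which is the kind of duplication a governance step can then rationalize.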

There is a lesson learned here. Since the company as a whole has not fully embraced OSS and still treats it as the red-headed stepchild, individuals have gone into stealth mode and assembled a massive inventory of products that help them get their jobs done at very low cost. What I found is that we have a lot of duplication, including multiple versions of the same product. Some of the products are best of breed while others are questionable. If there was ever a need for a strategy, the time is now! Since we rely so heavily on OSS, we must embrace it as a strategic part of our enterprise and put the necessary governance around it. This takes us to step two of our strategy.

Today we spent a couple of hours with Sourcelabs, an open source service provider. This was another eye opening event for me. I knew open source service providers provide support for a wide range of OSS products. But here are a few things they do that I didn't know:
  • Stress test and certify OSS products
  • Contribute code to numerous OSS products
  • Provide a one-stop self-service portal with information on numerous OSS products, including patches, security alerts, product roadmaps, known issues, etc.
  • Provide advice and guidance for product evaluations
  • Assist in the creation and/or validation of your Open Source Strategy
  • Fix product bugs and submit to the product's community for the next patch or release
  • Provide certified Java middleware suites
  • Provide open source policy and process best practices
All of this from one vendor across a whole suite of tools. This is so much more cost effective than paying 20% maintenance on every single product you buy in the world of proprietary software. We have a use case where we purchased a 20-node cluster of servers from a major vendor. We were required to purchase support for each node. The vendor mandated that we use SUSE Enterprise, which more than doubled the cost per node. To make matters worse, the vendor is one to two years behind in the version of SUSE that it supports on its hardware. The reality is that these servers just run and we rarely need any support for the operating system. So how cost effective is that model? For this use case, service providers are a no-brainer. Not only is the model more cost effective, but we can also choose whatever distribution of Linux we want because the service providers do not mandate what software we must use. Suddenly, the overall price of the cluster just dropped by half. Now for the same price (and better support) I can purchase a second cluster and drop it at our disaster recovery site!
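The cluster math is easy to sketch. The dollar figures below are invented placeholders that only preserve the ratios described above (the mandated OS subscription doubling the per-node cost, the service-provider contract being one flat fee):

```python
# Cluster cost sketch with invented prices preserving the ratios above.

nodes = 20
hw_per_node = 5000            # assumed hardware cost per node
os_support_per_node = 5000    # mandated subscription that doubled it

vendor_cluster = nodes * (hw_per_node + os_support_per_node)
provider_cluster = nodes * hw_per_node
flat_support = 10000          # assumed flat service-provider contract

print(vendor_cluster)                       # 200000
print(provider_cluster + flat_support)      # 110000 -- roughly half
print(2 * provider_cluster + flat_support)  # 210000 -- nearly two clusters
```

Under these assumptions, dropping the mandated subscription roughly halves the price of one cluster, and two clusters plus a flat support contract land close to the original vendor price of one.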

So much for the myth that you can't get support for OSS. To recap where we are with our strategy:
  • Step 1 - create an inventory
  • Step 2 - educate IT - this started today with our discussion w/Sourcelabs
I will continue this series as we move forward with our strategy. If any of the readers out there have experience with this process I would much appreciate hearing your lessons learned and recommendations.

The hot SOA topic the past few days has been how to sell SOA to the business. I have seen many authors talk theory on this topic. I get a little annoyed when "experts" who don't actually have to sell SOA to their business partners tell me what works and what doesn't. I am here to give you a real-life story.

There are three camps on how to sell SOA to the business. Some experts recommend that you sell executives on the technology aspects of SOA only. The second camp recommends that you speak only about the business aspects of SOA. The third camp, which is where I go camping, recommends that you speak to both the business and technology aspects of SOA.

My recommendation, which comes from a very successful real-life example, is to start with the business benefits of SOA. In my case we were pushing both BPM and SOA. We sold the business on the benefits of BPM and then explained how SOA was the key to allowing the BPMS tool to talk to our legacy systems. This approach is much simpler than drawing multiple layers of an architecture on a whiteboard and describing what an ESB is, what MDM is, and how web services or JMS queues work. In fact, we were able to get the funding for our SOA initiative without having to describe in gory detail what the different software modules were. Once the business knew that SOA was the enabler for their BPM initiative, which happened to have an eight-figure ROI over five years, they didn't need to hear any more. They only wanted to know how much!

For bonus points, we discussed how SOA (over time) would allow us to rapidly deploy applications because of reuse, increased flexibility, leveraging existing assets, and modular development. This became a win-win conversation. The business was getting a huge return on investment from the operational efficiencies gained by process reengineering, while IT was finally getting the funding they needed to build the architecture that they could never justify in the past.

I am sure there will be many more arguments on the "right" way to sell SOA to the business. The right approach depends on many factors that are unique to each project and each culture. My recommendation is to talk to people who have successfully sold SOA to the business and learn from their experiences. I shared my success story, who wants to share theirs?

My article Open Source and Microsoft Free was posted on Slashdot last week. I woke up on Saturday and looked at my traffic on Sitemeter. My daily traffic before this day was in the 100 range. That morning it read 2000+. I rubbed my eyes, thinking that I was not quite awake yet, and hit refresh. It shot up another 100. I was averaging several hundred hits an hour. Holy smokes, I thought to myself! I then went and looked at my referrals and it was all Slashdot. After lunch, I looked again and I was over 6000. A quick peek at the referrals and I saw the Diggs coming in. The next few days were incredible. The article drew over 50,000 hits in a week! Then came Del.icio.us, StumbleUpon, Craigslist, Linuxworld, Linux.org, and the Japanese and Polish versions of Digg and Del.icio.us. It has been over a week now and I still get a few hundred hits a day from that post. That's the good part of being Dugg.

I probably received north of 500 comments across these sites. I am used to the comments that I receive on ITToolbox, where I host my main blog. The comments there are usually very collaborative in nature, even if the commenter disagrees with my point of view. I have built a very nice network of experts from the professional community on ITToolbox. Compare that to the discussions going on at Digg and some of the other social bookmarking sites. My article turned into an all-out war between Microsoft and Linux fanboys. There was so much negativity, profanity, and non-fact-based opinion that I received very little value from the 500+ comments. As a matter of fact, I stopped reading them after a while. I would have had more factual conversations had I walked into a bar loaded with Yankee and Red Sox fans and argued whether Big Papi or A-Rod was a better hitter!

So being Dugg is nice from a traffic point of view, but my real goal with these posts is to share my opinions and collaborate with professionals to refine, defend, or validate my opinions. On Digg, it's all a bunch of noise.

I read an article today claiming that outsourcing is not just about cost anymore. The author points out these four reasons for outsourcing:

  • Accessibility to the right talent
  • Geographic expansion
  • Reinvention of the business model
  • Promotion of innovation
After I finished choking on my cornflakes I decided I must offer a different opinion on this matter. First I want to differentiate between outsourcing and offshoring. Offshoring is obviously when you send work to other countries. Outsourcing is when you send work to companies outside of your corporation that may or may not be in the same country. Since the article was based on this link from The Times of India I know that they are really talking about offshoring.

Here is my take (from a US point of view). Offshoring is a cost savings exercise, period. There is a lot of overhead involved with offshoring due to language barriers, cultural barriers, and time zone challenges. To successfully offshore development, the paying customer must provide a maximum amount of oversight and process to overcome these barriers. Outsourcing is different. With outsourcing, I can bring a team of consultants on site for any amount of time without paying the expense of flying them in from the other side of the planet. You still need oversight and process, but not at the same level, because the language and cultural barriers do not exist and the time zone differences (if there even are any) are manageable.

I wrote this article a while back about agile development and consulting. In summary: if you want to be agile, don't engage with an offshore partner. Agile development requires face-to-face interaction and loosely defined requirements due to its iterative nature. The requirements get nailed down over time by cycling through requirements and prototyping. This model is a recipe for disaster if you are using offshore resources. For offshore to work, you need a more waterfall-type approach where the requirements are fairly static.

The article that I came across did not link back to the original research from PWC. If you read the PWC article you will see that over 50% of the service providers surveyed are US companies like PWC. So the findings of the research make a lot more sense to me, because we are really talking about outsourcing, not focusing solely on offshoring. US companies are definitely leveraging consulting companies like IBM, Accenture, and others to foster innovation, reinvent their business models, and access the right talent. They don't use these companies to save money (they cost an arm and a leg). Saving money is what offshore development is all about.

The lesson for me is to be careful what you read on the net. The PWC article is a very good analysis of what is happening in the marketplace. The problem is that all of the offshoring blogs are using it to say, "See, we are more than a cheap alternative," which does not represent the facts of the PWC research. For those who just read the headlines and don't take the time to read the details, beware!

According to opensource.org:
the promise of open source is better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in.
But that is not the biggest driver for our Open Source strategy at my shop. One of our biggest drivers is budget constraints. Every year our IT budget remains flat or increases modestly. When you throw in merit increases, promotions, rising health care costs, and maintenance on software and services acquired during the year, you must get creative to stay within budget. With the exception of Linux on some of our back-end servers, most of our enterprise software comes from major vendors like IBM, Microsoft, BEA, Oracle, and other big names. But when it comes to developing software, it is hard to justify spending big dollars on the large number of tools we need to do our job when there are cost-effective alternatives.

The Open Source strategy we are putting together addresses this. The purpose of our strategy is two-fold. First, we must educate our peers in the enterprise about Open Source. There are many myths that must be addressed to get everyone on board and comfortable with leveraging Open Source. Tim O'Reilly listed 10 myths about Open Source in this 1999 article, and they still prevail today. Here are a few myths that we will address in our strategy:

Myth #1. It's all about Linux versus Windows.

Myth #2. Open Source Software Isn't Reliable or Supported.

Myth #3. Open Source projects are written by a small group of amateurs in their friend's garage.

Myth #4. The Open Source movement isn't sustainable, since people will stop developing free software once they see others making lots of money from their efforts.

Debunking Myth 1
Go to this page on Sourceforge.net and you will see the wide range of software categories that have active and established Open Source projects. If you wanted to, you could run your entire enterprise on Open Source. There is more to Open Source than Linux on the desktop.

Debunking Myth 2

For well-established Open Source projects, it is not uncommon to get faster and better support in the forums than from the expensive "Gold Support" the major software providers charge you an arm and a leg for each year. It is also not uncommon for small and medium customers to see unacceptable levels of support despite being paying customers. The less money you spend with a vendor, the less pull you have in escalating support issues. There are plenty of Open Source service providers who support a suite of Open Source products, which is an extremely cost-effective way of doing business. Traditionally, we pay vendors 18-20% of the original purchase price of each product every year. Support and maintenance can take up a huge chunk of an IT budget. With the service provider approach, we can pay the provider one flat fee and get support for several products at a much cheaper rate. What is even better is that a service provider's core competency is support. That's all they do, and their mission is to do it well. For purchased software, support is pure overhead and usually not a strength of the company.
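The maintenance math is easy to sketch. The license prices below are invented placeholders, but the 20% rate is the top of the range quoted above:

```python
# Per-product annual maintenance vs. one flat service-provider fee.
# The product names and prices are invented for illustration.

license_prices = {"app_server": 120000, "database": 200000, "ide_suite": 40000}
maintenance_rate = 0.20   # the 18-20% annual rate, taken at the top end

annual_maintenance = sum(p * maintenance_rate for p in license_prices.values())
flat_provider_fee = 30000   # assumed single contract covering the suite

print(annual_maintenance)   # ~72000, recurring every year
print(flat_provider_fee)    # one flat fee for support across the products
```

Even with only three hypothetical products, the recurring per-product maintenance dwarfs a single flat contract, and the gap widens with every product you add to the vendor column.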

Debunking Myth 3
Some of the most popular Open Source projects have hundreds or even thousands of developers worldwide contributing to the overall code base. Scores of people respond to questions and support issues, often within minutes of a post on the forums.

Debunking Myth 4

I wrote an article called Still Afraid of Open Source a while back that discussed how companies like Google and Yahoo are leveraging Open Source while big guns like IBM, BEA, and Sun are investing big money in support of open source initiatives.

The second purpose of our strategy is to identify software needs that we currently can't fulfill with our existing budgets. There are many development, testing, and software development lifecycle tools that we could benefit from but never have the funds to acquire. We need additional testing tools for our SOA initiative, a new defect tracking system, a portfolio management suite, and many others.

By putting a strategy together that identifies needs while addressing the concerns that people might have with Open Source products, we stand a better chance of fulfilling our team's needs with the support of our management.

Once we create a culture that sees value in Open Source, then we can start talking about evaluating Linux as an alternative to Windows when Microsoft drops support of XP in the future. I wrote this article called Open Source and Microsoft Free that discussed why I believe it is important to at least test Linux to understand what the issues would be in a production environment.
The worst thing that can happen with a small pilot is that you discover that Linux won't work for your organization. At least then you can sleep at night knowing you did your homework and made a strategic decision based on real information.
So what is your Open Source strategy? When you evaluate software, are Open Source products even considered? If it was your money, would you think differently or do you always buy the biggest and most expensive toys? As IT professionals, our job is to bring value to the organization. If you are not even considering Open Source alternatives for any solution, are you really looking out for the best interests of your company?

I have been blogging about my SOA & BPM implementation for the past few months. My earlier posts talked about how we sold SOA to the business, the vendor evaluation process, and our bottom up approach. We have been partnering with a SOA implementation consulting company for the past 10 weeks practicing an agile approach to delivery. In this short time frame we have installed our stack (BPMS, ESB, Data Services) and built a beta version of a B2B portal. Life is good!

So why can't I sleep at night? Because I have only a few months to ramp my organization up to take this over from our consulting partners. This may be the biggest challenge of implementing SOA that we have faced so far! We have formed a business process management organization in the business and hired our first Process Analyst. I feel good about that. We created a team of architects including a testing architect to help build out and govern the architecture. I feel good about that. We have access to network and hardware architects and moved someone into the role of administering the stack. Real happy there! But that was the easy part.

The hard part will be getting the rest of the organization on board with SOA. There are numerous challenges. First, SOA introduces numerous new roles and responsibilities for our staff. We will have to move from being SMEs (subject matter experts) in an application to becoming SMEs in a layer of the architecture (see chart below).
Second, we are moving away from our current methodology, which is slightly waterfall in nature, to a more agile and iterative approach. We are currently ramping up to implement our governance model.

But the biggest challenge is organizational change management. SOA concepts are drastically different than the way we currently develop applications. We are moving away from thinking about stovepipe applications and towards building business services. We still have some legacy apps written in VB6, and the folks maintaining those apps have not been exposed to web services yet. The world of testing changes drastically with this layered and loosely coupled approach. The DBAs now need to create a logical representation of the data to hide the complexity of the joins and relationships from the services. The business analyst's role practically requires a different skill set now. Traditionally, our project managers have been assigned to applications and work side by side with a development manager who has a fixed team of resources. With SOA, we are moving more towards the classic PM role where the PM drafts a team from a pool of resources. I could go on and on.
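To make the DBA point concrete, here is a minimal sketch of a logical data layer: a database view that flattens the joins so a service never sees the underlying relationships. The schema, table names, and data are invented for illustration, not taken from our actual environment:

```python
# Sketch (hypothetical schema): a DBA-maintained view that hides join
# complexity from the service layer.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
-- Physical tables, with the relationships the services should never see
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE orders (id INTEGER PRIMARY KEY,
                     customer_id INTEGER,
                     total REAL,
                     FOREIGN KEY (customer_id) REFERENCES customer(id));

-- The logical representation: one flat view per business entity
CREATE VIEW customer_orders AS
  SELECT c.name AS customer, o.id AS order_id, o.total
  FROM customer c JOIN orders o ON o.customer_id = c.id;

INSERT INTO customer VALUES (1, 'Acme');
INSERT INTO orders VALUES (100, 1, 250.0);
""")

# A service queries the view without knowing the underlying joins
rows = cur.execute(
    "SELECT customer, order_id, total FROM customer_orders").fetchall()
print(rows)  # [('Acme', 100, 250.0)]
```

The payoff is loose coupling: the DBAs can restructure the physical tables later, and as long as the view keeps its shape, the services above it never notice.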

So as you can see, every single person's role is changing. At the same time, we have a third party who is cranking out code faster than we ever dreamed of. We are unintentionally setting an expectation that we will be able to move this fast once we take this over. We are not getting incremental head count, so we need to figure out how to transform our existing staff from application SMEs to specialists within the architecture over the next few months.

Anyone who has been responsible for managing culture-changing initiatives understands the challenges that this presents. For each individual, we must answer WIIFM (what's in it for me), how this will help the business, what's wrong with the way we do it now, and many other questions that cause resistance when left unanswered. At the same time, we need to quickly educate everyone on SOA. Meanwhile, everyone has a current full-time job and is assigned to existing projects.

A while back I wrote an article about the role of the enterprise architect. Many people only consider the technology side of the equation. There are days where I don't even have time to deal with technology. Some days are spent evangelizing to create more buy-in throughout the organization. Other days are spent working with business owners on a plan to reprioritize projects to free resources up to work on our SOA initiatives. And even more days are spent working with the development managers on plans to free up resources for temporary assignments (learning opportunities) on SOA projects.

So as we continue down the road with our SOA implementation, I have discovered that the most challenging part of the project thus far is adapting to change. The technology part seems easy compared to this. I will write more about this as we work through future opportunities and challenges.

Check out this article by Dana Gardner from ZDNet about Microsoft's approach to SOA. Microsoft just doesn't get it. Their approach to SOA requires an upgrade to .NET Framework 3.0. Funny, I always thought that the beauty of SOA was that it allows you to deliver new applications while leveraging your legacy applications, not upgrading them! If you follow some of the links that Dana provides, you'll see these quotes from the article Microsoft Does Have a SOA Strategy:

(Microsoft) so far declined to participate in certain key emerging industry standards relevant to SOA.

The more vocal critics claim Microsoft's approach to SOA not only goes against the technical grain of competitors, but may also not be in the best interests of customers. They believe the company's approach is too tied to pushing sales of its core desktop and server products, which are more expensive, complex and proprietary than alternative offerings.

Microsoft is primarily concerned with its [own] business strategy. It wants to continue to produce these fantastic profits but that runs counter to what many IT shops are focused on, which is cost-reduction, simplification, consolidation and modernization...

Such standards are too closely tied to rival technologies and platforms for Microsoft's taste, Heffner says: "That would be a hard pill to swallow. Microsoft doesn't want to do the tools that will help people use some other platforms."
Let me say that again, "Microsoft doesn't want to do the tools that will help people use some other platforms." Think about that when evaluating vendors. Unless you have a Microsoft-only shop, stay far away from this mine field! Notice that they always tend to focus on the developer and not the developer's customers.

Every day as I sift through the articles in my Google Reader, I see countless debates about the relationships between SOA & BPM, IT driven vs. business driven, and top down vs. bottom up. There is so much debate about these topics that I sometimes wonder if anyone is getting any work done. Glance at these headlines and ask yourself how confusing this must be for somebody who is in the research stages of SOA.

* The proper relationship between SOA and BPM
* The awkward dance between BPM and SOA
* BPM Driven SOA
* BPM Without SOA: 'Like One Hand Tied Behind Your Back'
* Why BPM Screws up SOA

* Business Driven SOA
* Who's in Charge of your SOA?

* Another view: avoid bottom-up SOA like the plague
* Bottom-up SOA is harmful and should be discouraged
* Should SOA be Top Down or Bottom Up

I have mentioned it in the past and I'll say it again. There is no single answer to these questions. There are many factors that can influence these decisions such as:

  • Whether you have strong executive sponsorship or not
  • Your IT staff's capacity to change
  • Your EA maturity level
  • Your staff's talent level
  • Budget
  • How much time you are given to deliver
  • The main driver for the initiative
These are in no particular order. I am not going to tell everyone how I think you should run your SOA projects. Instead, I will give you insight into the decisions we made on my project.

In this article I tell the story of how we got the business to support a BPM and SOA initiative. IT had been pushing this for a while but could not get the funding. Once we convinced the business to reengineer their business processes we were able to come up with the justification to buy BPM for operational efficiencies and SOA as the technology to enable BPM.

We then launched into a process reengineering exercise which produced a portfolio of a dozen or so initiatives with an extremely attractive ROI. This determined the bottom up approach for us. We were funded specifically for delivering the projects identified from the process reengineering exercise. We then analyzed the projects and identified the services that would be required to support the new business processes. We did this for each project. Then we recommended a priority order which was based on two factors:
  1. Business benefits - ROI, operational efficiencies, customer service, etc.
  2. Architecture benefits - Service reuse and speed to market
We performed a three-week "SOA roadmap" exercise that took these two factors into consideration. There were huge advantages to moving certain projects to the front of the priority list because of the number of services they contained that would be shared by the other projects. To figure this out, we mapped out all of the services for each project and identified the projects in the portfolio that would give us the biggest bang for the buck from the standpoint of service reuse.
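The mapping exercise boils down to a simple overlap calculation. Here is a sketch; the project and service names below are hypothetical stand-ins, not our actual portfolio:

```python
# Sketch: rank projects by service reuse. Project and service names are
# hypothetical; in practice the mapping came from analyzing each project.
portfolio = {
    "B2B Portal":    {"CustomerLookup", "OrderStatus", "Billing"},
    "Claims Rework": {"CustomerLookup", "DocumentStore"},
    "Self-Service":  {"CustomerLookup", "OrderStatus", "Notification"},
}

def reuse_score(project, portfolio):
    """Count how many of this project's services other projects also need."""
    others = set().union(*(svcs for name, svcs in portfolio.items()
                           if name != project))
    return len(portfolio[project] & others)

# Projects whose services are most shared float to the front of the queue
ranked = sorted(portfolio, key=lambda p: reuse_score(p, portfolio),
                reverse=True)
print(ranked)
```

In the real exercise we weighed this reuse score against the business benefits (ROI, operational efficiencies, customer service) before settling on the final priority order; a pure reuse ranking was only half the picture.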

The other constraints we had to deal with were time and money. Each of the projects in the portfolio had to be justified individually. They each had very specific funds and very aggressive timelines. The first project, which included implementing the stack and delivering a beta version of a B2B portal in 10 weeks, really constrained us from a governance standpoint. For this first project it was critical that we delivered quickly to counter the perception that SOA initiatives take forever to implement. Funding for the other projects in the portfolio was also dependent on the results of the first deliverable. Due to these constraints, we did not have the luxury of establishing a governance model up front. We alerted everyone to the risks of not establishing our governance model and agreed that we would be allowed to implement it for the next projects.

As we sit now, we are wrapping up the first project and getting the funding to tackle several projects concurrently. We are gearing up with our implementation partner to start establishing our governance model that will help us grow our SOA with the release of each new project.

So for us, we are building SOA from the bottom up; the business and IT are working together to drive the initiatives that drive SOA adoption. The business is the owner of this multi-year initiative. We are focusing on a specific area of the business that has 20-year-old processes. Other business units are seeing the benefit and lining up to launch their own initiatives. The company is changing the way it thinks because of BPM and SOA. Twelve months from now we will be a different company because of this.

To sum it up, I can't tell you how you should address top down vs. bottom up, BPM driven SOA or SOA driven BPM, business driven or IT driven. I don't think there is a silver bullet. I believe the decision should be based on the environment you work in and the constraints that you are faced with.

I would love to hear from others who have been down this road.

Back when Jaws was still considered a scary movie, the mainframe dominated the hardware marketplace. Well, just when you thought it was safe to get back in the water, the mainframe, or at least the mainframe mentality, is coming back.

Virtualization is as hot of a topic as BPM and SOA these days. Companies are saving millions of dollars by consolidating hundreds or even thousands of individual servers onto small clusters of servers serving up virtual machines. Other drivers for this technology are reductions in energy, emissions, and floor space, improved manageability, and easier disaster recovery strategies.

The Butler Group published an article called "The King is Dead - Long Live the Mainframe". If you have the time, it is a great read. Here is a quote from the article:

We believe the wider adoption of the mainframe beyond these markets will be influenced by developments in the Service Oriented Architecture (SOA) paradigm, and the impact that the advancements in the capabilities of x86 server virtualisation is having in the market.
One of the many reasons for the decline in mainframe usage over the years is the lack of products that are available for the mainframe platform. This is changing as Linux can now be the OS of choice on the mainframe. The article continues with this quote:
Another argument against mainframes has been the lack of commercially available software developed on the platform, which at best tends to be ported to the system at a later date, or not at all in some cases. This has created the ‘inhouse’ or customised solutions that have become associated with many mainframe implementations. However, since IBM announced support for Linux on its Z series this has become less of an issue.
But even if companies are not considering mainframes as a platform for virtualizing their enterprise, one can't help but see the resemblance between today's virtual infrastructure and the mainframe infrastructure of days gone by.

As I continue to research the virtualization movement, I keep stumbling across articles that point to various issues and challenges with virtualization, ranging from security to inadequate monitoring and management tools. When companies like VMware and Open Source solutions like Xen resolve these issues, won't these solutions closely resemble the mainframe? If you think about it, the virtual server concept is basically the same thing as LPARs. The architecture behind the mainframes of yesterday is starting to look very similar to the architecture behind virtualization today.

IBM is using this opportunity to revitalize its mainframe sales. Most of their sales in recent years can be attributed to the fact that companies cannot afford the cost of migrating off of the years of legacy built on top of mainframe technology. Now, IBM can leverage the new mainframes running Linux as a solution for virtualization and Green IT initiatives. And by the way, they are eating their own dog food too.

IBM saves $250 million consolidating Linux servers on to mainframes

