Friday 7 October 2011

Who Needs a New Server?

My current project has me working once again on my platform of choice, Force.com, solving business problems for a massive multi-national enterprise. The client has developed an excellent suite of sales tools to assist their sales people out in the field. The technologies involved are many and varied, and they all share one significant weakness from a business perspective:

The tools all capture client information and details of sales orders and proposals. As a toolkit given to sales representatives it's an excellent package: videos demonstrating the products, and simple forms that let them capture the information needed to identify a prospective customer's pain points and the appropriate solutions to offer. This, however, is where the problems begin for the business.

The tools all reside locally on each sales representative's own laptop, and there is currently no way for the business to extract this information and view the critical data in a federated way. At first impression, the long-redundant IBM consultant in me started thinking of the servers that would be needed to support this: a big, expensive middleware infrastructure, either an appliance such as DataPower or software (MQ and an ESB) deployed to a six-figure server housed in an even more expensive data centre. This "essential" piece of middleware would then connect to a big central database (DB2, Oracle, etc.), after which we could look at deploying Business Objects or Crystal Reports so that management could get the MIS they require out of the system. Ouch: a big, expensive implementation with five-plus consultants and a lead time of up to six months, before you even start to think about the OpEx of maintaining such a mammoth implementation!
Luckily for me and the customer, the year 2011 came knocking and told me I was living in the past. Those old-fashioned architectures are expensive to deploy resiliently, unreliable unless you do, and require specialist infrastructure and application teams just to keep the lights on. A more modern, agile and rapid solution was required.

I then lifted the lid on SalesForce, and after a few hours it became quite obvious that the data being captured was essentially Accounts, Contacts, Opportunities and some ancillary data to support each of those objects! So there we had it: a clean mapping between the core SalesForce.com CRM offering and the toolkits already developed. All we needed was a mechanism to get the data from the remote silos into the SalesForce.com org and start reporting on it. The answer was simple: make use of the excellent Web Services API that you get with the Force.com platform. After an initial workshop, an interface against the core services was defined. The applications, which already have the functionality to operate off-line, were compatible with the delivery mechanism, HTTP/HTTPS, which was rightly considered a reliable and resilient mechanism. That removed the need for any IBM or BEA/Oracle bloatware (middleware, to the layman), and away we all went.
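Purely as an illustration, here is a minimal sketch of what the laptop-side upload could look like, using the Force.com REST API over HTTPS from Python. The project itself used the platform's Web Services API; the credentials, field values and requests-based client below are assumptions for the example, not the client's actual interface.

```python
# Sketch only: push a locally captured account and opportunity into a
# Salesforce org over HTTPS. Credentials and field values are illustrative.
import requests

LOGIN_URL = "https://login.salesforce.com/services/oauth2/token"


def get_session(client_id, client_secret, username, password, security_token):
    """Authenticate via the OAuth 2.0 username-password flow."""
    resp = requests.post(LOGIN_URL, data={
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password + security_token,  # password + security token
    })
    resp.raise_for_status()
    body = resp.json()
    return body["instance_url"], body["access_token"]


def create_record(instance_url, access_token, sobject, fields):
    """Create a single record via the REST API sobjects resource."""
    resp = requests.post(
        f"{instance_url}/services/data/v52.0/sobjects/{sobject}/",
        headers={"Authorization": f"Bearer {access_token}"},
        json=fields,
    )
    resp.raise_for_status()
    return resp.json()["id"]


if __name__ == "__main__":
    instance_url, access_token = get_session(
        "<client_id>", "<client_secret>",
        "rep@example.com", "<password>", "<security_token>")

    # Map the locally captured data onto the standard CRM objects.
    account_id = create_record(instance_url, access_token, "Account",
                               {"Name": "Example Prospect Ltd"})
    create_record(instance_url, access_token, "Opportunity", {
        "Name": "Example Proposal",
        "AccountId": account_id,
        "StageName": "Prospecting",
        "CloseDate": "2011-12-31",
    })
```

Because the transfer is just HTTPS calls from the existing off-line tools, there is no middleware to license, install or keep alive in between.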

Now, to repeat the problem: business-critical data was residing in unreachable, remote silos and needed bringing together to provide senior business leaders with the reports they need for forecast and pipeline analysis. How long do you think that would have taken the 'old' way? I couldn't even begin to estimate the cost or the elapsed time, given all the unknowns: server procurement, highly paid product-specific consultants, licensing costs, server costs, data centre costs, security considerations... the list goes on and on, as does the delivery date and the number of zeros on the purchase order the customer would need to raise. Luckily for them, the modern Cloud / SalesForce.com route was chosen, and here we are, 12 days of consultant effort later, ready to plug the solution in. The platform is in place, it's resilient and secure, it costs us nothing to maintain, and only one person was needed for all the configuration and coding of the SFDC solution.

If you are reading this, have a think about your own company. How many servers do you have? How many people do you employ exclusively to keep the lights on? How much does a single server cost you as a company? How much does it cost to run failover and DR simulations?

Just keep in mind that SalesForce frees you from:

  1. Data Centre Considerations
    1. Rack Space
    2. Power Consumption
    3. Air Conditioning
    4. Networks
    5. Firewalls
    6. Bandwidth
  2. Server Considerations
    1. Security Patches
    2. OS upgrades
  3. Hardware Considerations
    1. Maintenance Contracts
    2. Component failure
    3. Redundant infrastructure (Warm, Hot, Cold Stand-bys)
  4. Skills Considerations
    1. System Administrators
    2. Networks and Firewalls teams
    3. Security specialists
All of this is included in your SaaS licensing model: one user, one license, and all of the above comes for free!

With this in mind, when you consider your next implementation, think about the costs, the lead times, the procurement processes, the ongoing maintenance and the legion of expensive consultants you will need, and then ask yourself: why should I do and pay for all of this when the "True Cloud" frees your hands and gives you all of it in the price?

3 comments:

  1. You have certainly started them in the right direction. They still have the headache of deploying changes to these tools on every laptop, but I guess that moving to salesforce mobile was too big a jump. Maybe touch.salesforce.com will tick the boxes when it's released.

  2. API cloud technology, hello 2011! Don't you just LOVE it? Enjoyed reading your post and am continuously astounded at how fast technology is evolving :-)

  3. Great post Chris. And you are absolutely SPOT ON; we will see some big implementations moving away from traditional systems and jumping onto the Cloud.
