Subject to Regulation vs (Effectively) Regulated

Posted in Economics on October 6, 2008 by kgagan

So now for an economics rant.  In Sebastian Mallaby’s post today in the Washington Post, he states that “The claim that the financial crisis reflects Bush-McCain deregulation is not only nonsense. It is the sort of nonsense that could matter.”  He goes on to argue that “deregulation is the wrong scapegoat” for the current financial crisis, which has frozen credit markets, led to a drop of nearly 50% (as of today) in the stock market from its highs a year ago, and appears to be the worst economic crisis to hit this country in 70+ years – and you know what I’m talking about.  I can’t really quibble with his opening argument that the current crisis is in large part due to the Fed’s recent policy of letting asset bubbles inflate, waiting until they burst, and then cleaning up afterwards with the liberal application of dollars and easy credit.  His insight that the Fed faced a choice between managing consumer inflation (due to the rise of cheap consumer goods from China) and asset inflation (due to the massive influx of capital seeking returns from China’s profits on those consumer goods) is valuable, and helped me understand a bit of the history of the current crisis.  His assertion that unwise deregulation was not a major factor doesn’t seem right to me, though, and his supporting arguments seem a bit simplistic and disingenuous.

He first argues that the investment banks, regulated by the SEC, and the commercial banks, regulated by the Fed and other agencies, bought up large chunks of CDOs, CDSs, etc., built on top of questionable loans, while lightly regulated hedge funds seem to have steered clear – therefore, regulation didn’t help.  I’m interested to understand the latter point about hedge funds, as it might offer some guidance into how to manage these kinds of things, but the mere fact that the investment and commercial banks were “regulated” does not hold much water.  Which brings me to the subject of this post: these entities were “subject to regulation,” not “regulated,” or at least not effectively regulated.  As a recent article in the New York Times reports, the SEC was almost entirely absent when it came to actually overseeing the operations of the investment banks.  The Commission actually deregulated the banks by removing capital constraints (which might have reduced their exposure to these assets), barely investigated them or followed up on the results of what audits it did undertake, and didn’t take advantage of any of the new information the banks agreed to share in exchange for the removal of the capital constraints.   I don’t have the salient reference, but I believe the story was largely the same for the commercial banks.  While these institutions were certainly subject to regulation, the actual oversight performed was minimal, and the capacity of the agencies charged with that oversight has been gutted in recent years.

He then argues that “super-regulated” (his word) Fannie Mae’s and Freddie Mac’s appetite for lousy mortgage-backed securities further inflated the bubble.  Fair enough, but I don’t think their appetite for these asset classes was driven by regulation – I think it was driven by their directive to promote homeownership.  This is not regulation; this is the result of their role as quasi-governmental entities that have to make some concessions to the government masters whose implicit (now explicit) guarantee allows them to borrow at below-market rates.  This is government meddling in markets, not regulation.

My opinion is that the problem is not too much or too little regulation; it’s ineffective regulation, coupled with government meddling (i.e. politics) in markets.  So whoever becomes the next president of the United States, I hope they take a hard look at the regulatory infrastructure that exists today, how well it’s performing its stated function, and where it needs to be shored up.  I certainly hope they don’t go for a simplistic or ideologically motivated solution (more regulation, less regulation), and instead find a few smart people (from both sides of the aisle) to figure out how to bring our regulatory scheme up to date with today’s complex and global markets.

And one more thing: I’m getting really sick of hearing about how “free markets” are self-regulating, and how if the regulators/government would just get out of the way, everything’d be fine.  Conventional market theory is built on a few core assumptions, among which are that market participants are rational actors, and that markets operate on the basis of perfect or near-perfect flow of information.  With relatively recent innovations in technology, markets have been opened up to whole new classes of “amateur” investors, and there are now a very large number of players in the market that respond as a herd, not on the basis of individual, rational decisions (see complexity theory and specifically agent-based modelling if you’re interested in crowd dynamics and how they may affect market behaviour).  This also calls into question how perfect the flow of information is under these circumstances.  In the best of situations, information does not flow smoothly or symmetrically.  There is a class of investor that spends all of their time doing nothing but soaking up all the information they can, but even they can only focus on a small subset of any given market (and never mind any conflicts of interest these analysts may have) – the amateurs haven’t got a chance.  Then there’s this whole wave of investments and trading strategies dreamt up by the mathematicians and physicists – the “quant jocks” – of Wall Street.  Never mind perfect information – nobody even knows how these things work, much less what they’re worth or what might influence that value.  So forget about theoretical, self-regulating markets.  They don’t exist.
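If you’re curious what the herd-versus-rational-actor point looks like in miniature, here’s a toy agent-based sketch (entirely hypothetical – the parameters and rules are illustrative, not a calibrated market model): each agent either imitates the majority action of a random sample of peers or acts on its own private signal.

```python
import random

def simulate(n_agents=1000, steps=50, herd_prob=0.7, sample_size=20, seed=42):
    """Toy herding model.  Each step, every agent either copies the
    majority action of a random sample of peers (with probability
    herd_prob) or follows its own random private signal.  Actions are
    +1 (buy) or -1 (sell); we track net sentiment over time."""
    random.seed(seed)
    actions = [random.choice([-1, 1]) for _ in range(n_agents)]
    history = []
    for _ in range(steps):
        new_actions = []
        for _ in range(n_agents):
            if random.random() < herd_prob:
                peers = random.sample(actions, sample_size)       # imitate the crowd
                new_actions.append(1 if sum(peers) >= 0 else -1)
            else:
                new_actions.append(random.choice([-1, 1]))        # private signal
        actions = new_actions
        history.append(sum(actions) / n_agents)  # net sentiment in [-1, 1]
    return history

sentiment = simulate()
```

With independent, rational agents, sentiment hovers near zero; crank up `herd_prob` and it tends to lock into self-reinforcing extremes – which is the whole point about crowd dynamics.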

Incentive Alignment

Posted in Business Analysis, Project Management, Software Development on July 12, 2008 by kgagan

The company I work for is in the process of wrapping up one of the biggest system implementation projects I’ve worked on, and probably the biggest that the company has ever undertaken.  As you do when projects start to wind down, I’ve been thinking recently about some of the things that didn’t go particularly well, and why.  One of the key issues we’ve faced, I think, has been with the alignment of incentives for the business users (whose participation and support are always critical to the success of any project) with the overall objectives of the project and the expected benefit to the company.  By way of background, this project involved the modification and implementation, by our UK operations, of some software developed in the US.   This project was originally estimated at 12-18 months of effort.  We’re now at about 48 months, with probably a more extreme multiplier on the cost.  I wrote an e-mail on the subject a few months ago that I think captures some of the issues fairly well.

One of the fundamental assumptions that underpinned the initial analysis and business case was that the UK operation would substantially adopt US business processes and organizational structures.  This led to the assumption that the level of system change required would be relatively small.  One of the basic implications of this assumption is that UK business users would be required to significantly change the way they do business, which implies additional work, disruption, and discomfort as they go through the required transition.  Those business users are the same resources that must give shape to the project and provide input to the requirements, analysis and design process for the (inevitable but presumably minimal) system modifications that will still be required to support the UK business.

As we’ve moved through the project, there have been hundreds, probably thousands, of decisions that have hinged on whether the UK must be able to continue to do things the way they have with their current system (with the associated system development time and cost to modify the new system) or can change to doing things a new way.  Unfortunately, but unsurprisingly, the individuals charged with making those decisions are almost always deciding in favor of additional system development to support doing things the way they do now.  This is unsurprising because in most cases they have almost no incentive to make a change, no matter how small it is, and there is almost always a cost to adopting a new process.  In some cases, the cost to them may simply be learning a new way of doing things; in others, they may actually be giving up a level of functionality to which they’ve become accustomed.  In addition, they may not know whether the way things are done now is truly necessary (due to market or internal convention) or merely an artifact of current system limitations, so they’ve generally erred on the side of caution, assuming that things are truly necessary if they’re not sure.

 

Counteracting the tendency to want to maintain the status quo is the desire to keep project time and cost down by adopting US processes & existing system functionality.  As IS is, in large part, managing the project and is the primary source of the cost & time required to implement new functionality to support UK business processes, IS has been asked to try to keep this under control.  But of course IS has no authority to enforce it (and rightly so).  If IS starts to say no to a significant number of the system changes requested by business users as a result of the day-to-day decisions, we lose their support.  If we lose their support, they’ll stop participating, and without their support and participation, the project will fail.  I believe IS has pushed back as hard as we can without alienating the users (too much, anyway).

 

With a project the magnitude of what we’re undertaking, that touches all parts of the organization and that requires significant compromise between parallel user groups in the US & UK (i.e. reporting lines for stakeholders and participants only meet at the CXO level), the only people with any ability to enforce the change required by the original business case are senior executives.  Even at the level of the workstream leads, the incentives are pretty weak to force major change to business processes.  It’s also extremely difficult for a senior executive to effectively manage the decisions that are resulting in the increase in project scope.  The increase in scope has largely been the result of a large number of small decisions made at the line management level and requiring detailed knowledge of specific business processes and system functionality, rather than a small number of large decisions that can be readily made by a senior executive.

 

I would also argue that the original assumption (that the UK would be willing to accept the business processes implicit in the new software, i.e. the US business processes) is fundamentally flawed.  A project of this magnitude requires significant support and effort from just about everyone in the business, and ultimately requires buy-in and acceptance from a large majority of end users.  Given that there’s little tangible short-term benefit to those end users to make the changes and put up with the disruption implied by that basic assumption (however small they may be – and unfortunately, in this case, they’re much larger than anticipated), I’m not sure you would ever have the level of support required for the project to be successful.  To get their support and participation, you need to offer them some short-term benefit, which will result (and in our case has resulted) in increases in project scope.  Stakeholder and project participant incentives need to be aligned with project outcomes and objectives in order to have a chance at succeeding, and I’m not sure the original business case assumptions left enough room for this.

 

Three other related (and probably more obvious) issues/assumptions are/were:

We assumed that US & UK businesses were fundamentally pretty similar, and that adopting US processes wouldn’t be too difficult.  I think this was probably a bit optimistic, and that we had an insufficient appreciation of the differences between US & UK business processes (and of the reasons for, and our ability to overcome, those differences).

We had an insufficient appreciation of the level of additional functionality required to support the UK market environment – we thought we were 75-90% of the way there with the existing pilot system.  We were more like 10-20% of the way there (which implies a multiplier of 3-9 times the original estimate of the remaining work).

The benefits of standardizing on US processes and system functionality mostly accrue to the global organization in the form of lower project cost, cost savings from consistent, streamlined business processes, and improved sales & marketing capability enabled by the system.  In the long run these should benefit individual employees through greater company profitability, but at the cost of significant short term disruption, up to and including the loss of a job for some employees.
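The 3-9x multiplier in the second point above falls straight out of arithmetic on the remaining work.  A quick sanity check, using the percentages from that estimate:

```python
# Believed vs. actual completeness of the existing pilot system.
believed_done = (0.75, 0.90)   # "75-90% of the way there"
actual_done = (0.10, 0.20)     # "more like 10-20%"

# Remaining work as a fraction of the total effort.
believed_remaining = (1 - believed_done[1], 1 - believed_done[0])  # ~(0.10, 0.25)
actual_remaining = (1 - actual_done[1], 1 - actual_done[0])        # ~(0.80, 0.90)

# Multiplier on the original remaining-work estimate: best case divides the
# smallest actual remainder by the largest believed remainder; worst case
# divides the largest actual remainder by the smallest believed remainder.
low = actual_remaining[0] / believed_remaining[1]   # 0.80 / 0.25 ~ 3.2
high = actual_remaining[1] / believed_remaining[0]  # 0.90 / 0.10 ~ 9.0
print(f"multiplier range: {low:.1f}x to {high:.1f}x")  # roughly 3x to 9x
```

Which lines up uncomfortably well with a 12-18 month project landing at 48 months.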

 

——————–

 

Here’s an example, from a very recent e-mail discussion of system design.  While this is not specifically related to doing things the US or UK way (as the functionality in question is UK only), it’s indicative of the way decisions have been made.

 

(Me) If the same customer has advised two amounts (e.g. under separate reference numbers), could those go in separate message segments, or are they required to be in the same segment?  It would be simpler to implement if it’s acceptable to report them under separate segments.

 

(Business User) They are required to be grouped and cannot go separately. The reason for this is that customers get charged £?? per segment, and this is a mechanism they use to keep their costs down.

 

(Me) OK.  We’ll adjust system design accordingly.  Out of curiosity, how often does this happen and how much do they get charged per segment?

 

(Business User) It doesn’t happen a great deal of the time but when it does it stays with the contract throughout its lifetime, but that’s irrelevant really. Underwriters are motivated by cost savings and we are obliged to assist them in doing so by using the system in the most cost effective way. The ballpark figure is £1 per segment depending on volume.

 

While it may seem ridiculous, this is an entirely rational response on the part of the business user.  If we do not implement the more complex method, they’ll be criticized by their customers for not supporting the cost-saving strategies they have supported in the past (this flexibility is available in their current system), even if the cost savings are only a few hundred pounds a year.  The only benefit to them of giving up this functionality is getting the new system implemented slightly sooner, and for slightly less cost.  Given the choice, the users will take the longer project time to get the functionality they want.  We’ve estimated that the functionality required to implement the more complex method will add 3-5 days of effort to the overall implementation, including design, development, QA, UAT & Integration Testing. Take the few additional days for this one decision and multiply by the hundreds or thousands of similar design decisions that have been made and you end up with big numbers.
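For the curious, the “more complex method” boils down to grouping advised amounts by customer before emitting message segments.  A minimal sketch (the record shape and field names here are hypothetical – the real message format is more involved):

```python
from collections import defaultdict

def build_segments(advices, group_by_customer=True):
    """Build message segments from (customer, reference, amount) advices.
    The simple method emits one segment per advice; the grouped method
    combines all advices for the same customer into a single segment,
    which reduces the per-segment charge the customer pays."""
    if not group_by_customer:
        return [{"customer": c, "amounts": [(ref, amt)]}
                for c, ref, amt in advices]
    grouped = defaultdict(list)
    for customer, ref, amount in advices:
        grouped[customer].append((ref, amount))
    return [{"customer": c, "amounts": amts} for c, amts in grouped.items()]

# One customer advising two amounts under separate references:
advices = [("AcmeCo", "R1", 1000.0), ("AcmeCo", "R2", 250.0), ("BetaLtd", "R9", 90.0)]
simple = build_segments(advices, group_by_customer=False)   # 3 segments
grouped = build_segments(advices)                            # 2 segments
```

The grouping logic itself is trivial; the 3-5 days land in fitting it into the real message design, development, QA, UAT and integration testing.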

If you’ve gotten this far in the post, you’ve probably realized that there are a lot of issues to explore here, but as this is already a long post, I’ll take them up in subsequent notes.

Functionality Matrix

Posted in Business Analysis, Project Management on July 3, 2008 by kgagan

I dusted off an old project definition & scoping tool today to help communicate scope to users & executive sponsors.  This dates back 10+ years to a methodology that was developed by a little management consulting company I worked for after business school – Axiom Inc (later bought by Cambridge Technology Partners).

I’ve never encountered this anywhere else, although I’m sure it’s not a totally original idea.  The basic concept is to categorize distinct functional requirements into groupings and put them in order of priority.  Structuring the resulting data into a matrix and drawing a zigzag line through the matrix to indicate the cutoff between items in and out of scope offers a concise presentation of scope for a project.

One of the major benefits of the presentation is that it concentrates attention on the marginal items.  As with most portfolio management problems, most people will agree on the top priority items and the lowest priority items, but will struggle to come to agreement on the stuff in the middle.  You don’t want to waste valuable meeting time on discussion of things on which everyone agrees – you want to cut straight to a review of areas where there may be disagreement.

At Axiom, part of the scoping methodology was to include an assessment of the effort required for each area.  Granted that early in the process these numbers are pretty fuzzy, you can generally at least get to a small/medium/large categorization, and attaching effort ranges to these categories allows you to do some scenario analysis.  With a little Excel magic, you can even build a tool to calculate total estimated effort as you adjust in scope/out of scope decisions on individual functional items.
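The Excel exercise is easy to sketch in code.  Here’s a minimal version (the functional groupings, size categories and effort ranges are invented for illustration): flip an item’s in-scope flag – i.e. move the zigzag line – and recompute the total.

```python
# Small/medium/large effort ranges in person-days (assumed numbers).
EFFORT_RANGES = {"S": (2, 5), "M": (5, 15), "L": (15, 40)}

# Each functional item: (grouping, priority within grouping, size, in_scope).
items = [
    ("Quoting",   1, "M", True),
    ("Quoting",   2, "L", True),
    ("Reporting", 1, "S", True),
    ("Reporting", 2, "L", False),  # below the zigzag line: out of scope
    ("Billing",   1, "M", True),
    ("Billing",   3, "S", False),
]

def scope_estimate(items):
    """Sum the low/high effort ranges for everything currently in scope."""
    lo = sum(EFFORT_RANGES[size][0] for _, _, size, in_scope in items if in_scope)
    hi = sum(EFFORT_RANGES[size][1] for _, _, size, in_scope in items if in_scope)
    return lo, hi

lo, hi = scope_estimate(items)
print(f"in-scope estimate: {lo}-{hi} person-days")  # 27-75 for the items above
```

Toggling `in_scope` on the marginal items and rerunning gives you exactly the scenario analysis described above – in a meeting, that turns a scope debate into a live what-if exercise.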