
Not a week goes by without an advisor telling me that they're paying for a financial planning tool they never use. Some of the reasons they give are mundane and common to any small business owner struggling to introduce new technology: poor uptake of the service it supports, a lack of attention to training, and so on. These problems can be troublesome and time-consuming to address, but they don't present a true philosophical challenge to the use of advice technology. Some of the other issues advisors bring up are quite a bit more vexing.

A recent Investment News article by Sheryl Rowling, “When Technology Doesn't Make Sense,” highlights two of these higher-order challenges to advice technology.

The first problem, illustrated by her example of tools that calculate the most fiscally responsible way to handle mortgage expenses, is that the intelligence baked into current planning tools often ignores clients' emotional goals. She describes several circumstances in which clients fail to follow through on the suggestions of a planning tool, or of one of the optional calculators it contains, and this is a common thread in my conversations on the topic. Her point seems to be that technology solves problems in a blunt, idealized fashion without consideration for real circumstances, and therefore should be used sparingly.

The second problem is that advisors may end up disagreeing with the output of a planning calculation, which she illustrates with the relatively new area of Social Security optimization. I encounter this in a variety of situations. Rowling's example is perhaps the most common type: client circumstances where there is no accepted standard for generating a solution, yet the advisor's chosen tool produces only one outcome. It also occurs when the advisor lacks education about the matter at hand, or is led away from objective advice by their own emotional response to the situation.

Rowling refers to the source of these problems as the “Human Element” and suggests that they are reasons to ignore the output of advice technology, presumably by manually generating an alternative solution. It's not clear from her article what Rowling sees as the long-term solution, but in my experience many advisors aren't even seeking one. Among a large portion of advisors there is rampant fatalism about the capability of technology. Agreeing with Rowling, they tend to see these “Human Element Problems” as insurmountable obstacles and believe there must always be significant human action outside the process boundaries set by advice technology.

However, I suspect this is more a fantasy about the irreplaceable nature of individual human thought than a fundamental shortcoming of all advice technology.

Rowling's Human Element Problems may be more difficult to address algorithmically than simply solving for the best fiscal outcome, but that added difficulty doesn't put them outside the scope of modern technology. Compared to other challenges already met by algorithm, modern speech recognition for one, determining whether a client is likely to act on a set of suggestions and presenting options accordingly seems rather simple. There are complexities to consider, such as how to gather information about clients' behavioral predilections and apply it to various “best practice” solutions. There will also be challenges around fair and balanced disclosure and fiduciary requirements when the fiscally correct answer isn't the behaviorally realistic one that must be presented for a successful sale. The important point, however, is that properly designed technology can solve the first sort of Human Element Problem.
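
To make that feasibility argument concrete, here is a minimal, hypothetical sketch of how a planning tool might rank options by weighting projected fiscal benefit against an estimate of the client's likelihood of following through. The option names, dollar figures, and probabilities are illustrative assumptions on my part, not the logic of any actual product.

```python
# Hypothetical sketch: rank planning options by expected realized benefit,
# i.e., projected fiscal benefit discounted by the client's likelihood of
# actually following through. All names, figures, and probabilities here
# are illustrative assumptions, not any vendor's actual logic.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    fiscal_benefit: float   # projected dollar benefit of the option
    follow_through: float   # estimated probability (0.0 to 1.0) the client acts on it

def expected_realized_benefit(option: Option) -> float:
    """Discount the fiscal benefit by the chance the client follows through."""
    return option.fiscal_benefit * option.follow_through

options = [
    Option("Pay off the mortgage early", fiscal_benefit=18_000, follow_through=0.9),
    Option("Invest the extra cash flow", fiscal_benefit=25_000, follow_through=0.4),
]

# Present options ranked by expected realized benefit, while still showing
# the raw fiscal figures needed for fair and balanced disclosure.
for opt in sorted(options, key=expected_realized_benefit, reverse=True):
    print(f"{opt.name}: expected realized benefit of about "
          f"${expected_realized_benefit(opt):,.0f} "
          f"(fiscal benefit ${opt.fiscal_benefit:,.0f})")
```

Even a toy ranking like this keeps the fiscally optimal answer visible alongside the behaviorally realistic one, which is exactly the kind of design the disclosure and fiduciary concerns above would demand.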

Alternately, proper technology choice by advisors, along with appropriate policy from their firms, can solve the second set of issues. If advisors have a serious problem with the output of a particular tool, they shouldn't use that tool. If they can't find a tool that meets all of their needs, they should clearly state their goals so that someone can create one. The reaction to finding deficiencies in available products shouldn't be to check out of technology in general, or even across a broad range of circumstances, but to check into the process for creating new products and improving available tools. This approach has been used to great success outside of our industry, from open voting on features to early-access beta teams to the crowdfunding of new hardware and software. It's an ethos that should, and probably will, find application in the advisory space.

Between the creation of more intelligent advice software, which seems like an eventuality given the number of advisors seeking it, and more care in choosing software, which is already happening within more successful practices, Rowling's Human Elements become simply another part of a calculation to be automated.