This piece (which comes by way of the eCornell Research Blog) sums up the problem with these archaic methodologies nicely:
As the table above indicates, although there are four levels of evaluation in the Kirkpatrick model, none of the levels captures business feedback or business reaction to the training or e-Learning product. Level I captures the reaction of the student. Level II tests the student and assesses whether the student has learned because of the training; the provider of the feedback is the student. Level III attempts to determine whether the student is applying this newfound skill or knowledge in the workplace, which is accomplished by interviewing either the student or the student's supervisor. Level IV, where the impact that the training product has on the business is supposed to be identified, is contingent on an analysis of the data collected in the first three levels (none of which, by the way, even attempted to capture the voice of the business). The provider of feedback for Level IV tends to be the training department or the training manager, who frequently (and unilaterally) attempts to derive a correlation between the results of the first three levels of evaluation and business impact.
It should now be obvious that the current e-Learning development and evaluation methodologies are not equipped with the tools required to capture and measure business requirements. ISD, as the name implies, does a good job of identifying instructional requirements, but it does not possess the means to detect business needs. The Kirkpatrick method of evaluation is designed to measure the effectiveness of instruction, and thus does not include tools that are vital to calculating business impact.
There’s also some nice stuff here about substituting Six Sigma’s DFSS (Design for Six Sigma) method for ADDIE. It’s a little vague, but it’s a start.