An AI algorithm has successfully learned how to dress itself, even adapting when researchers changed the starting position and shape of the clothing.

In a way that might seem familiar to those of us who are not naturally "morning people", researchers "rewarded" the humanoid character for putting its limbs and head through the correct holes without ripping the fabric.  This technique, known as "reinforcement learning", is inspired by positive reinforcement techniques used to train animals.
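The reward-driven loop described above can be sketched with a toy tabular Q-learning example. Everything here — the five-state task, the reward values, and the hyperparameters — is invented for illustration and bears no relation to the researchers' actual cloth simulation; it only shows the core idea of an agent improving its behaviour from reward signals.

```python
import random

# Toy task: states 0..4 on a line, goal at state 4; actions: 0 = left, 1 = right.
# The agent earns a reward of 1.0 only when it reaches the goal.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # value table: Q[state][action]
alpha, gamma, eps = 0.5, 0.9, 0.1           # learning rate, discount, exploration rate

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action else -1)))
    reward = 1.0 if nxt == GOAL else 0.0    # the "reward" for success
    return nxt, reward, nxt == GOAL

random.seed(0)
for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the current best action, sometimes explore.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda x: Q[s][x])
        nxt, r, done = step(s, a)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# After training, the greedy policy heads toward the goal from every state.
policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES - 1)]
```

The same shape of loop — act, observe reward, update — underlies far more sophisticated systems; only the environment, the policy representation, and the reward design change.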

While it is clear that there are a number of potential applications, from CGI to simulations used in engineering and robotics design, what is perhaps more interesting is the way this breakthrough was achieved.  Rather than programming one particular movement to reach the desired outcome, researchers broke down the motion into a number of subtasks.  As a result, the model was "more robust to variations in the clothing".
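The subtask idea can be illustrated in a few lines. The subtask names below are invented for this sketch, not taken from the research; the point is that crediting each completed step gives the learner a graded feedback signal rather than a single all-or-nothing verdict at the end.

```python
# Hypothetical decomposition of "putting on a jumper" into ordered subtasks.
SUBTASKS = ["grasp_hem", "insert_head", "insert_left_arm", "insert_right_arm", "pull_down"]

def subtask_reward(progress):
    """Reward one point per completed subtask, not just the final outcome."""
    return sum(1.0 for task in SUBTASKS[:progress])

# A learner rewarded per subtask sees a gradient of feedback (0, 1, 2, ...)
# instead of a single success/failure signal after the whole sequence.
rewards = [subtask_reward(p) for p in range(len(SUBTASKS) + 1)]
```

Because each subtask has its own success criterion, a change in the starting conditions — a differently shaped garment, say — only perturbs the steps it touches, rather than invalidating one monolithic motion.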

In a similar way, an outcomes-based or results-based approach must have two limbs: not only identifying the end goal, but also demonstrating how each process within a project contributes to achieving that goal.  Familiar and routine tasks - like putting on a jumper - can seem like singular units, discrete and therefore immutable.  By breaking down those tasks, each constituent step can be adjusted, creating a feedback loop for continuous improvement as well as allowing the process to be tailored to the particular circumstances at hand.