5DL Tools: Implementing High-Performance Human Development with BASHH™
The five dimensions are essential to our analysis of what can be improved about any learning experience. They are primarily a diagnostic tool: they tell us what’s working and what’s not, but they don’t tell us how to fill any gaps they spot. When designing new learning experiences or modifying existing ones, we use a consistent methodology to ensure that we address every key element that maximizes ROI for the learning experience. We’ve named that methodology BASHH™, which is an acronym for its key components/values: balance, alignment, student-centrism, high-performance, and habitual.
Balance. We’ve discussed elsewhere on this site the importance of learning in five dimensions. When one or more dimensions are missing or under-addressed, learner success suffers. In a perfect world, every learning experience would be optimized across all five dimensions for every student. In the real world, cost and time constraints typically force any learning experience to limit itself to the most valuable elements. Balancing those elements across all five dimensions—ensuring that learners gain the most benefit in the dimension(s) where they need the most benefit—is vital to maximizing learner and organizational ROI.
For example, a learning experience designed primarily for learners who are well-prepared in tacit skills and knowledge and connectedness may look very much like a traditional university course, which emphasizes cognitive competencies to the near-exclusion of all else. On the other hand, a learning experience on the same topic that is optimized for under-prepared students may sacrifice some explicit knowledge/skills content in favor of more effort devoted to the social and emotional competencies that under-prepared students typically lack. Such realignment must be balanced against the need to learn new explicit competencies as well. Designers work closely with stakeholders (see below) to ensure that the balance fits the needs of the learner, the organization, and the situation, so that ROI is maximized.
(Stakeholder) Alignment. One can’t balance a learning experience properly without a clear understanding of what the learner needs to learn. To obtain that understanding, one must analyze the needs of at least the key stakeholders in the learning process. In our work, those stakeholders always include the learner, the instructor (who may require upskilling), and the organization sponsoring the learning experience. Many other stakeholders can be relevant as well. For example, consider an undergraduate degree or diploma program. In addition to the learners and their educational institution, other potentially relevant stakeholders include: learners’ parents and/or families; the educational organization’s funders; the students’ prospective employers post-graduation (especially but not exclusively for vocational programs); accrediting agencies; and others.
We use Community Charrettes™ to survey key stakeholders and arrive at a shared understanding of what the learning experience’s priorities should be. From that understanding, we complete the remainder of the design work for the course.
Student (Learner) Centrism. We avoid using the word ‘Student’ in most of our work, in favor of ‘Learner,’ for two reasons. First, Student comes from the old training world that we’re trying to overthrow. As such, it is passive, and it suggests that what’s important happens somehow between a subordinate student and a superordinate ‘teacher.’ In our work, the Learner is primary, and those who facilitate Learning do so from beside the Learner, not above him or her. As others before us have put it, “there is no teaching, only learning and the facilitation of learning.” Second, only a fraction of those we serve would normally be described as students. Among all our projects, two-thirds to three-quarters operate in contexts outside formal education; for example, they may serve primarily adult Learners. But using the word Student here gives us BASHH™, a powerful, memorable acronym that captures the energy of what we work to achieve in our human development solution designs. Using Learner gives us BALHH, or BLAHH—not a word capturing the kind of positive energy we seek to deliver. So, we decided to keep Student around for this one purpose. That’s how pragmatic we tend to be!
After we’ve balanced the competencies across the five dimensions and inventoried the priorities of the various stakeholders (including the learners), it’s time to focus on the Learner’s perspective on the experience being designed. Human development efforts often impose objectives on Learners that are not their own priorities. Schools teach what society thinks a student should learn; other organizations, like corporations or government agencies, teach only what they need their staffs to know. That’s both reasonable and necessary. But if those objectives are not also made important to the Learner in some way, the learning experience will not achieve results that satisfy any of the stakeholders, including both those who pay for the experience and the learners themselves.
So, we carefully and exhaustively analyze the Learners’ incentives to learn, looking for ways to make connections between the content and competencies to be learned and what the Learner already values. Sometimes, the connections are rational or substantive. For example, many Learners want to learn in order to gain a job or a promotion. Sometimes, they are more emotional; for example, even when they have no intrinsic interest in the lesson, Learners can work more productively if they find the learning experience to be emotionally engaging, via the thrill of competition, the joy of social interaction, or simply having fun.
High-Performance. Once we understand what the learning experience should deliver and how to engage the learner, the next step is to make the experience perform as well as possible. To this end, we draw on learning science research to re-engineer the learning experience for maximum ROI. Our approach to utilizing learning science is embedded in a constantly evolving learning framework that we call LADDER2™. LADDER2™ is complicated and important enough that we discuss it separately in its own blog post. Here, let us make just a few key points about how we “drink from the fire hose” of learning science research and end up with the relatively small number of key tools and insights that comprise LADDER2™.
Our experts regularly scan the major scholarly outlets for learning science research and participate in scholarly convenings, with an eye to identifying new insights that could be helpful in our work. Because of the amount of learning science inquiry going on today and the pace of new discoveries, that effort produces a very large number of potential tools. We need to winnow that list down to the subset of new findings that have the best chance of maximizing ROI for our clients. We use four primary filters:
- Findings must be high-performance. We are looking for the kinds of insights that can make a meaningful, real-world difference in how much is learned. With micro-findings, we look for “multipliers,” approaches that improve some micro-aspect of learning by 100% or more. With macro-findings, we look for approaches that improve overall learning by at least 20%-30%.
- Findings must be replicated. If there is a striking new finding delivering major performance gains, we mark it for further attention, but we do not adopt it right away. Instead, we wait for other studies to confirm the finding in different contexts. Our clients in emerging and developing economies do their work with very scarce resources, and many other urgent matters in those nations could benefit from those same resources. We owe it to our clients to get the best possible ROI for them, and not to test on them approaches that have not been validated.
- Findings must be validated in real-world environments. This criterion builds on the replication requirement above. Our clients deserve to know that they are using tools that have been proven to work in the real world, not just in some academic laboratory. There are cases where we use unproven tools in projects because they show such distinctive promise for the situation we’re addressing. In those cases, however, we always consult with the client, explain the tool, the potential benefit, and the risks, and make sure the client is comfortable with the decision. We never try untested ideas on unsuspecting clients.
- Those real-world environments must be from contexts relevant to emerging and developing economies. Culture matters in learning: it shapes what Learners believe, what preparation they have received, and what sorts of challenges they find easier and more difficult to overcome. As we’ve discussed elsewhere on this site, Learners in emerging economies are more likely to be first-generation Learners, and they are often under-prepared to Learn in one or more fundamental ways. So, when seeking tools to improve their performance, we look for tools that have been tested on other Learners with the same characteristics. A surprising number of learning science findings have never been tested on anyone other than upper-middle-class Learners in G7 universities. We view those findings with extreme caution until they have been proved with Learners whose profiles are more like the Learners with whom we usually work.
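For readers who like to see a process made explicit, the four filters above can be thought of as a sequential screen applied to each candidate finding. The sketch below is purely illustrative: the `Finding` fields, the `passes_filters` function, and the sample findings are hypothetical names of our own invention (not an actual tool we use), and the numeric cutoffs simply restate the thresholds described above.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A candidate learning-science finding under review (hypothetical model)."""
    name: str
    scope: str                  # "micro" or "macro"
    improvement: float          # fractional gain, e.g. 1.0 = a 100% improvement
    replications: int           # independent studies confirming the result
    real_world_validated: bool  # proven outside the academic laboratory
    relevant_context: bool      # tested with Learners like those we serve

def passes_filters(f: Finding) -> bool:
    """Apply the four filters in order; a finding must pass all of them."""
    # 1. High-performance: micro-findings need a ~100% "multiplier";
    #    macro-findings need at least a 20%-30% overall gain.
    threshold = 1.0 if f.scope == "micro" else 0.2
    if f.improvement < threshold:
        return False
    # 2. Replicated: confirmed by other studies, not a single striking result.
    if f.replications < 2:
        return False
    # 3. Validated in real-world environments, not just in the lab.
    if not f.real_world_validated:
        return False
    # 4. Tested in contexts relevant to emerging and developing economies.
    return f.relevant_context

# Hypothetical examples: a well-replicated macro-finding versus a striking
# but unconfirmed laboratory result.
candidates = [
    Finding("spaced retrieval practice", "macro", 0.35, 3, True, True),
    Finding("novel lab result", "micro", 1.5, 1, False, False),
]
adopted = [f.name for f in candidates if passes_filters(f)]
print(adopted)  # ['spaced retrieval practice']
```

In practice, of course, this screening is a human editorial judgment rather than an automated pipeline; the sketch just makes the decision criteria and their ordering explicit.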
Habitual. Most training providers focus on conveying knowledge or skills, as measured by process measures such as course completion and test scores. We focus on obtaining results, which means that we focus on changing actual behaviors in real-world contexts. We convey knowledge and skills, of course, but instead of testing for the transfer of knowledge and skills and declaring victory if Learners can regurgitate them, we build those items into real-world habits and measure Learners’ success in changing their habits. The difference is profoundly important—as important as the difference between teaching about the harms of cigarettes and how to put down a cigarette, on the one hand, and actually quitting smoking, on the other. As with high-performance learning, how we do this is complicated enough that we devote another blog post just to this topic.
We welcome your thoughts and comments below. Interested in learning more? Please drop us a note at email@example.com.