Myth #11 (of 12): Bloom is no longer relevant

Any organization, teacher, trainer or instructor who focuses on ‘noun’ objectives (see myth #10) can’t help but focus solely on the cognitive domain. However, if you focus only on the cognitive domain, learners may have trouble finding value in the material being covered. “In spite of the wide acceptance of Bloom’s taxonomy, educators have largely ignored the affective domain, focusing instead on the cognitive.” (Bolin, 2005)  “What is the ‘value’ in learning this information/skill?” “Why are we studying this?” “How is it relevant to my situation?” Whether stated in class, with colleagues at lunch, or with family at night, these questions resonate around the globe.

The reason most young people (certainly in the West) want to get a license is freedom. Driving your own car (or borrowing the parents’ car if still at home) is a huge step toward independence. Therefore, the affective goal of getting a license is to feel independent; it is seen as a pseudo rite of passage into adulthood. The socio-motor goal is to drive safely (usually to keep tickets and insurance costs low), thus the need to learn to drive in various conditions and terrains. The cognitive goal is to pass the test. But the cognitive needs the affective for motivation and the socio-motor to pass the practical aspects of the test. Whether we realize it or not, we utilize all three domains in our personal lives here in the 21st Century.

21st Century Bloom: A few years ago I decided to delve into Bloom and see what the recent literature had to say. The articles all had some interesting ideas, and a few researchers had developed updated versions. Instead of having to choose a version of Bloom to fit my situation, I decided to compile all the versions into one table per domain. This document consists of three pages – one table for each domain – but uses some of the newer terminology to describe each level. To simplify things, I used different color text in the tables that corresponds with the source citation at the bottom of each page.

I then proceeded to generate a few ‘training’ videos on how to best utilize these tables when planning course/lesson outcomes. I selected the topic of ‘giving a presentation’ as these skills are needed across all education systems and require a heavy dose of all three domains. Given that outcomes are the core reason for the existence of any lesson/course/program, writing them should take the most time and effort.

Myth #12 (of 12): ADDIE is outdated

There has been some lively discussion recently on LinkedIn concerning the best curriculum design method for today’s fast-paced world. So perhaps it’s time to rethink the role of content in teaching and learning. A fresh perspective on this problem includes thinking about our role as faculty and that of our students, as well as reconsidering the nature of curriculum design. (Monahan, 2015)

As an ISD (Instructional System Designer) I personally am still a big proponent of ADDIE (see my approach below) and have nothing against others using different systems given their circumstances. Use what works for you and gets the results you need.

However, I realize confusion abounds about ADDIE because many people utilize only the Instructional Designer components – Design, Development, and Implementation – in effect ‘DDI’, or ADDIE lite. In other words, I do take issue with those who attack a proven concept based on misuse or misunderstanding.

There are three main ADDIE phases that, for a variety of well-meaning reasons, many leave out: Assessment, Evaluation, and most importantly, the ‘prototyping’ aspect of the Development phase. This blog will address all three.

Assessment of Needs

Conducting Needs Analysis: Is it Really Important? by Arshavskiy (2016) is a very interesting post from a couple of weeks ago. Her premise is that “Good instructional designers must be able to recognize the ultimate reason the training is needed and seamlessly help the clients select the most appropriate training modality.” It is obviously written from a corporate training perspective, but she is accurate in her statement. Moreover, she begins the article by hinting that the contracting body (company, institution, etc.) already knows what the needs are. Here, I fear, is the great mistake many make: an over-reliance on third-party needs assessment information.

There are a million reports that highlight what skills new grads need for the workplace. However, each new grad cohort is unique, so why not quickly double-check what is really needed with a specific group? Furthermore, big consulting companies are making a killing generating global and country-specific education needs reports that are wholeheartedly accepted by education officials. With all the sweeping comments about groups of learners, I wonder what happened to the ‘I’ in ILP? (see Myth #7)

Developing & Prototyping

I was recently involved with a HUGE project in a Middle East country that would make the ultimate case study for how NOT to bring Western education into foreign countries. At one point I asked the top people at the Skills Standards group why this project was never prototyped. “Jeff, there just wasn’t time.” was the answer. Sorry, but when a government dedicates USD $1 Billion for a new education initiative, you make time to prototype! If a company wants to try a unique training procedure across all divisions, you make time to prototype! If a university wants to deliver old content in a very radical way, you make time to prototype.

Tech & startup companies are all about fast prototyping. Build fast and break fast is the motto. Educators need to feel free to prototype lessons/courses/programs that can be overhauled as they are delivered, WITHOUT retribution from governing bodies. Surely this is a better model than suffering through many inspections with poor grades and being told to make changes ‘or else!’

Evaluation

A course/program evaluation has three perspectives: the course level, the institution level, and the regulatory level. Given that regulatory bodies around the globe are so diverse, I will only concern myself with the first two.

There is a group in the US that has started to do some interesting things with surveys to measure what tests alone can’t. They are using surveys to analyze data and generate a number of ‘data-driven aha moments’ providing insights about teachers, content and students. They created a framework with four attributes: growth mindset, self-efficacy, self-management and social awareness. (Kamenetz, 2015) Not only does this seem to dovetail with what employers want, it also supports Kim’s (2015) notion that, “If personalized learning is to become the dominant paradigm for education in the U.S., more school districts are going to need to follow their path independently, via broader cultural change.”

Final Thought: Shouldn’t any course/program/institution ultimately be ‘judged’ against whether participants are progressing in their Individual learning journey? (Luckily, my current research is working on such a process. Stay tuned!)