Linda Be Learning Newsletter

July 2025 - The Automation Edition

Linda Berberich, PhD - Founder and Chief Learning Architect, Linda B. Learning, in avatar form circa 2008

Hi, I’m Linda. Thanks so much for checking out the July 2025 edition of my Linda Be Learning newsletter. If you are just discovering me, I encourage you to check out my website and my YouTube channel to learn more about the work I do in the field of learning technology and innovation.

If you’ve been following my newsletter, you know that I introduced themes in January 2025, looking at specific technologies and their intersections with human learning. This month, I am addressing automation, an area I’ve dabbled in for decades, both as a student and professionally. Back in 2019, I wrote a series of LinkedIn articles about machine learning, AI and behavioral instruction, chatbots, and how machine learning augments human learning - feel free to check those out and see how they resonate six years later!

In this month’s edition, we’ll explore technologies, instructional design methodologies, and other resources concerning some of the best uses of automation.

Tech to Get Excited About

I am always discovering and exploring new tech. It’s usually:

  • recent developments in tech I have worked on in the past,

  • tech I am actively using myself for projects,

  • tech I am researching for competitive analysis or other purposes, and/or

  • my clients’ tech.

This month, I’d like to introduce you to Lily~Bot.

Lily~Bot

I first became acquainted with Lily~Bot through a STEAMDivas post on LinkedIn that led me to RobotDiva.

The RobotDiva YouTube channel recently kicked off an exciting new series: Lily~Bot Dance Party!

They’ve launched a fun, creative, and hands-on four-part robotics series exploring how to build, code, and customize Lily~Bot, their exclusive voice-responsive, dancing robot designed to make STEM joyful, artistic, and accessible for all.

This introductory episode provides a sneak peek into what Lily~Bot is, how she works, and what else is needed to build out her functionality. It’s an ideal approach for parents, educators, and future innovators ready to combine creativity with technology.

📅 Lily~Bot Series Schedule:

🤖 Week 1: Introduction and creative use case exploration

🚀 Week 2: Learn more about robotics and motion subsystem

🎤 Week 3: Expert interview or representative discussion

🛠️ Week 4: Hands-on tutorial and build process

Episodes drop every Monday — subscribe to the channel to follow along.

Want to build a Lily~Bot yourself? Get the exclusive Lily~Bot Kit here and use discount code: STEAMDIVAS. Links to additional components are listed in the above video’s description.

Technology for Good

Automation is everywhere, whether we are aware of it or not. It has long been my opinion that automation is better suited for some tasks over others. Web content accessibility, per WCAG guidelines, for example, should have always been one such area where automation just happens by default.

But that’s not the case, sadly. And even more sadly, people who are responsible for ensuring such standards are in place are suspicious of whether automation can actually do the job accurately and correctly. This came up at an ASU presentation I did with a colleague back in June 2025 - more on that later in the newsletter, but here’s an excerpt.
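To make the idea of accessibility automation concrete, here is a minimal sketch of one kind of check such tools run: flagging images that lack alt text, one of the most common WCAG failures. This is purely illustrative and uses only Python's standard library; real products like TestParty scan far more than this.

```python
# Illustrative sketch: detect <img> tags missing a non-empty alt attribute,
# a basic automated check for WCAG 1.1.1 (Non-text Content).
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag missing a non-empty alt attribute."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # missing or empty alt text
                self.violations.append(attr_map.get("src", "<unknown>"))


def find_missing_alt(html: str) -> list[str]:
    """Return the src values of images that fail the alt-text check."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations


page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(find_missing_alt(page))
```

The point is that checks like this are deterministic and cheap to run on every build, which is exactly why this category of task is well suited to automation.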

TestParty

Enter TestParty.ai, designed to automatically scan source code to create more accessible websites, digital apps, images, and PDFs with complete visibility. TestParty products can be used to reduce compliance risk and supplement in-house or manual vendor audits. Thread, Tushy, Zedge, Dorai Home, Felt, Greatness Wins, Pasito, WestPoint Home Partners, and Pepperdine University are among the early adopters using this innovative new tech.

Check out this recent webinar from TestParty founders Michael Bervell and Jason Tan to learn more.

Tech Retrospective: JAWS - a brief history

As you’ve seen, we could have started from a place of building technology that is accessible by default simply by reimagining accessibility as software testing parties, with a focus on building products that just work for everyone and empower anyone. But instead, adaptive technology became its own vertical within tech: a reactive rather than proactive approach, and one that makes sense given the capitalist nature of the tech industry.

2025 marks the 30th anniversary of JAWS for Windows. No, not the horror movie about a great white shark.

That was 50 years ago…

JAWS, or Job Access With Speech, is one of the first examples of automated technology intended to help humans overcome an accessibility barrier; in this case, one related to vision. It is probably the most well-known of all adaptive technologies.

JAWS was originally released in 1989 by Ted Henter, a former motorcycle racer who lost his sight in a 1978 automobile accident. It was one of several screen readers originally created for Microsoft’s DOS operating system, giving blind users access to text-mode MS-DOS applications. JAWS used macros that allowed its users to customize the user interface and work better with various applications, a competitive advantage over other screen readers of its time.

In 1992, as Microsoft Windows became more popular, a new version of JAWS emerged, one that didn’t interfere with the natural user interface of Windows but continued to provide a strong macro facility. In January 1995, JAWS for Windows 1.0 was released, with scripting support following the next year. The JAWS Scripting Language lets users work with programs that lack standard Windows controls, as well as programs that were not designed for accessibility.

A new revision of JAWS for Windows is released about once a year, with recent updates including Picture Smart AI and FS Companion - see this 30-year timeline for more.

Learning Theory and Learning Technology

Because scaling, efficiency, and speed have been buzzwords since the dawn of the tech industry, everyone and their dog has tried applying automation to EVERYTHING. This overgeneralization has led us to a place of overspending, overconsumption, excessive waste, and even anxiety and depression. Not everything needs to be bigger or faster or cheaper to be better, and not all tasks SHOULD be automated.

Beyond questions about the ramifications of automating a system - who is helped and who is potentially harmed - lies a more basic question: does the task itself lend itself to automation?

In my mind, there’s a pretty clear distinction - procedural, near-transfer tasks are more amenable to automation. Far-transfer tasks, not so much.

If you’re not familiar with this terminology and the difference between these two task types, please see this excerpt from a live learning session I did on far-transfer training.

In that session, I talked about how to design far-transfer training. Since we’re talking about automation, I’m going to briefly discuss how to design near-transfer training, to illustrate why these types of tasks are better suited for automation than far-transfer tasks are.

Chapter 3 of the Clark book referenced in the video discusses how to teach procedures. Procedures are simply clearly defined steps that result in the completion of a routine job task. These are tasks performed the same way every time, lending themselves to a step-by-step format, particularly when consistency with a standardized form is the goal. Procedures can be linear or decision-based, with the latter made up of two or more linear procedural sequences.

Training procedures is often referred to as near-transfer training, because the way you train them is essentially the way they are performed on the job or in real life, as the case may be.
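This structure is exactly why near-transfer tasks automate so well: a decision-based procedure is just a decision point selecting between linear step sequences. Here is a minimal sketch in Python; the procedure and step names (a hypothetical password-reset task) are my own invention, purely for illustration.

```python
# Hypothetical decision-based procedure: two linear sequences
# joined by a single decision point, expressed directly as code.

def reset_password(account_locked: bool) -> list[str]:
    """Return the ordered steps performed for this request."""
    steps = ["verify user identity"]  # shared opening step
    if account_locked:  # the decision point
        steps += ["unlock account", "force password change at next login"]
    else:
        steps += ["send reset link", "confirm link was used"]
    steps.append("log the completed request")  # shared closing step
    return steps


print(reset_password(account_locked=True))
```

Because every step is explicit and the branching is fully enumerable, the "training" version of the task and the automated version are essentially the same artifact, which is the hallmark of near transfer.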

Learners require three basic instructional methods to learn procedures:

  1. Clearly stated steps of the procedure, with illustrations as needed.

  2. A follow-along demonstration of the procedure.

  3. Hands-on practice with explanatory and corrective feedback.

While Clark addresses how to create instructional materials for teaching procedures in classrooms and in elearning, she didn’t anticipate how these same methods could be used for creating automations.

But I did. Want me to do a live learning session on this topic? Let me know — I’m still trying to decide what/if I’m going to offer one this fall.

Upcoming Learning Offerings

Earlier in the newsletter, I referenced an ASU presentation I did back in June, Access for All: Automating Design Justice with my friend and colleague Em Weltman.

Here are a couple of particularly poignant excerpts from that session:

For us, automating accessibility isn’t just about the tech; it’s about the mindset. Want to learn more? Watch the session recording, and if you want to explore working with us, feel free to set up some time on my Calendly.

See you next month!