Linda Be Learning Newsletter

August 2025 - The Machine Learning Edition

Machine Learning Art GIF

Not Linda Berberich, PhD (Founder and Chief Learning Architect, Linda B. Learning) - just fun and games with machine learning

Hi, I’m Linda. Thanks so much for checking out the August 2025 edition of my Linda Be Learning newsletter. If you are just discovering me, I encourage you to check out my website and my YouTube channel to learn more about the work I do in the field of learning technology and innovation.

If you’ve been following my newsletter, you know that I introduced themes back in January 2025, looking at specific technologies and their intersections with human learning. This month, I am turning the focus on machine learning itself. Given our current obsession with “AI everything” and some of the dark places it’s taking us, it seemed timely to ground ourselves in what this tech is, what it can and cannot do, and how we can take back tech by building better machine learning that serves ALL of humanity instead of simply being extractive.

Tech to Get Excited About

I am always discovering and exploring new tech. It’s usually:

  • recent developments in tech I have worked on in the past,

  • tech I am actively using myself for projects,

  • tech I am researching for competitive analysis or other purposes, and/or

  • my clients’ tech.

This month, I’m going to do a bit of a twist for this segment of the newsletter. Instead of talking about tech to get excited about, I’m going to get you excited about your own machine learning tech, by giving you a bit of a roadmap to teach yourself about machine learning.

Teach Yourself Machine Learning

I’ve been dabbling in coding for literally decades and have lived through the evolution of both hardware and software, the rise of the Internet, cloud computing, and all of what we know as modern tech. It’s vast and all-encompassing, but it has provided the opportunity to continue to learn and grow skills, as well as to remain curious about the possibilities.

When it comes to modern data science and machine learning, I got to thinking: what would I do to teach myself machine learning today, given the availability of LLMs?

I started sketching it out, but then I found the video below, and I tend to agree with his steps, summarized here:

  1. Dive deep into Python - I can’t emphasize enough that learning Python is a great way to start. It’s easy to learn, has tons of flexibility in how you can use it, and it’s a skillset you’ll come back to again and again in your machine learning journey. I like using Spyder, Anaconda and Jupyter Notebooks and recommend getting familiar with all of them.

  2. Get data literate - that means becoming familiar with how to work with data, learning some basic SQL, and getting good at querying, managing, and visualizing data (there’s a small SQL-and-pandas sketch right after this list).

  3. Check out readily available AI models - particularly OpenAI API, Claude API, and Ollama, and start building some simple AI apps using Python and available libraries (you’ll find a tiny example of what that can look like at the end of this section).

  4. Now go back to the core machine learning and AI fundamentals - including regression, classification, clustering, Python libraries, neural networks, computer vision, and libraries like PyTorch and TensorFlow for more complex applications. They’ll make more sense after seeing practical applications.

  5. Now return to your AI models and focus on LLMs and AI agents - see what you learn about the tools and what you can build with them.

  6. Build a ton of other AI applications - flex those new-found, hard-won skills!
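If step two feels abstract, here’s roughly what “getting data literate” can look like in practice: a few lines of Python that create a tiny throwaway database, run a SQL query against it, and pull the result into pandas. This is just a sketch - the table, columns, and numbers are invented for illustration, and it assumes you have pandas (plus matplotlib, if you want the plot) installed.

```python
# A tiny "data literacy" workout: build a throwaway SQLite database,
# practice a SQL aggregation, then explore the result with pandas.
# The table, columns, and numbers are all invented for illustration.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE course_completions (
        learner TEXT,
        course  TEXT,
        score   REAL
    );
    INSERT INTO course_completions VALUES
        ('Ada',   'Python Basics', 92.0),
        ('Grace', 'Python Basics', 88.5),
        ('Ada',   'Intro to SQL',  79.0),
        ('Grace', 'Intro to SQL',  95.0);
""")

# Basic SQL: aggregate per course, then pull the result into a DataFrame
query = """
    SELECT course,
           COUNT(*)   AS learners,
           AVG(score) AS avg_score
    FROM course_completions
    GROUP BY course
    ORDER BY avg_score DESC;
"""
df = pd.read_sql_query(query, conn)
print(df)

# Quick visual check (needs matplotlib; call plt.show() outside a notebook)
df.plot.bar(x="course", y="avg_score", legend=False, title="Average score by course")
```

That’s the whole loop at small scale: store data, query it with SQL, then explore and visualize it with pandas.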

Once you are at step four, reviewing core fundamentals, the recommendations from this next video may be helpful to you as well.
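For a taste of what step four looks like in code, here’s a minimal scikit-learn sketch (assuming scikit-learn is installed) that runs the classic iris dataset through both a supervised classifier and an unsupervised clustering algorithm. Regression and other scikit-learn models follow the same fit-then-predict pattern, while PyTorch and TensorFlow come in when you need neural networks and more complex applications.

```python
# Step four fundamentals in miniature: supervised classification and
# unsupervised clustering on the same small, built-in dataset.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Supervised learning: fit on labeled examples, score on held-out data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print("Classification accuracy:", round(clf.score(X_test, y_test), 3))

# Unsupervised learning: no labels, just look for structure in the data
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)
print("Cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```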

Take-home message: You can teach yourself machine learning through self-study. You don’t have to go to college or pay for courses to understand what it is or how to build your own apps, tools, and algorithms.
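To make that take-home message concrete, here’s roughly how little code a first “AI app” from steps three and five can take. This sketch assumes you have Ollama installed and serving on its default local port, with a model already pulled - the model name below is just a placeholder, and you could swap in the OpenAI or Claude client libraries instead.

```python
# A minimal sketch of a first AI app: send a prompt to a locally running
# Ollama server and print the reply. Assumes Ollama is serving on its
# default port and the model named below (a placeholder) has been pulled.
import requests

def ask(prompt: str, model: str = "llama3") -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    print(ask("Explain supervised learning in two sentences."))
```

Wrap that function in a loop, a web form, or a notebook and you already have the skeleton of the simple apps described in steps three, five, and six.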

Technology for Good

It’s very easy to get lost in the hype of AI and the promises of what it does and can do. With that in mind, this month’s Technology for Good section consists of some required reading to better separate fact from fiction and understand some of the dangers and harms related to AI and machine learning.

Here are my current Top Five reading recommendations - check out the links and videos to learn more about these books and their authors.

Data Conscience

Data has enjoyed ‘bystander’ status as we’ve attempted to digitize responsibility and morality in tech. In fact, data’s importance should earn it a spot at the center of our thinking and strategy around building a better, more ethical world. Its use - and misuse - lies at the heart of many of the racist, gendered, classist, and otherwise oppressive practices of modern tech. In Data Conscience: Algorithmic Siege on Our Humanity, computer science and data inclusivity thought leader Dr. Brandeis Hill Marshall delivers a call to action for rebel tech leaders who acknowledge and are prepared to address the current limitations of software development. In the book, Dr. Marshall discusses how the philosophy of “move fast and break things” is itself broken and requires change. You’ll learn about the ways discrimination rears its ugly head in the digital data space and how to address them with several known algorithms, including social network analysis and linear regression.

A can’t-miss resource for junior- to senior-level software developers who have gotten their hands dirty with at least a handful of significant software development projects, Data Conscience also provides readers with:

  • discussions of the importance of transparency,

  • explorations of computational thinking in practice,

  • strategies for encouraging accountability in tech,

  • ways to avoid double-edged data visualization, and

  • schemes for governing data structures with law and algorithms.

Tech Retrospective: Kasparov vs Deep Blue

In May 1997, IBM’s Deep Blue did something that no machine had done before - it became the first computer system to defeat a reigning world chess champion in a match under standard tournament controls. Deep Blue’s victory in the six-game match against Garry Kasparov marked an inflection point in computing, paving the way for today’s world of AI. Its underlying technology advanced the ability of supercomputers to tackle the complex calculations needed to address important societal problems, such as discovering new pharmaceuticals, assessing financial risk, and exploring the inner workings of human genes.

Brute force computing power was key. Deep Blue used 32 processors to perform a set of coordinated, high-speed computations in parallel, evaluating 200 million chess positions per second, achieving a processing speed of 11.38 billion floating-point operations per second, or flops. By comparison, IBM’s first supercomputer, Stretch, introduced in 1961, had a processing speed of less than 500 flops.

IBM’s history in compute and machine learning goes back to the 1950s and is worth exploring to understand how we got to where we are today.

Deep Blue wasn’t just a breakthrough for the world of chess. Its win was seen as symbolically significant - a sign that artificial intelligence was catching up to human intelligence and could defeat one of humanity's great intellectual champions. Later analysis tended to attribute Kasparov's loss to uncharacteristically poor play on his part, human intervention in Deep Blue’s gameplay (referenced in the short at the beginning of this newsletter), and computer bugs.

In a podcast discussion in December 2016, Kasparov reflected on his views of the match. He mentioned that after thorough research and introspection while writing a book, his perspective shifted. He acknowledged his increased respect for the Deep Blue team and a decrease in his opinion of both his own and Deep Blue's performance. He also noted the evolution of chess engines, and that modern ones easily surpass Deep Blue.

I have a bit of a different take. In a conversation with Sam Chavez of The Roots of Change Agency from July 2024, I shared my perspective - excuse my naming Big Blue (IBM) rather than the chess-playing tech (Deep Blue). The point that chess gameplay is not a great model for assessing human behavior or human intelligence still stands.

Learning Theory: Concept Analysis

Back in 2019, I wrote a series of LinkedIn articles about machine learning, AI and behavioral instruction, chatbots, and how machine learning augments human learning. In the first article in that series, I discussed concept learning as my "aha moment" for connecting the dots between what I learned in college as a behavioral psychologist and what I am learning on the job in AI.

The following is a reprint from my earlier article.

Behavior analysts who study verbal behavior often talk about two general categories of behavior. The first is rule governance, that is, behavior that is the result of spoken or assumed "rules" of appropriate behavior. For example, a stop sign signals people to stop when confronted with it; even if there is no traffic or other reason to stop, most people will still stop at a stop sign. As such, rule-governed behavior is often insensitive to those situations where the response isn't actually necessary, as in the situation of stopping at a stop sign when there is no traffic or other person or thing obstructing the way.

The second is contingency-shaped behavior, where future probability of a behavior recurring has to do with the consequences of the behavior when it is performed. This is how some people learn to run stop signs; if, in the past, they stopped when there was no traffic and the delay seemed, to the driver, to be an unnecessary burden, that driver may likely run the stop sign in future situations when they are running late.

Listen No Way GIF by Jukebox Saints

To stop or not to stop, that is the question.

From an instructional design perspective, we might recognize that rule governance can be used to describe concept learning, whereas contingency-shaped behavior describes experiential learning. Further, we might consider different instructional approaches to concept learning. Those who follow the work of Susan Meyer Markle or Ruth Clark might design concept learning that is "rule-eg" (Markle) or "directive" (Clark) in nature, where a rule is stated that defines the concept categorization properties or features, followed by several examples and non-examples that the learner then categorizes as part of the process of learning the concept. Concept learning also can be achieved using an "eg-rule" or "constructivist" approach, where several examples and non-examples of the concept are presented, and through categorizing the examples and non-examples, the learner discerns the rule.

What does all of this have to do with machine learning? Well, it turns out that machine learning approaches closely map to behavior analytic approaches to learning. Rule-governed behavior was the primary method that older machine learning approaches mimicked. Supervised learning is essentially rule-eg concept training, using a matching-to-sample paradigm with very large sets of structured data with predefined labels (concepts). Unsupervised learning is eg-rule concept training using large sets of unstructured data where the labels (concepts) are revealed in the training process that define the hidden structure. Newer machine learning models use reinforcement learning, which is essentially contingency-shaped behavior/experiential learning where the machine learns from repeated exposure to an environment and the consequences of its actions within that environment.
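To make that last parallel concrete, here’s a toy sketch of contingency-shaped behavior as reinforcement learning: a tiny one-step Q-learning loop in which a simulated driver’s “stop or go” choice at an intersection is shaped purely by the consequences that follow it. Every state, action, reward, and probability here is invented for illustration.

```python
# Contingency-shaped behavior as a one-step Q-learning loop: the agent's
# "stop or go" choice is shaped only by the consequences it experiences.
# All states, rewards, and probabilities are invented for illustration.
import random

ACTIONS = ["stop", "go"]
STATES = ["clear", "busy"]          # is there cross-traffic?
q_table = {(s, a): 0.0 for s in STATES for a in ACTIONS}

ALPHA, EPSILON, EPISODES = 0.1, 0.1, 5000

def consequence(state: str, action: str) -> float:
    """Stopping always costs a little time; going when busy risks a crash."""
    if action == "stop":
        return -1.0                                     # lost time
    if state == "clear":
        return 1.0                                      # got through safely
    return -20.0 if random.random() < 0.5 else 1.0      # busy: sometimes a crash

for _ in range(EPISODES):
    state = random.choice(STATES)
    # Mostly repeat what has paid off before, occasionally explore
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q_table[(state, a)])
    reward = consequence(state, action)
    # Nudge the value of this state-action pair toward the observed outcome
    q_table[(state, action)] += ALPHA * (reward - q_table[(state, action)])

for (state, action), value in sorted(q_table.items()):
    print(f"{state:>5} / {action:<4} -> {value:+6.2f}")
```

Run it and the learned values tell the same story as the stop sign example above: shaped only by consequences, the agent learns to roll through when the way is clear and to stop when it’s busy.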

Programmed instruction never gets its due in terms of modern machine learning. To learn more about Markle’s influential yet underappreciated work, check out the second article in the series.

Upcoming Learning Offering

Join us on Saturday, August 23, 2025 for a deep exploration of Stacey Abrams' seminal work, "Lead from the Outside: How to Build Your Future and Make Real Change." This is not just a book review; it is a critical discourse on the paradigms of power, identity, and ambition.

This session explores how to transform your unique perspective—whether shaped by your identity as a woman, a person of color, an LGBTQ+ community member, or an unconventional thinker—into your greatest leadership advantage.

This conversation is part of the series "The Power of Owning Our Story," brought to you by My SafeSpace by Wo+Men In The Digital Age Ltd. We invite you to a rigorous discussion on architecting a future where your unique perspective becomes your most formidable asset.

Our upcoming LinkedIn Live session on Stacey Abrams’ brilliant book, Lead from the Outside

That’s all for now. I hope you’ve gained some new insights and are inspired to deepen your study of AI and machine learning.

See you next month!

Imagine Artificial Intelligence GIF by Pudgy Penguins

Building machine learning is so much fun!