Linda Be Learning Newsletter
January 2026 - The Vision Edition


Linda Berberich, PhD - Founder and Chief Learning Architect, Linda B. Learning, wearing the infamous “Coke-bottle” glasses, circa 1978
Hi, I’m Linda. Thanks so much for checking out the January edition of my Linda Be Learning newsletter, the first one for 2026. If you are just discovering me, I encourage you to check out my website and my YouTube channel to learn more about the work I do in the field of learning technology and innovation.
My background in learning is very, very broad. In addition to my academic background in psychology and kinesiology, I have also been an instructor, coach, facilitator, and guide in one form or another since I was a little kid. In graduate school, I studied human, animal, plant, and machine learning, both in the lab and in applied settings. Throughout 2026, I'm going to feature all of these different types of learning, where applicable, to compare and contrast them around a specific theme.
Fun fact: Up until I was 43 years old, I had totally garbage eyesight. I was born with misshapen eyeballs and discovered much later in life that I also had congenital cataracts. I spent the first eight years of my life stumbling around with utterly impaired eyesight, until I had my vision tested for the first time. I found out that in addition to everything else, I was also highly myopic (-8.0 at initial testing) and had astigmatism in my right eye. I wore corrective lenses that partially corrected my vision, but it wasn't until I had cataract surgery that I achieved 20/20 vision for the first time in my life. I had customized Crystalenses implanted to replace the natural lenses in my eyes, designed to correct for all of my vision impairments. This dramatic event gave me a heightened appreciation for how much vision impacts not only how I see and interpret the world, but also how others see and interpret me.
When you see me, what do you see?
Is it real or simply fantasy?
To start off the year, this month’s edition is focused (pun intended) on how we see, aka vision: mostly human and computer, but we’ll touch on animal and plant, too. We’ll look at some assistive technologies, take a deep dive into the history of computer vision and compare how different living organisms see versus how machines “see.”
Tech to Get Excited About
I am always discovering and exploring new tech. It’s usually:
recent developments in tech I have worked on in the past,
tech I am actively using myself for projects,
tech I am researching for competitive analysis or other purposes, and/or
my clients' tech.
This month, we’re going to look at the technology that gave me 20/20 vision, the Crystalens.
Crystalens
Crystalens AO Intraocular Lens is an artificial lens implant that corrects for both cataracts (the clouding or hardening of the lenses in your eyes) and presbyopia (loss of near and intermediate vision). Crystalens was modeled after the natural lens of the human eye and uses the eye muscles to flex and accommodate in order to focus on objects in the environment at all distances. Crystalens dynamically adjusts to your visual needs so that you hardly, if ever, need glasses after surgery.
I had my surgeries in 2011, starting with the right eye and followed by the left eye a week later. The procedure itself is quick, but there is some prep involved with numbing the eye as well as some post-surgery follow-up. And the effect is immediate - as soon as the surgeon dropped in the lens, I could see perfectly, even through the numbing and tearing of the eye that happens during eye surgery. And that was life-changing, from that moment on.
If you are interested in learning more about Crystalens surgery, here is a comprehensive patient guide.
Technology for Good
Short of corrective surgery, what’s next for assistive tech for the visually impaired? Let’s take a look at some of the more exciting developments.
In today’s world, technology is supposed to work for everyone. But for many of us with visual impairments, it often feels like we’re an afterthought in the design process. That’s not just inconvenient—it’s exhausting. Apps that aren’t screen reader-friendly. Devices that assume sight as the default.
So we’ve taken matters into our own hands. The rise of inclusive design and community-driven solutions is finally bringing forward solutions that actually work for those of us who are blind or have low vision. Technology should flex to our lifestyles, not the other way around.
Some breakthroughs in technology for visually impaired people include:
AI-Powered Apps: Seeing AI, Be My Eyes, and Lookout turn smartphones into visual interpreters.
Voice-Based Tools: Smart speakers and digital assistants are often the first step into tech for many blind adults.
Wearables: Instead of loud instructions or screen-based input, the SensAble smart band uses haptic feedback to indicate obstacles and direction.
It's not just about white canes or Braille displays anymore. Modern blind assistive devices make everyday life more seamless and include:
Mobility aids: smart canes with ultrasonic or LiDAR sensors, and wearables like SensAble that use vibration cues.
Communication support: refreshable Braille displays and voice-to-text transcription tools.
Home navigation aids: smart door sensors and audible thermostats and appliances.
Daily task devices: talking watches, scales, and thermometers.
Each of these devices is grounded in lived experience, built not just to function but to feel natural in everyday use. The community rightfully deserves independence without compromise: products for the blind and visually impaired that bring usability and dignity together.
“Accessibility is not a checklist—it’s a mindset. Even small visually impaired products can change someone’s whole routine.” - SensAble customer
Tech Retrospective: The History of Computer Vision
Computer vision is a subfield of machine learning that empowers machines to interpret and make decisions based on visual data input from the world around them. It is literally the automated extraction, analysis, and “understanding” of relevant information from one or more images.
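To make that definition concrete, here's a tiny Python sketch of one classic extraction step, edge detection, using the OpenCV library (the image path is just a placeholder):

```python
import cv2  # OpenCV, a widely used computer vision library

# Load an image from disk ("photo.jpg" is a placeholder path)
image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)

# Extract structure from raw pixels: Canny edge detection keeps only the
# high-contrast boundaries, one classic form of "relevant information"
# pulled automatically from an image
edges = cv2.Canny(image, threshold1=100, threshold2=200)

cv2.imwrite("photo_edges.jpg", edges)
```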
In recent years, computer vision has been used in countless applications, ranging from autonomous vehicles to medical imaging. In this tech retrospective, we’ll delve into the history of computer vision, tracing its roots from early theoretical models to present-day advancements, and explore potential future development.
Early computer vision can be traced back to Warren McCulloch and Walter Pitts' 1943 foundational work, A Logical Calculus of the Ideas Immanent in Nervous Activity, the first attempt to model the behavior of a biological neuron. Their paper introduced a simplified model of neural computation, representing living neurons as binary units capable of performing logical operations, laying the groundwork for the neural network theory that would later become integral to computer vision.

Source: McCulloch, W. S., & Pitts, W. (1943). "A Logical Calculus of the Ideas Immanent in Nervous Activity." Bulletin of Mathematical Biophysics, 5(4), 115-133.
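The McCulloch-Pitts unit is simple enough to sketch in a few lines. Here's a minimal illustration of my own (not code from the 1943 paper, obviously): binary inputs, hand-picked weights, and a hard threshold implementing a logic gate.

```python
# A McCulloch-Pitts unit: binary inputs, fixed weights, hard threshold.
# There is no learning here; the weights and threshold are chosen by
# hand so that the unit computes a particular logical operation.

def mcculloch_pitts(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND gate: both inputs must be active to reach a threshold of 2
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", mcculloch_pitts(x, weights=(1, 1), threshold=2))
```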
Early computer vision research in the 1950s was inspired by the way frogs' visual systems detect and respond to movement. Studies of frog vision, such as the 1959 article What the Frog's Eye Tells the Frog's Brain, co-authored by McCulloch and Pitts, helped lay the groundwork for developing visual processing systems in machines.
In 1958, Frank Rosenblatt developed the Perceptron, an early neural network model based on the McCulloch-Pitts artificial neuron and designed for binary classification. The perceptron was capable of learning from data, and its design included a single layer of neurons. Despite its limitations in solving non-linearly separable problems, the perceptron marked an important step forward in machine learning and pattern recognition. It could organize and classify various types of information, and it demonstrated the value of adaptive algorithms for processing visual data.
The work was not without its critics. In 1969, Marvin Minsky and Seymour Papert published Perceptrons: An Introduction to Computational Geometry, which highlighted the limitations of single-layer perceptrons, particularly their inability to solve the XOR problem, a classic machine learning test case involving a logical operation that takes two binary inputs and returns true (1) if the inputs differ and false (0) if they are the same, along with other non-linearly separable functions. This critique contributed to the first "AI Winter," a period of reduced interest and funding for neural network research, and underscored the need for more complex architectures to overcome the limitations of early neural networks.
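To see both sides of this story in action, here's a minimal numpy sketch of Rosenblatt's learning rule (my own illustration, not his original implementation): the perceptron learns the linearly separable AND function perfectly, but no amount of training lets a single layer get XOR right.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Rosenblatt's rule: nudge the weights toward correcting each mistake."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0
            w += lr * (target - pred) * xi
            b += lr * (target - pred)
    return w, b

def predict(X, w, b):
    return (X @ w + b >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# AND is linearly separable: a single line divides the classes
w, b = train_perceptron(X, np.array([0, 0, 0, 1]))
print("AND:", predict(X, w, b))  # [0 0 0 1]

# XOR is not: no single line works, so at least one output stays wrong
w, b = train_perceptron(X, np.array([0, 1, 1, 0]))
print("XOR:", predict(X, w, b))  # never matches [0 1 1 0]
```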
In 1974, Paul Werbos introduced the backpropagation algorithm, which allows efficient training of multi-layer neural networks by computing gradients and updating weights layer by layer, overcoming the single-layer limitation. A revolutionary idea, it would not gain widespread recognition and use until the 1980s.
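Here's a sketch of why backpropagation matters, again my own illustration with arbitrary layer sizes and learning rate: a two-layer network trained by backpropagation solves the very XOR problem that defeated the single-layer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR data: the task a single-layer perceptron cannot solve
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One small hidden layer is all XOR needs
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

lr = 1.0
for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: the chain rule carries the error down through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0, keepdims=True)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```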
In 1979, Kunihiko Fukushima gave us the neocognitron, a hierarchical, multi-layered neural network designed for pattern recognition and considered the precursor to modern convolutional neural networks (CNNs). The neocognitron's architecture, built from multiple layers of processing units with local receptive fields, enabled it to recognize patterns with some degree of invariance to translation.

Fukushima’s neocognitron
The development of backpropagation and other new training algorithms in the 1980s enabled the training of deeper and more sophisticated neural networks. The influential Learning Internal Representations by Error Propagation, published in 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams, demonstrated the effectiveness of backpropagation in training these new multi-layer networks. Rumelhart and his colleagues also argued that backpropagation enables multi-layer networks to learn internal representations, establishing the foundation for both the modern deep learning techniques still used today and the anthropomorphic interpretations that go with them.
The 1980s also brought us restricted Boltzmann machines (RBMs), initially proposed under the name Harmonium by Paul Smolensky in 1986: generative stochastic neural networks used for unsupervised learning. RBMs model the joint distribution of visible and hidden units, which makes them useful for dimensionality reduction, feature learning, and pre-training deep networks. They would rise to prominence after Geoffrey Hinton and collaborators developed fast learning algorithms for them in the mid-2000s.
The 1990s brought the application of neural networks to practical problems. One such development was Yann LeCun's LeNet, a CNN architecture used for handwritten digit recognition and deployed in banking systems and by the United States Postal Service.
During this decade, recurrent neural networks (RNNs) also gained attention. RNNs are designed to handle sequential data by maintaining a memory of previous inputs. This allows them to capture temporal dependencies and patterns in data sequences, making them useful for tasks like speech recognition and time-series prediction.
The 90s also brought us developments like Long Short-Term Memory (LSTM) networks, introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997 to address challenges in training RNNs for sequence prediction tasks. LSTMs are a special kind of RNN designed to remember information over long periods. Unlike regular recurrent networks, which tend to forget important details over time, LSTMs use gating mechanisms to keep track of relevant information and make better predictions, even when the data has long-term patterns or dependencies.
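Here's a minimal numpy sketch of a single LSTM step, with made-up dimensions and none of the training machinery, just to make the gating idea concrete:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM time step: gates decide what to forget, store, and emit."""
    Wf, Wi, Wo, Wc, bf, bi, bo, bc = params
    z = np.concatenate([h_prev, x])      # last state combined with new input

    f = sigmoid(Wf @ z + bf)             # forget gate: keep or drop old memory
    i = sigmoid(Wi @ z + bi)             # input gate: admit new information
    o = sigmoid(Wo @ z + bo)             # output gate: expose memory downstream
    c_tilde = np.tanh(Wc @ z + bc)       # candidate memory content

    c = f * c_prev + i * c_tilde         # long-term cell state update
    h = o * np.tanh(c)                   # short-term hidden state update
    return h, c

# Toy run: 3-dimensional inputs, 4-dimensional state, random weights
rng = np.random.default_rng(0)
n_in, n_h = 3, 4
params = [rng.normal(size=(n_h, n_h + n_in)) for _ in range(4)] + \
         [np.zeros(n_h) for _ in range(4)]

h = c = np.zeros(n_h)
for x in rng.normal(size=(5, n_in)):     # step through a 5-element sequence
    h, c = lstm_step(x, h, c, params)
print(h)
```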
A pivotal moment in the history of computer vision occurred in 2006, when Geoffrey Hinton and his colleagues introduced the aforementioned fast learning algorithm, contrastive divergence, and used it to train deep belief networks (DBNs). DBNs are generative models that stack multiple layers of RBMs to form a deep neural network. They use a layer-wise pre-training approach followed by fine-tuning to learn complex representations of data, which significantly improves performance on a variety of tasks. This made it possible to train deep networks efficiently, enabling unsupervised learning of features from data.
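The building block of that layer-wise training is the contrastive divergence (CD-1) update on a single binary RBM. Here's a simplified sketch of my own (sizes and learning rate are illustrative, not tuned):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, b_vis, b_hid, lr=0.1):
    """One CD-1 step for a binary RBM: a single Gibbs reconstruction pass."""
    # Up: infer hidden units from the data
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Down and up again: reconstruct the visible units, re-infer hiddens
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    p_h1 = sigmoid(p_v1 @ W + b_hid)

    # Contrastive divergence: data statistics minus reconstruction statistics
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_vis += lr * (v0 - p_v1)
    b_hid += lr * (p_h0 - p_h1)

# Toy training loop over random binary "data"
n_vis, n_hid = 6, 3
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)
for v in (rng.random((100, n_vis)) < 0.5).astype(float):
    cd1_update(v, W, b_vis, b_hid)
```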
In 2012, Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton developed AlexNet, a deep CNN that won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). AlexNet outperformed previous methods, showcasing the power of deep learning in image classification and inspiring further research and development in the field. The success of AlexNet highlighted the potential of GPU acceleration for training large-scale neural networks, which is now standard practice in deep learning research.
In 2014, Ian Goodfellow and his colleagues introduced generative adversarial networks (GANs), which consist of two neural networks, a generator and a discriminator, trained simultaneously. The generator creates artificial data samples and the discriminator evaluates their authenticity. This adversarial process leads to the generation of highly realistic images and helped spark today's broader wave of generative image tools, up to and including systems like OpenAI's DALL·E 3 (which itself relies on newer, non-adversarial techniques).
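Here's a compact PyTorch sketch of the adversarial loop on toy one-dimensional data (my own illustration; network sizes, learning rates, and step counts are arbitrary choices): the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
import torch
import torch.nn as nn

# Toy GAN: the generator learns to mimic samples from N(2.0, 0.5)
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # "real" data samples
    fake = G(torch.randn(64, 8))            # generator output from noise

    # Discriminator: label real as 1, fake as 0
    d_loss = loss(D(real), torch.ones(64, 1)) + \
             loss(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: try to make the discriminator call fakes real
    g_loss = loss(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

print(G(torch.randn(1000, 8)).mean().item())  # drifts toward the real mean, 2.0
```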
The transformer model, introduced by Ashish Vaswani and his colleagues in 2017, revolutionized natural language processing (NLP). Its self-attention mechanism allows the model to handle long sequences and complex patterns by focusing on different parts of the input as needed. This breakthrough led to the development of vision transformers, which adapt the same architecture to computer vision tasks, achieving high performance on various benchmarks and inspiring new advancements in the field. Because self-attention enables efficient processing of sequential data, the same mechanism proved applicable to both NLP and computer vision, bridging the gap between AI domains and leading to cross-disciplinary advances.
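At the heart of both text and vision transformers is scaled dot-product self-attention. Here's a minimal numpy version of my own; in a vision transformer, the "tokens" would be image patches rather than words.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    # Each token scores every other token, then mixes their values
    scores = Q @ K.T / np.sqrt(d_k)
    return softmax(scores) @ V

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                       # 5 "tokens" (words or patches)
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 8)
```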
Today, computer vision encompasses a wide range of techniques and applications. Advanced models such as GANs and transformers have revolutionized image generation and representation learning. Computer vision technologies are now integral to a variety of industries. In healthcare, they enable precise medical imaging and diagnostics. In autonomous vehicles, computer vision systems allow cars to navigate and detect obstacles. Other industries, such as retail, agriculture, and security, are also benefitting from the rapid advancements in computer vision.
Despite significant progress over the past decades, several challenges remain in the field of computer vision, including the need for large annotated training datasets and the computational cost of training deep models. But for many of us, the biggest issues concern model interpretability and its consequences: bias, privacy, consent, and security, as well as identity fraud, deepfakes, and other forms of exploitation and cybercrime, as highlighted in research by Dr. Joy Buolamwini and others.
Whether you're considering computer vision for business applications or simply want to understand the technology shaping our world, recognizing its capabilities and limitations helps you make informed decisions about where and how this powerful technology can be most effectively applied. This beginner’s guide is a good starting point.
Learning Theory: Vision and Seeing
Sensation and perception are subfields of biological psychology; they are also studied within biology, in anatomy and physiology. When talking about human vision from this perspective, we usually refer to the visual system, which is responsible for seeing.
Seeing is a sophisticated biological marvel, a dance that translates light into vision. When we open our eyes, we take in entire landscapes in an instant. Mountains loom, rivers shimmer, stars twinkle from light-years away. Yet seeing is not as simple as opening windows to the world. It is a complex collaboration between light, our eyes, and our brain.
Light is made of tiny particles called photons that travel from objects into our eyes. Our corneas bend the incoming light, and our lenses focus it onto the retinas, thin layers of tissue at the back of the eye. The retinas contain photoreceptors, the specialized cells known as rods and cones. Rods thrive in dim light, allowing us to see shapes and movement in darkness, while cones detect color, making sunsets fiery and forests lush. Cones come in three types, roughly tuned to red, green, and blue light, whose combined signals let us perceive millions of hues.
The retina transforms light into electrical signals, sending them through the optic nerve to the brain’s visual cortex. Here, neurons construct the images we “see.” The brain fills in gaps, adjusts for shadows, and even corrects for the blind spot where the optic nerve exits the eye.
But human vision is not just mechanical; it is deeply emotional. Seeing a loved one’s face triggers reward centers in the brain. Art, landscapes, and colors stir memories and feelings. Our eyes can even reflect inner emotions, dilating in awe or narrowing in suspicion.
Human vision does not work in isolation. The brain integrates sensory data into a seamless experience of reality, a process called multisensory integration. This integration can lead to remarkable phenomena. In synesthesia, some people naturally blend senses—seeing colors when hearing music or tasting flavors when reading words. Virtual reality technology exploits multisensory processing to immerse users in lifelike digital worlds.
Even perception of time can be influenced by the interplay of your senses. A bright flash may make a sound seem louder, or a delay between touch and visual feedback can distort how long a moment feels. These interactions reveal that our senses are not isolated doors but interconnected windows onto the world.
Despite its sophistication, human vision and its related sensory integration are not flawless. Optical illusions trick the eyes. Neurological conditions can profoundly change sensory perception. In blindness and deafness, the brain rewires itself, enhancing the remaining senses. Hallucinations can generate sensory experiences without external stimuli, blurring the line between perception and imagination.
These phenomena demonstrate that what we perceive is not a direct copy of reality. The brain constructs our sensory world, interpreting signals based on context, experience, and expectation. In a way, we live not in raw reality but in a brain-crafted masterpiece shaped by our senses.
Understanding human senses is more than biology. It is how we become aware, how we form memories, how we survive, and how we find beauty in everything around us. Our senses do not just tell us what exists: they shape our emotions, guide our decisions, and bind us to one another in ways no machine can replicate. To understand them is to understand what it means to be alive.
Science and technology are now pushing visual sensory boundaries beyond natural limits. Bionic eyes are beginning to restore partial vision to some blind people. Devices can translate infrared light or magnetic fields into signals the brain learns to interpret, effectively giving humans abilities once reserved for other animals. Virtual and augmented reality aim to create entirely new sensory experiences, reshaping entertainment, education, and communication.
Science can describe receptors, neural pathways, and brain regions, but it cannot fully capture the subjective wonder of visual sensation. The first view of the ocean, the sight of a loved one after a long separation, visualizing your "safe space": these are more than biology. They are moments where science and soul meet, where the mechanics of perception transform into meaning.
The science of senses teaches us not only how we see and perceive but also how precious perception is and how it differs from the mechanical world of computer vision. Every blink is a miracle shaped by evolution and refined by the mind. In understanding our senses, we glimpse the extraordinary reality that, through them, the universe knows itself.
How Other Animals See
Vision evolved differently across species. Birds see ultraviolet light invisible to humans. Dogs possess fewer color receptors but far superior night vision.
Check out these articles to learn more about how other animals see the world compared to humans.
How Plants See
Plants may seem unassuming, rooted to a spot on the ground or in a pot, unaware of their surroundings, but this couldn't be further from the truth. They have unique ways of "seeing" and sensing the world around them, a world that is oftentimes invisible to humans.
When people think of sight, they think of what we just reviewed: the complex process of light entering our eyes, being refracted by the lens, and landing on the retina to form an image. Plants, however, lacking eyes and a nervous system, possess a unique form of "sight" that aids and sustains their survival and growth.
In 1907, Francis Darwin, a British botanist and naturalist, hypothesized that leaves have organs that combine lens-like cells with light-sensitive cells. Experiments in the early 20th century confirmed the existence of such structures, called ocelli or "simple eyes." Derived from the Latin word 'oculus,' ocelli are light-detecting organs found in various animals, including insects and other invertebrates.
In recent years, research has suggested that plants are indeed capable of vision and may even have something similar to an eye. František Baluška, a plant cell biologist, and Stefano Mancuso, an Italian plant physiologist and professor in the Department of Agriculture, Food, Environment, and Forestry at the University of Florence, presented new evidence for visually aware vegetation: the 2016 discovery that Synechocystis cyanobacteria, single-celled organisms capable of photosynthesis, act like ocelli. These unicellular, freshwater cyanobacteria use their entire cell bodies as a lens to focus an image of a light source onto the cell membrane, much like the retina of an animal eye.
Plants sense light, touch, chemicals, microbes, animals, and temperature uniquely. Quite unlike humans or other animals, they do not form images or perceive depth. Instead, they detect light through specialized proteins called photoreceptors, which are found throughout a plant's body, including its stems and leaves. These photoreceptors are important for the plant’s essential functions, including the development and regulation of the plant's circadian rhythm.
These photoreceptors detect different wavelengths, enabling plants to distinguish between red and blue light and even perceive wavelengths beyond human capabilities, such as far-red and ultraviolet light. Plants can perceive the direction light comes from, tell whether it is intense or dim, and even judge how long ago the lights were turned off. This ability to sense light is crucial for photosynthesis, since plants must find light to make their food.
Plants' ability to sense and detect light is crucial for their survival and growth. They need light to carry out photosynthesis and have evolved sophisticated ways of reacting to this light. Sunflowers, for example, track the sun's path through the sky. Some houseplants lean towards the nearest window or sunlight.
Photoperiodism is the process through which plants measure the length of day. Long-day plants flower only when days are at their longest, while short-day plants have the opposite response. Day-neutral plants (DNPs), on the other hand, flower regardless of the length of day or night. The mechanism behind photoperiodism involves photoreceptor proteins like phytochrome and cryptochrome, which help plants sense changes in light duration and trigger the appropriate growth responses.
Phytochromes, for example, help plants detect red and far-red light. Exposure to red light flips them into their active form, and far-red light flips them back, so they behave as reversible, light-activated switches. This mechanism guides plants' growth and development by allowing them to respond to changing light conditions and optimize their energy intake.
Plant vision also enables them to adapt to their environment. For instance, a study published in 2018 showed that plants can detect neighboring plants and modify their growth patterns to avoid competition for light and other resources. Moreover, some plants can detect the color of their surroundings and adjust their leaf shape and size accordingly, maximizing their ability to capture light.
While the evidence for eyelike features in plants remains limited, it is growing. The challenge now is to confirm the earlier research and fully understand plants' rudimentary sight and how they use it. That said, the question of whether plants can really see us remains just that: a question. At least for now!
How Computers See
Now that we've reviewed how humans and other living organisms see, check out these articles and consider how effectively you now think computer vision mimics human vision, or the vision of other organisms.
Don’t believe the hype. While many of the applications of computer vision are super cool and revolutionary, they are not substitutes for human vision.
But let’s keep going.
By now you should realize that computer vision does not mimic human vision. It is simply pattern recognition, nothing more and nothing less.
I even included this one last computer vision tutorial to drive that final point home.
Learning How To See
Before we leave the topic of vision, I want to touch on the concept of learning to see. With exceptions and limitations, most humans are born with the ability to see. But even from a perception perspective, we don't all see the same way, and not all of that is due to how our sense organs work; much of it comes down to how we perceive visual stimuli.
And one area where “learning to see” greatly impacts our psychomotor skill set is artistic expression. This is one of several reasons why computer-generated art will never replace art produced by human artists - because a computer can never learn how to “see” like a human artist can.
Check out these articles to learn more.
Upcoming Learning Offerings
As I discussed in the April 2025 edition of this newsletter, in addition to the work I do with Linda B Learning, I also have an extensive fitness and athletic training background. I wanted to build YouTube for fitness training way back in 2001, because I recognized early on how much a visual medium like online video would impact people's ability to make connections, work out together, and share knowledge with people all over the world.
But YouTube itself came along not long afterwards, and in 2009 I created my Fit Mind-Body Conditioning channel. This year I have revived that work and am offering two live classes each month, broadcast live on YouTube.
On the Full Moon, we do yang energy workouts. So far, those have been step and strength workouts, but other formats may make appearances as the year progresses. During the New Moon, the focus is on yin energy, so yoga and other mind-body practices are featured during those lives.
Be sure to subscribe to the channel and click notifications to be notified when we go live, or when other new channel content drops.
That's all for now. I hope your perspective on what constitutes vision has broadened, and that you have a better understanding of how humans and other life forms see versus how computers "see." While the tech is certainly cool and advancing rapidly, the two are not at all the same thing and should never be construed as such. There is so much joy to be found in human sensation and perception, something we might forget in a world currently enamored with AI everything.
“See” you next month!

And he’s not talking about 20/20 vision!