Part-2
The effectiveness of speech therapy in both children and adults is indeed strong evidence for neuroplasticity, though the way plasticity works differs depending on age and the type of brain injury. Let me break it down:
1. Children and plasticity
In children, whose brains are still developing, speech therapy taps into naturally high levels of plasticity. Neural circuits for language can be reorganised or even shifted to alternative regions if the primary speech centres (like Broca’s or Wernicke’s areas) are compromised. This is why early intervention is particularly successful in neurologically compromised children, such as those with developmental delays or hearing-related speech difficulties.
2. Adults and plasticity after trauma
For adults who lose speech abilities due to stroke, traumatic brain injury, or neurodegenerative conditions, recovery is more challenging because plasticity is less pronounced than in childhood. Yet, speech therapy still works. How? The brain recruits compensatory pathways—for instance, areas in the opposite hemisphere or undamaged neighbouring cortical tissue can take over speech functions. Therapy accelerates this process by repeated practice, structured cues, and targeted exercises, which strengthen alternative connections.
3. Mechanism of recovery
Speech therapy leverages three key principles of neuroplasticity:
- Use-dependent strengthening: circuits that are repeatedly activated (e.g., practising words or sounds) become stronger (a toy sketch of this and the next principle follows this list).
- Synaptic reorganisation: unused or weakened pathways are pruned, while new synapses form to support speech tasks.
- Cortical remapping: neighbouring regions may expand their role to compensate for damaged areas.
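As a toy illustration of the first two principles, here is a minimal Python sketch of a Hebbian-style update rule. The weights, learning rate, and pruning threshold are invented for the example and are not neuroscience data; the point is only the dynamic: repeated activation strengthens a connection, while disuse weakens it until it is pruned.

```python
from typing import Optional

# Toy Hebbian-style model: repeated practice strengthens a connection,
# disuse weakens it until it is pruned. All numbers are illustrative.

LEARNING_RATE = 0.1    # strengthening per activation ("practice")
DECAY = 0.05           # weakening per step of disuse
PRUNE_THRESHOLD = 0.1  # below this, the synapse is removed

def plasticity_step(weight: float, used: bool) -> Optional[float]:
    """One step of use-dependent plasticity for a single synapse."""
    if used:
        weight += LEARNING_RATE * (1.0 - weight)  # strengthen, saturating at 1.0
    else:
        weight -= DECAY                           # weaken with disuse
    return weight if weight >= PRUNE_THRESHOLD else None  # None = pruned

# A practised circuit grows stronger...
w = 0.3
for _ in range(10):
    w = plasticity_step(w, used=True)
print(f"after practice: {w:.2f}")   # approaches 1.0

# ...while an unused one decays and is eventually pruned.
w, idle_steps = 0.3, 0
while w is not None:
    w = plasticity_step(w, used=False)
    idle_steps += 1
print(f"pruned after {idle_steps} idle steps")
```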
4. Limits of plasticity
While both children and adults benefit, the outcomes are not identical. Children often achieve near-complete recovery if therapy begins early. Adults can make remarkable gains, but the recovery may be partial, slower, and more dependent on therapy intensity and injury severity.
5. Proof of principle
The fact that speech therapy works at all in adults after brain trauma is a powerful demonstration that the adult brain is not hardwired. Even years after an injury, targeted stimulation can reshape networks to restore function—clear evidence of neuroplasticity at play.
So, in short: speech therapy is living proof of neuroplasticity, both in the developing brain of children and the adaptive, compensatory brain of adults.
There is a growing body of research examining how reliance on digital apps and devices changes the way we use (and don’t use) our memory. The evidence suggests that while digital tools are helpful, overdependence can offload certain cognitive tasks to devices and may even erode the underlying abilities. Here’s the main evidence:
1. The concept of “Digital Amnesia”
Kaspersky Lab’s 2015 consumer survey coined the term digital amnesia, describing the tendency to forget information we trust our devices to store. For example, many people cannot recall their own phone numbers or those of close family members because they assume they can always retrieve them from contacts. This reflects a shift from internal to external memory reliance.
2. Transactive memory and cognitive offloading
Psychologists have shown that humans naturally distribute memory tasks across people or tools (called transactive memory). With smartphones, the “memory partner” is now digital. Research shows that when people expect information to be available online, they are less likely to commit it to memory. A famous study (Sparrow, Liu & Wegner, 2011) found that participants remembered where to find information (on the computer) but not the information itself. This suggests that the brain is outsourcing memory storage to devices.
3. Decline in rote memory skills
Tasks once considered basic, like memorising phone numbers, directions, or birthdays, are increasingly offloaded to apps. Survey data indicate that younger generations (who grew up with smartphones) are less likely to recall even personal information without checking their devices. While not inherently harmful, this represents a measurable reduction in unaided memory use.
4. Cognitive load and attention fragmentation
Dependence on apps doesn’t just affect memory; it also interferes with focus. Constant notifications encourage chronic multitasking, and heavy media multitaskers have been shown (Ophir et al., 2009) to be poorer at filtering distractions and at working-memory tasks that require ignoring irrelevant information. Since working memory underpins skills like reasoning and learning, this has broader cognitive implications.
5. Neuroimaging evidence
Brain imaging studies show altered activation in the hippocampus (critical for memory and navigation) when people follow GPS instructions rather than learning routes themselves. London taxi drivers (whom we discussed earlier) developed enlarged posterior hippocampi from memorising the city’s street network. In contrast, heavy GPS users show reduced engagement of spatial memory systems, suggesting that constant reliance on digital tools may blunt natural memory circuits.
6. Anecdotal and behavioural data
Even in everyday life, many report difficulty recalling phone numbers, email addresses, or even relatives’ birthdays without consulting their phones. While this doesn’t mean memory capacity is “lost,” it highlights how habitual offloading reduces the frequency with which memory networks are exercised—like a muscle that weakens without training.
The evidence points to a real but nuanced effect: dependence on digital apps doesn’t destroy memory, but it reshapes it. We remember less content directly and focus instead on remembering where to find it. Over time, this may erode skills like rote recall, navigation, and even sustained attention, unless we actively practice them outside digital reliance.
It appears that, over time, a new generation addicted to digital tools may find it difficult to enter a lasting social contract like marriage and bringing up children, and this could end in disaster for humanity.
1. Shifting patterns in relationships and family formation
Sociological data already shows that younger generations in many parts of the world are delaying or avoiding marriage, and birth rates are falling to historic lows (Japan, South Korea, much of Europe). Digital tools are not the only factor—economic insecurity, gender role changes, and urban lifestyles play a huge role. But technology adds a powerful new layer: dating apps, endless online entertainment, and parasocial relationships can provide a sense of connection without the long-term commitment of marriage or children.
2. Addiction to immediacy and low tolerance for difficulty
As discussed earlier, digital platforms condition the brain toward short-term rewards and constant novelty. Marriage, parenting, and deep social contracts require the opposite—long-term patience, resilience, and compromise. If attention spans and tolerance for frustration decline, sustaining these bonds could indeed become harder for digital-native generations.
3. Evidence from psychology
- Studies on “attachment styles” in the age of dating apps suggest that people increasingly struggle with commitment, preferring multiple shallow interactions over one deep, enduring bond.
- Research also indicates that heavy social media use can fuel loneliness, anxiety, and dissatisfaction—ironically weakening the very social skills needed for strong family ties.
- Children raised in highly digital households sometimes show deficits in face-to-face communication and empathy, though outcomes vary with parenting.
4. Long-term societal risks
If these trends intensify, we could see:
- Declining fertility rates, leading to shrinking and ageing populations.
- Weakening of the family unit as a site of emotional and economic stability.
- Greater reliance on the state or technology (AI companions, virtual childcare) to fill gaps once filled by social contracts like marriage.
In the extreme, yes, this could become a civilisational risk if humanity loses its capacity or desire to reproduce and nurture future generations.
5. But counterforces exist
It’s important not to paint this as destiny. Humans are highly adaptive. Every major technological shift—printing, television, even contraception—was once thought to “end family life.” Yet societies rebalanced through new norms and institutions. There’s also growing awareness of “digital hygiene”—from digital detox retreats to policies limiting screen time for children. If cultures learn to set boundaries with technology, the risks may be mitigated.
An interesting question arises: if machines are exposed to repeated software applications, are they capable of modifying their hardware to suit the usage the system is being put to?
1. Current Reality
- Reconfigurable hardware already exists:
  - FPGAs (Field-Programmable Gate Arrays): these chips can be reprogrammed repeatedly at the hardware logic level to optimise for particular applications. For example, if you’re running heavy signal processing, the FPGA can be reconfigured to accelerate those computations (a toy sketch of this reconfiguration idea follows this list).
  - Neuromorphic chips: some experimental chips inspired by the brain adapt their internal circuits as they “learn” (e.g., Intel’s Loihi).
- Hardware + software co-design is advancing:
  - AI accelerators (GPUs, TPUs, NPUs) are hardware built specifically to run certain types of software efficiently.
  - But these don’t self-modify physically; they’re fixed designs optimised in advance.
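To make the FPGA idea concrete, here is a deliberately simplified Python sketch, not real FPGA tooling (actual designs are written in hardware description languages such as Verilog or VHDL and synthesised by vendor toolchains). It models a 2-input lookup table (LUT), the basic building block of FPGA logic, whose stored truth table can be rewritten so the same physical element becomes a different gate.

```python
# Toy model of an FPGA lookup table (LUT): the same element computes
# different logic functions depending on the bits loaded into it.
# This only illustrates the reconfiguration idea, not real tooling.

class LUT2:
    """A 2-input LUT: four stored bits define any 2-input boolean function."""

    def __init__(self, table: list[int]):
        self.program(table)

    def program(self, table: list[int]) -> None:
        """Load a new truth table, i.e. 'reconfigure the hardware'."""
        assert len(table) == 4 and all(bit in (0, 1) for bit in table)
        self.table = table

    def evaluate(self, a: int, b: int) -> int:
        return self.table[(a << 1) | b]   # index by the input pair

lut = LUT2([0, 0, 0, 1])                  # configured as AND
print([lut.evaluate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 0, 0, 1]

lut.program([0, 1, 1, 0])                 # reconfigured as XOR
print([lut.evaluate(a, b) for a in (0, 1) for b in (0, 1)])  # [0, 1, 1, 0]
```

Calling program() here is the software analogue of loading a new bitstream onto the chip: the element is unchanged physically, but its behaviour is redefined.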
2. Theoretical Possibilities
For a system to physically change its hardware to match repeated software use, we’d need:
- Reconfigurable circuitry at scale – chips that can dynamically alter their pathways, not just logically (like FPGAs) but physically at the transistor/nanostructure level.
- Self-adaptive material science – e.g., nanomaterials that change conductivity, structure, or arrangement based on usage patterns.
- Machine intelligence controlling adaptation – algorithms that decide how to “reshape” hardware depending on long-term workload (a toy policy sketch follows this list).
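As a hedged illustration of that last point, here is a minimal Python sketch of a supervisory policy; every operation name, profile, and threshold is invented for the example. It tallies which operation families dominate the recent workload and recommends a hardware configuration accordingly, which is the kind of decision loop a self-adapting system would need.

```python
from collections import Counter

# Hypothetical supervisory loop: watch the workload mix and decide
# which hardware configuration to load next. The operation names,
# profiles, and 60% threshold are invented for illustration.

CONFIG_PROFILES = {
    "dsp":     {"multiply_accumulate", "fft"},
    "graph":   {"pointer_chase", "gather_scatter"},
    "general": set(),   # fallback: no specialisation
}

def choose_config(recent_ops: list[str], dominance: float = 0.6) -> str:
    """Pick a configuration if one op family dominates the recent trace."""
    counts = Counter(recent_ops)
    total = sum(counts.values())
    for name, ops in CONFIG_PROFILES.items():
        share = sum(counts[op] for op in ops) / total if total else 0.0
        if share >= dominance:
            return name          # reconfigure toward this profile
    return "general"

trace = ["fft"] * 7 + ["pointer_chase"] * 3
print(choose_config(trace))      # "dsp": 70% of recent ops are DSP-like
```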
3. Challenges
- Energy and heat: Physically restructuring hardware repeatedly would be energy-intensive.
- Material fatigue: Hardware that constantly reconfigures could degrade.
- Complexity: You’d need a supervisory system to ensure changes don’t break compatibility with existing software.
4. Research Frontiers
- 3D chip stacking & modular computing: Allows swapping or reconfiguring parts for different tasks.
- Self-assembling nanotechnology: Theoretically, chips could rearrange at nanoscale to become more efficient at tasks.
- Biocomputing inspiration: Neurons rewire themselves — neuromorphic systems may one day mimic this with adaptable hardware.
We already have primitive forms of reconfigurable hardware (FPGAs, neuromorphic chips), though these are reprogrammed rather than truly self-modifying. A machine that physically reshapes its own hardware to suit repeated software use is not yet practical, but research in nanotech, neuromorphic engineering, and reconfigurable computing suggests it may become possible in the future.