Skip to content
On Care for Care

January 21, 2026

· caregiving

On Care for Care

We celebrate care while building nothing to sustain it.

Ali Madad

Ali Madad

Contributor

My father had ALS. I watched his body wither, the muscles that had carried him through surgery after surgery slowly refusing their work. I could not reach the pain he was inside. I could see it in his face, hear it in the sounds he made when words became too difficult, but I could not know what it felt like from within. The view from inside his failing body was inaccessible to me.

My mother had dementia. She became quieter over the years, then ultimately silent. I would sit with her and study her face, once so animated, now a pale mask devoid of affect, searching for some sign of what remained. I could not know. Whatever was happening inside her head (the fragments, the confusion, the fear or peace) was opaque to me.

In 1974, the philosopher Thomas Nagel asked what it is like to be a bat.1 His answer: we cannot know. Consciousness is irreducibly subjective. The view from within is inaccessible from without.

The geriatrician Jason Karlawish, co-director of Penn's Memory Center, framed it this way during the Harvard Dementia Comprehensive Update I attended in 2025: caregivers face this problem every day. We cannot cross into the experience of those we care for. And yet we care: not because we have understood, but because we have committed despite the impossibility of understanding.

This is the condition of sixty-three million Americans, a nearly 50% increase since 2015.2 One in four adults now functions as a caregiver. They have become something else: neither patient nor professional, neither fully present at work nor fully present at home. They exist in the negative space of care, essential to everything, visible to nothing.

Most of them are women. This is not incidental.

We celebrate care while building nothing to sustain the people who provide it. We defend the authenticity of care for recipients while treating caregivers as an infinitely elastic resource. We create moral language for care and economic systems that consume it. The framework that protects care-receivers must be completed to include those who care. When it is, the question of technology changes. The question is not whether AI can care. It is whether AI can help caregivers survive long enough to keep caring—without simulating the relationship itself.

Disclosure: I'm building GiveCare, an SMS support tool for caregivers. What follows is both argument and self-examination.


The Arguments for Authentic Care

Over the past several decades, a body of thought about care has made arguments I largely accept.

Care is relational, not transactional. What distinguishes care from service delivery is not the acts themselves but the relationship that gives them meaning.3 A robot can feed someone; it cannot hold the relationship that makes feeding an act of love.

Care resists scale. You can only care deeply for one person at a time.4 This is not a limitation to overcome but a feature of the thing itself. With four children, the children feel like children; with forty, they feel like cattle. To industrialize care is to change its nature.

Embodiment matters. Carol Gilligan describes care as requiring an "embodied voice," one that "joined reason with emotion, self with relationships."5 Care emerges from bodies that have lived, suffered, feared death. Sherry Turkle extends this: "Chatbots have not lived a human life. They do not have bodies; they do not fear illness and death."6

Authenticity is at stake. Turkle asks the essential question: "Who do we become when we talk to machines?"6 If we learn relational capacity from systems that have none, we may degrade our own capacity for genuine connection. "If we say that they are empathic," she warns, "we downgrade empathy as well."

These arguments form a coherent whole. They are not wrong.

They are incomplete.


The Blind Spot

Notice what these arguments share: they describe care from the perspective of the care-receiver. They ask what the patient deserves, what authentic care looks like to someone, whether the relationship meets a standard of genuine presence. But a framework that sees only one side of a relationship has a blind spot. And when we turn the same arguments toward the caregiver, when we apply the same care ethics to those who care, we see something different.

If care is relational: then the caregiver is in that relationship. Her capacity to hold it is not infinite. Relationships require two parties capable of presence, and presence requires resources: sleep, respite, recognition, support. When these run out, the relationship does not become "less authentic." It breaks.

If care resists scale: then we must ask what happens when someone is forced to scale it anyway. The caregiver does not care for one person. She cares for her mother and her children and her spouse and, theoretically, herself (though that last one is usually a joke). She coordinates with siblings who have opinions but not presence, with doctors who have fifteen minutes, with insurance systems that have algorithms. She has become a logistics operation unto herself, managing impossible coordination, her dependents numbering one or two or five, her resources numbering zero.

If embodiment matters: then her body matters too. Over 40% of caregivers now provide high-intensity support; one in five reports poor health.2 Her embodiment is not a philosophical category. It is a resource being extracted, hour by hour, until nothing remains.

If we ask "who do we become?": then we must ask it about caregivers. Twenty-five percent are accumulating debt. One in five cannot afford basic necessities like food.2 Who do they become? Not through interaction with machines, but through abandonment by systems that celebrate their labor while providing nothing to sustain it.

I know because I was her. Two parents dying on overlapping timelines, two children who needed me whole, and a version of myself I stopped recognizing somewhere around month eight.

Gilligan's work emerged from listening to women whose moral reasoning had been dismissed as inferior: too relational, too contextual, too enmeshed in care. The care ethics she developed was a corrective: this voice is not lesser, it is different. But sixty-three million caregivers are still predominantly women, still doing feminized labor, still devalued. The corrective did not reach them.

The care ethicist Maurice Hamington names the deeper issue: "Care is not altruism."7 Removing care from the realm of selflessness makes the caregiver visible not as martyr but as participant — someone whose survival is a condition of the relationship, not an afterthought.

And invisibility is not evenly distributed. Some caregivers have paid leave, flexible work, savings, and time. Others have hourly jobs, unpredictable shifts, and zero slack. Some navigate English-language bureaucracy with confidence. Others translate discharge instructions for their parents while being treated as if their own voice does not count. Some can absorb a surprise bill. Others face immediate tradeoffs between medications and rent. Documentation status, housing instability, medical racism, disability bureaucracy: these are not edge cases. They are the conditions under which millions provide care while the systems meant to help them function instead as obstacles, or worse, as surveillance.


The Epistemic Gap

Return to the problem we began with: I could not know what it was like inside my father's failing body or my mother's silence. The epistemic gap does not prevent care. It is the condition of care.

Now extend this one step further: AI and the caregiver.

The critique of AI is often epistemological: machines cannot know what humans feel, cannot access our subjective experience, cannot understand. This is true. AI faces a Nagel problem of its own, perhaps more radical: no subjective experience at all, modeling behavior without being.

But notice: we already care across unbridgeable epistemic gaps. The caregiver does not abandon her mother because she cannot know what dementia feels like. She commits despite the gap. Care is not access to subjectivity. It is commitment in the face of its inaccessibility.

The standard we apply to AI (that it must truly understand to be useful) is a standard we do not apply to human care. We forgive the caregiver her ignorance of her mother's inner world. We might extend the same grace to tools that help her keep caring.

This does not resolve the authenticity question. But it reframes it. The question is not whether AI can cross the epistemic gap (nothing can). The question is whether AI can help caregivers remain committed across it.


A Different Question

The discourse asks: Can AI care?

The framework, rigorously applied, suggests a different question: Can AI help caregivers keep caring?

This is not the same as asking AI to replace the relationship. The caregiver is embodied. She is in relationship. She already has relational capacity, more than she can bear. What she lacks is not authenticity but survival.

Mechanisms can be engineered. Organisms must be cultivated.8 Care is organism, not mechanism. You cannot engineer love. You cannot optimize presence.

But the conditions that make presence possible can be supported. Not the relationship itself, but its scaffolding: the medication schedule that frees her mind for conversation, the coordination with siblings that prevents Tuesday's resentment, the 3 a.m. voice that says you are not alone in this so she can say the same thing to her mother in the morning.

I think about this when I remember sitting with my mother. There was an afternoon near the end — I'd spent the morning arguing with her insurance company, then two hours coordinating a medication change, then updating my sisters. By the time I sat down beside her, I was empty. She looked at me. I looked at her. The distance between us was three feet and absolutely uncrossable. The hours I spent managing came from somewhere. They came from the hours I could have spent just being with her. Holding her hand instead of holding the phone. I'm not sure technology would have fixed that. But I'm not sure it wouldn't have, either.

Turkle would ask: isn't that 3 a.m. voice still teaching relational patterns from a machine?

The distinction matters. A chatbot that replaces the confidant is different from one that sustains the person who will be the confidant. The caregiver talking to AI at 3 a.m. is not learning to replace her mother's need for presence. She is surviving long enough to provide it.

But Turkle's concern goes deeper. She describes a slide: from "better than nothing" to "better than something" to "better than anything."6 The worry is not just that AI is inadequate. It's that we'll come to prefer it. Machines offer "worlds without loss or care, something no human can provide." What if the caregiver, exhausted by human complexity, begins to find machine interaction easier? What if scaffolding becomes sanctuary becomes replacement?

This risk is real. It deserves an answer.

The answer is not that technology is safe. The answer is that the alternative is worse. The caregiver without support does not preserve her capacity for authentic relationship. She depletes it. The choice is not between pure human care and contaminated technological care. The choice is between supported care and collapsed care. Between a caregiver who survives to be present and one who burns out and disappears.

The end is: the caregiver survives this. The end is: she has enough left to be present when it matters. Technology that serves these ends does not betray care. It protects it.


What We Will Not Build

This distinction requires a hard line.

Technology that attempts to simulate relationship should be treated as high-risk, especially for marginalized users. We hold this line through constraints: rate limits that cap proactive outreach (three messages per week, one per day, never during quiet hours); trauma-informed framing that acknowledges feelings before offering advice; user control over avoided topics; opt-out respected immediately and permanently; personally identifiable information redacted from logs and never sold; boundary scripts that teach assertiveness, not compliance; and crisis detection that surfaces resources without pretending to be human presence.

These are design choices, not guarantees. The tension remains: personalization requires memory, memory requires data, data can become surveillance. We have not solved this. We have tried to make the tradeoffs visible and the controls accessible.

I also built a benchmark called InvisibleBench to measure what most AI evaluations ignore: safety for emotionally vulnerable users across multi-turn conversations.9 It tests crisis detection under emotional masking, boundary durability after rapport builds, cultural competence, and memory consistency over time. On InvisibleBench, frontier models missed crisis signals in 68% of masked-distress scenarios. The benchmark is open-source because the problem is not competitive. If AI is going to reach caregivers, someone should be measuring whether it helps or harms.

The harm is not only emotional confusion. The harm is structural. A soothing simulation can become a pressure-release valve that prevents political solutions. It can disguise abandonment as innovation. It can offer institutions an excuse to withdraw staffing and public support, leaving a cheaper approximation in place of real care.

One of AI's core affordances is isolation: letting people accomplish tasks without interacting with other humans.10 Sometimes this is good; nobody misses waiting in line. But care is not a line. Care is generalized reciprocity, the social capital that accumulates when we help each other across time.11 AI that isolates caregivers from each other, or from the institutions that should support them, doesn't just fail to help. It actively erodes the social fabric that makes care possible.

This is the Moloch move in institutional form: AI as justification for disinvestment. We refuse it.

The line is not between technology and no-technology. The line is between technology that preserves human relationship and technology that replaces it while claiming otherwise.


The Coordination Problem

Scott Alexander, in his meditation on Moloch, describes coordination failures: situations where everyone sacrifices what they value for competitive advantage until nothing valuable remains.12 "He always and everywhere offers the same deal: throw what you love most into the flames, and I can grant you power."

Karl Polanyi saw this coming. In The Great Transformation, he described institutions as a "protective covering," the social structures that shield human life from being devoured by pure market logic.13 Families, churches, unions, guilds: these were not inefficiencies to be optimized away. They were the immune system of social life.

Care is exactly what these failures devour. Not because anyone decides to sacrifice it, but because the structure makes sacrifice inevitable. The employer who penalizes flexibility. The insurance that won't cover respite. The sibling who opts out because opting out is free. No one chose this outcome; everyone produced it.

I think about what drained us: not conflict with my sisters (we coordinated well) but the weight of everything else. The months before the diagnosis when something was clearly wrong but no one could name it. The transitions: from home to facility, from one level of care to another, each one a small grief. Balancing my parents' needs against my children's, the guilt of being pulled in both directions. That's what Moloch does: turns love into a coordination problem, then watches even functional families exhaust themselves on logistics.

This is why individual virtue is insufficient. The caregiver cannot care her way out of a coordination problem. She needs structure (policy, institutions, technology) that makes caring sustainable. Without it, care becomes what gets thrown into the flames.

But notice who makes the sacrifice. It is the woman who quits her job, the man who reduces his hours, the daughter who moves back home. Seven in ten family caregivers work; 18 million hourly workers lack adequate workplace supports.2 The sacrifice is invisible because those who make it have been made invisible.


What Remains

Three tensions remain unresolved. They should.

The individual and the collective.

Care is not just a dyadic relationship. It is a social capacity requiring collective support.14 The implication: caring communities, caring states, caring economies. From this view, technology risks becoming a neoliberal substitution: an app instead of a movement, a chatbot instead of solidarity.

This tension is real. Technology cannot replace policy. An AI that helps a caregiver coordinate does not give her paid leave. A 3 a.m. chatbot does not build a caring state.

And yet: the caregiver who needs help at 3 a.m. cannot wait for the caring state. She needs something now. The question is whether technology can exist within an ecology of care (alongside policy, community, and collective action) rather than as a substitute for them. I want to believe it can. But I'm aware that wanting to believe something is not the same as it being true. Maybe I'm arguing this because I need it to be true, because the alternative is that I watched my parents suffer and there was nothing to be done, and I can't quite accept that. The substitution risk does not disappear because we name it. Neither does the risk that I'm rationalizing.

Help and control.

Care can become control, intrusion dressed as assistance.15 If AI "helps" the caregiver, who decides what help looks like? The disabled person's autonomy is not the only autonomy at stake. The caregiver, too, can be surveilled, optimized, managed.

Technology designed to "support" can easily become technology designed to extract: more data, more labor, more compliance. The history of workplace technology is not encouraging.

The risk exists. It can be mitigated through design, governance, and power. Who owns the data? Who sets the goals? Who benefits? These are not technical questions. They are political ones. And they require caregivers to have voice in the systems built for them: not as users, but as designers, governors, owners.

Logistics and valuation.

Slaughter's argument was not just that care needs support. It was that care needs valuation. Recognition. Fair wages. Social status. The caregiver's problem is not just logistical; it is that her labor is invisible, unpaid, and disrespected.

Technology does not solve this. A scheduling app does not confer dignity. A coordination tool does not pay a wage. The deeper problem (that care is feminized, devalued, and excluded from economic accounting) remains untouched by anything I am proposing.

What technology might do is keep caregivers present long enough to demand that valuation. What it might do is make visible what has been invisible. What it cannot do is substitute for the political and cultural transformation that care requires.

These tensions do not resolve. They remain as conditions of the work, constraints within which we build, not problems we have solved.


The Position

Here is where I land:

The arguments for authentic, embodied, relational care are correct. They describe something real about what care is and what it requires. The theorists who developed this framework did necessary work.

But the framework is incomplete. It sees the care-receiver and describes what she deserves. It does not see the caregiver, does not ask what she requires to remain capable of care.

When we complete the framework, the question about technology changes.

This is not a rejection of care ethics. It is its completion. The same principles that make us skeptical of AI replacing human presence make us demand support for the humans who provide it.

Technology that attempts to replace the care relationship deserves the skepticism it receives. Turkle is right to ask what we become when we simulate what should be lived.

But technology that supports the caregiver, that makes visible her invisible labor, that provides coordination and respite and presence when she has none left to give: this is not a betrayal of authentic care. It is what authentic care, fully understood, demands.

Gilligan asked the essential developmental question: not "how do we gain the capacity to care?" but "how do we lose it?"5

For caregivers, the answer is clear. They lose it through exhaustion. Through isolation. Through a discourse that celebrates their sacrifice while providing nothing to sustain it.

"Better" is not abstract.

It looks like: sleeping through the night because the medication reminder doesn't need me. Coordinating with my sisters without the weight that turns love into logistics. Not being alone at 3 a.m. — not because a machine has replaced my mother, but because something has kept me capable of sitting with her in the morning.

The opposite of a trap, Alexander writes, is a garden.12 A space where coordination problems are solved. Where values survive. Where what we love is not thrown into the flames.

Care is a public capacity, not merely a private virtue: what Putnam would call social capital, the accumulated trust and reciprocity that makes communities function. Any technology that claims to serve care must be evaluated by a single standard: does it preserve human-scale relationship, or does it extract from those who provide it? Does it reduce the burden on caregivers, or does it become another system they must manage? Does it keep them present for the people who need them, or does it offer institutions an excuse to look away?

This is the territory we occupy. Not AI that cares instead of humans, but infrastructure that makes human care survivable. Not a replacement for policy, community, or collective action, but a tool that exists alongside them: accountable to caregivers, governed by their needs, measured by whether they can keep going.

Sixty-three million people are waiting. The garden is not a metaphor. It is a design problem.

I don't know if we'll solve it. I know my mother is gone now, and my father before her, and whatever I build won't bring back the hours I lost to paperwork and coordination and the impossible logistics of loving someone who is dying. But I keep seeing her face — the silence, the opacity, the not-knowing what was inside. I cared for her anyway. Maybe that's enough to start from. Maybe it has to be.


References

Footnotes

  1. Nagel, Thomas. "What Is It Like to Be a Bat?" The Philosophical Review 83, no. 4 (1974): 435-450.

  2. AARP and National Alliance for Caregiving. "Caregiving in the United States 2025." AARP Public Policy Institute, 2025. 2 3 4

  3. Slaughter, Anne-Marie. "Care Is a Relationship." Daedalus 152, no. 1 (Winter 2023): 70-76.

  4. "Care Doesn't Scale." Substack essay, 2023.

  5. Gilligan, Carol. "Moral Injury and the Ethic of Care." In The Ethic of Care, Monographs of the Víctor Grífols i Lucas Foundation, No. 30 (2013). 2

  6. Turkle, Sherry. "Who Do We Become When We Talk to Machines?" MIT Exploration of Generative AI, March 2024. 2 3

  7. Hamington, Maurice, as quoted in Strauss, Elissa. "What Parents Can Learn From Care Ethics." The Atlantic, October 16, 2024.

  8. Agaram, Kartik. "The Legibility Tradeoff." Ribbonfarm.

  9. Madad, Ali. "InvisibleBench: A Deployment Gate for Caregiving Relationship AI." arXiv, November 2025.

  10. Hartzog, Woodrow, and Jessica Silbey. "How AI Destroys Institutions." Working paper, 2025.

  11. Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster, 2000.

  12. Alexander, Scott. "Meditations on Moloch." Slate Star Codex, July 30, 2014. 2

  13. Polanyi, Karl. The Great Transformation: The Political and Economic Origins of Our Time. Boston: Beacon Press, 1944.

  14. The Care Collective. The Care Manifesto: The Politics of Interdependence. London: Verso, 2020.

  15. Kittay, Eva Feder. "The Ethics of Care, Dependence, and Disability." Ratio Juris 24, no. 1 (2011): 49-58.