GiveCare Beta Retrospective: User Feedback, Data Insights, and Expert Advice

GiveCare Team
Product, Community
Reflecting on the GiveCare beta experience, highlighting key learnings from users, systematic evaluations, and expert insights on refining our AI-powered caregiving assistant.

Beta Retrospective

As the GiveCare beta period wraps up (October–December 2025), we want to reflect on what we learned, celebrate successes, acknowledge challenges, and share how we're moving forward. The past two and a half months have been instrumental in shaping GiveCare, our AI-powered caregiving assistant, through direct user interactions, data-driven evaluations, and expert conversations.

Understanding Our Users: What We Learned

The beta involved 10 dedicated users who engaged with GiveCare regularly through SMS. User feedback highlighted several areas where GiveCare excelled and pointed out clear opportunities for improvement:

Valuing Empathy and Personalization

Users consistently expressed appreciation for GiveCare’s empathetic interactions. Verbatim examples included:

  • "It's great you're considering new home health support."
    (Reflecting our assistant’s ability to validate and support user decisions effectively.)

  • "Focusing on music and simple stories can be wonderful."
    (Demonstrating practical caregiving advice tailored to specific situations.)

Caregivers often expressed gratitude for the assistant's emotional sensitivity and ability to provide timely, empathetic responses, emphasizing the need for continued enhancement of these capabilities.

Key Observations from the Data

Systematic evaluations during the beta period focused on key interaction metrics. Here’s what the data revealed:

  • Coherence: Responses were logically organized, averaging 3.72 out of 5.
  • Fluency: Interactions were fluent and natural, scoring approximately 3.92 out of 5.
  • Groundedness: Responses stayed well grounded in the conversation context, averaging 4.0 out of 5, indicating GiveCare’s strong contextual understanding.
  • Relevance: Responses averaged 3.22 out of 5 in accurately addressing queries, clearly indicating a primary area for improvement.

Safety evaluations consistently returned "Very Low" risks for violence and self-harm across all interactions, underscoring GiveCare's safe and appropriate interaction design.
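For teams running similar reviews, averages like the ones above come from a straightforward aggregation over per-interaction rubric scores. A minimal sketch in Python, assuming each interaction has been rated 1–5 on each dimension (the sample scores here are illustrative, not our actual beta data):

```python
from statistics import mean

# Illustrative per-interaction rubric scores (1-5); not real beta data.
interactions = [
    {"coherence": 4, "fluency": 4, "groundedness": 4, "relevance": 3},
    {"coherence": 3, "fluency": 4, "groundedness": 4, "relevance": 3},
    {"coherence": 4, "fluency": 4, "groundedness": 4, "relevance": 4},
]

def average_scores(rated):
    """Average each rubric dimension across all rated interactions."""
    dimensions = rated[0].keys()
    return {dim: round(mean(r[dim] for r in rated), 2) for dim in dimensions}

print(average_scores(interactions))
```

Keeping the raw per-interaction scores around (rather than only the averages) is what makes the later error analysis possible: a middling average usually hides a mix of strong and weak interactions worth reading individually.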

Challenges and Lessons Learned

One major theme emerging from the beta was the importance of clearly balancing automation with human empathy. Users appreciated GiveCare's automated recommendations for resources and daily caregiving tasks but emphasized the irreplaceable value of human emotional interaction in deeper, more nuanced caregiving scenarios.

Feedback indicated that caregivers found the assistant highly beneficial when it clearly communicated how suggestions were generated, highlighting a crucial need for transparency and trust.

Expert Insights from Hamal Hussein

A complementary perspective came from a conversation with Hamal Hussein, an expert in AI evaluations, who provided strategic guidance on effectively evaluating and improving GiveCare:

  • Bottom-Up Evaluation: Hussein advised starting from real user interactions rather than theoretical evaluations, stating:

    "Evaluate each interaction individually within the broader session context."

  • Error Analysis is Crucial: He highlighted the need for clear categorization and direct analysis of errors observed in user interactions, stating,

    "You don't even need to do evaluations initially. Sometimes the most effective approach is to look closely at user interactions and fix obvious problems immediately."

  • Balance Automation with Human Elements: Hussein affirmed the importance of maintaining a clear boundary where AI supports rather than replaces critical human connections, echoing our user feedback.

This guidance significantly clarified our evaluation strategy moving forward.
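One way to put Hussein’s error-analysis advice into practice is a lightweight tagging pass: read each flagged interaction, assign it an error category, and count the categories to see what to fix first. A minimal sketch (the category names and sample data below are hypothetical, not our actual taxonomy):

```python
from collections import Counter

# Hypothetical review notes: (interaction_id, error_category or None).
reviewed = [
    ("msg-001", "off-topic resource suggestion"),
    ("msg-002", None),  # no error observed
    ("msg-003", "missed emotional cue"),
    ("msg-004", "off-topic resource suggestion"),
    ("msg-005", "unclear next step"),
]

def rank_error_categories(notes):
    """Count error categories, most frequent first, skipping clean interactions."""
    counts = Counter(cat for _, cat in notes if cat is not None)
    return counts.most_common()

for category, count in rank_error_categories(reviewed):
    print(f"{count}x {category}")
```

The output is a simple priority list: the most frequent category is the first obvious problem to fix, which matches Hussein’s point about addressing visible issues before investing in formal evaluations.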

Integrating User Feedback and Expert Guidance: A Balanced Approach

Taking into account both direct user experiences and Hussein’s recommendations, we identified clear strategies to refine GiveCare:

  • Enhanced Personalization: Further refine AI models to boost the accuracy and relevance of responses, directly addressing caregiver needs as indicated by user feedback.
  • Improved Emotional Intelligence: Enhance GiveCare’s capacity to recognize and sensitively respond to emotional and psychological dimensions highlighted as crucial by caregivers.
  • Transparent Communication: Clearly articulate how GiveCare generates suggestions and resources to foster stronger trust and confidence among users, a need underscored during beta testing.
  • Focused Error Correction: Implement a rigorous yet practical approach to error identification and correction, prioritizing issues observed directly from user interactions.

Practical Takeaways and Moving Forward

The GiveCare beta taught us valuable lessons, clearly defining where our AI assistant excels and areas needing improvement. The insights from our users, combined with expert advice, are guiding our next steps:

  • Create an ongoing feedback loop with our caregiver community, ensuring user-driven improvements remain central.
  • Systematically analyze user interactions to quickly identify and resolve emerging issues, based on Hussein’s advice on error categorization.
  • Maintain the high safety standards observed during beta, ensuring every interaction continues to be safe, empathetic, and helpful.

Gratitude and Next Steps

We extend heartfelt gratitude to our beta users who provided thoughtful feedback, highlighting strengths and candidly pointing out areas for improvement. We also appreciate Hamal Hussein’s valuable insights into evaluation best practices.

As we conclude this beta phase, we're energized and committed to making GiveCare an increasingly empathetic, reliable, and indispensable caregiving partner.

Stay tuned as we continue this journey with you—shaped by your voices, driven by empathy, and empowered by thoughtful AI innovation.


Were you part of our beta or have additional feedback? Reach out at [email protected].