Researcher Julie urges MDTs to adopt developmental evaluation and ripple-effect mapping

Summit session · March 10, 2026

AI-Generated Content: All content on this page was generated by AI to highlight key points from the meeting. For complete details and context, we recommend watching the full video.

Summary

At a summit session, researcher Julie argued that multidisciplinary teams (MDTs) should use developmental evaluation and tools such as ripple-effect mapping to capture complex, emergent impacts that standard outcome metrics miss, citing national surveys, a quasi-experimental study, and case examples.

Julie, a researcher presenting at a summit session, told attendees that multidisciplinary teams (MDTs) should embrace developmental evaluation — an embedded, real-time approach — to better capture complex, evolving outcomes that conventional metrics often miss.

Developmental evaluation, Julie said, "is a process for accompanying innovation and adaptation with evaluative thinking," and is suited to "wicked problems" where solutions and measures of success are uncertain. She contrasted it with traditional evaluation, which assumes program stability, predictable cause and effect, and fixed indicators. "Stories are data," she added, arguing that qualitative accounts help teams identify the right quantitative measures and reveal unanticipated effects.

Julie reviewed evidence and patterns from her national work and surveys. She said an early inventory found about 40 MDTs in the U.S., while a later national survey she helped conduct identified about 324 teams. She reported that roughly 70% of teams had no dedicated funding, and that half of those with funding received under $500 per month. She also cited a quasi-experimental study with a control group, which found that MDT cases were more likely to result in successful prosecution and guardianship than cases handled by usual-care adult protective services.

To operationalize developmental evaluation, Julie recommended practical strategies: cultivate evaluative thinking during case discussions; treat logic models as living documents; regularly reflect on patterns seen across cases; and set aside database-review time on meeting agendas for structured reflection. She called strong relationships within and beyond teams the "secret sauce" that enables information sharing and broader impact.

As a specific method, Julie described ripple-effect mapping, a reflective technique that traces causal chains backward from observed impacts. She outlined the steps: conducting interviews or group mapping sessions, validating causal links with stakeholders, extracting stories, and producing graphics that show how team activities led to observed results. The resulting maps, she said, are useful for demonstrating value to funders and partners.

Julie illustrated the approach with case examples. In a published vignette she described as "Mrs. M," a service advocate reframed the case so that the older adult's own priority (staying at home) and support for an informal caregiver, rather than abuse risk alone, became the lens of success. In another example, from Colorado, a team found that residents refused safer placements solely because facilities barred pets; the team negotiated policy changes with local facilities so that placements became acceptable to clients.

She invited practitioners to a breakout workshop on ripple-effect mapping and encouraged attendees to bring examples. The moderator thanked Julie, and the session broke for five minutes.

Next steps noted by Julie included treating evaluation as an ongoing, embedded activity rather than a one-time product, and sharing team-level impacts more widely to secure resources and institutional support.