Group 2: GlobalGiving


**GlobalGiving**


 * **Contact Person:** Alison Carlman, Unmarketing Manager
 * **Phone:** 202-330-4027
 * **Email:** acarlman@globalgiving.org
 * **Executive Director:** Mari Kuraishi
 * **Communications Director:** Alison Carlman (self)
 * **Social Media Staff:** Alison Carlman (self)


**Baseline**

 * **CWRF Overall Level:** Run
 * **Overall Score:** 2.23
 * **Detailed Score:** []
 * **Burning Questions To Be Answered:**
 * 1) Our organization has org-level KPIs for every team except the product team. The product team has lower-level KPIs for web metrics, emails, facebook, and twitter, but I'd like to get a handle on the main Product Team KPIs we should be looking at from a management level.
 * 2) I'd like to develop a plan to better identify and engage with our key 'tribe' on social media. These are the key influencers out there that we interact with regularly, but we haven't had any formal process for engaging them and inviting them to go deeper.
 * 3) (gravy) I have a personal theory about what I'm calling 'assets-based communication': that the best stories (the most authentic, dignifying, and compelling ones) are those that focus on a beneficiary's assets rather than his or her problems. I'd like to figure out how to test this using social media.

//I think that my scores listed above skewed pretty low - especially when I look at the details for each score on your baseline spreadsheet. As I look at the spreadsheet, I feel like we're at 3 aiming for 4 for the most part. I'm also confused about how the overall score is 2.23 if the 3 scores are 1, 2, and 2. Is there a typo?//
 * **Measurement Indicators** [Crawl=1, Walk=2, Run=3, Fly=4]
 * **Data-Informed:** 1
 * **Tools:** 2
 * **Sense-Making:** 2

**Social Media Presence**

 * **Blog:** blog.globalgiving.org
 * **Facebook:** https://www.facebook.com/GlobalGiving
 * **Fans:** 38,358
 * **YouTube:** youtube.com/globalgiving
 * **Subscribers:**
 * **Views:**
 * **Twitter:** http://www.twitter.com/globalgiving
 * **Followers:** 51,740
 * **Hashtags:**

**Week 1: Orientation**

 * **What area of practice do you want to improve and how could that be part of your action learning project?**

I'd like to develop a better social media champions/influencer program. This means that we'd first need to identify our objectives for doing so. Next, we'd need to figure out how we define and measure who our social media champions and influencers are, and then finally, develop a plan to engage with them.

 * **What small measurement project would help your organization create a measurement habit?**

I think a project to identify our champions and influencers - a list that we'd actually use in our implementation - would help us develop a better measurement habit beyond measuring just the stats from our social media.

**Our Experiment with Pro-Active Engagement of Core Donors on Twitter:**
**BACKGROUND**: We say all the time that the purpose of social media is to engage with our various audiences - to open channels of communication to listen and share in a more personal way. GlobalGiving has many strategies for engaging on social media, but we spend the majority of time on Facebook and Twitter.

Currently, we measure the quantitative success of engagement with our own outbound messaging on twitter. (That means we really only look at aggregate, quantitative measurements of the posts we send and how people react to them.) Our current success metrics on twitter are as follows: applause rate (a measurement of how many people like the content enough to click on it), conversation rate (a measurement of the number of conversations started by the content), amplification rate (a measurement of reach; how far the message spread), and economic value (the total volume of donations that can be attributed to the content).
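The four success metrics described above could be sketched like this. This is a hypothetical illustration, not GlobalGiving's actual tooling: the per-post field names and the use of impressions as the "reach" denominator are assumptions.

```python
# Hypothetical sketch of the four twitter success metrics described above.
# Field names and the impressions-based denominator are assumptions.

def engagement_rates(post):
    """Per-post engagement metrics, expressed as rates over impressions."""
    reach = max(post["impressions"], 1)  # guard against division by zero
    return {
        "applause_rate": post["favorites"] / reach,       # likes/favorites on the content
        "conversation_rate": post["replies"] / reach,     # conversations started
        "amplification_rate": post["retweets"] / reach,   # how far the message spread
        "economic_value": post["attributed_donations"],   # $ attributed to the content
    }

post = {"impressions": 2000, "favorites": 40, "replies": 10,
        "retweets": 25, "attributed_donations": 150.0}
rates = engagement_rates(post)
print(rates["applause_rate"])  # 0.02
```

Each rate is aggregate and quantitative, which is exactly the limitation discussed next: it says nothing about who engaged or how good the conversation was.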

Each of these metrics is a measure of quantity of engagement actions, but not necessarily quality of engagement. Quantitative data is great for helping us decide which content is successful at driving higher numbers of people to engage, but it doesn’t necessarily help us improve our relationships with our most dedicated followers. Furthermore, the majority of our ‘listening’ to our followers only happens when they are talking about us. (What kind of friend does that make us?!)

The experiment: GlobalGiving’s unmarketing team is undertaking a deeper exploration of our ‘tribe’ this year. We’re identifying our ‘white hot core’ of 1,000 donors who might think of themselves as GlobalGivers. (We’ve pulled a list of 1,500 donors who have given more than $100 to more than two projects on GlobalGiving.) We’re finding ways to engage more deeply with these donors, and our hypothesis is that a higher quantity of higher-quality conversations with these folks on twitter could support better relationship building, and eventually more donations to our projects. Our goals are threefold:
 * 1) To determine whether pro-active engagement with donors on twitter elicits higher quality conversations on twitter (and higher engagement rates)
 * 2) To determine whether pro-active engagement on twitter (and higher quality conversations + higher engagement rates) turns these donors into even better advocates for GlobalGiving on twitter
 * 3) To increase donations from a target group of donors

 * **Goal** || **Metric/KPI** ||
 * To improve the quantity of quality conversations with core donors over time by 25% || Increase in the number of quality conversations from the test groups over time (compared to their baseline); increase in test group conversation rate: replies or mentions from test group subjects (compared to their baselines) ||
 * To get core donors to amplify GlobalGiving messaging/campaigns 25% more || Increase in test group amplification rate: increase in the number of RTs in aggregate from test group subjects (compared to the overall GG amplification rate) ||
 * To increase dollars donated from core donors by 20% || Increase in donations from the test group (compared to previous donation rates as well as control group donation rates) ||
 * To determine whether or not to dedicate resources to pro-active twitter engagement in the future (and which groups of followers would be the best targets for such interactions) || Evaluation of results compared to baselines; comparison of results between groups ||

Therefore, we’d like to run an experiment with better listening and better quality conversations with some of our tribe members. There are three main parts to the experiment:

**DESIGN PART 1**: Developing a way to identify four test groups of GlobalGiving donors who are active on twitter, and could be considered part of the GlobalGiving core tribe. This means that they might have allegiance to GlobalGiving as a way to give, and not just to a particular project on GlobalGiving that happens to use our platform for soliciting donations. The groups are divided as follows:


 * A. Frequent givers: these are donors who are active on twitter and already give often, and in high amounts, to GlobalGiving. (This relates to goal #3.)
 * They will have donated more than $300 to 2 or more projects in the past 3 years.
 * B. Donors with twitter influence: these are donors who are active on twitter and have a Klout score over 30 (or more than 600 followers). (This relates to goal #2.)
 * They will have donated more than $100 to 2 or more projects in the past 3 years.
 * C. Donors with a GlobalGiving conversation history: these are donors who have previously engaged with us on twitter. (This relates to goal #1.)
 * They will have donated more than $100 to more than 2 projects in the past 3 years.
 * They will have mentioned, re-tweeted, or replied to @globalgiving in the past.
 * D. Control group: these donors will have qualities from the above three lists (they could fit into at least one category), but we will not change our current interactions with them.

Tasks to perform:
 * 1) Create a query of donors who have donated more than $100 to more than 2 projects in the past three years.
 * 2) Use this list of emails in a tool that will return twitter handles for email addresses - cross-check the identity of these donors using twitter.com (and perhaps some Googling). Make sure they’ve tweeted regularly (in the past month) and that their tweets are conversational (i.e. not push-only).
 * 3) Assign those who have given more than $300 to list A.
 * 4) Assign those with higher Klout scores to list B.
 * 5) Use Argyle Social or Thrive to see which of these twitter handles have interacted with us before. Assign those that have used our twitter handle in a tweet (mention, RT, reply) to group C.
 * 6) Try to evenly distribute donors among the groups if donors could fit in more than one category. Select an equal (or as equal as possible) ratio from groups A, B, and C to make up the control group, D.
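The assignment logic in steps 3-6 could be sketched roughly as follows. The donor record fields, thresholds, and the one-in-four control-group draw are illustrative assumptions, not our actual queries.

```python
# Rough sketch of steps 3-6 above: assign eligible donors to test groups
# A/B/C, then carve out a control group D. Field names, thresholds, and
# the 1-in-4 control draw are assumptions for illustration.
import random

def assign_groups(donors, seed=0):
    """Assign eligible donors to groups A/B/C, then draw control group D."""
    groups = {"A": [], "B": [], "C": []}
    for d in donors:
        eligible = []
        if d["total_given"] > 300:                    # A: frequent/high givers
            eligible.append("A")
        if d["klout"] > 30 or d["followers"] > 600:   # B: twitter influence
            eligible.append("B")
        if d["has_interacted_with_gg"]:               # C: prior conversation history
            eligible.append("C")
        if eligible:
            # Step 6: when a donor fits several categories, send them to the
            # currently smallest eligible group to keep sizes balanced.
            target = min(eligible, key=lambda g: len(groups[g]))
            groups[target].append(d)
    # Control group D: an as-equal-as-possible ratio drawn from A, B, and C.
    rng = random.Random(seed)
    groups["D"] = []
    for g in ("A", "B", "C"):
        idx = set(rng.sample(range(len(groups[g])), len(groups[g]) // 4))
        groups["D"].extend(d for i, d in enumerate(groups[g]) if i in idx)
        groups[g] = [d for i, d in enumerate(groups[g]) if i not in idx]
    return groups
```

Seeding the random draw keeps the control-group selection reproducible if the list has to be rebuilt.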

How it worked out: Translating a list of donors into a list of active tweeters was the most time-intensive part of this experiment. Our commitment to experimental integrity proved to be a huge challenge. Of the 92 donors in the query who had given GlobalGiving their twitter handles, we chose fewer than 30 to join groups. We sought individuals who weren’t known GG staff, partner organization staff, organization/professional accounts, etc. We also only added accounts that were active (in the past 30-45 days) and conversational (i.e. didn’t only RT about one hashtag, or auto-tweet music, for example) on twitter. Finally, we made sure that they had more than 45 tweets and that their accounts weren’t private.

**DESIGN PART 2**: Developing a ‘quality of conversation’ metric, and developing a way to track quality conversations with the people in these lists.

In order to track improvement in conversation quality, we had to develop a ‘quality of conversation’ metric. Based upon our experience tweeting with GlobalGiving followers over the past two years, we developed the following 5-point scale for ‘quality conversations’ on twitter - covering both our level of effort and their level of response:



Baselines and controls: We evaluated each twitter handle before the experiment, and gave them a baseline conversation rate score and conversation quality score related to their previous interactions with GlobalGiving. We would use this baseline to measure improvement in conversation rate and conversation quality.
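The baselining step might look something like this sketch. The interaction-log structure and the per-month normalization are assumptions; the quality scores are meant to come from the 5-point scale described above.

```python
# Hypothetical sketch of baselining one twitter handle: given its logged
# past interactions with GlobalGiving (each scored on the 5-point quality
# scale), compute a baseline conversation rate and average quality.
# The log structure and monthly normalization are assumptions.

def baseline_scores(interactions, months_observed):
    """Conversations per month and mean quality score over the window."""
    if not interactions:
        return {"conversation_rate": 0.0, "avg_quality": 0.0}
    avg_quality = sum(i["quality"] for i in interactions) / len(interactions)
    return {"conversation_rate": len(interactions) / months_observed,
            "avg_quality": round(avg_quality, 2)}

past = [{"quality": 2}, {"quality": 3}, {"quality": 4}]
print(baseline_scores(past, months_observed=12))
```

Running the same function on interactions logged during the experiment gives the after-scores to compare against each handle's baseline.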

In order to measure improvement in donations, we determined the group’s aggregate donation rate from the same time period in the previous year. We will use this as a baseline for determining whether donations have improved from the active test group. We will also use the data from the control group as a secondary comparison.

Finally, we will compare test groups A, B, and C against one another and the control, to determine if there was significant improvement in any of the metrics among one group over another. This final analysis will help us determine a) whether to pursue pro-active twitter engagement in the future, and b) which types of donors are the best target for this type of engagement.
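A minimal sketch of that final comparison, assuming we have aggregate totals per group for a given metric (all numbers are made up for illustration):

```python
# Hypothetical sketch of the final analysis: percent change of each group's
# aggregate metric against its own baseline, compared side by side with the
# control group D. All figures are invented for illustration.

def pct_change(baseline, observed):
    """Percent change from baseline; positive means improvement."""
    return (observed - baseline) / baseline * 100

def compare_groups(baselines, observed):
    """Return each group's percent change on one metric."""
    return {g: round(pct_change(baselines[g], observed[g]), 1) for g in baselines}

donations = compare_groups(
    baselines={"A": 1000, "B": 400, "C": 500, "D": 600},
    observed={"A": 1250, "B": 440, "C": 600, "D": 610},
)
# Groups whose change clearly outpaces the control's would be the best
# candidates for future pro-active engagement.
```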

**DATA COLLECTION**: Beginning the experiment and testing our hypothesis over a 4-month timeframe

We decided on a 4-month timeframe to test the question: does better listening and pro-active engagement from GlobalGiving on twitter lead to higher quality conversations, higher engagement rates, and/or higher donation rates?

In order to perform the experiment, we had to develop an engagement plan. How much time will we spend 'engaging' with people equally across the three lists? What will our engagement look like? How will we record interactions? How often will we check in on results?

We determined that one person from the unmarketing team would spend 20 minutes per day actively looking for opportunities to listen and engage with donors in groups A, B, and C of the experiment. We would try to engage equally across each of the groups, but we wouldn’t force awkward interactions if nothing seemed appropriate.

We would record both our efforts and the responses in a spreadsheet. Things we’d keep track of include: whether the interaction was externally initiated or GG-initiated, the date, twitter handle, test group, tweet text, our level of engagement, their level of response, and the text of their response.
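One possible shape for a row in that tracking spreadsheet, sketched as a Python dataclass. The field names mirror the columns listed above; the CSV helper and exact names are illustrative assumptions.

```python
# Sketch of one tracking-spreadsheet row as a dataclass; field names mirror
# the columns described in the text, but exact names are assumptions.
import csv, io
from dataclasses import dataclass, fields, astuple

@dataclass
class Interaction:
    date: str
    twitter_handle: str
    test_group: str              # "A", "B", or "C"
    initiated_by: str            # "GG" or "external"
    tweet_text: str
    our_engagement_level: int    # our level of effort, 1-5
    their_response_level: int    # their level of response, 1-5
    response_text: str = ""

def to_csv(rows):
    """Serialize interactions to CSV, header row included."""
    buf = io.StringIO()
    w = csv.writer(buf)
    w.writerow([f.name for f in fields(Interaction)])
    w.writerows(astuple(r) for r in rows)
    return buf.getvalue()

row = Interaction("2013-05-01", "@donor123", "C", "GG",
                  "Thanks for supporting clean water projects!", 3, 4,
                  "Happy to help - love the updates!")
print(to_csv([row]).splitlines()[0])
```

Keeping the log in a structured form like this makes the later baseline and group comparisons a simple aggregation rather than manual re-reading.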

We also decided to store key interactions in a Storify board. (We opted to keep the storify a draft so that it wouldn’t be published publicly.)

**INSIGHTS**: We’re still currently running the experiment (as of June 2013) so we don’t have data yet.

This grew into a big monster! We’re testing for a lot of inter-related things, with some made-up metrics, which requires a lot of manual digging, tracking, and evaluation. This wasn’t a pilot; it’s a full-blown experiment! Not only are there a lot of variables, but there aren’t a lot of out-of-the-box tools for getting the information (twitter handles from email addresses) or the data (interaction rates for a sub-set of followers) that I need. That’s meant it’s been a very manual process to design - and I thought the most manual part would be implementation! It’s important to maintain scientific integrity with an experiment if you want valid results, but it’s probably easier to choose experiments that have more tools already built. Hopefully we’ll get some valid results that will help us decide whether or not to pivot our core donor engagement strategy on social media!