Breakout Session Topics & Guiding Questions
Each group is expected to report back the three most important points from their breakout group discussion at the end of the breakout session.
Groups should designate a note-taker who will be responsible for uploading detailed notes from their group’s discussion to http://pad.okfn.org/.
Use the guiding questions below to start conversations within your group, but don’t limit yourselves to discussing only those questions.
Group 1: Quantitative vs. Qualitative Metrics
- What are the relative advantages and challenges of collecting and reporting both types of metrics?
- Quantitative metrics are easier to collect but only tell the what, not the why
- Does that skew people’s perception of value? Might quantitative metrics be more highly valued because they are more prevalent?
- Qualitative: how do you judge what use is “high impact”? Do we judge what use of our materials is for the most social good?
- Metrics about Facebook/Twitter/blogs are very interesting, but we aren’t making resources available to people because they might be popular. We are hoping they’re of scholarly value, which is probably not popular at all
- Deciding what’s important to measure is a challenge. Use, reuse, etc.?
- What audiences are we measuring for (what’s important to measure, and to whom)? Different stakeholders have different value propositions
- Different audiences have different definitions of success
- Do libraries really want anyone, especially folks outside subject expertise, to be re-evaluating acquisition decisions from 100 years ago?
- How do we (or should we) collect metrics on “direct links” (from Twitter or emails, typing URLs, etc.), and what are the implications of this kind of access? (See the referrer sketch after this list.)
- Aren’t we trying to look at the bigger picture? Don’t we lose depth if we only look at such a specific slice?
- Instead of framing everything as “us saying what the value is,” we are helping surface more material that’s of value to you: related content, what others are visiting
- Some stats are more about attention from the public (Twitter); is that more important than being mentioned in a policy document or a patent? Isn’t surfacing that second kind of mention more interesting or valuable?
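One concrete way into the direct-links question above is to bucket hits by HTTP referrer: hits with no referrer are the usual proxy for direct access, since typed URLs, most email clients, and many Twitter apps send none. A minimal sketch in Python; the log path, the Apache/nginx "combined" log format, and the domain lists are assumptions for illustration, not anything the group specified.

```python
# Classify access-log hits by referrer to estimate how much traffic
# arrives via "direct links" versus search and social sources.
import re
from collections import Counter
from urllib.parse import urlparse

# Matches the request, status, size, and referrer fields of a
# combined-format log line.
LOG_LINE = re.compile(r'"[A-Z]+ \S+ HTTP/[\d.]+" \d+ \S+ "(?P<referrer>[^"]*)"')

SOCIAL = {"twitter.com", "t.co", "facebook.com"}      # illustrative lists
SEARCH = {"google.com", "bing.com", "duckduckgo.com"}

def bucket(referrer: str) -> str:
    if referrer in ("", "-"):
        return "direct"   # no referrer: typed URL, email, or stripped
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    if host in SOCIAL:
        return "social"
    if host in SEARCH:
        return "search"
    return "other"

counts = Counter()
with open("access.log") as log:       # hypothetical log path
    for line in log:
        m = LOG_LINE.search(line)
        if m:
            counts[bucket(m.group("referrer"))] += 1

print(counts)   # e.g. Counter({'direct': 812, 'search': 401, ...})
```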
- Issues are different for teaching materials, IRs, and digital collections. Different motivations, different approaches for assessing impact. There should be a different package of metrics for each of the different products we produce: a different suite of lenses. We can’t just come up with one “package” and apply it to everything.
- More important than talking about assessment might be talking about assessment plans. Assessment doesn’t just happen: a plan is the foundation for how you’re going to manage your project. What are the goals and objectives? How are you going to assess them not just at the endpoint but throughout the process?
- Are there more relevant or useful combinations of commonly collected quantitative metrics that we should leverage (i.e., following digital footprints/session-based behaviors)? (See the sessionizing sketch after this list.)
- Difficult question. Relevant and useful to whom? Are there things we can generate to benchmark against others, vs. what the university administration would want to see for impact?
- It’s difficult for people like us to assess the qualitative use of materials because we aren’t experts in the field of the content. We can do the quantitative metrics; the people using the content, or those it is being created for, need to determine qualitative metrics. It gets back to user engagement: knowing who users are and being able to engage them directly and indirectly, watching what they do in your system. Can we attribute meaning to certain actions and feed that back into impact statements?
- When we launch a digital library, we determine the components that have to be there, and assessment gets left out. If you start it late in the game, you’re only going to look at usage stats and web metrics.
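On “following digital footprints/session-based behaviors”: the standard web-analytics move is to group each visitor’s hits into sessions using an inactivity timeout (commonly 30 minutes), so paths through a collection can be examined rather than raw hit counts. A minimal sketch, assuming hit records of the form (visitor_id, timestamp, url); the names and the timeout are illustrative.

```python
# Group per-visitor hits into sessions with a 30-minute inactivity timeout.
from datetime import datetime, timedelta
from itertools import groupby

TIMEOUT = timedelta(minutes=30)

def sessionize(hits):
    """hits: iterable of (visitor_id, datetime, url), in any order."""
    sessions = []
    ordered = sorted(hits)                       # by visitor, then time
    for _, visits in groupby(ordered, key=lambda h: h[0]):
        current, last = [], None
        for _, ts, url in visits:
            if last is not None and ts - last > TIMEOUT:
                sessions.append(current)         # gap too long: close session
                current = []
            current.append(url)
            last = ts
        sessions.append(current)
    return sessions

hits = [
    ("v1", datetime(2015, 3, 2, 9, 0), "/item/12"),
    ("v1", datetime(2015, 3, 2, 9, 5), "/item/13"),
    ("v1", datetime(2015, 3, 2, 11, 0), "/item/12"),  # new session
]
print(sessionize(hits))  # [['/item/12', '/item/13'], ['/item/12']]
```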
- How could we identify and trace qualitative metric types relevant to library collections and digital scholarship projects? (i.e., Should we track adaptations/re-use? How?)
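One hedged starting point for tracing re-use: given a list of candidate pages (syllabi, blog posts, exhibit sites), scan each for links back into the collection. A sketch using requests and BeautifulSoup; the collection host and page list are placeholders, and this only catches link-style re-use, not adaptations.

```python
# Scan candidate pages for links pointing back into a digital collection.
import requests
from bs4 import BeautifulSoup

COLLECTION_HOST = "digital.example.edu"   # hypothetical collection domain

def links_to_collection(page_url):
    """Return hrefs on page_url that point into the collection."""
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)
            if COLLECTION_HOST in a["href"]]

for page in ["https://example.org/syllabus.html"]:   # pages to check
    found = links_to_collection(page)
    if found:
        print(page, "->", found)
```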
NOTES
- Need for collaboration with subject specialists and other stakeholders to determine meaningful qualitative metrics. Different groups have different stakes, and impact is different for each.
- Need to determine the goals of assessment prior to beginning. An assessment plan should be the foundation. Assessment shouldn't be an event; it should be a process.
Three Most Important Points
Point 1:
Point 2:
Point 3: