The Signpost

Op-ed

A new metric for Wikimedia

By Denny Vrandečić

TL;DR: We should focus on measuring how much knowledge we enable every human being to share in, rather than the number of articles or active editors. A project to measure Wikimedia's success has been started, and we can already use the proposed metric to evaluate new proposals against a common measure.

In the middle of the night, a man walks around a streetlamp, searching frantically for something. Another man passes by and asks whether he can help. “Yes, I'm looking for my car keys; I lost them over there,” he says, pointing to the path leading up to the streetlamp. Confused, the other man asks: “Then why are you looking for them here, if you lost them there?” “Well, the light is here.”

This is quite a common pattern. Creating and agreeing on measures and metrics that capture what is really important can often be very hard and sometimes impossible. If we are lucky, the numbers and data that are readily available are a good signal for what is actually important. If we're not lucky, there's no correlation between what we can measure and what we should measure. We may end up like the man searching for his keys.

In my humble opinion, the Wikimedia movement is in a similar situation. For a long time we looked keenly at the number of articles. Today, the primary metrics we measure are unique visitors, pageviews, and new and active editors. The Wikimetrics project is adding powerful ways to create cohort-based reports and other metrics, and is adding enormous value for everyone interested in the development of the Wikimedia projects. But are these really the primary metrics we should be keeping an eye on? Is editor engagement the ultimate goal? I agree that these are very interesting metrics, but I'm unsure whether they answer the question: are we achieving our mission?

What would an alternative look like? I want to sketch out a proposal. It is not a ready-made solution, but I hope it starts the conversation.

Wikimedia’s vision is “a world in which every single human being can freely share in the sum of all knowledge”. Let’s start from there. Imagine the line below to be all knowledge, with a few examples marked along it.

A simplified view on all knowledge

I don’t want to suggest that knowledge is one-dimensional, but it is a helpful simplification. A lot of knowledge is outside the scope of the Wikimedia projects, so let’s cut that out for our next step. I suggest something like a logarithmic scale on this line: a little knowledge goes a long way. By analogy, a hundred dollars is much more important to a poor person than to a billionaire; a logarithmic scale captures that. We also have to remember that knowledge cannot be arbitrarily ordered; it is not just data, but depends on the reader’s previous knowledge. Finally, let’s color the line to distinguish the knowledge that is already available to me thanks to Wikimedia (shown here in black) from the knowledge that is not (shown here in white). If we do all this, the above line could look like this:

A simplified view on all available knowledge in Wikipedia

Bear with me; this is just an intuitive estimate, and you might have drawn the line very differently. That’s OK. What matters more is that this line looks quite different for every one of us. A person with deeper insight into political theory might gain even more from certain Wikipedia articles than I do. A person who struggles with long and convoluted sentences might find large parts of the English Wikipedia (or this essay) inaccessible (see, for example, the readability of Wikipedia project or this (German) collection of works on understandability). A person who speaks a different set of languages will read and understand articles that I don’t, and vice versa. A person with more restricted or expensive access to the Internet will have a different line. In the end, we would have more than seven billion such lines, one for every human on the planet.

A simplified view on all available knowledge in Wikipedia for different users

Let us now take all these seven billion lines, turn them sideways, sort them, and stick them together. The result is a curve, and that curve tells us how far along we are in achieving our vision, and how far we still have to go.

Are we done yet?
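To make the construction concrete, here is a minimal sketch, assuming a hypothetical per-person coverage estimate, of how such a curve could be computed. Nothing here is part of the actual proposal's tooling; the coverage function and the sampling are stand-ins.

```python
# A minimal sketch, with hypothetical data, of how the proposed curve
# could be estimated. Each person contributes one number: the fraction
# of (log-weighted) knowledge they can share in through Wikimedia.
# Sorting those numbers and lining them up side by side gives the curve.
import random

def estimated_coverage(person_id: int) -> float:
    """Hypothetical stand-in: fraction of weighted knowledge (0..1)
    accessible to this person through Wikimedia projects."""
    return random.betavariate(2, 5)  # placeholder distribution

sample_size = 100_000  # estimate from a sample rather than all 7e9 lines
curve = sorted((estimated_coverage(i) for i in range(sample_size)), reverse=True)

# The area under the curve measures progress toward the vision (1.0 = done).
progress = sum(curve) / sample_size
print(f"Estimated mission progress: {progress:.1%}")
```

The point is only that the curve reduces to per-person numbers that can be sorted and summed; any real estimate would need far better coverage data than a placeholder distribution.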

We can estimate this curve at different points in time and see how Wikimedia has evolved in the last few years.

Over the years

We can visualize an estimate of the effect of different proposals and initiatives on that curve. We can compare the effect of, say, (1) adding good articles about every asteroid to the Alemannic Wikipedia, (2) bringing articles on the thousand most important topics to good quality in Malay, Arabic, Hindi, and Chinese, and (3) providing free access to Wikipedia through mobile providers in Indonesia, India, and China. Note that a combination of (2) and (3) would cover a much bigger area! (This is left as an exercise for readers; maybe one of them will add the answer to the comments section.)

Effect of different proposals

All three projects would be good undertakings, and all three should be done. But the question the movement as a whole faces is how to prioritize the allocation of resources. Even though (1) might have a small effect, volunteers who decide to do it should be applauded and cherished. It would be a strange demand (though not unheard of) to require Wikipedia to be balanced in its depth and coverage, to have articles on the rise and fall of the Roman Empire at least as deep and detailed as those on episodes of The Simpsons. The energy and interest of volunteers cannot be allocated arbitrarily. But some resources can: priorities in software development and business development by paid staff, or the financial resources available to the Funds Dissemination Committee. Currently, the intended effects of such allocations are hard to compare. With a common measure, like the one suggested here, diverse projects and initiatives could all be compared against a common goal.

There are many open questions. How do we actually measure this metric? How do we know how much knowledge is available, and how much of it is covered? How do we estimate the effect of planned proposals? The conversation about how this metric can be measured and captured has already started on the research pages on Meta, and you are more than welcome to join and add your ideas. The WMF's analytics team is doing an astounding job of clearly defining its metrics, and measuring the overall success of Wikimedia should be done as thoroughly and collaboratively as any of those metrics (see also the 2014 talk by the analytics team).

While it would be fantastic to have a good, stable metric at hand, one is not required for the idea to be useful. The examples above show that we can already argue with the proposed curve intuitively. Over time, I expect us to develop an improved understanding and intuition for deriving an increasingly precise curve, maybe even an automatically updated, collaboratively created online version based on the statistics, metrics, and data available from the WMF. We could then simply look up the curve, with the expectation that we share a fairly common understanding of it and of what certain changes to it would mean.

This would allow us to speak more meaningfully about our goals, and would free us from volatile metrics that don't really express what we're trying to achieve; we could stop looking solely at numbers such as the number of articles, pageviews, and active editors. Most of the metrics we currently track will likely have a place in deriving the proposed curve, but we have to realize that they are not the primary metrics we need to improve if we're to achieve our common vision.

Denny Vrandečić is a researcher who received his PhD from the Karlsruhe Institute of Technology in Germany. He is one of the original developers of Semantic MediaWiki, was project director of Wikidata while working for Wikimedia Germany, and now works for Google. A Wikipedian since 2003, he was the first administrator and bureaucrat of the Croatian Wikipedia.
Acknowledgements and thanks go to Aaron Halfaker, Atlasowa, and Dario Taraborelli for their comments and contributions to this article. You can follow Vrandečić on Twitter.
The views expressed in this opinion piece are those of the author only; responses and critical commentary are invited in the comments section. Editors wishing to propose their own Signpost contribution should email the Signpost's editor in chief.

Discuss this story

These comments are automatically transcluded from this article's talk page.

Hi Denny, good article and interesting topic.

  • A small note: Maybe you should flip around File:A_new_metric_for_Wikimedia_-_3.png, because the next graph changes left-right direction. Maybe easier to understand.
  • As to your question, "But are these really the primary metrics we should keep an eye on? Is Editor engagement the ultimate goal? I agree that these are very interesting metrics. I am not sure if they answer the question whether we are achieving our mission. But how could an alternative look like?" I think the metrics we don't have and we should keep an eye on are:
    • (A) article quality metrics that scale (are the articles reliable, readable, sourced, grammatically correct, updated, vandalised, etc.). We have pretty much nothing; and
    • (B) maintenance metrics: Which article types need more maintenance or less maintenance than others (Animal species? Biographies of Living People? Bio of dead people? Company articles? Sports Tournaments? Rivers/Mountains and other geographic features?) How can a stagnating editor base keep up a growing article mass? Which tools allow us to do more effective maintenance and push the boundaries?
I think these are far more important and pressing questions for us than "how many percent of the infinite 'all knowledge' do we cover?" (see User:Emijrp/All human knowledge). And, as you correctly pointed out: "The energy and interest of volunteers cannot be allocated arbitrarily." / "But the question that the movement as a whole faces is how to prioritize the allocation of resources that the movement can actually allocate?" --Atlasowa (talk) 11:42, 19 August 2014 (UTC)[reply]
Hi Atlasowa, thank you for the comments and your careful reading. I have changed the direction of the graph; that's a good catch. I agree that article quality and maintenance play a more important (and more immediate) role than many other metrics we are currently gathering. I took the liberty of extending my article based on your comments, and will also add quality and maintenance to the Meta project page. I would say maintenance metrics are a derivative of quality, and quality plays an important role in the availability of knowledge -- but I stand by my statement that the ultimate goal is accessibility of knowledge to everyone. --denny vrandečić (talk) 15:37, 19 August 2014 (UTC)[reply]
Hi Denny, that was fast! WRT "I would say maintenance metrics are a derivative of quality": if I look at your quality metrics at meta:Research:Measuring mission success, I find this:
      • Quality
        • persistence (implied review)
          • content persistence
          • reverts
          • article survival
You're not measuring article quality at all! You're just implying that if the stuff sticks, it's good ^^ If sticking is quality and "maintenance metrics are a derivative of quality", then... you lost me... :-) Hm, I guess the bot-generated Cebuano and Waray-Waray Wikipedias have excellent quality according to these quality metrics? --Atlasowa (talk) 17:46, 19 August 2014 (UTC)[reply]
Good point and good examples! Note that the tree is from Aaron, whereas I am the one answering; there lies the root of this inconsistency. And his tree was a first draft, in order to have something out to start the conversation on-wiki instead of over email - I really appreciate his fast work. But that doesn't mean we agree on every detail. I have actually extended the points you referred to, to reflect how they could start to measure quality - feel free to do so as well. --denny vrandečić (talk) 18:19, 19 August 2014 (UTC)[reply]
Hi Denny, OK, I'll try to write about quality metrics and maintenance at meta:Research talk:Measuring mission success. That you link for "collection of works on understandability" to de:Benutzer:Atlasowa/Verständlichkeit is very flattering, but... my notes and lists are an unreadable mess (ironic, I know) and mostly in German. How about a link to http://www.readabilityofwikipedia.com ? "Calculate readability for... Enter the title of any Wikipedia article here to test its readability using the Flesch Reading Ease Score." "Results for: Tokyo Stock Exchange@English Wikipedia The Flesch reading ease score is 55, this means that 64% of the articles on Wikipedia are harder to read than this one." (The outcome of this formula can be interpreted by the following table: 0 - 30 Very difficult; 30 - 50 Difficult; 50 - 60 Fairly difficult; 60 - 70 Standard; 70 - 80 Fairly Easy; 80 - 90 Easy; 90 - 100 Very Easy. When writing for a general public, one should aim for a score between 60 and 70. Academic publications mostly score below 30. Note that the formula only works for English texts. [1]) Best, --Atlasowa (talk) 13:18, 21 August 2014 (UTC)[reply]
I'll link to both, Atlasowa. I didn't mean to be flattering, but it was the most comprehensible list on the issue I could think of at that moment. :) --denny vrandečić (talk) 15:10, 21 August 2014 (UTC)[reply]
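For readers curious about the score quoted above, here is a rough sketch of the Flesch Reading Ease formula. The syllable counter is a crude heuristic of my own; readabilityofwikipedia.com presumably uses something more careful.

```python
# A rough sketch of the Flesch Reading Ease score discussed above.
# The syllable heuristic is a crude approximation; real tools use
# dictionaries or better rules. English-only, as the comment notes.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # discount a typical silent final 'e'
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

# Very simple text can score above 100; dense academic prose below 30.
print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```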
You both might be interested in Wikipedia:Short popular vital articles which was an attempt to quantify the most important articles which appeared (by their raw size) to be lacking in detail. Presumably the baseline there can be used to measure how fast the entire cohort is growing. Novis Ordo (talk) 02:08, 24 August 2014 (UTC)[reply]
That's an excellent list, Novis Ordo, thanks! --Atlasowa (talk) 20:34, 24 August 2014 (UTC)[reply]
  • Fascinating big-picture questions. At some point Wikipedia's scope comes into play in its ability to spread knowledge -- pressing for free mobile coverage in developing countries would help and is laudable, but so would teaching everyone English. The sharing of knowledge would be much more efficient if we all read English (bye-bye Alemannic asteroid project), but no one would argue that should be the goal of Wikipedia. How to focus resources is a very important question, even within areas we all agree are in our core scope. Paris Hilton has many dogs (not only one you ignoramuses!), and it's not easy to get someone interested in that subject to expand Maryam Mirzakhani; people with different knowledge sets need to be encouraged to join the project as metrics of success are defined.--Milowenthasspoken 18:31, 22 August 2014 (UTC)[reply]
  • A bit of information that would guide my work as an editor is "which parts of our articles do people read most". Is it mostly the lead? Mostly certain sections? I already direct my work to the English Wikipedia's most-read medical articles. And I would love to see this tool Wikipedia:WikiProject_Medicine/Popular_pages appear in other languages. Doc James (talk · contribs · email) (if I write on your page reply on mine) 23:53, 23 August 2014 (UTC)[reply]
    • I believe there is some research about which parts get read most, and to what extent, as someone mentioned some figures to me once. All the best: Rich Farmbrough 02:35, 24 August 2014 (UTC).
Hi Rich! There is some analysis of session length at meta:Research:Mobile_sessions, and amongst the results "... we are able to identify a dataset-specific cutoff point to identify 'sessions' - 430 seconds. This provides a clean breakpoint, and is in line with existing research on session time as applied to Wikipedia." That is ~7 minutes for a Wikipedia article, and that is consistent with similar analysis for blog posts (see links at meta:Research talk:Mobile sessions). --Atlasowa (talk) 20:37, 24 August 2014 (UTC)[reply]
Thanks for that Atlasowa! All the best: Rich Farmbrough 21:00, 24 August 2014 (UTC).
  • I enjoyed this rather eccentric approach, and agree with the main thrust, but a serious flaw is the idea of what is "done". Actually much of our "serious" content is very poor, and that it does not change is purely due to lack of editors. Too many Wikipedians just seem to look at the length of articles, and count the number of references. Actually reading them is too often a sobering experience, even without specialist knowledge, and worse if you have it. Much of our core content is unacceptably low in quality, and it is precisely this that tends to get left, because improving it essentially means starting from scratch. Articles that are only half-bad are much more attractive targets, and more likely to attract the small remaining band of editors ready to actually write text. Johnbod (talk) 16:19, 24 August 2014 (UTC)[reply]
If it is regarded as eccentric to focus on our vision instead of other measures, then that only shows that this article is rather timely :) --denny vrandečić (talk) 16:48, 24 August 2014 (UTC)[reply]
It is not the focus (which has long been my own) but the approach to measuring it that I thought eccentric. Johnbod (talk) 17:03, 24 August 2014 (UTC)[reply]
Thanks for the clarification. I would very much like to hear alternatives. This proposal is really no more than a first stab. --denny vrandečić (talk) 17:16, 24 August 2014 (UTC)[reply]
I would certainly change "done" to something else. If I were doing it I would be tempted to split the black area into two parts, one representing what we have some coverage of, and the other an essentially subjective guess at what is covered with some degree of quality. Unfortunately the article assessment system is of little use for this for the vast majority of articles below Good Article. Johnbod (talk) 14:23, 25 August 2014 (UTC)[reply]
Extend your sentence to "a person without internet access has access to none of the world's knowledge through the work of the Wikimedia movement". I did not mean to measure any possible access to knowledge, as I wouldn't know how to even start, but merely to focus on what the Wikimedia movement provides. This is a limitation of the approach, and it could (and maybe should) be extended in that sense, I do not know. If you have ideas on how to estimate access to knowledge outside Wikimedia, please add it to the Meta project page. Thanks for raising the point. --denny vrandečić (talk) 16:48, 24 August 2014 (UTC)[reply]
Fair enough, but even then it's not quite true. There are off-line readers that can be preloaded with wiki text. But they might not be common. Deltahedron (talk) 20:40, 24 August 2014 (UTC)[reply]
There are also a huge number of printed books that are made from WP content. I doubt that they penetrate the places where people have no Internet access though.
There are certainly a lot of schools in Kenya that have off-line copies of WP. All the best: Rich Farmbrough 21:00, 24 August 2014 (UTC).

I quite agree! Also, I wanted to point out a new tool I've built, Quarry, that lets people explore our databases for research purposes in a web-friendly way and share the results. It might be useful for less technically minded Wikimedians who want more power than Wikimetrics :) YuviPanda (talk) 12:55, 25 August 2014 (UTC)[reply]

Sweet! Thanks, Yuvipanda! --denny vrandečić (talk) 16:01, 29 August 2014 (UTC)[reply]
  • One further point -- related to the one Johnbod made above -- is that longer does not always mean better. Almost always a short but well-written article can be preferable & more useful than a longer but diffuse article -- even if the longer version has more information. -- llywrch (talk) 17:52, 25 August 2014 (UTC)[reply]
Absolutely correct. --denny vrandečić (talk) 16:01, 29 August 2014 (UTC)[reply]
Quite devastating, I reckon. The file would probably not be accessible anymore. --denny vrandečić (talk) 16:01, 29 August 2014 (UTC)[reply]

Thank you for this, this is exactly the kind of discussion we should be having! Jan-Bart de Vreede 217.200.185.43 (talk) 09:05, 27 August 2014 (UTC)[reply]

Thanks! I hope you carry it to the relevant places, Jan-Bart! --denny vrandečić (talk) 16:01, 29 August 2014 (UTC)[reply]

Don't want to be ultra-pedantic, but you misspelled diphtheria... -- AnonMoos (talk) 04:39, 27 August 2014 (UTC)[reply]

Aww. Thanks. In my defense, smarter people than me do it too. Fixed it. --denny vrandečić (talk) 16:01, 29 August 2014 (UTC)[reply]

Guy who lost keys

Interesting proposal. Just something about the parable 'Guy who lost keys'. I loved the subtlety of this version: pub chat between Hoyle and Feynman: (skip to 24:00).

Thanks for that! :) --denny vrandečić (talk) 15:47, 29 August 2014 (UTC)[reply]

Some skeptical comments

Denny, this is a great article, but I am somewhat skeptical of the metric and especially of its suggested relevance for the allocation of resources. Honestly, I would be worried that funding decisions made on the basis of the suggested metric would do more harm than good, because it simplifies our complex goals too much. To give you at least one example: it seems to me that the metric is biased against small language projects for a number of related reasons:

(a) One of the strengths of small language Wikimedia projects is that they build free knowledge communities and contribute to the preservation of linguistic and other local knowledge. I think that Wikimedia's mission should also include these aspects of knowledge preservation and global community building of people who care about free knowledge. If we do not consider this part of our mission, we could simply slash any support for languages with less than x million monolingual speakers and relocate the resources to Mandarin, Hindi, etc.

(b) If I understand the metric correctly, it represents knowledge in Wikimedia projects in complete isolation from other knowledge resources that users have access to. I'm afraid that this creates biased results given that the crucial question is how people actually use knowledge from Wikipedia and how they integrate it with other knowledge resources (see also my article here). For example, information in small language projects is often so valuable because it is not readily available anywhere else. The metric does not distinguish between this genuinely novel knowledge and knowledge that is readily available through other means such as a quick Google search.

(c) There are also vexed problems with the very notion of the "sum of all knowledge" in Wikimedia's mission. For example, it seems quite intuitive that missing knowledge on the top left of your diagram (e.g. the name of Paris Hilton's dog) tends to be less important than knowledge in the bottom right (e.g. the name of the current US president). Of course, we could adjust the measure so that the name of the US president covers more space in the diagram than the name of a celebrity's dog. But then we face the problem of having to assess the weight of different pieces of information across linguistic and cultural contexts.

All three points seem to contribute to a bias against small language projects. Furthermore, similar problems will pop up in other contexts. For example, problem (b) is equally obvious in the case of long specialized articles in English, German, French, etc. I always found long specialized articles to be especially valuable as far too much information in Wikipedia is the result of a quick Google search and essentially redundant to other information resources. However, the suggested metric is completely blind to that and seems to favor superficial work that transcribes readily available knowledge to Wikipedia over the work of editors who spend a lot of time in libraries and archives to free knowledge that is actually locked away.

Don't get me wrong: I think that the metric is very interesting and helpful in focusing some important issues. However, I do not think that one simple metric can give reliable advice on complex questions of funding etc. Cheers, David Ludwig (talk) 20:03, 27 August 2014 (UTC)[reply]

  • I think that is less a problem with the metrics used than it is a shortcoming of the stated goal of the Wikimedia movement. If the goal is "a world, in which every single human being can freely share in the sum of all knowledge", it implicitly values knowledge presented in languages with more users, and the op-ed's metrics are largely consistent with the implications of that stated goal. If, however, the goal of the Wikimedia Foundation were instead the preservation of culturally important knowledge in all the world's languages, then we would be talking about different metrics that would more fundamentally value contributions in small language projects. But I don't think it helps to conflate the shortcomings of the Wikimedia Foundation's vision statement with any shortcomings of the metrics. VanIsaacWScont 03:21, 29 August 2014 (UTC)[reply]
You're right that (a) is mostly about the mission. However, I would disagree with the metric's implicit assumption that our mission is exhausted by the "sum of all knowledge" idea. For example, the mission statement of the WMF starts with the goal "to empower and engage people around the world to collect and develop educational content." Also: even if we limit ourselves to the "sum of all knowledge" idea, the problems (b) & (c) remain. (b) It is not very plausible to measure our success in making knowledge available by looking at Wikimedia projects in complete isolation from the other knowledge resources that people have access to. (c) Even if our goal is "the sum of all knowledge," it seems plausible that we should prioritize knowledge (e.g. names of presidents vs names of celebrity dogs). David Ludwig (talk) 12:29, 29 August 2014 (UTC)[reply]
Thank you, @David Ludwig:, for your thoughtful comments. I will go through your three points as you give them:
(a) Yes, this is correct. As a help in deciding which tasks to tackle next, the metric favors helping more people over helping fewer people. This obviously doesn't mean that small languages should not be supported, because we will never get close to a perfect score if we don't also give much more massive support to smaller language communities - but if the question is prioritization, then the metric suggests that underserved large language communities should get support and attention before underserved small ones. There is an (intentional) balancing factor, too: the metric is logarithmic on the y-axis (see the worked example after this reply). This means that providing, say, the first 5,000 vital articles in a small language will be more valuable than providing articles 500,000-600,000 in a large language. So once the larger underserved languages are improved, the focus will naturally shift to the smaller languages. And it might stay there for a longer time, because smaller language communities intrinsically have a harder time getting started, owing to their smaller pool of possible contributors. So, yes, your observation is correct for now, but the metric will self-correct over time. If I had to choose between two roughly similar projects with similar effects on their respective language Wikipedias, I would take the number of people served into account. So, yes, observation (a) is correct and intended, but it is self-correcting.
(b) This is correct. Unfortunately, I cannot read your article for free, but I think I understand your point nevertheless. It is correct that the metric only considers free knowledge provided through the Wikimedia movement, because this is currently one of the few knowledge sources its consumers can share in, i.e. become contributors to. Maybe this is only my interpretation of the vision, but I regard the possibility of becoming a contributor to the knowledge that is counted as integral to it. If other sources offer this too, maybe they should be counted -- or, even better, they could be integrated into the Wikimedia movement. One possibility would be to capture other sources in an alternative background curve, which would show whether a proposed project covers new ground or merely duplicates existing work, but this has several issues (like the assumed linearization of knowledge, which I guess only works well with a single source). I'd say: if there are external sources of free knowledge, integrate them, and resolve the issue that way. So, yes, observation (b) is correct and intended.
(c) This is partially correct. Adding the name of Hilton's dog to the English Wikipedia is less valuable than adding Newton's laws to the Gujarati Wikipedia. But you are correct that this is not a function of the knowledge being added (i.e. Hilton's dog's name vs Newton's laws) but rather of the already existing absolute size of the given languages. The metric is incomplete in this regard, and this is an oversight on my side resulting from experience: most of the actual Wikipedia work I have done suggests that communities generally know how to prioritize, so this seemed self-regulating. But you are right: it is not. Cebuano or Waray-Waray have an unreasonable number of articles on topics with no particularly high relevance to their communities, but may lack vital articles. This is a result of bots creating articles based not on the importance of their subject but on the availability of usable raw data. The metric as proposed is blind to that. It should be improved, and the most obvious way is to weight knowledge by its relevance. I will add that to the metric discussion on Meta.
Again, thank you for your thoughts! This is one kind of discussion that I wanted to see: highlighting consequences of the proposal, and its shortcomings. --denny vrandečić (talk) 15:44, 29 August 2014 (UTC)[reply]
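A back-of-the-envelope illustration of the logarithmic weighting mentioned in point (a) above, with numbers of my own rather than anything from the thread: if the n-th article in a language carries weight roughly proportional to 1/n, then the value of adding articles a through b is about

```latex
\mathrm{value}(a,b) \;\approx\; \int_a^b \frac{dx}{x} \;=\; \ln\frac{b}{a},
\qquad
\ln\frac{5000}{1} \approx 8.5
\quad\text{vs.}\quad
\ln\frac{600000}{500000} = \ln 1.2 \approx 0.18 .
```

Under that assumption, the first 5,000 vital articles in a small language carry more than forty times the weight of articles 500,000 through 600,000 in a large one, which is the self-correcting effect the reply describes.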
Hi Denny, thanks for your helpful response! While I agree that the suggested metric captures something relevant, I still think that it is too narrow to guide priorities, funding decisions, and so on. Of course, similar problems will arise with any available metric but maybe we should not aim for one single metric that captures our mission, anyway. Instead, we may come to the conclusion that our diverse goals require a “toolbox” of evaluative heuristics that include equally diverse measures and metrics. Let me illustrate this by returning to 2 of my original points:
(a) You write that "the metric favors helping more people over helping fewer people." Sure, but this is not a controversial aspect of your metric, and we do not need a metric at all to establish that it is better to help more people :-). The real issues occur when we need to prioritize projects that have different strengths and where different factors point in different directions (e.g. potential audience measured by speakers of a language, potential audience measured by literate speakers of a language with internet access, actual audience measured by page views, and so on; importance of the covered topic; uniqueness of the information compared to other available online sources; second-order effects such as the building of free knowledge communities; potential for translation of created knowledge; preservation of linguistic knowledge of small languages; preservation of local knowledge such as the cultural heritage of small communities; "freeing" knowledge that is hidden in archives, collections, or repositories behind paywalls; and so on). I still think that reliance on your metric would reduce this complexity too much, and that we need a rather broad interpretation of "the mission of the Wikimedia Foundation [...] to empower and engage people around the world". Of course, a lot depends on how we interpret this mission. I want WMF to embrace the diversity of goals and motivations in our movement. I want the mission of Wikimedia to develop at least partly "bottom-up" through the interests of active volunteers, and not to be directed entirely "top-down" by a rigid metric that can alienate volunteers who are judged to be "not important" (real-life example: "No one needs free knowledge in Esperanto"). I also think that a very narrow interpretation of our mission would hurt Wikimedia, because people with "fringe interests" that barely matter in your metric are an essential part of our community. However, you may have a different perspective on some of these issues. It is OK to disagree here, but it is important to clarify the premises of potential metrics. And it seems to me that your metric makes pretty substantial assumptions about what our mission should be.
(b) I think that you underestimate the problem here. Let me illustrate this by sketching an alternative metric. Let us start with the "sum of all knowledge" idea of your metric, but let us consider all knowledge resources that people have access to: Wikipedia, libraries, online search engines, school education, TV education, knowledgeable friends, books at home, and so on. We may visualize this by extending your one-dimensional representation of knowledge to a two-dimensional representation that involves Venn diagrams. Every circle represents a set of knowledge that is accessible through a specific resource, and the intersections of the circles represent the subsets of knowledge that are accessible through several resources. In analogy to your metric, the Venn diagrams and the intersections will vary for every individual (access to computers? access to libraries? which libraries? what school education? and so on). For every individual, this metric would represent (1) the total knowledge that is made accessible through Wikimedia (the full circle, i.e. the same as your metric) and (2) the knowledge that is uniquely made accessible through Wikimedia (the parts of the circle that have no intersection with other circles); see the sketch after this reply. I can think of many reasons to prefer a metric that considers both (1) and (2) over your metric, which only considers (1). It models the educational contribution of Wikimedia much more realistically. It is sensitive to core contributions that "free knowledge" by bringing it from archives or paywalls to public places such as Wikimedia. It recognizes the pioneering work of Wikipedias in indigenous languages that often have only a small body of published knowledge. It recognizes the importance of academic editors who write free knowledge articles on their narrow area of research. And so on. If we completely ignore (2) and only focus on (1), we lose all of that. Honestly, it strikes me as something that may be a good strategy for a company that is only focused on its own product, but seems pretty ill-suited for an educational charity that is primarily interested in the positive impact of its work on people.
Thanks again for starting this interesting conversation. David Ludwig (talk) 16:24, 1 September 2014 (UTC)[reply]
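A minimal set-based sketch of the Venn-style metric proposed in point (b) above; the resource names and knowledge items are invented for illustration.

```python
# Hypothetical sketch of the Venn-style metric proposed above. Each
# resource is the set of knowledge items one person can access; the
# resource names and items are invented for illustration.

resources = {
    "wikimedia": {"newton", "roman_empire", "local_heritage", "diphtheria"},
    "library":   {"newton", "roman_empire"},
    "search":    {"newton", "diphtheria"},
}

wikimedia = resources["wikimedia"]
others = set().union(*(s for name, s in resources.items() if name != "wikimedia"))

total_via_wikimedia = len(wikimedia)           # metric (1): the full circle
unique_to_wikimedia = len(wikimedia - others)  # metric (2): outside all other circles

print(total_via_wikimedia, unique_to_wikimedia)  # -> 4 1 (only "local_heritage" is unique)
```

Metric (2) is what rewards freeing knowledge that is locked away elsewhere, which metric (1) by itself cannot see.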

The Signpost · written by many · served by Sinepost V0.9 · 🄯 CC-BY-SA 4.0