• Cynexis Media

    I still prefer Google. I did the online challenge when it first came out, and I only picked Bing results once out of all my search queries.

  • http://www.cromiller.com/ Ryan Miller

    When taking on the challenge, I found I preferred Bing (marginally) around media / entertainment based queries. For basically any other searches, Google was the clear winner.

  • Durant Imboden

    The “Bing it on” campaign hasn’t done much (if anything) to improve Bing’s market share, so at this point it seems more a way to motivate the troops than to sway the consumer.

  • http://www.rokitseo.com/ Jon Cline

    Very interesting! Thank you for posting this! I felt this advertisement had to be slanted. I don’t really know too many people who prefer Bing. It’s got decent local listings though!

  • Jorge Montero

    I use Google, so it’s strange to learn that other people use Bing, but it’s also good to know, so we can figure out how to act on it.

  • http://www.mattwallaert.com/ matt wallaert

    (Note: I work at Bing, so take all this from that viewpoint.)

    A couple of notes are important before I talk about the claims themselves. Two separate claims have been used with the Bing It On challenge. The first is “People chose Bing web search results over Google nearly 2:1 in blind comparison tests”; we blogged about the method when we kicked off the Bing It On campaign in September 2012. In 2013, we updated the claim to “People prefer Bing over Google for the web’s top searches”. Now, on to Ayers’ issues and my explanations.

    First, Ayers is “annoyed” by the sample size, saying that 1,000 people is too few to obtain a representative sample on which to base a claim. Interestingly, he then links to a paper he put together with some students in which he also used a sample size of 1,000 people. He then subdivided that sample into thirds with different conditions and still managed to pass conventional statistical tests with this sample.

    So I’m confused: if he got significance, is it hard to understand that we might? A sample of 1,000 people doing the same task has more statistical power than a sample of 300 people doing the same task. That’s why statistics are so important: they help us understand whether the data we see is an aberration or a representation. A 1,000-person, truly representative sample is actually fairly large; as a comparison, the Gallup poll on presidential approval uses around 1,500 people.
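
    A quick back-of-the-envelope sketch makes the point (this is mine, not from the commissioned study; the only inputs are the “nearly 2:1” split and the sample sizes discussed above):

        import math

        def margin_of_error(p_hat, n, z=1.96):
            """95% margin of error for a sample proportion."""
            return z * math.sqrt(p_hat * (1 - p_hat) / n)

        # "Nearly 2:1" means roughly 667 Bing picks out of 1,000.
        p_hat = 667 / 1000
        for n in (1000, 333):  # the full sample vs. one third of it
            moe = margin_of_error(p_hat, n)
            print(f"n={n}: {p_hat:.1%} +/- {moe:.1%}")

        # n=1000: 66.7% +/- 2.9%
        # n=333:  66.7% +/- 5.1%
        # Either way, the interval sits well above the 50% "no preference" line.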

    Next, Ayers is bothered that we don’t release the data from the Bing It On site on how many times people choose Bing over Google. The answer here is pretty simple: we don’t release it because we don’t track it. Microsoft takes a pretty strong stance on privacy, and unlike in an experiment, where people give informed consent to having their results tracked and used, people who come to BingItOn.com are not agreeing to participate in research; they’re coming for a fun challenge. It isn’t conducted in a controlled environment, people are free to try to game it one way or another, and it has Bing branding all over it.

    So we simply don’t track their results, because the tracking itself would be incredibly unethical. And we aren’t basing the claim on the results of a wildly uncontrolled website, because that would also be incredibly unethical (and entirely unscientific). And on a personal side note: I’m assuming that the people in Ayers’ study were fully debriefed and their participation solicited under informed consent, as they were in our commissioned research.

    Ayers’ final issue is the fact that the Bing It On site suggests queries you can use to take the challenge. He contends that these queries inappropriately bias visitors towards queries that are likely to result in Bing favorability.

    First, I think it is important to note: I have no idea if he’s right. Because as noted in the previous answer, we don’t track the results from the Bing It On challenge. So I have no idea how using suggested queries versus self-generated queries affects the outcome, despite his suggestion that we knowingly manipulated the query suggestions, which seems to be pure supposition.

    Here is what I can tell you. We have the suggested queries because a blank search box, when you’re not actually trying to use it to find something, can be quite hard to fill. If you’ve ever watched anyone do the Bing It On challenge at a Seahawks game, there is a noted pause as people try to figure out what to search for. So we give them suggestions, which we source from topics that are trending now, on the assumption that trending topics are things that people are likely to have heard of and be able to evaluate results about.

    Which means that if Ayers is right and those topics are in fact biasing the results, it may be because we provide better results for current news topics than Google does. This is supported somewhat by the study done to arrive at the second claim: “the web’s top queries” were pulled from Google’s 2012 Zeitgeist report, which reflects a great deal of the timely news that occurred throughout that year.

    To be clear, in the actual controlled studies used to determine the claims we made, we used two different approaches to query selection. For the first claim (nearly 2:1), participants self-generated their own queries with no suggestions from us. For the second claim (web’s top queries), we suggested five queries, of which participants could select one. These five were randomly drawn from a list of roughly 500 taken from the Google 2012 Zeitgeist, and participants could easily get additional sets of five if they didn’t want to use the queries in the first set.
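
    Mechanically, that drawing procedure is just sampling without replacement (a sketch only; the placeholder pool below stands in for the actual ~500 Zeitgeist-derived queries, which I’m not reproducing here):

        import random

        # Placeholder pool standing in for the ~500 queries taken from
        # Google's 2012 Zeitgeist; the real list isn't reproduced here.
        query_pool = [f"zeitgeist query {i}" for i in range(500)]

        def suggest_queries(pool, k=5):
            """Draw k suggested queries at random, without replacement."""
            return random.sample(pool, k)

        first_set = suggest_queries(query_pool)   # participant picks one...
        second_set = suggest_queries(query_pool)  # ...or asks for a fresh set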

    There is just one more clarifying point worth making: Ayers noted that only 18% of the world’s searches go through Bing. This is actually untrue; because Bing powers search for Facebook, Siri, Yahoo, and other partners, almost 30% of the world’s searches go through Bing. And that number is higher now than it was a year ago. So despite his assertions, I’m happy to stand by Bing It On, both the site and the sentiment.

    (For those who have them, I’m always open to questions about the studies we conducted – feel free to shoot me a note or leave a comment. This invitation is also open to Ian and his students.)

  • http://www.mattwallaert.com/ matt wallaert

    Sorry, what leads you to believe that Bing It On hasn’t improved our share? Share has increased in the time since the program launched. That’s only correlational, but just as it doesn’t prove causation, it certainly doesn’t disprove it.

  • http://www.mattwallaert.com/ matt wallaert

    One possible reason for this is homophily: the tendency of people to group with people who are like them. Given that you’re technically sophisticated enough to be reading SEL, your friend group may not be a representative sample. =]

  • Chad Harris

    I think there are two sides that need to be taken into consideration. Google has become such a household name (“hey, Google that”) that people find themselves comfortable with the search results it produces. It’s not common for people to search on Bing the way they do on Google. It’s like going to Starbucks: everyone does it, so that must be what’s best, right? I do believe that Google produces great results for specific things, and Bing does as well. I use Google, Bing, Yahoo, and a bunch more, with no preference for one over the other. I’m an AdWords advertiser as well as a Bing advertiser, and Bing wins on conversions. So Bing is better for making money and Google is better for finding cat videos?

  • Durant Imboden

    According to comScore Media Metrix, Bing’s share of the U.S. search market has gone up slightly since the “Bing it on” campaign launched, but Google’s share is almost unchanged. The biggest source of additional Bing market share has been your search partner, Yahoo. Maybe your ad campaigns should be targeting Yahoo, whose users appear more willing to jump ship than Google’s are.

  • http://www.mattwallaert.com/ matt wallaert

    As a psychologist, that seems to me to rest on a flawed assumption: that a new user is equally likely to choose Bing or Google. In theory, because people tend to do what the majority does, Google’s search lead should grow, not shrink. If you’re in a tug of war against a larger opponent and you don’t lose ground, that’s a good thing.
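
    A toy simulation illustrates that majority-following argument (the conformity rule here is an assumption for the sake of illustration, not measured behavior): if each new user copies the current leader even slightly more often than its raw share would predict, the leader’s share grows rather than holding steady.

        import random

        def picks_leader(share, conformity=1.1):
            """New user picks the leader with a conformity-amplified probability."""
            p = min(1.0, share * conformity)  # assumed rule, not real data
            return random.random() < p

        leader_users, total = 670, 1000     # start near a 2:1 split
        for _ in range(100_000):            # new users joining the market
            if picks_leader(leader_users / total):
                leader_users += 1
            total += 1
        print(f"leader share: {leader_users / total:.1%}")  # drifts toward 100%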

  • Tyler Miller

    One thing I’m sure you only left out to keep your response short ;) is that Ayres’ sample “over-represents younger people, whites, and males relative to the general U.S. population” (Ayres et al., 2013). They go on to say that their sample is “consistent with a large Internet sample” taken, of course, in 2004. I think all readers of SEL know that a decade is a really long time on the internet; back then, it clearly skewed towards young people and men. This year my aunt wanted an iPad for her 99th birthday.

  • http://www.mattwallaert.com/ matt wallaert

    I tried to address only the things he brought up in his blog post, and not the paper itself. MTurk is a valid sample for many things, but not for anything related to tech (evidenced by the fact that in order to know about MTurk, you have to be in a 1% minority of people).

  • Alexander Trust

    If your link to his cv is right, then the man’s name is Ayres not Ayers. Freud helped you out with this one, I guess. :)

  • Moose

    You’re very touchy on the topic, lol.

  • Nicolas Garfinkel

    I don’t think Microsoft mentioned the surveyed population. If the study population they interviewed were active Bing users, I bet they could replicate that 2:1 result quite easily. However, I also don’t know what kind of impact this campaign had on people’s search behavior. My guess would be a negligible amount.