Online Materials A

Descriptive Statistics for the Observed Data in the Main Experiments 1–7

Table OA1
Number of Objects in Experiments 1–7

Experiment, objects | Unrecognized objects | Tartle objects | Knowledge objects | N participants
1, Cities | 97.00 (4.59) | 45.10 (3.08) | 72.35 (3.84) | 49
2, Cities | 84.77 (3.51) | 44.82 (2.28) | 79.56 (3.61) | 71
3, Cities | 96.95 (3.69) | 45.47 (2.84) | 71.51 (3.86) | 55
4, Countries | 11.60 (1.76) | 46.25 (5.77) | 91.40 (7.80) | 20
5, Companies | 35.81 (2.71) | 6.29 (0.84) | 32.33 (1.85) | 21
6, Diseases | 24.95 (1.92) | 7.70 (1.01) | 16.30 (1.06) | 20
7, Politicians | 106.74 (4.24) | 12.63 (1.64) | 47.74 (3.25) | 19
Note. Cell entries are mean numbers of objects (SE in parentheses). Numbers of objects were computed for each participant individually and then averaged across participants.

Table OA2
Time It Took Participants to Judge an Object as Recognized or Unrecognized in the Recognition Tasks of Experiments 1–7

Experiment, objects | Unrecognized objects | Tartle objects | Knowledge objects
1, Cities | .87 (.03) | .83 (.03) | .66 (.01)
2, Cities | .89 (.03) | .81 (.02) | .64 (.01)
3, Cities | .74 (.02) | .72 (.02) | .61 (.01)
4, Countries | 1.13 (.08) | .75 (.04) | .60 (.03)
5, Companies | 1.19 (.16) | 1.17 (.09) | .77 (.03)
6, Diseases | 1.39 (.12) | 1.27 (.15) | .81 (.03)
7, Politicians | .93 (.04) | 1.10 (.04) | .78 (.02)
Note. Cell entries are means of median times in seconds (SE in parentheses). Times on knowledge and tartle objects are referred to as recognition times in the main text. Times were computed as medians for each participant individually and then averaged across participants.

Table OA3
Number of Exhaustive Paired Comparisons of Two Objects in Experiments 1–7

Experiment, objects | Unrecognized pairs | Tartle–unrecognized pairs | Knowledge–unrecognized pairs | Tartle pairs | Tartle–knowledge pairs | Knowledge pairs
1, Cities | 962.71 (75.01) | 565.67 (34.37) | 641.57 (37.96) | 198.76 (25.91) | 421.78 (27.62) | 554.14 (45.95)
2, Cities | 765.58 (51.74) | 551.99 (31.63) | 614.48 (31.16) | 182.92 (17.62) | 429.96 (18.36) | 668.10 (48.58)
3, Cities | 935.53 (59.46) | 592.44 (35.38) | 637.29 (38.12) | 195.62 (20.37) | 405.24 (27.10) | 562.20 (47.80)
4, Countries | 91.00 (24.06) | 578.80 (119.66) | 907.35 (128.09) | 1,362.75 (288.87) | 3,575.05 (322.47) | 4,708.55 (702.67)
5, Companies | 696.57 (84.44) | 216.57 (32.23) | 1,073.10 (65.75) | 23.71 (6.58) | 197.52 (22.90) | 540.71 (65.19)
6, Diseases | 333.65 (38.70) | 179.70 (21.29) | 386.00 (33.43) | 35.55 (10.43) | 122.00 (17.18) | 135.35 (18.49)
7, Politicians | 5,804.47 (436.61) | 1,384.63 (184.96) | 5,018.21 (309.24) | 97.63 (25.34) | 551.00 (61.59) | 1,210.68 (166.41)
Note. Cell entries are mean numbers of exhaustive comparisons between two objects (SE in parentheses). Numbers of pairs were computed for each participant individually and then averaged across participants. Comparisons of cities are computed within a country.

Table OA4
Proportion of Correct Inferences Participants Made in Experiments 1–3

Experiment, object, criterion | Unrecognized pairs | Tartle–unrecognized pairs | Knowledge–unrecognized pairs | Tartle pairs | Tartle–knowledge pairs | Knowledge pairs
1, Cities, size | .55 (.02) | .70 (.02) | .81 (.02) | .61 (.05) | .71 (.02) | .73 (.02)
2, Cities, fame | .57 (.02) | .84 (.02) | .95 (.01) | .68 (.03) | .79 (.02) | .69 (.02)
3, Cities, size | .51 (.02) | .60 (.02) | .76 (.02) | .57 (.04) | .62 (.02) | .61 (.02)
Note. Cell entries are mean proportions of correct inferences (SE in parentheses). Proportions of correct inferences were computed for each participant individually and then averaged across participants. City size corresponds to the number of inhabitants of a city.
We operationalized city fame as the proportion of participants who recognized a city in Experiments 1, 2, and 3.

Table OA5
Inference Times in Experiments 1–3

Experiment, object, criterion | Unrecognized pairs | Tartle–unrecognized pairs | Knowledge–unrecognized pairs | Tartle pairs | Tartle–knowledge pairs | Knowledge pairs
1, Cities, size | 2.04 (.10) | 1.90 (.09) | 1.67 (.07) | 2.18 (.12) | 1.98 (.08) | 2.14 (.10)
2, Cities, fame | 1.96 (.09) | 1.62 (.06) | 1.38 (.03) | 1.79 (.06) | 1.50 (.04) | 1.62 (.06)
3, Cities, size | .93 (.01) | .92 (.01) | .89 (.01) | .94 (.02) | .88 (.01) | .91 (.01)
Note. Cell entries are means of median inference times in seconds (SE in parentheses). Inference times are the times it took participants to make an inference in the inference tasks. Median inference times were computed for each participant individually and then averaged across participants.

Online Materials B

Stimuli Used in the Experiments

Objects included cities (Experiments 1–3, 10), countries, companies, diseases, and politicians (Experiments 4–9). Counts of the number of websites in which an object's name occurred were produced by the search engine Yahoo on November 3, 2006 (Experiments 1–3), and on May 15, 2007 (Experiments 4–9).

Experiments 1–3, 10. City names (Table OB1) and statistics on city size were retrieved from http://www.citypopulation.de. We excluded the capitals from the retrieved lists of cities because Germans are likely to know for sure that, for instance, Paris and London are the largest cities in France and Great Britain, respectively. Such conclusive knowledge about the criterion (i.e., city size) allows people to deduce with certainty (rather than to infer under uncertainty) that the capital cities are larger than other cities (for a discussion of the differences between inferences and deductions, see Gigerenzer et al., 1991; Pachur & Hertwig, 2006). We additionally dropped the smallest of the British, U.S., and Austrian cities as well as the six smallest Italian cities from the lists of retrieved cities. Dropping these cities allowed us to divide the tasks in our experiments into even parts that were separated by a reasonable number of breaks.

Experiments 4–9. Country names (Table OB2) and statistics on countries' gross domestic product were retrieved from http://www.destatis.de (German Federal Statistical Office); company names (Table OB3) and statistics on companies' market capitalization from http://deutsche-boerse.com (German Stock Exchange); disease names (Table OB4) from http://www.rki.de (German Federal Agency for Disease Control and Prevention); and politicians' names (Table OB5) from http://de.wikipedia.org.

Modifications and exclusion of stimuli. The retrieved lists of names were slightly modified: (a) Where the spelling of names differed from the names in common usage, we used the more common spelling. (b) We deleted the Democratic Republic of Congo from the retrieved list of countries. Its name closely resembles that of another country, namely, the Republic of Congo. Many participants would not know that there are, in fact, two countries with the name Congo, and we did not want them to mistakenly believe that there was an error in the set of stimuli. To allow the tasks in our experiments to be divided into even parts separated by a reasonable number of breaks, we added Monaco, San Marino, Andorra, and Vatican City as filler stimuli to the list of country names. These four fillers were excluded from all analyses.
Moreover, one name from the country list, Great Britain, was excluded from all analyses because there was a typographical error in its name. Finally, the countries Myanmar and Cuba could not be included in all analyses, as we did not find information on these countries’ gross domestic product. (c) In the history of the Federal Republic of Germany, two politicians have shared the same name. We used this name only once. 7 ONLINE MATERIALS Table OB1 City Names (in German) Used in Experiments 1–3, 10 Cities Aachen Colombes Lille Prato Aberdeen Columbus Limoges Preston Albacete Córdoba Livorno Quimper Alcorcón Coventry Logroño Raleigh Alicante Crawley Lorca Rankweil Almería Créteil Lorient Ravenna Amiens Dallas Lübeck Reading Anaheim Denver Lustenau Reims Ancona Derby Luton Rennes Andria Detroit Mailand Rimini Angers Dijon Mainz Rostock Antibes Dornbirn Málaga Roubaix Arezzo Dortmund Mannheim Rouen Atlanta Drancy Marbella Sabadell Augsburg Dresden Mataró Salerno Aurora Dudley Memphis Salzburg Austin Duisburg Messina Sassari Avignon Dundee Miami Schwaz Avilés Elche Modena Seattle Badajoz Erfurt Mödling Sevilla Badalona Essen Monza Siracusa Baden Exeter Móstoles Slough Barletta Ferrara München Solingen Belfast Florenz Münster Spittal Bergamo Foggia Murcia Steyr Besançon Forlì Nancy Swansea Béziers Fresno Nanterre Swindon Bilbao Fürth Nantes Tampa Bludenz Genua Neapel Tarent Bochum Getafe Neuss Tarrasa Bologna Gijón Newark Telde Bolton Glasgow Newport Telfs Bordeaux Gmunden Nîmes Terni Boston Granada Nizza Ternitz Bottrop Grenoble Norwich Toledo Bourges Guecho Novara Toulon Bozen Hagen Nürnberg Toulouse Bradford Hallein Oakland Tours Braunau Hamburg Oldham Traun Bregenz Hannover Omaha Trient 8 ONLINE MATERIALS Cities Bremen Herne Orense Triest Brescia Hohenems Orléans Tucson Brest Honolulu Oviedo Tulln Brighton Houston Oxford Tulsa Brindisi Huelva Padua Turin Bristol Ipswich Palermo Udine Buffalo Kassel Pamplona Valence Burgos Köflach Parla Valencia Cáceres Krefeld Parma Venedig Cádiz Krems Perugia Verona Cagliari Kufstein Pesaro Vicenza Calais Latina Pescara Villach Cannes Lecce Phoenix Vitoria Cardiff Leeds Piacenza Walsall Casoria Leganés Pistoia Watford Catania Leipzig Plymouth Wichita Cesena Leoben Poitiers Wörgl Chemnitz Leonding Poole Würzburg Chicago Lérida Portland Zaragoza Colmar Lienz Potsdam Zwettl 9 ONLINE MATERIALS Table OB2 Country Names (in German) Used in Experiment 4 Countries Ägypten Iran Pakistan Albanien Irland Panama Algerien Island Papua-Neuguinea Andorra Israel Paraguay Angola Italien Peru Argentinien Jamaika Philippinen Armenien Japan Polen Aserbaidschan Jemen Portugal Äthiopien Jordanien Republik Kongo Australien Kambodscha Ruanda Bahamas Kamerun Rumänien Bahrain Kanada Russland Bangladesch Kap Verde Sambia Barbados Kasachstan San Marino Belgien Katar Saudi-Arabien Belize Kenia Schweden Benin Kirgisistan Schweiz Bhutan Kiribati Senegal Bolivien Kolumbien Serbien und Montenegro Bosnien und Herzegowina Komoren Sierra Leone Botsuana Kroatien Simbabwe Brasilien Kuba Singapur Brunei Darussalam Kuwait Slowakei Bulgarien Laos Slowenien Burkina Faso Lesotho Spanien Burundi Lettland Sri Lanka Chile Libanon Südafrika China Libyen Südkorea Costa Rica Litauen Sudan Côte d'Ivoire Luxemburg Suriname Dänemark Madagaskar Swasiland Deutschland Malawi Syrien Dominikanische Republik Malaysia Tadschikistan Dschibuti Malediven Taiwan Ecuador Mali Tansania El Salvador Malta Thailand Eritrea Marokko Togo Estland Mauretanien Trinidad und Tobago Finnland Mauritius Tschad Frankreich Mazedonien 
Tschechische Republik 10 ONLINE MATERIALS Countries Gabun Mexiko Tunesien Gambia Moldau Türkei Georgien Monaco Turkmenistan Ghana Mongolei Uganda Grenada Mosambik Ukraine Griechenland Myanmar Ungarn Großbritannien Namibia Uruguay Guatemala Nepal USA Guinea Neuseeland Usbekistan Guinea-Bissau Nicaragua Vatikanstadt Guyana Niederlande Venezuela Haiti Niger Vereinigte Arabische Emirate Honduras Nigeria Vietnam Indien Norwegen Weißrussland Indonesien Oman Zentralafrikanische Republik Irak Österreich Zypern 11 ONLINE MATERIALS Table OB3 Company Names Used in Experiment 5 Companies Aareal Bank Deutsche Postbank Infineon Technologies Premiere adidas Deutsche Telekom IVG Immobilien ProSiebenSat.1 Media Allianz Deutz IWKA Puma Altana Douglas K+S Rheinmetall AMB Generali E.ON Karstadt Quelle RHÖN-KLINIKUM AWD Holding EADS Klöckner RWE BASF Fraport Krones Salzgitter Bayer Fresenius LANXESS SAP Beiersdorf Fresenius Medical Care Leoni SGL Carbon Bilfinger Berger GAGFAH Linde Siemens BMW GEA Group MAN STADA Arzneimittel Celesio Hannover Merck Südzucker Commerzbank Rückversicherung METRO Symrise Continental HeidelbergCement MLP Techem DaimlerChrysler Heidelberger MTU Aero Engines ThyssenKrupp Druckmaschinen DEPFA BANK Münchener Rück TUI Deutsche Bank Henkel Norddeutsche Affinerie Volkswagen Deutsche Börse HOCHTIEF PATRIZIA Immobilien Vossloh Deutsche EuroShop Hugo Boss Pfleiderer Wacker Chemie Deutsche Lufthansa Hypo Real Estate Praktiker Bau- und WINCOR NIXDORF Deutsche Post IKB Dt. Industriebank Heimwerkermärkte 12 ONLINE MATERIALS Table OB4 Disease Names (in German) Used in Experiments 6 and 8 Diseases Adenovirus Giardiasis Legionellose Röteln Botulismus Haemophilus influenzae Lepra Salmonellose Brucellose Hämolytisch-urämisches Leptospirose Shigellose Syndrom Sonstige virale Campylobacter-Enteritis Listeriose Cholera Hantavirus-Erkrankung Malaria Creutzfeld-Jakob- Hepatitis A Masern Syphilis Krankheit Hepatitis B Meningokokken Tollwut Denguefieber Hepatitis C Milzbrand Toxoplasmose Diphtherie Hepatitis D Norovirus-Gastroenteritis Trichinellose E.-coli-Enteritis Hepatitis E Ornithose Tuberkulose Echinokokkose Hepatitis Non A-E Paratyphus Tularämie EHEC-Erkrankung HIV-Infektion Pest Typhus abdominalis Fleckfieber Influenza Poliomyelitis Yersiniose Frühsommer- Kryptosporidiose Q-Fieber Meningoenzephalitiss Läuserückfallfieber Rotavirus-Erkrankung hämorrhagische Fieber 13 ONLINE MATERIALS Table OB5 Politicians’ Names Used in Experiments 7 and 9 Politicians Konrad Adenauer Michael Glos Werner Maihofer Hans Schuberth Walter Arendt Johann Baptist Gradl Lothar de Maizière Irmgard Schwaetzer Egon Bahr Kurt Gscheidle Thomas de Maizière Werner Schwarz Siegfried Balke Dieter Haack Hans Matthöfer Elisabeth Schwarzhaupt Martin Bangemann Kai-Uwe von Hassel Erich Mende Christian Schwarz- Rainer Barzel Gerda Hasselfeldt Hans-Joachim von Gerhart Baum Volker Hauff Merkatz Hans-Christoph Seebohm Ernst Benda Helmut Haussmann Angela Merkel Horst Seehofer Wolfgang Mischnick Rudolf Seiters Schilling Christine Bergmann Gustav Heinemann Sabine Bergmann-Pohl Heinrich Hellwege Jürgen Möllemann Carl-Dieter Spranger Theodor Blank Hermann Höcherl Alex Möller Wolfgang Stammberger Franz Blücher Bodo Hombach Werner Müller Heinz Starke Norbert Blüm Antje Huber Franz Müntefering Frank Walter Steinmeier Kurt Bodewig Richard Jaeger Fritz Neumayer Peer Steinbrück Wolfgang Bötsch Gerhard Jahn Alois Niederalt Manfred Stolpe Friedrich Bohl Franz Josef Jung Wilhelm Niklas Gerhard Stoltenberg Jochen Borchert Jakob Kaiser Claudia Nolte Anton 
Storch Willy Brandt Manfred Kanther Theodor Oberländer Franz Josef Strauß Aenne Brauksiepe Hans Katzer Rainer Offergeld Käte Strobel Heinrich von Brentano di Ignaz Kiechle Rainer Ortleb Peter Struck Eduard Oswald Richard Stücklen Tremezzo Kurt Georg Kiesinger Ewald Bucher Klaus Kinkel Victor-Emanuel Preusker Rita Süssmuth Andreas von Bülow Hans Klein Karl Ravens Wolfgang Tiefensee Edelgard Bulmahn Reinhard Klimmt Günter Rexrodt Robert Tillmanns Wolfgang Clement Helmut Kohl Heinz Riesenhuber Klaus Töpfer Herta Däubler-Gmelin Waldemar Kraft Walter Riester Jürgen Trittin Rolf Dahlgrün Günther Krause Hannelore Rönsch Hans-Jochen Vogel Thomas Dehler Heinrich Krone Helmut Rohde Theodor Waigel Klaus von Dohnanyi Hans Krüger Volker Rühe Walter Wallmann Werner Dollinger Paul Krüger Jürgen Rüttgers Hansjoachim Walther Horst Ehmke Renate Künast Hermann Schäfer Jürgen Warnke Herbert Ehrenberg Karl-Hans Laermann Fritz Schäffer Karl Weber Hans Eichel Oskar Lafontaine Wolfgang Schäuble Herbert Wehner Hans A. Engelhard Manfred Lahnstein Rudolf Scharping Heinz Westphal Björn Engholm Otto Graf Lambsdorff Annette Schavan Ludger Westrick Erhard Eppler Lauritz Lauritzen Walter Scheel Heidemarie Wieczorek- Ludwig Erhard Georg Leber Karl Schiller Eberhard Wildermuth Zeul Josef Ertl Robert Lehr Otto Schily Franz Etzel Ursula Lehr Marie Schlei Hans Wilhelmi Andrea Fischer Ernst Lemmer Carlo Schmid Dorothee Wilms 14 ONLINE MATERIALS Politicians Joschka Fischer Hans Lenz Helmut Schmidt Heinrich Windelen Katharina Focke Hans Leussink Renate Schmidt Hans-Jürgen Wischnewski Egon Franke Sabine Leutheusser- Ulla Schmidt Matthias Wissmann Hans Friderichs Schnarrenberger Edzard Schmidt-Jortzig Manfred Wörner Anke Fuchs Ursula von der Leyen Jürgen Schmude Franz-Josef Wuermeling Karl-Heinz Funke Hermann Lindrath Kurt Schmücker Friedrich Zimmermann Sigmar Gabriel Heinrich Lübke Oscar Schneider Brigitte Zypries Heiner Geißler Paul Lücke Rupert Scholz Hans-Dietrich Genscher Hans Lukaschek Gerhard Schröder 15 ONLINE MATERIALS Online Materials C Perceptual-Motor Times In ACT-R, different actions take prescribed amounts of time. We wrote an ACT-R model that implemented two production rules to model the perceptual-motor times in the trials in the computerized recognition tasks in our experiments. In each trial, one object was shown at a time, and participants had to judge whether they recognized the object. Participants responded by pressing a key. When the first production rule, attend-encode, is fired, a request is made to attend and encode the object. It takes 50 msec to fire this production rule, and 85 msec to attend and encode the object. When the second production rule, press-key, is fired, a request is made to press the key. It takes 50 msec to fire this production rule, 250 msec to prepare the movement, 50 msec to initiate the action, and another 100 msec for the key to be struck. Thus, the total perceptual-motor time in a trial in the recognition tasks is 585 msec. Production rule attend-encode If the goal is to attend and encode an object and the visual location of the object is known and focused on, Then attend and encode the object and set the goal to press a key. Production rule press-key If the goal is to press a key, Then prepare the features of the movement, initiate the action, and press the key. Note that in each trial in the recognition tasks, prior to each presentation of an object, a small fixation cross was shown in the place where the object would subsequently appear. 
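As an aside, the 585-msec total quoted above is simply the sum of the prescribed component durations. A minimal sketch of this arithmetic follows; the constant names are ours, not ACT-R terminology.

```python
# A minimal sketch of the perceptual-motor time budget described above;
# all component durations (in seconds) are taken from the text.
FIRE_PRODUCTION = 0.050       # firing a production rule (attend-encode or press-key)
ATTEND_AND_ENCODE = 0.085     # attending to and encoding the object
PREPARE_MOVEMENT = 0.250      # preparing the features of the key-press movement
INITIATE_ACTION = 0.050       # initiating the action
STRIKE_KEY = 0.100            # the key being struck

total = (FIRE_PRODUCTION + ATTEND_AND_ENCODE      # attend-encode rule and its request
         + FIRE_PRODUCTION + PREPARE_MOVEMENT     # press-key rule and movement preparation
         + INITIATE_ACTION + STRIKE_KEY)
print(round(total, 3))  # 0.585 sec, i.e., 585 msec per recognition trial
```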
Participants were instructed to always fixate on this cross when it appeared. The production rule attend-encode takes this into account by assuming that a participant would already know the visual location of an object.

Online Materials D

Detailed Descriptions of Simulations 1–11 (Main Text) and C1–C3 (Appendix C)

The summaries of our simulations are necessarily incomplete, simplified descriptions of computer code. We invite interested readers to contact us for implementation details. In Simulations 2, 3, 9, C1, C2, and C3, each run of our memory model can be thought of as generating one hypothetical person's recognition responses, knowledge responses, and recognition times for each of the objects considered in the simulation. In those simulations that additionally involve the timing model, this hypothetical person's predicted data are then fed into the timing model in the same way as the observed data are fed into it. That is, the simulation structure imposed by the timing model on the data predicted by the memory model is always identical to the structure imposed on our observed data.

Using the Ecological Memory Model to Predict Memory Performance From the Environment: Simulation 1

In Simulation 1, we examined whether environmental data enables our memory model to account for the probabilities of recognizing objects and retrieving knowledge about them, as well as for the associated recognition time distributions.

Model Calibration. In ACT-R, a chunk's activation equals the log odds (ln[PR/(1−PR)]) that a chunk is needed in the current context. We calibrated the memory model to log recall odds, which indirectly calibrates log need odds. To calibrate the memory model to the observed recognition probabilities of Experiment 1, we transformed them and Equation 5 into their log-odds forms. In doing so, to avoid division by zero (where PR = 1) and infinity (where PR = 0), we corrected PR = 1 to PR − 1/(2N) and PR = 0 to PR + 1/(2N) (N = number of participants in Experiment 1). Using the log-odds form of the observed recognition probabilities and of Equation 5, we estimated the constant, cR (-8.52), the total retrieval noise, s (.83), and the scaling parameter, bR (.70), in a nonlinear regression analysis. We anchored the activation scale by setting the expected value of the retrieval criterion distribution, τ, to zero; an object with an activation of 0 would have a 50% chance of being retrieved. (This parameter can be arbitrarily set; the memory model's fit in the regression analysis does not depend on it.) With these parameters fixed, in a second calibration step we transformed the observed knowledge probabilities of Experiment 1 and Equation 6 into their log-odds forms and estimated the constant cK (-9.93) and the scaling parameter bK (.68) in a nonlinear regression analysis. In a final calibration step, we calibrated the memory model to the recognition time distributions of Experiment 1, striking a balance between fitting the distributions' 25th, 50th, and 75th percentiles by informally searching the parameter space (i.e., through visual inspection of Figure 4). Specifically, to calibrate the memory model, we estimated those parts of each object's activation distribution that fell above the retrieval criterion, and we computed a retrieval time distribution from it (i.e., using Equation 7). This is the retrieval time distribution given that the object is retrieved (cf. Equation 4).
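The boundary correction and log-odds transformation used in the calibration steps above might be sketched as follows; this is an illustrative fragment rather than the original calibration code, and the function name and example values are ours.

```python
import numpy as np

def corrected_log_odds(p, n_participants):
    """ln(p / (1 - p)), with p = 1 corrected to p - 1/(2N) and p = 0 corrected
    to p + 1/(2N) to avoid division by zero and taking the log of zero."""
    p = np.asarray(p, dtype=float)
    c = 1.0 / (2 * n_participants)
    p = np.where(p >= 1.0, p - c, p)
    p = np.where(p <= 0.0, p + c, p)
    return np.log(p / (1.0 - p))

# Example: hypothetical observed recognition probabilities, with N = 49 (Experiment 1)
print(corrected_log_odds([1.0, 0.75, 0.10, 0.0], n_participants=49))
```

The transformed probabilities, together with the log-odds form of Equation 5, would then enter the nonlinear regression that yields cR, s, and bR.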
We assume there is not just one retrieval criterion, but rather a probability distribution of retrieval criteria (Appendix B). Therefore, we split the total retrieval noise estimated previously from the observed recognition probabilities into criterion noise and activation noise, which determine the shape of the retrieval criterion and activation distributions, respectively. We then computed retrieval time distributions across the retrieval criterion distribution. In doing so, we estimated the criterion noise, sτ (.60), and the scaling parameter F (.49). The activation noise is fixed (sA = .58) once sτ is estimated (Equation 8), and the parameter values for cR and bR were set to the values estimated previously from the observed recognition probabilities. We assume recognition times are a function of retrieval times plus perceptual-motor times (Equation 7). We set the standard deviation of the perceptual-motor time distribution to .12 sec, which is a value we estimated in conjunction with the other parameters (F, sτ).

Data Predicted by the Memory Model. To test how well the memory model predicts behavior, we used Equations 5–8 with the fixed parameters to predict the recognition and knowledge probabilities (PR and PK, respectively) as well as the recognition times in Experiments 2–9.

Quantifying Cognitive Niches: Simulation 2

In Simulation 2, we quantified the overlap between the niches of the fluency heuristic, the recognition heuristic, and knowledge-based strategies.

Applying the Timing Model to the Observed Data. For each participant of Experiments 2–7, objects were exhaustively paired and grouped into bins according to the ranks of each object's environmental frequency, as measured by its web frequency. There were 50 bins for cities and 20 bins for countries, companies, diseases, and politicians. (We tried to allocate the same number of data points to each bin. Where the total number of data points was not divisible by the number of bins, the sizes of some bins were increased to accommodate one extra data point. Which bins' sizes were increased was determined at random.) For each pair in each bin, we tested which strategy the participant would have been able to apply to make inferences about that pair.

Fluency heuristic. To test whether the participant would have been able to apply the fluency heuristic, we first checked if both objects in a pair were recognized. If so, the probability of a person being able to apply the fluency heuristic is the probability of detecting a difference in recognition times. To estimate this detection probability, PD, we fed the participant's recognition times into the timing model (Equation 9). In each run of the timing model, we let it count the number of pulses associated with the person's recognition time for each of the objects in a pair. Across runs of the timing model, for each pair, we counted how often a difference in pulses was detected and how often it was not detected. For each pair of recognized objects, we computed the detection probability as the proportion of times a difference was detected, using Equation 10. If either one or both objects in a pair are unrecognized, by definition, the probability of a person being able to apply the fluency heuristic is zero. Across all pairs in a bin, we averaged the probabilities of the participant being able to apply the fluency heuristic. We averaged the probabilities across participants.

Knowledge-based strategies.
Similarly, for each participant, we estimated the probability that this individual would have been able to apply a knowledge-based strategy, assuming that knowledge-based strategies, such as those listed in Table 1, are applicable when knowledge is available about at least one object in a pair. Across the pairs in a bin, we computed this probability as the proportion of pairs for which knowledge was available. As for the fluency heuristic, we averaged the probabilities across participants. Recognition heuristic. Finally, across the pairs in each bin, we computed the probability that the participant would have been able to apply the recognition heuristic as the proportion of pairs in which the participant recognized one object but not the other. As for the fluency heuristic and knowledge-based strategies, we averaged the probabilities across participants. Applying the Timing Model to the Data Predicted by the Memory Model. To generate the predicted data, we ran the memory model 1,500 times, creating 1,500 hypothetical persons’ predicted recognition and knowledge responses. Specifically, according to the predicted recognition probability, PR, we determined whether a hypothetical (i.e., simulated) person would recognize an object and determined the object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution. If the 20 ONLINE MATERIALS object was recognized, then according to the predicted knowledge probability, PK, we determined whether that hypothetical person would additionally indicate knowing something about it. Each hypothetical person’s objects were then exhaustively paired and grouped into bins according to the ranks of each object’s environmental frequency. The binning procedure was the same as the one used for the observed data. For each pair in each bin, we tested which strategy a hypothetical person would have been able to apply to make inferences about that pair. Fluency heuristic. To test whether a hypothetical person would have been able to apply the fluency heuristic, we fed the predicted recognition times into the timing model (Equation 9), using this model to process the hypothetical person’s predicted data in the same way as we processed our experimental participants’ observed data. Specifically, we used Equation 10 to compute the predicted detection probability, PD, which is the predicted probability of a hypothetical person being able to apply the fluency heuristic on predicted pairs of two recognized objects. If a hypothetical person did not recognize either one or both objects in a pair, by definition, the predicted probability of the hypothetical person being able to apply the fluency heuristic is zero on this pair. Across all pairs in a bin, we averaged the predicted probabilities of a hypothetical person being able to apply the fluency heuristic. We averaged the predicted probabilities across hypothetical persons. Knowledge-based strategies. As we did for the observed data, across the pairs in each bin, we also computed the predicted probability that a hypothetical person would have been able to apply knowledge-based strategies, processing the hypothetical person’s data in the same way as we processed the experimental participants’ data. Specifically, we estimated the probability of a hypothetical person having been able to apply knowledge-based strategies, assuming that knowledge-based strategies are applicable when knowledge is available about at 21 ONLINE MATERIALS least one object in a pair. 
Across the pairs in a bin, for each hypothetical person we computed this probability as the proportion of predicted pairs for which knowledge was available. As for the fluency heuristic, we averaged this predicted probability across hypothetical persons. Recognition heuristic. Finally, across the pairs in each bin, we computed the predicted probability that a hypothetical person would have been able to apply the recognition heuristic as the proportion of pairs in which the hypothetical person recognized one object but not the other. As for the fluency heuristic and knowledge-based strategies, we averaged the predicted probabilities across hypothetical persons. When Does the Fluency Heuristic Help Make Accurate Inferences? Simulation 3 In Simulation 3, we predicted how the magnitude of the fluency validity changes as a function of the detection probability, PD. To this end, we applied the timing model to the data observed in Experiments 2–7 and used both this model and our memory model to generate the predicted data. In doing so, we predicted validities for inferring cities’ size, countries’ gross domestic product in 2006, companies’ market capitalization on May 31, 2007, diseases’ fame, and politicians’ fame. We operationalized fame as the proportion of participants who recognized a disease in Experiments 6 and 8, and a politician in Experiments 7 and 9, respectively. (Note that our use of the validity equation excludes objects with equal criterion values.) Applying the Timing Model to the Observed Data. The simulation (Equations 9–11) can be broken into two parts. First, exhaustively pairing the objects, we used the timing model to estimate for each of each participant’s pairs of two recognized objects the detection probability, PD, that the participant would have been able to detect a difference in recognition times between the objects. This is the probability that a person would have been capable of applying the fluency heuristic. Second, according to the detection probabilities, we grouped 22 ONLINE MATERIALS each participant’s tartle, tartle–knowledge, and knowledge pairs into four bins, running the timing model a second time to estimate the magnitude of the fluency validity, vfh, in each bin. (Note that in this and all subsequent analyses involving cities as objects, pairs are made up of cities from the same country. This is also true for the data analysis shown in Figure 8.) To estimate the detection probability, PD, in each of a first set of runs of the timing model, for each of each participant’s pairs of objects, we checked whether a participant would have detected a difference in the numbers of pulses between two objects. For each of each participant’s pairs of objects, we computed PD as the proportion of times a difference would have been detected across this first set of runs of the timing model. To estimate the fluency validity, vfh, in each of a second set of runs of the timing model, for each of each participant’s pairs of objects, we let the timing model generate the number of pulses that participant would have counted while recognizing each of the two objects. We let the timing model then compare these numbers of pulses. In each run of this second set of runs of the timing model, for the pairs where a participant would have detected a difference in the numbers of pulses, we checked whether the participant would have made a correct or an incorrect inference if that person had inferred the object with the smaller number of pulses to score a higher value on the criterion. 
The fluency validity, vfh, is the proportion of times the participant would have made a correct inference, computed across those pairs where the participant would have detected a difference in the numbers of pulses between two objects. This yields the fluency validity conditional on the participant having detected a difference in recognition times. Specifically for estimating the fluency validity, in each run of this second set of runs of the timing model, we used each participant’s responses in the recognition and general knowledge task to classify that participant’s pairs of objects into tartle, tartle–knowledge, and 23 ONLINE MATERIALS knowledge pairs. Within these three types of pairs, for each participant we grouped those pairs that would have allowed the participant to detect a difference in the numbers of pulses into four bins, arranged by quartiles of the previously computed (i.e., in the first series of runs of the timing model) detection probabilities, PD. (In this and all subsequent simulations involving quartiles of PD, the quartiles were approximated by first ordering the pairs according to PD and then splitting the pairs into four equal parts; where the total number of data points was not divisible by the number of bins, we first evenly allocated as many data points as possible to the four bins and then allocated the remaining data points to the last bin.) In each run of this second set of runs of the timing model, for each of the 3 × 4 bins (i.e., three types of pairs, four bins), we computed the average of the previously computed detection probability, PD, as well as the fluency validity, vfh. We then computed means (including standard errors) across participants. Finally, we averaged the variables across runs of the second series of runs of the timing model. Applying the Timing Model to the Data Predicted by the Memory Model. To generate the combined predictions of the memory model and the timing model, we ran another simulation using Equations 5–11. The simulation of the memory model was run 1,500 times, creating 1,500 hypothetical persons’ predicted recognition and knowledge responses. For each of these 1,500 hypothetical persons, we ran the same two sets of runs of the timing model we had also run for the observed data. Specifically, the total simulation comprised three steps. First, we used our memory model to generate hypothetical persons’ predicted recognition responses, knowledge responses, and recognition times. Second, we used the timing model to compute for each of each hypothetical person’s pairs of two recognized objects the predicted detection probability, PD, that the hypothetical person would have been able to detect a difference in predicted recognition times between the objects. Third, according to the predicted detection probabilities, 24 ONLINE MATERIALS we grouped each hypothetical person’s predicted tartle, tartle–knowledge, and knowledge pairs into four bins, arranged by quartiles, running the timing model a second time to compute the magnitude of the predicted fluency validity, vfh, in each bin. First, in each run of the memory model, according to the predicted recognition probability, PR, we determined whether a hypothetical person would recognize an object. If the object was recognized, then, according to the predicted knowledge probability, PK, we determined whether that person would additionally know something about it. 
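The per-object sampling step just described might look as follows in outline; the probabilities and the lognormal recognition-time distribution below are placeholders (in the simulations proper, PR, PK, and the recognition time distribution come from Equations 5–8).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_person(objects):
    """One run of the memory model, i.e., one hypothetical person: for each
    object, decide whether it is recognized (probability p_r), whether something
    is known about it given recognition (probability p_k), and draw a predicted
    recognition time for recognized objects."""
    responses = {}
    for name, (p_r, p_k, draw_time) in objects.items():
        recognized = rng.random() < p_r
        knows = recognized and rng.random() < p_k
        t_recognition = draw_time() if recognized else None
        responses[name] = {"recognized": recognized, "knows": knows,
                           "t_recognition": t_recognition}
    return responses

# Hypothetical usage with made-up probabilities and recognition-time distributions
objects = {
    "CityA": (0.90, 0.70, lambda: rng.lognormal(mean=-0.30, sigma=0.25)),
    "CityB": (0.40, 0.20, lambda: rng.lognormal(mean=-0.10, sigma=0.25)),
}
print(simulate_person(objects))
```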
In each run of the memory model, that is, for each hypothetical person, we then exhaustively paired objects into that hypothetical person’s predicted tartle, tartle–knowledge, and knowledge pairs and also determined each object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution. Second, for each hypothetical person, we further used the timing model to compute the predicted detection probability, PD, that this hypothetical person would have detected a difference in predicted recognition times between two objects. To compute PD, in each run of a first set of runs of the timing model for each pair of objects, we checked whether a hypothetical person would have detected a difference in the predicted numbers of pulses between the two objects. We computed PD as the proportion of times a difference would have been detected across this first set of runs of the timing model. Third, to compute the predicted fluency validity, vfh, in each run of a second set of runs of the timing model, for each pair of objects, we let the timing model generate the predicted number of pulses a hypothetical person would have counted while recognizing each of the two objects in a pair in that run. We let the timing model then compare these predicted numbers of pulses. As for the observed data, the predicted fluency validity is the proportion of times the hypothetical person would have made a correct inference, computed across those predicted 25 ONLINE MATERIALS pairs where the person would have detected a difference in the predicted numbers of pulses between two objects. Specifically, for computing the predicted fluency validity in each run of this second set of runs of the timing model, within each hypothetical person’s predicted tartle, tartle– knowledge, and knowledge pairs we grouped all pairs into four bins by ordering the pairs according to the previously computed (i.e., in the first series of runs of the timing model) predicted detection probabilities, PD. Bins were arranged by quartiles in the same way as the observed data. In each run of this second set of runs of the timing model, for each of the 3 × 4 bins, we averaged the previously computed predicted detection probabilities, PD, as well as the predicted fluency validity, vfh. In each run of this second set of runs of the timing model, we then computed means (including standard errors) across hypothetical persons. Finally, we averaged the variables across the second set of runs of the timing model. The Fluency Validity and the Knowledge-Based Strategies’ Validities: Simulation 4 In Simulation 4, we examined the magnitude of the fluency validity and the validities of each of the six knowledge-based strategies on pairs of objects in which one of the knowledgebased strategies and the fluency heuristic were both applicable. We computed the validities in this situation of overlapping cognitive niches as a function of the detection probability, PD, of a person being able to apply the fluency heuristic. Applying the Timing Model to the Observed Data. To examine which strategy would help a person make the most accurate inferences, we ran a simulation, using Equations 9–11 and 13 on the data observed in Experiments 1 and 3. The simulation can be broken into two parts. 
First, exhaustively pairing the objects, we used the timing model to compute for each participant’s pairs of two recognized objects the detection probability, PD, that the participant would have detected a difference in recognition times and hence been capable of applying the 26 ONLINE MATERIALS fluency heuristic. Second, we reran the timing model to assess for each of each participant’s tartle–knowledge and knowledge pairs whether that participant would be able to apply the fluency heuristic as well as a knowledge-based strategy in that run. For each participant we grouped the tartle–knowledge and knowledge pairs where one of the knowledge-based strategies was applicable along with the fluency heuristic into four bins according to the previously computed detection probabilities, PD. We computed all strategies’ validities (vfh, vt1, vt2, vttb, vttfc, vttfv1, vttfv2) across the pairs in each bin. Specifically, to compute the detection probability, PD, in each of a first set of runs of the timing model, for each of each participant’s pairs of objects, we checked whether that participant would have detected a difference in the numbers of pulses between two objects. For each of each participant’s pairs of objects, we computed PD as the proportion of times a difference would have been detected across this first set of runs of the timing model. To assess whether a participant would have been able to apply the fluency heuristic, in each run of a second set of runs of the timing model, for each of each participant’s tartle– knowledge and knowledge pairs, we let the timing model generate the number of pulses that participant would have counted while recognizing each of the two objects in a pair. For these pairs, we also checked whether that person would also have been able to apply a knowledgebased strategy. For take-the-best-cue, take-the-first-value1, and take-the-first-value2, this entailed letting the timing model generate the numbers of pulses the participant would have counted when retrieving the cue values. For each comparison between one of the six knowledge-based strategies and the fluency heuristic, we then selected those tartle–knowledge and knowledge pairs where the fluency heuristic and the respective knowledge-based strategy were both applicable. 27 ONLINE MATERIALS Within these tartle–knowledge and knowledge pairs we grouped all pairs into four bins, arranged by quartiles of the previously (i.e., in the first series of runs of the timing model) computed detection probabilities, PD. Quartiles were approximated as described in Simulation 3. For each of the 2 × 4 bins (i.e., two types of pairs, four bins), we calculated the mean detection probability, PD, as well as the validities (vfh, vt1, vt2, vttb, vttfc, vttfv1, vttfv2) for the fluency heuristic and the knowledge-based strategies and computed means (including standard errors) across participants. Finally, we averaged the variables across the second set of runs of the timing model. The Fluency Heuristic Accordance Rate and the Knowledge-based Strategies’ Accordance Rates: Simulation 5 In Simulation 5, we examined how well the fluency heuristic and each of six knowledge-based strategies predicted people’s inferences on the pairs of objects for which one of the knowledge-based strategies and the fluency heuristic were both applicable. We computed the strategies’ accordance rates in this situation of overlapping cognitive niches as a function of the detection probability, PD, of a person being able to apply the fluency heuristic. 
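An accordance rate of the kind computed here can be sketched as follows; the function is an illustration under the assumption that a strategy's prediction is recorded as the object it favors (or None where the strategy is not applicable), not a reproduction of the original analysis code.

```python
def accordance_rate(strategy_predictions, observed_inferences):
    """Proportion of a participant's observed inferences that agree with the
    strategy's predicted choice, computed over the pairs on which the strategy
    was applicable (pairs with a prediction of None are skipped)."""
    applicable = [(pred, obs)
                  for pred, obs in zip(strategy_predictions, observed_inferences)
                  if pred is not None]
    if not applicable:
        return float("nan")
    return sum(pred == obs for pred, obs in applicable) / len(applicable)

# Hypothetical usage: three pairs; the strategy is not applicable on the second
print(accordance_rate(["CityA", None, "CityC"], ["CityA", "CityB", "CityD"]))  # 0.5
```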
(Note that in this and all other simulations involving accordance rates or other measures that depend on participants’ actual behavior—i.e., inference times, proportion of correct inferences—we focused only on those pairs of objects the participant had actually seen in the inference task. In contrast, simulations involving measures—i.e., validities—that do not depend on participant’s actual behavior were run by exhaustively pairing objects with each other.) Applying the Timing Model to the Observed Data. To examine how well the strategies account for people’s inferences, we ran a simulation with Equations 9, 10, 12, and 13 on the data observed in Experiment 1. The simulation can be broken into two parts. First, as in Simulation 4, in a first series of runs of the timing model, we computed for each participant’s 28 ONLINE MATERIALS pairs of two recognized objects the detection probability, PD, that the participant would be able to apply the fluency heuristic. Second, as in Simulation 4, we ran a second series of runs of the timing model to assess for each of each participant’s tartle–knowledge and knowledge pairs whether that participant would be able to apply the fluency heuristic as well as each of the six knowledge-based strategies. Then, diverging from Simulation 4, in each of this second series of runs of the timing model, we computed the competing strategies’ accordance rates, kfh, kt1, kt2, kttb, kttfc, kttfv1, kttfv2 (rather than their validities), on those tartle–knowledge and knowledge pairs where each of the six knowledge-based strategies, respectively, was applicable simultaneously with the fluency heuristic. To this end, we grouped these pairs into four bins, arranged by quartiles of the previously (i.e., in the first series of runs of the timing model) computed detection probabilities, PD. Quartiles were approximated as described in Simulation 3. For each of the 2 × 4 bins (i.e., two types of pairs, four bins), we calculated the mean detection probability, PD, as well as each of the competing strategies’ accordance rate and computed means (including standard errors) across participants. As in Simulation 4, we averaged the variables across the second series of runs of the timing model. Do People Adopt the Fluency Heuristic When They Cannot Use Knowledge? Simulation 6 In Simulation 6, we examined the magnitude of the fluency heuristic accordance rate on tartle pairs in Experiment 2, in which participants (i.e., in the instruct group) were instructed to always use the fluency heuristic when inferring which of two cities is recognized by more students. In addition, we examined the magnitude of the fluency heuristic accordance rate on tartle pairs in Experiment 1. Applying the Timing Model to the Observed Data. To model people’s inferences with the fluency heuristic, we modified the design of Simulation 5 (Equations 9, 10, and 12). First, in a first series of runs of the timing model, we computed for each participant’s pairs of 29 ONLINE MATERIALS tartle objects the detection probability, PD, that the participant would be able to apply the fluency heuristic. Second, in each run of a second series of runs of the timing model, we assessed for each participant’s tartle pairs (rather than tartle–knowledge and knowledge pairs as in Simulation 5) which city the participant would infer to score a larger value on the criterion. 
In doing so, we computed the fluency heuristic accordance rate (kfh) on the tartle pairs conditional on the fluency heuristic being applicable (rather than conditional on this heuristic being applicable simultaneously with the other strategies, as in Simulation 5). We grouped each participant’s tartle pairs in which the participant would have been able to apply the fluency heuristic into four bins according to the previously computed (i.e., in the first series of runs of the timing model) detection probabilities, PD. Bins were arranged by quartiles of the previously computed detection probabilities, PD. Quartiles were approximated as described in Simulation 3. As in Simulation 5, for each of the four bins, we computed the mean detection probability, PD, as well as the accordance rate (kfh) for the fluency heuristic and computed means (including standard errors) across participants. Finally, we averaged the variables across the second set of runs of the timing model. When Is the Fluency Heuristic Easy to Use? Simulation 7 In Simulation 7, we considered participants in the instruct group of Experiment 2. In the instruct group, participants were instructed to always apply the fluency heuristic when inferring which of two cities was recognized by more students. We examined (a) how well the fluency heuristic predicted participants’ inferences, (b) the proportion of correct inferences they made, and (c) the time it took them to make an inference. We computed these behavioral data as a function of the detection probability, PD, of a person being able to apply the fluency heuristic. Applying the Timing Model to the Observed Data. To model people’s inferences with the fluency heuristic, we modified the design of Simulation 5 (Equations 9, 10, 12). As in 30 ONLINE MATERIALS Simulation 5, in a first set of runs of the timing model, we computed for each participant’s pairs of two recognized cities the detection probability, PD, that the participant would have been able to detect a difference in recognition times and apply the fluency heuristic. Deviating from Simulation 5, in a second set of runs of the timing model, we then computed three kinds of behavioral data conditional on a participant having been able to apply the fluency heuristic. First, for each pair of recognized cities, we used the timing model to assess which city the participant would infer was recognized by more students, assuming the participant had used the fluency heuristic. Second, on those pairs where the participant had made an inference consistent with the fluency heuristic, we also examined how many correct inferences the participant had made, taking the proportion of participants from Experiments 1, 2, and 3 who had recognized each city as a criterion for which city was recognized by more students. Third, on those pairs where the participant had made an inference consistent with the fluency heuristic, we furthermore assessed the time it took the participant to make an inference; we refer to this time as inference time. As in Simulation 5, in each run of this second set of runs of the timing model we then grouped the pairs into four bins, arranged by quartiles of the previously computed detection probabilities, PD. Quartiles were approximated as described in Simulation 3. For each of the four bins, we computed (a) the fluency heuristic accordance rate, (b) the proportion of correct inferences, and (c) the median inference time across the pairs in a bin. 
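The approximate quartile binning referred to here (introduced in Simulation 3) and the per-bin summary statistics might be sketched as follows; the data in the usage example are invented.

```python
import statistics

def approximate_quartile_bins(items, key, n_bins=4):
    """Order items by the given key (e.g., the detection probability PD), give
    each bin the same base number of items, and let the last bin absorb any
    remainder, as described in Simulation 3."""
    ordered = sorted(items, key=key)
    base = len(ordered) // n_bins
    bins = [ordered[i * base:(i + 1) * base] for i in range(n_bins - 1)]
    bins.append(ordered[(n_bins - 1) * base:])
    return bins

# Hypothetical usage: five pairs carrying a detection probability, a flag for
# whether the observed inference accorded with the fluency heuristic, a flag for
# whether it was correct, and an observed inference time in seconds.
pairs = [
    {"pd": 0.15, "accords": True, "correct": False, "time": 2.3},
    {"pd": 0.40, "accords": True, "correct": True, "time": 1.9},
    {"pd": 0.25, "accords": False, "correct": True, "time": 2.1},
    {"pd": 0.80, "accords": True, "correct": True, "time": 1.4},
    {"pd": 0.60, "accords": True, "correct": True, "time": 1.7},
]
for b in approximate_quartile_bins(pairs, key=lambda p: p["pd"]):
    if b:  # a bin can be empty when there are fewer pairs than bins
        print(statistics.mean(p["pd"] for p in b),       # mean detection probability
              statistics.mean(p["accords"] for p in b),  # accordance rate
              statistics.mean(p["correct"] for p in b),  # proportion correct
              statistics.median(p["time"] for p in b))   # median inference time
```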
As in Simulation 5, for each of the bins, we also computed the mean detection probability, PD. In each run of this second set of runs of the timing model, we then computed means (including standard errors) across participants. Finally, we averaged the variables across runs of the second series of runs of the timing model. 31 ONLINE MATERIALS When Is the Fluency Heuristic Fast to Use? Simulation 8 Simulation 8 is identical to Simulation 5, except that we ran the former on the data observed in Experiment 3, and the latter on the data observed in Experiment 1. In both simulations, we calculated the fluency heuristic’s and each of the six knowledge-based strategies’ accordance rates conditional on the fluency heuristic and a knowledge-based strategy both being applicable, plotting each strategy’s accordance rate as a function of the probability, PD, that a person would have been able to detect a difference in recognition times and apply the fluency heuristic. Recognition Validity as a Function of Perceived Recognition Times: Simulation 9 In Simulation 9, we predicted the recognition validity for inferring cities’ size, countries’ gross domestic product in 2006, companies’ market capitalization on May 31, 2007, diseases’ fame, and politicians’ fame in Experiments 2–7. To this end, we used our memory model in conjunction with the timing model. Applying the Timing Model to the Observed Data. To compute the observed recognition validity, vrh, we ran a simulation using Equations 9 and 11. Using participants’ responses in the recognition and general knowledge tasks in Experiments 2–7, we exhaustively paired each participant’s objects into tartle–unrecognized and knowledge–unrecognized pairs. Then, for each participant’s recognized objects, we let the timing model generate the number of pulses that participant would have counted while recognizing each object. We grouped each participant’s tartle–unrecognized and knowledge–unrecognized pairs into four bins according to the numbers of pulses accumulated for the recognized objects. Bins were arranged by quartiles of the numbers of pulses. Quartiles were approximated as described in Simulation 3. In each of the 2 × 4 bins (two types of pairs, four bins), we calculated the mean number of pulses as well as the recognition validity (vrh) for each participant. We computed means 32 ONLINE MATERIALS (including standard errors) across participants. Finally, we averaged the variables across runs of the timing model. Applying the Timing Model to the Data Predicted by the Memory Model. To generate the combined predictions of our memory model and the timing model, we ran another simulation using Equations 5–9 and 11. The simulation of the memory model was run 1,500 times, generating 1,500 hypothetical person’s predicted recognition and knowledge responses. For each of these 1,500 hypothetical persons, we ran the same set of runs of the timing model we had also run for the observed data. That is, first, in each run of the memory model, according to the predicted recognition probability, PR, we determined whether a hypothetical person would recognize an object. If the object was recognized, then, according to the predicted knowledge probability, PK, we determined whether that hypothetical person would additionally know something about it. In each run, that is, for each hypothetical person, we also determined each object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution. 
In each run, we then exhaustively paired these objects into predicted tartle– unrecognized and knowledge–unrecognized pairs. Second, for each hypothetical person’s predicted recognized objects, we let the timing model generate the predicted number of pulses that participant would have counted while recognizing each object. We grouped each hypothetical person’s predicted tartle–unrecognized and knowledge–unrecognized pairs into four bins according to the predicted numbers of pulses accumulated for the recognized objects. Bins were arranged by quartiles of the predicted numbers of pulses. Quartiles were approximated as described in Simulation 3. For each of the 2 × 4 bins (two types of pairs, four bins), we calculated the mean predicted number of pulses, as well as the predicted recognition validity. We computed means (including standard errors) 33 ONLINE MATERIALS across runs of the memory model, that is, across hypothetical persons. Finally, we averaged the variables across runs of the timing model. When Is the Recognition Heuristic Easy and Fast to Use? Simulation 10 In Simulation 10, in Experiment 1 and in the no-instruct group of Experiment 2, we examined (a) the magnitude of the recognition heuristic accordance rate, (b) the proportion of correct inferences participants made, and (c) the time it took them to make an inference. We computed these behavioral data as a function of a person’s perceived recognition times. Applying the Timing Model to the Observed Data. The simulation (Equations 9, 12) is similar to the way we processed the observed data in Simulation 9. Using participants’ responses in the recognition and general knowledge tasks in Experiment 1 and the no-instruct group of Experiment 2, we grouped each participant’s objects into tartle–unrecognized and knowledge–unrecognized pairs. For each participant, we let the timing model generate for each of the recognized objects the number of pulses the participant would have perceived when recognizing this object. We grouped each participant’s tartle–unrecognized and knowledge– unrecognized pairs into four bins according to the numbers of pulses accumulated for the recognized objects. Bins were arranged by quartiles of the numbers of pulses. Quartiles were approximated as described in Simulation 3. For each participant, in each of the 2 × 4 bins (two types of pairs, four bins), we calculated the mean number of pulses and the recognition heuristic accordance rate (krh). On those pairs where a participant had made an inference consistent with the recognition heuristic, we also computed the proportion of correct inferences the participant made as well as the median inference time it took the participant to make an inference. (In the no-instruct group of Experiment 2, we took the proportion of participants from Experiments 1, 2, and 3 who had recognized each city as a criterion for which city was recognized by more students.) For all of these variables, we computed means (including 34 ONLINE MATERIALS standard errors) across participants. Finally, we averaged the variables across runs of the timing model. When Are Knowledge-Based Strategies Easy to Use? Simulation 11 In Simulation 11, we explored how the magnitude of the knowledge-based strategies’ validities increases as a function of how easy it is for a person to apply the knowledge-based strategies. In doing so, we computed validities for inferring cities’ size. Applying the Timing Model to the Observed Data. The simulation (Equations 9, 11, 13) can be broken into two parts. 
First, using participants’ responses in the recognition and general knowledge tasks in Experiments 1 and 3, we exhaustively paired each participant’s objects into tartle–knowledge and knowledge pairs. We assessed how effortful it would have been for a participant to use a strategy. For the integration strategies tally1 and tally2, we computed differences between sums of cue values as a measure of effort (the more the sums of cue values differ, the less effortful it is to use an integration strategy), and for the lexicographic and sequential sampling strategies take-the-best, take-the-first-cue, take-the-first-value1, and take-the-first-value2, the number of comparisons of cue values that need to be considered prior to making an inference (the more comparisons that need to be considered, the more effortful it is to make an inference). For take-the-first-cue, take-the-first-value1, and take-the-first-value2, this required running the timing model, using each participant’s reaction times observed for each cue value in the cue-knowledge tasks as input for the timing model, and computing the effort as average over runs of the timing model. Second, for each participant and each strategy, we selected those tartle–knowledge and knowledge pairs where a strategy was applicable. For take-the-first-cue, take-the-first-value1, and take-the-first-value2, this required a second series of runs of the timing model. In each run 35 ONLINE MATERIALS of this second series of runs, we used the timing model to determine for each pair of objects whether take-the-first-cue, take-the-first-value1, and take-the-first-value2 were applicable. We grouped those pairs where a strategy was applicable into four bins according to the previously calculated effort involved in using the strategy. Bins were arranged by quartiles of each strategy’s respective currency of effort. In each of the 2 × 4 bins (two types of pairs, four bins), for each participant, we calculated each strategy’s validity (vt1, vt2, vttb, vttfc, vttfv1, vttfv2) as well as averages of each strategy’s currency of effort. We computed means (including standard errors) across participants. In the case of take-the-first-cue, take-the-first-value1, and take-thefirst-value2, we additionally averaged these data across the second series of runs of the timing model. For each strategy, this simulation procedure yields its validity and the effort involved in using it conditional on the strategy being applicable. Robustness of Model Predictions Across Proxies for Effort: Simulation C1 In Simulation C1, we predicted how the magnitude of the fluency validity changes as a function of the raw differences in recognition times between two objects, irrespective of the pulses associated with the objects’ recognition times and irrespective of whether a person would have detected a difference in pulses. To this end, we modified the design of Simulation 3, using our memory model alone, that is, without the timing model, generating the predicted data from web frequency to predict the fluency validity in Experiments 2–7. As in Simulation 3, we predicted validities for inferring cities’ size, countries’ gross domestic product in 2006, companies’ market capitalization on May 31, 2007, diseases’ fame, and politicians’ fame. Observed data. 
Observed Data. To compute the observed fluency validity, vfh, we used each participant’s responses in the recognition and general knowledge tasks in Experiments 2–7 to exhaustively pair the objects that the participant recognized into tartle, tartle–knowledge, and knowledge pairs. For each participant, we grouped these pairs into four bins by ordering the pairs according to the observed differences in raw recognition times between the objects. Bins were arranged by quartiles of the recognition time differences. Quartiles were approximated as described in Simulation 3. For each of the 3 × 4 bins (three types of pairs, four bins), we computed for each participant the fluency validity for the paired comparisons of the objects in the bin, assuming that a person using the fluency heuristic would infer an object with a shorter recognition time to score a larger value on the criterion than an object with a longer recognition time. For each participant, we also computed the median of the observed recognition time differences between two objects in a bin. We computed means (including standard errors) across participants.

Data Predicted by the Memory Model. To generate the predictions of our memory model, we ran a simulation using Equations 5–8 and 11. This simulation of the memory model was run 1,500 times, creating 1,500 hypothetical persons’ predicted recognition and knowledge responses. Specifically, in each run of the memory model, according to the predicted recognition probability, PR, we determined whether a hypothetical person would recognize an object. If the object was recognized, then, according to the predicted knowledge probability, PK, we determined whether that hypothetical person would additionally know something about it. In each run, that is, for each hypothetical person, we also determined each object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution. For each hypothetical person, we exhaustively paired these objects into predicted tartle, tartle–knowledge, and knowledge pairs. Within these pairs, we computed the difference in predicted recognition times between two objects. For each hypothetical person, we then used the predicted recognition time differences to divide the three types of pairs further into four bins. Bins were arranged by quartiles of the predicted recognition time differences. Quartiles were approximated as described in Simulation 3. Finally, for each hypothetical person, we computed the predicted fluency validity, vfh, in each of the 3 × 4 bins (three types of pairs, four bins), assuming that a hypothetical person using the fluency heuristic would infer an object with a shorter predicted recognition time to score a larger value on the criterion than an object with a longer predicted recognition time. In addition, we computed the median of the predicted recognition time differences between two objects in a bin. We computed means (including standard errors) across hypothetical persons.

Robustness of Model Predictions Across Proxies for Effort: Simulation C2

In Simulation C2, we predicted how the magnitude of the fluency validity changes as a function of the difference in pulses between two objects. To this end, we applied the timing model to the data observed in Experiments 2–7 and used both this model and our memory model to generate the predicted data from web frequency. The simulation procedure is similar to that of Simulation 3.
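The per-run sampling step of the memory model, used for the predicted data above and in the simulations that follow, might be sketched as below. Equations 5–8 are not reproduced here; the recognition probability PR, the knowledge probability PK, and each object’s predicted recognition time distribution are treated as given inputs, and a normal distribution is assumed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng()

def simulate_hypothetical_person(p_recognize, p_know, time_mean, time_sd):
    """One run of the memory model for one hypothetical person.
    For each object i, the person recognizes it with probability
    p_recognize[i]; if recognized, the person additionally knows something
    about it with probability p_know[i], and a predicted recognition time
    is drawn from that object's predicted recognition time distribution
    (assumed normal here for illustration only)."""
    p_recognize = np.asarray(p_recognize, dtype=float)
    p_know = np.asarray(p_know, dtype=float)
    n = len(p_recognize)
    recognized = rng.random(n) < p_recognize
    knows = recognized & (rng.random(n) < p_know)
    times = np.where(recognized,
                     rng.normal(np.asarray(time_mean), np.asarray(time_sd)),
                     np.nan)  # no recognition time for unrecognized objects
    return recognized, knows, times
```

In this sketch, tartle objects are those that are recognized without further knowledge, knowledge objects those that are both recognized and known, and the remaining objects are unrecognized.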
In Simulation C2, however, pairs of objects are binned by the difference in pulses between two objects rather than by the probability of a person detecting a difference in pulses. As in Simulation 3, we predicted validities for inferring cities’ size, countries’ gross domestic product in 2006, companies’ market capitalization on May 31, 2007, diseases’ fame, and politicians’ fame.

Applying the Timing Model to the Observed Data. Exhaustively pairing the objects, we used the timing model (Equations 9, 11) to generate, for each of a participant’s recognized objects, the number of pulses that the participant would have counted while recognizing that object. According to the difference in the numbers of pulses between two objects, we grouped each participant’s tartle, tartle–knowledge, and knowledge pairs into four bins, calculating the magnitude of the fluency validity in each bin. To estimate the fluency validity, vfh, in each run of the timing model, for each of a participant’s pairs of objects, we let the timing model generate the number of pulses that participant would have counted while recognizing each of the two objects in a pair. We then let the timing model compare these numbers of pulses. In each run of the timing model, for the pairs where a participant would have detected a difference in the numbers of pulses, we checked whether the participant would have made a correct or an incorrect inference if that person had inferred the object with the smaller number of pulses to score a higher value on the criterion. The fluency validity, vfh, is the proportion of times the participant would have made a correct inference, computed across those pairs where the participant would have detected a difference in the numbers of pulses between two objects. This yields the fluency validity conditional on the participant having detected a difference in recognition times.

Specifically, for estimating the fluency validity in each run of the timing model, we used each participant’s responses in the recognition and general knowledge tasks to classify that participant’s pairs of objects into tartle, tartle–knowledge, and knowledge pairs. Within these three types of pairs, for each participant we grouped those pairs that would have allowed the participant to detect a difference in the numbers of pulses into four bins by ordering the pairs according to the differences in pulses between two objects. Bins were arranged by quartiles of the differences in pulses. Quartiles were approximated as described in Simulation 3. In each run of the timing model, for each of the 3 × 4 bins (i.e., three types of pairs, four bins), we computed the average of the differences in pulses between two objects, as well as the fluency validity, vfh. We then computed means (including standard errors) across participants. Finally, we averaged the variables across runs of the timing model.

Applying the Timing Model to the Data Predicted by the Memory Model. To generate the combined predictions of our memory model and the timing model, we ran a simulation using Equations 5–9 and 11. The simulation of the memory model was run 1,500 times, creating 1,500 hypothetical persons’ predicted recognition and knowledge responses. For each of these hypothetical persons, we ran the same runs of the timing model that we had also run for the observed data. The total simulation comprises two steps.
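The conditional fluency-validity computation shared by the observed analysis above and the two steps below might be sketched as follows (hypothetical inputs; the timing model itself, which generates the pulses and determines whether a difference is detected, is not reproduced):

```python
import numpy as np

def conditional_fluency_validity(pulses_a, pulses_b, crit_a, crit_b, detected):
    """Fluency validity conditional on a detected difference in pulses.
    pulses_a, pulses_b: numbers of pulses counted for the two objects of each pair.
    crit_a, crit_b:     the objects' criterion values.
    detected:           Boolean array; True where the (hypothetical) person
                        would have detected a difference in pulses,
                        as determined by the timing model.
    Returns the proportion of detected pairs in which the object with the
    smaller number of pulses also has the larger criterion value
    (criterion ties count as incorrect in this simplified sketch)."""
    pulses_a, pulses_b = np.asarray(pulses_a), np.asarray(pulses_b)
    crit_a, crit_b = np.asarray(crit_a), np.asarray(crit_b)
    detected = np.asarray(detected, dtype=bool)
    if not detected.any():
        return np.nan
    fewer_pulses_a = pulses_a[detected] < pulses_b[detected]
    larger_crit_a = crit_a[detected] > crit_b[detected]
    return float(np.mean(fewer_pulses_a == larger_crit_a))
```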
First, in each run of our memory model, according to the predicted recognition probability, PR, we determined whether a hypothetical person would recognize an object. If the object was recognized, then, according to the predicted knowledge probability, PK, we determined whether that hypothetical person would additionally know something about it. In each run, that is, for each hypothetical person, we then exhaustively paired objects into predicted tartle, tartle–knowledge, and knowledge pairs. In each run, we also determined each object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution.

Second, to compute the predicted fluency validity, vfh, for each hypothetical person and each pair of objects, we ran a series of runs of the timing model, letting this model generate the predicted number of pulses the hypothetical person would have counted while recognizing each of the two objects in a pair. In each run of the timing model, we let the timing model compare the predicted numbers of pulses the person would have counted. As for the observed data, the predicted fluency validity is the proportion of times the hypothetical person would have made a correct inference, computed across those pairs where the person would have detected a difference in the predicted numbers of pulses between two objects. Specifically, for computing the predicted fluency validity in each of the runs of the timing model, within each hypothetical person’s predicted tartle, tartle–knowledge, and knowledge pairs, we grouped all pairs into four bins by ordering the pairs according to the differences in predicted numbers of pulses. The bins were arranged by quartiles of the differences in predicted numbers of pulses, in the same way as we binned the observed data. In each of the runs of the timing model, for each of the 3 × 4 bins, we computed the averages of the previously computed differences in predicted numbers of pulses, as well as the predicted fluency validity. In each run of the timing model, we then computed means (including standard errors) across hypothetical persons. Finally, we averaged the variables across runs of the timing model.

Recognition Times as a Function of Knowledge: Simulation C3

In Simulation C3, we used our memory model to predict recognition times for tartle and knowledge objects.

Observed Data. We used participants’ responses in the recognition and general knowledge tasks of Experiments 2–7 to identify tartle and knowledge objects. For each participant, we calculated the median recognition times for these objects, averaging the medians across participants. We also computed standard errors.

Data Predicted by the Memory Model. In a simulation, we used Equations 5–8. The simulation was run 1,500 times, creating hypothetical persons’ predicted recognition and knowledge responses. Specifically, in each run of the memory model, according to the predicted recognition probability, PR, we determined whether a hypothetical person would recognize an object. If the object was recognized, then, according to the predicted knowledge probability, PK, we determined whether that person would additionally know something about it. In each run, that is, for each hypothetical person, we also determined each object’s predicted recognition time, Trecognition, by drawing a sample from the object’s predicted recognition time distribution.
For each hypothetical person, we computed the medians of these predicted recognition times separately for predicted tartle and knowledge objects. For each of these types of objects, we then averaged the medians across hypothetical persons. We also computed standard errors.
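A minimal sketch of this order of aggregation (medians within a person first, then the mean and standard error of the medians across persons; hypothetical data layout):

```python
import numpy as np

def median_then_average(times_by_person):
    """times_by_person: a list with one 1-D array per (hypothetical) person,
    holding that person's recognition times for one type of object
    (e.g., tartle objects or knowledge objects).
    Returns the mean and standard error, across persons, of the
    per-person median recognition times."""
    medians = np.array([np.median(t) for t in times_by_person if len(t) > 0])
    mean = medians.mean()
    se = medians.std(ddof=1) / np.sqrt(len(medians))
    return mean, se
```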