Information search platforms, from Google to AI-assisted search engines, have transformed information access but may fail to promote a shared factual foundation. We demonstrate that the combination of users' prior beliefs influencing their search terms and the narrow scope of search algorithms can limit belief updating from search. We test this "narrow search effect" across 21 studies (14 preregistered) using various topics (e.g., health, financial, societal, political) and platforms (e.g., Google, ChatGPT, AI-powered Bing, our custom-designed search engine and AI chatbot interfaces). We then test user-based and algorithm-based interventions to counter the "narrow search effect" and promote belief updating. Studies 1 to 5 show that users' prior beliefs influence the direction of their search terms, thereby generating narrow search results that limit belief updating. This effect persists across various domains (e.g., beliefs related to coronavirus, nuclear energy, gas prices, crime rates, bitcoin, caffeine, and general food or beverage health concerns; Studies 1a to 1b, 2a to 2g, 3, 4) and platforms (e.g., Google: Studies 1a to 1b, 2a to 2g, 4, 5; ChatGPT: Study 3), and extends to consequential choices (Study 5). Studies 6 and 7 demonstrate the limited efficacy of prompting users themselves to correct for the impact of narrow searches on their beliefs. Using our custom-designed search engine and AI chatbot interfaces, Studies 8 and 9 show that modifying algorithms to provide broader results can encourage belief updating. These findings highlight the need for a behaviorally informed approach to the design of search algorithms.