Expository Reading Comprehension Practice 02 for the 2024 English Entrance Exam--Artificial Intelligence (Original + Annotated Versions)

Expository Reading Comprehension Practice 02--Artificial Intelligence (Annotated Version)

1. (2024 · Zhejiang · Second Mock Exam)
The maker of ChatGPT recently announced its next move into generative artificial intelligence. San Francisco-based OpenAI’s new text-to-video generator, called Sora, is a tool that instantly makes short videos based on written commands, called prompts.
Sora is not the first of its kind. Google, Meta and Runway ML are among the other companies to have developed similar technology. But the high quality of videos displayed by OpenAI — some released after CEO Sam Altman asked social media users to send in ideas for written prompts — surprised observers.
A photographer from New Hampshire posted one suggestion, or prompt, on X. The prompt gave details about a kind of food to be cooked, gnocchi (an Italian dumpling), as well as the setting — an old Italian country kitchen. The prompt said: “An instructional cooking session for homemade gnocchi, hosted by a grandmother — a social media influencer, set in a rustic Tuscan country kitchen.” Altman answered a short time later with a realistic video that showed what the prompt described.
The tool is not yet publicly available. OpenAI has given limited information about how it was built. The company also has not stated what imagery and video sources were used to train Sora. At the same time, the video results led to fears about the possible ethical and societal effects.
The New York Times and some writers have taken legal actions against OpenAI for its use of copyrighted works of writing to train ChatGPT. And OpenAI pays a fee to The Associated Press, the source of this report, to license its text news archive. OpenAI said in a blog post that it is communicating with artists, policymakers and others before releasing the new tool to the public.
The company added that it is working with “red teamers” — people who try to find problems and give helpful suggestions — to develop Sora. “We are working with red teamers — experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model,” the company said. “We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora.”
1.What makes Sora impressive?
A.Its extraordinary video quality. B.Its ethical and societal influence.
C.Its artificial intelligence history. D.Its written commands and prompts.
2.What can we infer from the text?
A.Some disagreements over Sora have arisen.
B.Sora is the first text-to-video generator in history.
C.OpenAI CEO Altman wrote a prompt as an example.
D.All the details about how Sora was built have been shared.
3.What is the main idea of Paragraph 6?
A.The company’s current challenge.
B.The company’s advanced technology.
C.The company’s problems in management.
D.The company’s efforts for Sora’s improvement.
4.What is the author’s attitude towards Sora?
A.Neutral. B.Optimistic. C.Pessimistic. D.Cautious.

2. (2024 · Hebei · First Mock Exam)
Many parents, confused by how their children shop or socialize, would feel undisturbed by how they are taught — this sector remains digitally behind. Can artificial intelligence boost the digital sector of the classroom? ChatGPT-like generative AI is generating excitement for providing personalized tutoring to students. By May, New York had let the bot back into classrooms.
Learners are accepting the technology. Two-fifths of undergraduates surveyed last year by online tutoring company Chegg reported using an AI chatbot to help them with their studies, with half of those using it daily. Chegg’s chief executive told investors it was losing customers to ChatGPT as a result of the technology’s popularity. Yet there are good reasons to believe that education specialists who harness AI will eventually win over generalists such as OpenAI and other tech firms eyeing the education business.
For one, AI chatbots have a bad habit of producing nonsense. “Students want content from trusted providers,” argues Kate Edwards from a textbook publisher. Her company hasn’t allowed ChatGPT and other AIs to use its material, but has instead used the content to train its own models built into its learning apps. Besides, teaching isn’t merely about giving students an answer, but about presenting it in a way that helps them learn. Chatbots must also be tailored to different age groups to avoid either cheating or infantilizing (treating them like babies) students.
Bringing AI to education won’t be easy. Many teachers are behind the learning curve. Less than a fifth of British educators surveyed by Pearson last year reported receiving training on digital learning tools. Tight budgets at many institutions will make selling new technology an uphill battle. Teachers’ attention may need to shift towards motivating students and instructing them on how to best work with AI tools. If those answers can be provided, it’s not just companies that stand to benefit. An influential paper from 1984 found that one-to-one tutoring improved the average academic performance of students. With the learning of students, especially those from poorer households, held back, such a development would certainly deserve top marks.
5.What do many parents think remains untouched by AI about their children?
A.Their shopping habits. B.Their social behavior.
C.Their classroom learning. D.Their interest in digital devices.
6.What does the underlined word “harness” in paragraph 2 mean?
A.Develop. B.Use. C.Prohibit. D.Blame.
7.What mainly prevents AI from entering the classroom at present?
A.Many teachers aren’t prepared technically.
B.Tailored chatbots can’t satisfy different needs.
C.AI has no right to copy textbooks for teaching.
D.It can be tricked to produce nonsense answers.
8.Where is the text most probably taken from?
A.An introduction to AI. B.A product advertisement.
C.A guidebook to AI application. D.A review of AI in education.

3. (2024 · Beijing Xicheng · First Mock Exam)
Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics of AI and the policy gaps that need to be filled in. Through a humanities viewpoint, Selinger asks the questions, “How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?” Answering them, he explained, requires an interdisciplinary approach.
“AI ethics go beyond technical fixes. Philosophers and other humanities experts are uniquely skilled to address the nuanced principles, value conflicts, and power dynamics. These skills aren’t just crucial for addressing current issues. We desperately need them to promote anticipatory (forward-looking) governance,” said Selinger.
One example that illustrates how philosophy and humanities experts can help guide these new, rapidly growing technologies is Selinger’s work collaborating with a special AI project. “One of the skills I bring to the table is identifying core ethical issues in emerging technologies that haven’t been built or used by the public. We can take preventative steps to limit risk, including changing how the technology is designed,” said Selinger.
Taking these preventative steps and regularly reassessing what risks need addressing is part of the ongoing journey in pursuit of creating responsible AI. Selinger explains that there isn’t a step-by-step approach for good governance. “AI ethics have core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can become integrated into ‘ethics washing’ — weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with some experts, on why it is important to consider a range of positions.”
Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human impact that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated surveillance approaches.
Selinger is making sure his students are informed about the ongoing industry conversations on AI ethics and responsible AI. “Students are going to be future tech leaders. Now is the time to help them think about what goals their companies should have and the costs of minimizing ethical concerns. Beyond social costs, downplaying ethics can negatively impact corporate culture and hiring,” said Selinger. “To attract top talent, you need to consider whether your company matches their interests and hopes for the future.”
9.Selinger advocates an interdisciplinary approach because ________.
A.humanities experts possess skills essential for AI ethics
B.it demonstrates the power of anticipatory governance
C.AI ethics heavily depends on technological solutions
D.it can avoid social conflicts and pressing issues
10.To promote responsible AI, Selinger believes we should ________.
A.adopt a systematic approach B.apply innovative technologies
C.anticipate ethical risks beforehand D.establish accountability mechanisms
11.What can be inferred from the last two paragraphs?
A.More companies will use AI to attract top talent.
B.Understanding AI ethics will help students in the future.
C.Selinger favors companies that match his students’ values.
D.Selinger is likely to focus on back-end issues such as policy.

4. (2023-24 Grade 12 · Zhejiang · Stage Practice)
Users of Google Gemini, the tech giant’s artificial-intelligence model, recently noticed that asking it to create images of Vikings or German soldiers from 1943 produced surprising results: hardly any of the people depicted were white. Other image-generation tools have been criticized because they tend to show white men when asked for images of entrepreneurs or doctors. Google wanted Gemini to avoid this trap; instead, it fell into another one, depicting George Washington as black. Now attention has moved on to the chatbot’s text responses, which turned out to be just as surprising.
Gemini happily provided arguments in favor of positive action in higher education, but refused to provide arguments against. It declined to write a job ad for a fossil-fuel lobby group (a group that tries to influence policymakers), because fossil fuels are bad and lobby groups prioritize “the interests of corporations over public well-being”. Asked if Hamas is a terrorist organization, it replied that the conflict in Gaza is “complex”; asked if Elon Musk’s tweeting of memes had done more harm than Hitler, it said it was “difficult to say”. You do not have to be a critic to perceive its progressive bias.
Inadequate testing may be partly to blame. Google lags behind OpenAI, maker of the better-known ChatGPT. As it races to catch up, Google may have cut corners. Other chatbots have also had controversial launches. Releasing chatbots and letting users uncover odd behaviors, which can be swiftly addressed, lets firms move faster, provided they are prepared to weather (withstand) the potential risks and bad publicity, observes Ethan Mollick, a professor at Wharton Business School.
But Gemini has clearly been deliberately adjusted, or “fine-tuned”, to produce these responses. This raises questions about Google’s culture. Is the firm so financially secure, with vast profits from internet advertising, that it feels free to try its hand at social engineering? Do some employees think it has not just an opportunity, but a responsibility, to use its reach and power to promote a particular agenda? All eyes are now on Google’s boss, Sundar Pichai. He says Gemini is being fixed. But does Google need fixing too?
12.What do the words “this trap” underlined in the first paragraph refer to?
A.Having a racial bias. B.Responding to wrong texts.
C.Criticizing political figures. D.Going against historical facts.
13.What is Paragraph 2 mainly about?
A.Gemini’s refusal to make progress. B.Gemini’s failure to give definite answers.
C.Gemini’s prejudice in text responses. D.Gemini’s avoidance of political conflicts.
14.What does Ethan Mollick think of Gemini’s early launch?
A.Creative. B.Promising. C.Illegal. D.Controversial.
15.What can we infer about Google from the last paragraph?
A.Its security is doubted. B.It lacks financial support.
C.It needs further improvement. D.Its employees are irresponsible.

5. (2024 · Shandong · Predictive Mock Exam)
Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. For example, in a restaurant, the limitations of the kitchen, the way supplies have to be ordered and the realities of restaurant cooking make you get a menu of a few dozen standardized options, with the possibility of some modifications around the edges. We are so used to these bottlenecks that we don’t even notice them. And when we do, we tend to assume they are the unavoidable cost of scale and efficiency. And they are. Or, at least, they were.
Artificial intelligence (AI) has the potential to overcome this limitation. By storing rich representations of people’s preferences and histories on the demand side, along with equally rich representations of capabilities, costs and creative possibilities on the supply side, AI systems enable complex customization at large scale and low cost. Imagine walking into a restaurant and knowing that the kitchen has already started working on a meal optimized for your tastes, or being presented with a personalized list of choices.
There have been some early attempts at this. People have used ChatGPT to design meals based on dietary restrictions and what they have in the fridge. It’s still early days for these technologies, but once they get working, the possibilities are nearly endless.
Recommendation systems for digital media have reduced their reliance on traditional intermediaries. Radio stations are like menu items: Regardless of how nuanced your taste in music is, you have to pick from a handful of options. Early digital platforms were only a little better: “This person likes jazz, so we’ll suggest more jazz.” Today’s streaming platforms use listener histories and a broad set of characteristics describing each track to provide each user with personalized music recommendations.
A world without artificial bottlenecks comes with risks — loss of jobs in the bottlenecks, for example — but it also has the potential to free people from the straitjackets that have long limited large-scale human decision-making. In some cases — restaurants, for example — the effect on most people might be minor. But in others, like politics and hiring, the effects could be great.
16.What does the underlined word “bottlenecks” in paragraph 1 refer to?
A.Facing too many choices. B.Choosing from limited options.
C.Avoiding the cost of choosing. D.Having too many desires to satisfy.
17.How can AI meet everyone’s needs?
A.By meeting both ends of supply and demand.
B.By decreasing representations on the supply side.
C.By disconnecting the sides of supply and demand.
D.By reducing people’s preferences on the demand side.
18.What’s the similarity between radio stations and menu items?
A.They are a necessary part in people’s life. B.They offer limited choices.
C.They depend on digital platforms. D.They provide reasonable suggestions.
19.What does the text mainly talk about?
A.The variety of human’s choices. B.Standardized options in daily life.
C.AI solutions to the option bottlenecks. D.Recommendation systems for digital media.

6. (2024 · Fujian · Predictive Mock Exam)
Our species’ incredible capacity to quickly acquire words — from 300 by age 2 to over 1,000 by age 4 — isn’t fully understood. Some cognitive scientists and linguists have theorized that people are born with built-in expectations and logical constraints that make this possible. Now, however, machine-learning research is showing that preprogrammed assumptions aren’t necessary to swiftly pick up word meanings from minimal data.
A team of scientists has successfully trained a basic artificial intelligence model to match images to words using just 61 hours of naturalistic footage and sound, previously collected from a child named Sam in 2013 and 2014. Although it’s a small slice of a child’s life, it was apparently enough to prompt the AI to figure out what certain words mean.
The findings suggest that language acquisition could be simpler than previously thought. Maybe children “don’t need a custom-built, high-class language-specific mechanism” to efficiently grasp word meanings, says Jessica Sullivan, an associate professor of psychology at Skidmore College. “This is a really beautiful study,” she says, because it offers evidence that simple information from a child’s worldview is rich enough to kick-start pattern recognition and word comprehension.
The new study also demonstrates that it’s possible for machines to learn similarly to the way that humans do. Large language models are trained on enormous amounts of data that can include billions and sometimes trillions of word combinations. Humans get by on orders of magnitude less information, says the paper’s lead author Wai Keen Vong. With the right type of data, that gap between machine and human learning could narrow dramatically.
Yet additional study is necessary in certain aspects of the new research. For one, the scientists acknowledge that their findings don’t prove how children acquire words. Moreover, the study only focused on recognizing the words for physical objects.
Still, it’s a step toward a deeper understanding of our own mind, which can ultimately help us improve human education, says Eva Portelance, a computational linguistics researcher. She notes that AI research can also bring clarity to long-unanswered questions about ourselves. “We can use these models in a good way, to benefit science and society,” Portelance adds.
20.What is a significant finding of machine-learning research?
A.Vocabulary increases gradually with age.
B.Vocabulary can be acquired from minimal data.
C.Language acquisition is tied to built-in expectations.
D.Language acquisition is as complex as formerly assumed.
21.What does the underlined word “prompt” in paragraph 2 mean?
A.Facilitate. B.Persuade. C.Advise. D.Expect.
22.What is discussed about the new research in paragraph 5?
A.Its limitations. B.Its strengths. C.Its uniqueness. D.Its process.
23.What is Eva Portelance’s attitude to the AI research?
A.Doubtful. B.Cautious. C.Dismissive. D.Positive.




Answers and Explanations
[Answers] 1. A 2. A 3. D 4. A
[Overview] This is an expository text. It introduces Sora, OpenAI’s new text-to-video generator, and describes its features and the controversy surrounding it.
1. Detail comprehension. Paragraph 2 says, “But the high quality of videos displayed by OpenAI — some released after CEO Sam Altman asked social media users to send in ideas for written prompts — surprised observers.” What makes Sora impressive is its extraordinary video quality. Therefore, choose A.
2. Inference. Paragraph 4 says, “OpenAI has given limited information about how it was built. The company also has not stated what imagery and video sources were used to train Sora. At the same time, the video results led to fears about the possible ethical and societal effects.” Since the videos have raised fears about possible ethical and societal effects, we can infer that some disagreements over Sora have arisen. Therefore, choose A.
3. Main idea. Paragraph 6 explains that the company is working with “red teamers” — people who try to find problems and give helpful suggestions — who will be adversarially testing the model, and that it is building tools such as a detection classifier that can tell when a video was generated by Sora. So the paragraph mainly describes the company’s efforts to improve Sora. Therefore, choose D.
4. Inference. The text objectively presents Sora’s features, the controversy around it, and the company’s efforts to improve the tool, without praising or criticizing it, so the author’s attitude towards Sora is neutral. Therefore, choose A.

[Answers] 5. C 6. B 7. A 8. D
[Overview] This is an expository text about the application and limits of AI in education and its future development in the sector.
5. Detail comprehension. From paragraph 1, “Many parents confused by how their children shop or socialize, would feel undisturbed by how they are taught — this sector remains digitally behind,” we know that many parents feel their children’s classroom learning remains digitally behind and untouched by AI. Therefore, choose C.
6. Word guessing. The underlined word appears in the clause “education specialists who harness AI will eventually win over generalists such as OpenAI and other tech firms eyeing the education business.” Education specialists must make use of AI to win over the generalists, so “harness” means “use.” Therefore, choose B.
7. Inference. From paragraph 4, “Bringing AI to education won’t be easy. Many teachers are behind the learning curve. Less than a fifth of British educators surveyed by Pearson last year reported receiving training on digital learning tools,” we know that many teachers have not been trained to use digital teaching tools, so what mainly prevents AI from entering the classroom is that many teachers aren’t prepared technically. Therefore, choose A.
8. Inference. The text discusses the prospects of AI in education along with its difficulties and promise, starting from paragraph 1 (“Can artificial intelligence boost the digital sector of the classroom?”), so it is most likely taken from a review of AI in education. Therefore, choose D.

[Answers] 9. A 10. C 11. B
[Overview] This is an expository text presenting the views and suggestions of Evan Selinger, a professor in RIT’s Department of Philosophy, on the ethics of artificial intelligence.
9. Detail comprehension. From paragraph 2, “AI ethics go beyond technical fixes. Philosophers and other humanities experts are uniquely skilled to address the nuanced principles, value conflicts, and power dynamics. These skills aren’t just crucial for addressing current issues. We desperately need them to promote anticipatory governance,” we know Selinger advocates an interdisciplinary approach because humanities experts possess skills essential for AI ethics. Therefore, choose A.
10. Detail comprehension. From paragraph 4, “Taking these preventative steps and regularly reassessing what risks need addressing is part of the ongoing journey in pursuit of creating responsible AI,” we know that to promote responsible AI, Selinger believes we should anticipate ethical risks beforehand. Therefore, choose C.
11. Inference. From the last paragraph, “Students are going to be future tech leaders. Now is the time to help them think about what goals their companies should have and the costs of minimizing ethical concerns… To attract top talent, you need to consider whether your company matches their interests and hopes for the future,” we can infer that understanding AI ethics will help students in the future. Therefore, choose B.

[Answers] 12. A 13. C 14. D 15. C
[Overview] This is a news report on the performance of Google’s artificial-intelligence model Gemini, describing the problems in its image generation and text responses and what these problems may reveal about Google’s corporate culture and strategic choices.
12. Word guessing. The preceding text says that other image-generation tools “have been criticized because they tend to show white men when asked for images of entrepreneurs or doctors.” So “this trap” refers to the racial bias of such tools; Gemini tried to avoid it but fell into another one, depicting George Washington as black. Therefore, choose A.
13. Main idea. Paragraph 2 lists examples of Gemini’s one-sided text responses: it gave arguments only in favor of positive action in higher education, refused to write a job ad for a fossil-fuel lobby group, and hedged on questions about Hamas and Elon Musk. As the paragraph concludes, “You do not have to be a critic to perceive its progressive bias,” so it is mainly about Gemini’s prejudice in text responses. Therefore, choose C.
14. Inference. From paragraph 3, “Releasing chatbots and letting users uncover odd behaviors, which can be swiftly addressed, lets firms move faster, provided they are prepared to weather the potential risks and bad publicity, observes Ethan Mollick,” we can infer that he considers Gemini’s early launch controversial, since it may bring unexpected problems and negative publicity. Therefore, choose D.
15. Inference. The last paragraph questions Google’s culture and suggests the company deliberately fine-tuned Gemini to produce these responses. Pichai says Gemini is being fixed, and the author asks, “But does Google need fixing too?” This implies that Google itself still has problems to address, so we can infer that it needs further improvement. Therefore, choose C.

[Answers] 16. B 17. A 18. B 19. C
[Overview] This is an expository text about how artificial intelligence may remove the artificial bottlenecks built into many systems, including the information and choice bottlenecks that limit decision-making.
16. Word guessing. From the first two sentences of paragraph 1, “Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. For example, in a restaurant… you get a menu of a few dozen standardized options,” and the phrase “these bottlenecks,” we know the word refers to having to choose from limited options. Therefore, choose B.
17. Detail comprehension. From paragraph 2, “By storing rich representations of people’s preferences and histories on the demand side, along with equally rich representations of capabilities, costs and creative possibilities on the supply side, AI systems enable complex customization at large scale and low cost,” we know AI meets everyone’s needs by meeting both ends of supply and demand. Therefore, choose A.
18. Detail comprehension. From paragraph 4, “Radio stations are like menu items: Regardless of how nuanced your taste in music is, you have to pick from a handful of options,” we know that what the two have in common is that they both offer limited choices. Therefore, choose B.
19. Main idea. Combining the opening of paragraph 1 (people are forced to reduce complex choices to a handful of options) with the opening of paragraph 2 (“Artificial intelligence (AI) has the potential to overcome this limitation”) and the rest of the text, the passage mainly introduces AI solutions to the option bottlenecks. Therefore, choose C.

[Answers] 20. B 21. A 22. A 23. D
[Overview] This is an expository text about machine-learning research showing that preprogrammed assumptions are not necessary to quickly pick up word meanings from minimal data.
20. Detail comprehension. From paragraph 1, “Now, however, machine-learning research is showing that preprogrammed assumptions aren’t necessary to swiftly pick up word meanings from minimal data,” we know a significant finding of the research is that vocabulary can be acquired from minimal data. Therefore, choose B.
21. Word guessing. The previous sentence says a team of scientists trained a basic AI model to match images to words using just 61 hours of naturalistic footage and sound collected from a child named Sam. Although only a small slice of a child’s life, it was enough to facilitate the AI’s figuring out what certain words mean, so “prompt” means “facilitate.” Therefore, choose A.
22. Main idea. Paragraph 5 notes that “additional study is necessary,” that the findings “don’t prove how children acquire words,” and that the study “only focused on recognizing the words for physical objects,” so it mainly discusses the limitations of the new research. Therefore, choose A.
23. Inference. From the last paragraph, “We can use these models in a good way, to benefit science and society,” we can infer that Eva Portelance’s attitude to the AI research is positive. Therefore, choose D.
Expository Reading Comprehension Practice 02--Artificial Intelligence (Original Version)

1. (2024 · Zhejiang · Second Mock Exam)
The maker of ChatGPT recently announced its next move into generative artificial intelligence. San Francisco-based OpenAI’s new text-to-video generator, called Sora, is a tool that instantly makes short videos based on written commands, called prompts.
Sora is not the first of its kind. Google, Meta and Runway ML are among the other companies to have developed similar technology. But the high quality of videos displayed by OpenAI — some released after CEO Sam Altman asked social media users to send in ideas for written prompts — surprised observers.
A photographer from New Hampshire posted one suggestion, or prompt, on X. The prompt gave details about a kind of food to be cooked, gnocchi (an Italian dumpling), as well as the setting — an old Italian country kitchen. The prompt said: “An instructional cooking session for homemade gnocchi, hosted by a grandmother — a social media influencer, set in a rustic Tuscan country kitchen.” Altman answered a short time later with a realistic video that showed what the prompt described.
The tool is not yet publicly available. OpenAI has given limited information about how it was built. The company also has not stated what imagery and video sources were used to train Sora. At the same time, the video results led to fears about the possible ethical and societal effects.
The New York Times and some writers have taken legal actions against OpenAI for its use of copyrighted works of writing to train ChatGPT. And OpenAI pays a fee to The Associated Press, the source of this report, to license its text news archive. OpenAI said in a blog post that it is communicating with artists, policymakers and others before releasing the new tool to the public.
The company added that it is working with “red teamers” — people who try to find problems and give helpful suggestions — to develop Sora. “We are working with red teamers — experts in areas like misinformation, hateful content, and bias — who will be adversarially testing the model,” the company said. “We’re also building tools to help detect misleading content such as a detection classifier that can tell when a video was generated by Sora.”
1.What makes Sora impressive?
A.Its extraordinary video quality. B.Its ethical and societal influence.
C.Its artificial intelligence history. D.Its written commands and prompts.
2.What can we infer from the text?
A.Some disagreements over Sora have arisen.
B.Sora is the first text-to-video generator in history.
C.OpenAI CEO Altman wrote a prompt as an example.
D.All the details about how Sora was built have been shared.
3.What is the main idea of Paragraph 6?
A.The company’s current challenge.
B.The company’s advanced technology.
C.The company’s problems in management.
D.The company’s efforts for Sora’s improvement.
4.What is the author’s attitude towards Sora?
A.Neutral. B.Optimistic. C.Pessimistic. D.Cautious.

2.(2024·河北·一模)
Many parents, confused by how their children shop or socialize, would feel undisturbed by how they are taught — this sector remains digitally behind. Can artificial intelligence boost the digital sector of the classroom? ChatGPT-like generative AI is generating excitement for providing personalized tutoring to students. By May, New York had let the bot back into classrooms.
Learners are accepting the technology. Two-fifths of undergraduates surveyed last year by online tutoring company Chegg reported using an AI chatbot to help them with their studies, with half of those using it daily. Chegg’s chief executive told investors it was losing customers to ChatGPT as a result of the technology’s popularity. Yet there are good reasons to believe that education specialists who harness AI will eventually win over generalists such as OpenAI and other tech firms eyeing the education business.
For one, AI chatbots have a bad habit of producing nonsense. “Students want content from trusted providers,” argues Kate Edwards from a textbook publisher. Her company hasn’t allowed ChatGPT and other AIs to use its material, but has instead used the content to train its own models into its learning apps. Besides, teaching isn’t merely about giving students an answer, but about presenting it in a way that helps them learn. Chatbots must also be tailored to different age groups to avoid either cheating or infantilizing (使婴儿化) students.
Bringing AI to education won’t be easy. Many teachers are behind the learning curve. Less than a fifth of British educators surveyed by Pearson last year reported receiving training on digital learning tools. Tight budgets at many institutions will make selling new technology an uphill battle. Teachers’ attention may need to shift towards motivating students and instructing them on how to best work with AI tools. If those answers can be provided, it’s not just companies that stand to benefit. An influential paper from 1984 found that one-to-one tutoring improved the average academic performance of students. With the learning of students, especially those from poorer households, held back, such a development would certainly deserve top marks.
5.What do many parents think remains untouched by AI about their children?
A.Their shopping habits. B.Their social behavior.
C.Their classroom learning. D.Their interest in digital devices.
6.What does the underlined word “harness” in paragraph 2 mean?
A.Develop. B.Use. C.Prohibit. D.Blame.
7.What mainly prevents AI from entering the classroom at present?
A.Many teachers aren’t prepared technically.
B.Tailored chatbots can’t satisfy different needs.
C.AI has no right to copy textbooks for teaching.
D.It can be tricked to produce nonsense answers.
8.Where is the text most probably taken from?
A.An introduction to AI. B.A product advertisement.
C.A guidebook to AI application. D.A review of AI in education.

3.(2024·北京西城·一模)
Evan Selinger, professor in RIT’s Department of Philosophy, has taken an interest in the ethics (伦理标准) of AI and the policy gaps that need to be filled in. Through a humanities viewpoint, Selinger asks the questions, “How can AI cause harm, and what can governments and companies creating AI programs do to address and manage it?” Answering them, he explained, requires an interdisciplinary approach.
“AI ethics go beyond technical fixes. Philosophers and other humanities experts are uniquely skilled to address the nuanced (微妙的) principles, value conflicts, and power dynamics. These skills aren’t just crucial for addressing current issues. We desperately need them to promote anticipatory (先行的) governance, ” said Selinger.
One example that illustrates how philosophy and humanities experts can help guide these new, rapidly growing technologies is Selinger’s work collaborating with a special AI project. “One of the skills I bring to the table is identifying core ethical issues in emerging technologies that haven’t been built or used by the public. We can take preventative steps to limit risk, including changing how the technology is designed, ”said Selinger.
Taking these preventative steps and regularly reassessing what risks need addressing is part of the ongoing journey in pursuit of creating responsible AI. Selinger explains that there isn’t a step-by-step approach for good governance. “AI ethics have core values and principles, but there’s endless disagreement about interpreting and applying them and creating meaningful accountability mechanisms,” said Selinger. “Some people are rightly worried that AI can become integrated into ‘ethics washing’ — weak checklists, flowery mission statements, and empty rhetoric that covers over abuses of power. Fortunately, I’ve had great conversations about this issue, including with some experts, on why it is important to consider a range of positions.”
Some of Selinger’s recent research has focused on the back-end issues with developing AI, such as the human impact that comes with testing AI chatbots before they’re released to the public. Other issues focus on policy, such as what to do about the dangers posed by facial recognition and other automated surveillance(监视) approaches.
Selinger is making sure his students are informed about the ongoing industry conversations on AI ethics and responsible AI. “Students are going to be future tech leaders. Now is the time to help them think about what goals their companies should have and the costs of minimizing ethical concerns. Beyond social costs, downplaying ethics can negatively impact corporate culture and hiring, ” said Selinger. “To attract top talent, you need to consider whether your company matches their interests and hopes for the future. ”
9.Selinger advocates an interdisciplinary approach because ________.
A.humanities experts possess skills essential for AI ethics
B.it demonstrates the power of anticipatory governance
C.AI ethics heavily depends on technological solutions
D.it can avoid social conflicts and pressing issues
10.To promote responsible AI, Selinger believes we should ________.
A.adopt a systematic approach B.apply innovative technologies
C.anticipate ethical risks beforehand D.establish accountability mechanisms
11.What can be inferred from the last two paragraphs
A.More companies will use AI to attract top talent.
B.Understanding AI ethics will help students in the future.
C.Selinger favors companies that match his students’ values.
D.Selinger is likely to focus on back-end issues such as policy.

4.(23-24高三·浙江·阶段练习)
Users of Google Gemini, the tech giant’s artificial-intelligence model, recently noticed that asking it to create images of Vikings, or German soldiers from 1943 produced surprising results: hardly any of the people depicted were white. Other image-generation tools have been criticized because they tend to show white men when asked for images of entrepreneurs or doctors. Google wanted Gemini to avoid this trap; instead, it fell into another one, depicting George Washington as black. Now attention has moved on to the chatbot’s text responses, which turned out to be just as surprising.
Gemini happily provided arguments in favor of positive action in higher education, but refused to provide arguments against. It declined to write a job ad for a fossil-fuel lobby group (游说团体), because fossil fuels are bad and lobby groups prioritize “the interests of corporations over public well-being”. Asked if Hamas is a terrorist organization, it replied that the conflict in Gaza is “complex”; asked if Elon Musk’s tweeting of memes had done more harm than Hitler, it said it was “difficult to say”. You do not have to be a critic to perceive its progressive bias.
Inadequate testing may be partly to blame. Google lags behind OpenAI, maker of the better-known ChatGPT. As it races to catch up, Google may have cut corners. Other chatbots have also had controversial launches. Releasing chatbots and letting users uncover odd behaviors, which can be swiftly addressed, lets firms move faster, provided they are prepared to weather (经受住) the potential risks and bad publicity, observes Ethan Mollick, a professor at Wharton Business School.
But Gemini has clearly been deliberately adjusted, or “fine-tuned”, to produce these responses. This raises questions about Google’s culture. Is the firm so financially secure, with vast profits from internet advertising, that it feels free to try its hand at social engineering? Do some employees think it has not just an opportunity, but a responsibility, to use its reach and power to promote a particular agenda? All eyes are now on Google’s boss, Sundar Pichai. He says Gemini is being fixed. But does Google need fixing too?
12.What do the words “this trap” underlined in the first paragraph refer to?
A.Having a racial bias. B.Responding to wrong texts.
C.Criticizing political figures. D.Going against historical facts.
13.What is Paragraph 2 mainly about?
A.Gemini’s refusal to make progress. B.Gemini’s failure to give definite answers.
C.Gemini’s prejudice in text responses. D.Gemini’s avoidance of political conflicts.
14.What does Ethan Mollick think of Gemini’s early launch?
A.Creative. B.Promising. C.Illegal. D.Controversial.
15.What can we infer about Google from the last paragraph?
A.Its security is doubted. B.It lacks financial support.
C.It needs further improvement. D.Its employees are irresponsible.

5.(2024·山东·模拟预测)
Traditionally, people have been forced to reduce complex choices to a small handful of options that don’t do justice to their true desires. For example, in a restaurant, the limitations of the kitchen, the way supplies have to be ordered and the realities of restaurant cooking make you get a menu of a few dozen standardized options, with the possibility of some modifications (修改) around the edges. We are so used to these bottlenecks that we don’t even notice them. And when we do, we tend to assume they are the unavoidable cost of scale (规模) and efficiency. And they are. Or, at least, they were.
Artificial intelligence (AI) has the potential to overcome this limitation. By storing rich representations of people’s preferences and histories on the demand side, along with equally rich representations of capabilities, costs and creative possibilities on the supply side, AI systems enable complex customization at large scale and low cost. Imagine walking into a restaurant and knowing that the kitchen has already started working on a meal optimized (优化) for your tastes, or being presented with a personalized list of choices.
There have been some early attempts at this. People have used ChatGPT to design meals based on dietary restrictions and what they have in the fridge. It’s still early days for these technologies, but once they get working, the possibilities are nearly endless.
Recommendation systems for digital media have reduced their reliance on traditional intermediaries. Radio stations are like menu items: Regardless of how nuanced (微妙) your taste in music is, you have to pick from a handful of options. Early digital platforms were only a little better: “This person likes jazz, so we’ll suggest more jazz.” Today’s streaming platforms use listener histories and a broad set of characteristics describing each track to provide each user with personalized music recommendations.
A world without artificial bottlenecks comes with risks — loss of jobs in the bottlenecks, for example — but it also has the potential to free people from the straitjackets that have long limited large-scale human decision-making. In some cases — restaurants, for example — the effect on most people might be minor. But in others, like politics and hiring, the effects could be great.
16.What does the underlined word “bottlenecks” in paragraph 1 refer to?
A.Facing too many choices. B.Choosing from limited options.
C.Avoiding the cost of choosing. D.Having too many desires to satisfy.
17.How can AI meet everyone’s needs?
A.By meeting both ends of supply and demand.
B.By decreasing representations on the supply side.
C.By disconnecting the sides of supply and demand.
D.By reducing people’s preferences on the demand side.
18.What’s the similarity between radio stations and menu items?
A.They are a necessary part in people’s life. B.They offer limited choices.
C.They depend on digital platforms. D.They provide reasonable suggestions.
19.What does the text mainly talk about?
A.The variety of humans’ choices. B.Standardized options in daily life.
C.AI settlements to the option bottlenecks. D.Recommendation systems for digital media.

6.(2024·福建·模拟预测)
Our species’ incredible capacity to quickly acquire words — from 300 by age 2 to over 1,000 by age 4 — isn’t fully understood. Some cognitive scientists and linguists have theorized that people are born with built-in expectations and logical constraints (约束) that make this possible. Now, however, machine-learning research is showing that preprogrammed assumptions aren’t necessary to swiftly pick up word meanings from minimal data.
A team of scientists has successfully trained a basic artificial intelligence model to match images to words using just 61 hours of naturalistic footage (镜头) and sound, previously collected from a child named Sam in 2013 and 2014. Although it’s a small slice of a child’s life, it was apparently enough to prompt the AI to figure out what certain words mean.
The findings suggest that language acquisition could be simpler than previously thought. Maybe children “don’t need a custom-built, high-class language-specific mechanism” to efficiently grasp word meanings, says Jessica Sullivan, an associate professor of psychology at Skidmore College. “This is a really beautiful study, ” she says, because it offers evidence that simple information from a child’s worldview is rich enough to kick-start pattern recognition and word comprehension.
The new study also demonstrates that it’s possible for machines to learn similarly to the way that humans do. Large language models are trained on enormous amounts of data that can include billions and sometimes trillions of word combinations. Humans get by on orders of magnitude less information, says the paper’s lead author Wai Keen Vong. With the right type of data, that gap between machine and human learning could narrow dramatically.
Yet additional study is necessary in certain aspects of the new research. For one, the scientists acknowledge that their findings don’t prove how children acquire words. Moreover, the study only focused on recognizing the words for physical objects.
Still, it’s a step toward a deeper understanding of our own mind, which can ultimately help us improve human education, says Eva Portelance, a computational linguistics researcher. She notes that AI research can also bring clarity to long-unanswered questions about ourselves. “We can use these models in a good way, to benefit science and society, ” Portelance adds.
20.What is a significant finding of machine-learning research?
A.Vocabulary increases gradually with age.
B.Vocabulary can be acquired from minimal data.
C.Language acquisition is tied to built-in expectations.
D.Language acquisition is as complex as formerly assumed.
21.What does the underlined word “prompt” in paragraph 2 mean?
A.Facilitate. B.Persuade. C.Advise. D.Expect.
22.What is discussed about the new research in paragraph 5?
A.Its limitations. B.Its strengths. C.Its uniqueness. D.Its process.
23.What is Eva Portelance’s attitude to the AI research?
A.Doubtful. B.Cautious. C.Dismissive. D.Positive.
