Popular AIs Head-to-Head: OpenAI Beats DeepSeek on Sentence-Level Reasoning
By Associated Press
Published April 17, 2025

Comparing AI reasoning abilities reveals OpenAI's o1 model surpasses DeepSeek's R1 in generating accurate, sentence-level citations. (Shutterstock)

ChatGPT and other AI chatbots based on large language models are known to occasionally make things up, including scientific and legal citations. It turns out that measuring how accurate an AI model’s citations are is a good way of assessing the model’s reasoning abilities.

An AI model “reasons” by breaking down a query into steps and working through them in order. Think of how you learned to solve math word problems in school.
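
As a rough illustration, here is how that kind of step-by-step decomposition might look for a simple word problem, sketched in Python. The problem and the steps are invented for illustration.

```python
# A toy illustration of step-by-step reasoning on a word problem.
# The problem and the steps are invented for illustration only.

problem = ("A library holds 120 books, lends out 45, "
           "and receives 30 donations. How many books remain?")

steps = [
    ("Start with the initial count", 120),
    ("Subtract the 45 books lent out", 120 - 45),  # 75
    ("Add the 30 donated books", 75 + 30),         # 105
]

for description, running_total in steps:
    print(f"{description}: {running_total}")
# Working through the steps in order yields the answer: 105.
```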

Ideally, to generate citations an AI model would understand the key concepts in a document, generate a ranked list of relevant papers to cite, and provide convincing reasoning for how each suggested paper supports the corresponding text. It would highlight specific connections between the text and the cited research, clarifying why each source matters.
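
To make that concrete, here is a minimal, self-contained sketch of such a pipeline in Python. The keyword-overlap scoring is a toy stand-in invented for this illustration, not the method used by any of the models or the benchmark discussed here.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    paper_id: str
    relevance: float
    reasoning: str  # why this paper supports the sentence

def key_terms(text: str) -> set[str]:
    # Toy concept extraction: lowercased words longer than three characters.
    return {w.strip(".,") for w in text.lower().split() if len(w) > 3}

def cite_sentence(sentence: str, corpus: list[dict]) -> list[Citation]:
    terms = key_terms(sentence)
    citations = []
    for paper in corpus:
        shared = terms & key_terms(paper["abstract"])
        if shared:
            citations.append(Citation(
                paper_id=paper["id"],
                relevance=len(shared) / len(terms),
                reasoning=f"Shares the concepts {sorted(shared)} with the sentence.",
            ))
    # Best-supported papers first, mirroring the ranked list described above.
    return sorted(citations, key=lambda c: c.relevance, reverse=True)

corpus = [
    {"id": "paper-1", "abstract": "Neurons and cognition in biological systems."},
    {"id": "paper-2", "abstract": "Database indexing for artificial intelligence."},
]
print(cite_sentence("How cognition emerges from networks of neurons.", corpus))
```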

The question is, can today’s models be trusted to make these connections and provide clear reasoning that justifies their source choices? The answer matters beyond citation accuracy, because it speaks to how useful and accurate large language models are for any information retrieval purpose.

Developing a Benchmark for AI Reasoning

I’m a computer scientist. My colleagues (researchers from the AI Institute at the University of South Carolina, Ohio State University, and the University of Maryland, Baltimore County) and I have developed the Reasons benchmark to test how well large language models can automatically generate research citations and provide understandable reasoning.

We used the benchmark to compare the performance of two popular AI reasoning models, DeepSeek’s R1 and OpenAI’s o1. Though DeepSeek made headlines with its stunning efficiency and cost-effectiveness, the Chinese upstart has a way to go to match OpenAI’s reasoning performance.

The Importance of Sentence-Level Specificity

The accuracy of citations has a lot to do with whether the AI model is reasoning about information at the sentence level rather than paragraph or document level. Paragraph-level and document-level citations can be thought of as throwing a large chunk of information into a large language model and asking it to provide many citations.

In this process, the large language model overgeneralizes and misinterprets individual sentences. The user ends up with citations that explain the whole paragraph or document, not the relatively fine-grained information in the sentence.

Further, reasoning suffers when you ask a large language model to read through an entire document. These models rely mostly on memorized patterns, which they are typically better at finding at the beginning and end of longer texts than in the middle. This makes it difficult for them to fully understand all the important information throughout a long document.

Large language models get confused because paragraphs and documents hold a lot of information, which affects citation generation and the reasoning process. Consequently, reasoning from large language models over paragraphs and documents becomes more like summarizing or paraphrasing.

The Reasons benchmark addresses this weakness by examining large language models’ citation generation and reasoning.

Testing Citations and Reasoning Performance

Following the release of DeepSeek R1 in January 2025, we wanted to examine its accuracy in generating citations and its quality of reasoning and compare it with OpenAI’s o1 model. We created a paragraph that had sentences from different sources, gave the models individual sentences from this paragraph, and asked for citations and reasoning.
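
In outline, that setup looks something like the sketch below. The sentences are invented placeholders, and ask_model is a hypothetical stand-in for a real model API call, which is not shown here.

```python
# A sketch of the test construction: sentences drawn from different source
# papers are stitched into one paragraph, and the model is queried one
# sentence at a time. The sentences are placeholders, and ask_model is a
# hypothetical stand-in for a call to o1 or R1, not a real API.

sources = {
    "paper-A": "A sentence drawn from a neurons-and-cognition paper.",
    "paper-B": "A sentence drawn from a human-computer-interaction paper.",
    "paper-C": "A sentence drawn from a databases paper.",
}

paragraph = " ".join(sources.values())

def ask_model(prompt: str) -> str:
    # Stand-in: a real test would send the prompt to the model's API here.
    raise NotImplementedError

for source_id, sentence in sources.items():
    prompt = (f"Here is a paragraph: {paragraph}\n"
              f"For the sentence '{sentence}', provide the citation you "
              f"believe supports it and explain your reasoning.")
    # response = ask_model(prompt)  # scored against the known source_id
```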

To start our test, we developed a small test bed of about 4,100 research articles on four key topics related to human brains and computer science: neurons and cognition, human-computer interaction, databases and artificial intelligence. We evaluated the models using two measures: F-1 score, which measures how accurate the provided citations are, and hallucination rate, which measures how sound the model’s reasoning is, that is, how often it produces an inaccurate or misleading response.
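
For readers unfamiliar with these measures: under its standard definition, F-1 is the harmonic mean of precision and recall, and a hallucination rate is simply the fraction of responses judged inaccurate or misleading. The values below are invented to show the arithmetic; they are not numbers from our study.

```python
# Standard definitions of the two measures. The precision/recall split and
# the counts below are invented to show the arithmetic.

def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def hallucination_rate(num_flawed: int, num_responses: int) -> float:
    # Fraction of responses judged inaccurate or misleading.
    return num_flawed / num_responses

print(round(f1_score(0.70, 0.61), 2))  # 0.65
print(hallucination_rate(35, 100))     # 0.35, i.e. 35%
```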

Our testing revealed significant performance differences between OpenAI o1 and DeepSeek R1 across different scientific domains. OpenAI’s o1 did well connecting information between different subjects, such as understanding how research on neurons and cognition connects to human-computer interaction and then to concepts in artificial intelligence, while remaining accurate. Its performance metrics consistently outpaced DeepSeek R1’s across all evaluation categories, especially in reducing hallucinations and successfully completing assigned tasks.

OpenAI o1 was better at combining ideas semantically, whereas R1 focused on generating a response for every attribution task, which in turn increased its hallucination rate during reasoning. In the attribution-based reasoning task, OpenAI o1 had a hallucination rate of approximately 35%, compared with DeepSeek R1’s rate of nearly 85%.

In terms of accuracy and linguistic competence, OpenAI o1 scored about 0.65 on the F-1 test, which means it was right about 65% of the time when answering questions. It also scored about 0.70 on the BLEU test, which measures how well a language model writes in natural language. These are pretty good scores.
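
BLEU works by counting overlapping word sequences, called n-grams, between a model’s output and reference text. Here is a minimal example using NLTK’s implementation; the sentences are invented, and smoothing keeps the score nonzero when longer n-grams have no matches.

```python
# BLEU counts overlapping n-grams between a hypothesis and reference text.
# A minimal example with NLTK; the sentences are invented.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cited", "paper", "supports", "this", "claim"]]
hypothesis = ["the", "cited", "study", "supports", "this", "claim"]

smooth = SmoothingFunction().method1
score = sentence_bleu(reference, hypothesis, smoothing_function=smooth)
print(round(score, 2))  # closer to 1.0 means more reference-like phrasing
```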

DeepSeek R1 scored lower, with about 0.35 on the F-1 test, meaning it was right about 35% of the time. Its BLEU score was only about 0.2, which means its writing wasn’t as natural-sounding as o1’s. This shows that o1 was better at presenting its information in clear, natural language.

OpenAI Holds the Advantage

On other benchmarks, DeepSeek R1 performs on par with OpenAI o1 on math, coding and scientific reasoning tasks. But the substantial difference on our benchmark suggests that o1 provides more reliable information, while R1 struggles with factual consistency.

Though we included other models in our comprehensive testing, the performance gap between o1 and R1 specifically highlights the current competitive landscape in AI development, with OpenAI’s offering maintaining a significant advantage in reasoning and knowledge integration capabilities.

These results suggest that OpenAI still has a leg up when it comes to source attribution and reasoning, possibly due to the nature and volume of the data it was trained on. The company recently announced its deep research tool, which can create reports with citations, ask follow-up questions and provide reasoning for the generated response.

The jury is still out on the tool’s value for researchers, but the caveat remains for everyone: Double-check all citations an AI gives you.

This article is republished from The Conversation under a Creative Commons license. Read the original article here: https://theconversation.com/popular-ais-head-to-head-openai-beats-deepseek-on-sentence-level-reasoning-249109.
