From flattery to debate: Training AI to mirror human reasoning
Generative artificial intelligence systems often respond in agreement, complimenting users in their replies. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.
Rethinking rush hour with vehicle automation
It's often the worst part of many people's day—bottlenecked, rush-hour traffic. When the daily commute backs up, drivers lose time, burn fuel and waste energy. Researchers at the U.S. Department of Energy's National Transportation Research Center at Oak Ridge National Laboratory are tackling this problem with cooperative driving automation (CDA), an emerging technology that allows vehicles and traffic infrastructure to communicate, keeping traffic flowing efficiently and safely.
Small modular reactors gain competitive edge with new digital twin
Advanced nuclear is within reach—and a new digital twin reveals how smarter plant operations can enhance the economic viability and safety of small modular reactors, or SMRs. In collaboration with the University of Tennessee and GE Vernova Hitachi, researchers at Oak Ridge National Laboratory recently published research in the journal Nuclear Science and Engineering on a new risk-informed digital twin designed to enhance operational decision-making for the GE Vernova Hitachi BWRX-300 SMR design.
Is artificial general intelligence already here? A new case that today's LLMs meet key tests
Will artificial intelligence ever be able to reason, learn, and solve problems at levels comparable to humans? Experts at the University of California San Diego believe the answer is yes—and that such artificial general intelligence has already arrived. Four faculty members spanning the humanities, social sciences, and data science tackle the debate in an invited Comment recently published in Nature.
AI-based rotor imbalance measurement surpasses the 0.1-micrometer limit
AI-based rotor imbalance measurement surpasses the 0.1-micrometer limit — KIMM and P&S Co. secure core technology for a top-performance-class automated balancing machine and achieve its first commercialization — — An outcome of responding to Japan's export restrictions… technological self-reliance in precision manufacturing equipment through 0.1 μm-class residual imbalance detection — □ A top-performance-class system that can automatically and precisely measure and correct minute mass imbalances in rotating bodies
A bot-only social media platform: What the Moltbook experiment is teaching us about AI
What happens when you create a social media platform that only AI bots can post to? The answer, it turns out, is both entertaining and concerning. Moltbook is exactly that—a platform where artificial intelligence agents chat among themselves and humans can only watch from the sidelines.
Does AI understand word impressions like humans do?
By now, it's no secret that large language models (LLMs) are experts at mimicking natural language. Trained on vast troves of data, these models have proven themselves capable of generating text so convincing that it regularly appears humanlike to readers. But is there any difference between how we think about a word and how an LLM does?
Inner 'self-talk' helps AI models learn, adapt and multitask more easily
Talking to oneself is a trait that feels inherently human. Our inner monologs help us organize our thoughts, make decisions, and understand our emotions. But it's not just humans who can reap the benefits of such self-talk.
Emoticons can confuse LLMs, causing 'silent failures' in coding responses
Large language models (LLMs), artificial intelligence (AI) systems that can process and generate texts in various languages, are now widely used by people worldwide. These models have proved to be effective in rapidly sourcing information, answering questions, creating written content for specific applications and writing computer code.
'TransMiter' technique transplants learned knowledge between AI models
How inconvenient would it be if you had to manually transfer every contact and photo from scratch every time you switched to a new smartphone? Current artificial intelligence (AI) models face a similar predicament. Whenever a superior new AI model—such as a new version of ChatGPT—emerges, it has to be retrained with massive amounts of data and at a high cost to acquire specialized knowledge in specific fields. Now a Korean research team has developed a "knowledge transplantation" technology between AI models that can resolve this inefficiency.
Foundation AI models trained on physics, not words, are driving scientific discovery
While popular AI models such as ChatGPT are trained on language or photographs, new models created by researchers from the Polymathic AI collaboration are trained using real scientific datasets. The models are already using knowledge from one field to address seemingly completely different problems in another.
Moore's law: The famous rule of computing has reached the end of the road, so what comes next?
For half a century, computing advanced in a reassuring, predictable way. Transistors—devices used to switch electrical signals on a computer chip—became smaller. Consequently, computer chips became faster, and society quietly assimilated the gains almost without noticing.
AI is already writing almost one-third of new software code, study shows
Generative AI is reshaping software development—and fast. A new study published in Science shows that AI-assisted coding is spreading rapidly, though unevenly: in the U.S., the share of new code relying on AI rose from 5% in 2022 to 29% in early 2025, compared with just 12% in China. AI usage is highest among less experienced programmers, but productivity gains go to seasoned developers.
AI models mirror human 'us vs. them' social biases, study shows
Large language models (LLMs), the computational models underpinning ChatGPT, Gemini and other widely used artificial intelligence (AI) platforms, can rapidly source information and generate texts tailored for specific purposes. Because these models are trained on large amounts of human-written text, they can exhibit human-like biases—inclinations to prefer specific stimuli, ideas or groups in ways that deviate from objectivity.
Making blockchain fast enough for IoT networks
The vision of a fully connected world is rapidly becoming a reality through the Internet of Things (IoT)—a growing network of physical devices that collect and share data over the Internet, including everything from small sensors to autonomous vehicles and industrial equipment.
New method helps AI reason like humans without extra training data
A study led by UC Riverside researchers offers a practical fix to one of artificial intelligence's toughest challenges by enabling AI systems to reason more like humans—without requiring new training data beyond test questions.
Using AI to understand how emotions are formed
Emotions are a fundamental part of human psychology—a complex process that has long distinguished us from machines. Even advanced artificial intelligence (AI) lacks the capacity to feel. However, researchers are now exploring whether the formation of emotions can be computationally modeled, providing machines with a deeper, more human-like understanding of emotional states.
Mistaken correlations: Why it's critical to move beyond overly aggregated machine-learning metrics
MIT researchers have identified significant examples of machine-learning models failing when applied to data other than what they were trained on, raising questions about the need to retest a model whenever it is deployed in a new setting.
Creative talent: Has AI knocked humans out?
Are generative artificial intelligence systems such as ChatGPT truly creative? A research team led by Professor Karim Jerbi from the Department of Psychology at the Université de Montréal, and including AI pioneer Yoshua Bengio, also a professor at Université de Montréal, has just published the largest comparative study ever conducted on the creativity of large language models versus humans.
Findings on phonetic reduction in speech could help make AI voices more natural-sounding
A speech study by a research team from The University of Texas at El Paso has identified an underappreciated aspect of speech in English and Spanish speakers that could lead to improvements in artificial intelligence (AI) spoken dialogue systems.
AI models tested on Dungeons & Dragons to assess long-term decision-making
Large language models such as ChatGPT are learning to play Dungeons & Dragons. The reason? Simulating and playing the popular tabletop role-playing game provides a good testing ground for AI agents that need to function independently for long stretches of time.