<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:g-custom="http://base.google.com/cns/1.0" xmlns:media="http://search.yahoo.com/mrss/" version="2.0">
  <channel>
    <title>MIT News - Artificial intelligence</title>
    <link>http://www.titanseo.ai</link>
    <description />
    <atom:link href="http://www.titanseo.ai/feed/rss2" type="application/rss+xml" rel="self" />
    <item>
      <title>Tips for writing great posts that increase your site traffic</title>
      <link>http://www.titanseo.ai/tips-for-writing-great-posts-that-increase-your-site-traffic</link>
      <description />
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;div&gt;&#xD;
    &lt;p&gt;&#xD;
      
           Write about something you know. If you don’t know much about a specific topic that will interest your readers, invite an expert to write about it.
          &#xD;
    &lt;/p&gt;&#xD;
  &lt;/div&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div&gt;&#xD;
  &lt;img src="https://irt-cdn.multiscreensite.com/md/unsplash/dms3rep/multi/desktop/photo-1455849318743-b2233052fcff.jpg" alt="A couple of people standing on a sidewalk with the words passion led us here written on the ground." title=""/&gt;&#xD;
  &lt;span&gt;&#xD;
  &lt;/span&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;div&gt;&#xD;
    &lt;b&gt;&#xD;
      
           Speak to your audience
          &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
           You know your audience better than anyone else, so keep them in mind as you write your blog posts. Write about things they care about. If you have a company Facebook page, look there to find topics to write about.
          &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
           Take a few moments to plan your post
          &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
           Once you have a great idea for a post, write the first draft. Some people like to start with the title and then work on the paragraphs. Other people like to start with subtitles and go from there. Choose the method that works for you.
          &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
           Don’t forget to add images
          &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
           Be sure to include a few high-quality images in your blog. Images break up the text and make it more readable. They can also convey emotions or ideas that are hard to put into words.
          &#xD;
    &lt;/p&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
           Edit carefully before posting
          &#xD;
    &lt;/b&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;p&gt;&#xD;
      
           Once you’re happy with the text, put it aside for a day or two, and then re-read it. You’ll probably find a few things you want to add, and a couple more that you want to remove. Have a friend or colleague look it over to make sure there are no mistakes. When your post is error-free, set it up in your blog and publish.
          &#xD;
    &lt;/p&gt;&#xD;
  &lt;/div&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irt-cdn.multiscreensite.com/md/dmtmpl/dms3rep/multi/woman_coffee_street.jpg" length="417830" type="image/jpeg" />
      <pubDate>Thu, 15 Aug 2024 16:09:06 GMT</pubDate>
      <author>admin@lingows.com (Lingows Admin)</author>
      <guid>http://www.titanseo.ai/tips-for-writing-great-posts-that-increase-your-site-traffic</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irt-cdn.multiscreensite.com/md/dmtmpl/dms3rep/multi/woman_coffee_street.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
      <media:content medium="image" url="https://irt-cdn.multiscreensite.com/md/dmtmpl/dms3rep/multi/woman_coffee_street.jpg">
        <media:description>main image</media:description>
      </media:content>
    </item>
    <item>
      <title>Helping robots practice skills independently to adapt to unfamiliar environments</title>
      <link>http://www.titanseo.ai/2024/helping-robots-practice-skills-independently-adapt-unfamiliar-environments-0808</link>
      <description>A new algorithm helps robots practice skills like sweeping and placing objects, potentially helping them improve at important tasks in houses, hospitals, and factories.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The phrase “practice makes perfect” is usually reserved for humans, but it’s also a great maxim for robots newly deployed in unfamiliar environments.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     Picture a robot arriving in a warehouse. It comes packaged with the skills it was trained on, like placing an object, and now it needs to pick items from a shelf it’s not familiar with. At first, the machine struggles with this, since it needs to get acquainted with its new surroundings. To improve, the robot will need to identify which skills within the overall task need work, and then specialize (or parameterize) that action.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     A human onsite could program the robot to optimize its performance, but researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and The AI Institute have developed a more effective alternative. Presented at the Robotics: Science and Systems Conference last month, their “Estimate, Extrapolate, and Situate” (EES) algorithm enables these machines to practice on their own, potentially helping them improve at useful tasks in factories, households, and hospitals.&#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;Sizing up the situation&lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    To help robots get better at activities like sweeping floors, EES works with a vision system that locates and tracks the machine’s surroundings. Then, the algorithm estimates how reliably the robot executes an action (like sweeping) and whether it would be worthwhile to practice more. EES forecasts how well the robot could perform the overall task if it refines that particular skill, and finally, it practices. The vision system subsequently checks whether that skill was done correctly after each attempt.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    EES could come in handy in places like a hospital, factory, house, or coffee shop. For example, if you wanted a robot to clean up your living room, it would need help practicing skills like sweeping. According to Nishanth Kumar SM ’24 and his colleagues, though, EES could help that robot improve without human intervention, using only a few practice trials.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     “Going into this project, we wondered if this specialization would be possible in a reasonable amount of samples on a real robot,” says Kumar, co-lead author of a &lt;a href="https://arxiv.org/pdf/2402.15025.pdf" target="_blank"&gt;paper&lt;/a&gt; describing the work, a PhD student in electrical engineering and computer science, and a CSAIL affiliate. “Now, we have an algorithm that enables robots to get meaningfully better at specific skills in a reasonable amount of time with tens or hundreds of data points, an upgrade from the thousands or millions of samples that a standard reinforcement learning algorithm requires.”&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;See Spot sweep&lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     EES’s knack for efficient learning was evident when implemented on Boston Dynamics’ Spot quadruped during research trials at The AI Institute. The robot, which has an arm attached to its back, completed manipulation tasks after practicing for a few hours. In one demonstration, the robot learned how to securely place a ball and ring on a slanted table in roughly three hours. In another, the algorithm guided the machine to improve at sweeping toys into a bin within about two hours. Both results appear to be an upgrade from previous frameworks, which would have likely taken more than 10 hours per task.&#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    “We aimed to have the robot collect its own experience so it can better choose which strategies will work well in its deployment,” says co-lead author Tom Silver SM ’20, PhD ’24, an electrical engineering and computer science (EECS) alumnus and CSAIL affiliate who is now an assistant professor at Princeton University. “By focusing on what the robot knows, we sought to answer a key question: In the library of skills that the robot has, which is the one that would be most useful to practice right now?”&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     EES could eventually help streamline autonomous practice for robots in new deployment environments, but for now, it comes with a few limitations. For starters, the researchers used tables that were low to the ground, which made it easier for the robot to see its objects. Kumar and Silver also 3D printed an attachable handle that made the brush easier for Spot to grab. The robot didn’t detect some items and identified objects in the wrong places, so the researchers counted those errors as failures.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;Giving robots homework&lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     The researchers note that the practice speeds from the physical experiments could be accelerated further with the help of a simulator. Instead of physically working at each skill autonomously, the robot could eventually combine real and virtual practice. They hope to make their system faster with less latency, engineering EES to overcome the imaging delays the researchers experienced. In the future, they may investigate an algorithm that reasons over sequences of practice attempts instead of planning which skills to refine.&#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    “Enabling robots to learn on their own is both incredibly useful and extremely challenging,” says Danfei Xu, an assistant professor in the School of Interactive Computing at Georgia Tech and a research scientist at NVIDIA AI, who was not involved with this work. “In the future, home robots will be sold to all sorts of households and expected to perform a wide range of tasks. We can’t possibly program everything they need to know beforehand, so it’s essential that they can learn on the job. However, letting robots loose to explore and learn without guidance can be very slow and might lead to unintended consequences. The research by Silver and his colleagues introduces an algorithm that allows robots to practice their skills autonomously in a structured way. This is a big step towards creating home robots that can continuously evolve and improve on their own.”&#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    Silver and Kumar’s co-authors are The AI Institute researchers Stephen Proulx and Jennifer Barry, plus four CSAIL members: Northeastern University PhD student and visiting researcher Linfeng Zhao, MIT EECS PhD student Willie McClinton, and MIT EECS professors Leslie Pack Kaelbling and Tomás Lozano-Pérez. Their work was supported, in part, by The AI Institute, the U.S. National Science Foundation, the U.S. Air Force Office of Scientific Research, the U.S. Office of Naval Research, the U.S. Army Research Office, and MIT Quest for Intelligence, with high-performance computing resources from the MIT SuperCloud and Lincoln Laboratory Supercomputing Center.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-csail-EES-algorithm.jpg" length="117828" type="image/jpeg" />
      <pubDate>Thu, 08 Aug 2024 14:45:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/helping-robots-practice-skills-independently-adapt-unfamiliar-environments-0808</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-csail-EES-algorithm.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>Dimitris Bertsimas named vice provost for open learning</title>
      <link>http://www.titanseo.ai/2024/dimitris-bertsimas-named-vice-provost-open-learning-0808</link>
      <description>Leveraging more than 35 years of experience at MIT, Bertsimas will work with partners across the Institute to transform teaching and learning on and off campus.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Dimitris Bertsimas PhD ’88 has been appointed vice provost for open learning at MIT, effective Sept. 1. In this role, Bertsimas, who is the Boeing Leaders for Global Operations Professor of Management at MIT, will work with partners across the Institute to transform teaching and learning on and off MIT’s campus.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Provost Cynthia Barnhart announced Bertsimas’s appointment in an email to the MIT community today.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “As the vice provost for open learning, Dimitris will work with faculty and staff across MIT to shape Open Learning’s next chapter,” Barnhart wrote. “Dimitris will be a member of my leadership team as well as Academic Council, and he will work closely with the school and college deans, faculty, and staff to advance research into the science of learning with the goal of innovating, studying, and scaling up digital technologies on campus and for the benefit of the world.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    She added, “I am thrilled that Dimitris has agreed to serve the Institute in this capacity.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     Bertsimas comes to MIT Open Learning from the MIT Sloan School of Management, where he is associate dean for the master of business analytics program and a professor of operations research. Bertsimas has been a faculty member at the Institute since 1988, after completing his PhD in operations research and applied mathematics at MIT. He works in the areas of optimization and machine learning and their applications, including in health care and medicine. Bertsimas developed and launched the master of business analytics program at MIT and has served as its inaugural faculty director since 2013. The program has been rated No. 1 in analytics in the world every year since its inception. Passionate about teaching, research, and entrepreneurship, Bertsimas is no stranger to MIT Open Learning. He developed &lt;a href="https://www.edx.org/learn/analytics/massachusetts-institute-of-technology-the-analytics-edge" target="_blank"&gt;15.071&lt;/a&gt; (The Analytics Edge), available on &lt;em&gt;MITx&lt;/em&gt;, which has attracted hundreds of thousands of learners since its launch in 2013.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     In his new role, Bertsimas will oversee MIT Open Learning’s product offerings — including OpenCourseWare, &lt;em&gt;MITx&lt;/em&gt; courses, MicroMasters programs, xPRO courses, MIT Horizon, Jameel World Education Lab, MIT pK-12, and others — as well as Open Learning’s infrastructure, finances, and operations.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “I am excited about the opportunity to lead Open Learning and to advance its mission,” says Bertsimas. “I have particular interest in introducing students of all ages, from all backgrounds — science, engineering, management, architecture/planning, law, medicine, the social sciences, the humanities, and the arts — to the art of the feasible in AI and its potential to revolutionize fields.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Bertsimas is a member of the National Academy of Engineering and a recipient of various research and teaching awards, including the John von Neumann Theory Prize from INFORMS. He views MIT Open Learning as central to the Institute’s mission.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “OpenCourseWare is arguably the most significant accomplishment of MIT in the arena of open learning,” says Bertsimas, who has co-authored seven graduate-level books and co-founded 10 analytics companies. “MIT led the way in educating millions of people around the world by having access to MIT classes. I aspire for Open Learning to equal and possibly surpass the impact of OpenCourseWare in the new era of AI.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Bertsimas succeeds Eric Grimson PhD ’80, who served as interim vice president for open learning for the past two years. Grimson, the Bernard M. Gordon Professor of Medical Engineering and professor of computer science and engineering, will continue to serve the Institute as chancellor for academic advancement.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     Grimson’s connection to Open Learning dates back to 2012, when he co-taught two of the earliest courses available on &lt;em&gt;MITx&lt;/em&gt;, which remain among the world’s most popular online courses: &lt;a href="https://www.edx.org/learn/computer-science/massachusetts-institute-of-technology-introduction-to-computer-science-and-programming-using-python" target="_blank"&gt;6.00.1x&lt;/a&gt; (Introduction to Computer Science and Programming in Python) and &lt;a href="https://www.edx.org/learn/computer-science/massachusetts-institute-of-technology-introduction-to-computational-thinking-and-data-science" target="_blank"&gt;6.00.2x&lt;/a&gt; (Introduction to Computational Thinking and Data Science).&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     In July 2022, Grimson was named interim vice president for open learning. During his time at the helm of MIT Open Learning, Grimson expanded outreach to the Institute’s school councils and the college, providing comprehensive information on opportunities for faculty members to use Open Learning resources. He advanced research into artificial intelligence’s impact on education, including experiments in creating AI-based tutors for introductory online courses. Grimson oversaw the expansion of &lt;a href="https://mitxonline.mit.edu/"&gt;&lt;em&gt;MITx&lt;/em&gt; Online&lt;/a&gt;, a platform that serves as an alternative to edX for delivery of &lt;em&gt;MITx&lt;/em&gt;’s digital courses, as well as the development of a soon-to-be-launched portal that will unify access to all MIT online educational content for learners worldwide.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “When former MIT President Rafael Reif launched Open Learning, his stated goals were to educate millions of learners around the world, to change how we teach on campus, and to learn about learning and use that knowledge to guide our innovations in teaching,” Grimson says. “I share that vision, and I have been delighted to be part of Open Learning as it strives to revolutionize teaching and learning, both on campus and off. Seeing the incredible impact that MIT has globally in providing easy access to high-quality educational experiences is one of the great pleasures from being part of MIT.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Bertsimas’s appointment follows an internal search launched in January. The search advisory group was chaired by Duane Boning, the Clarence J. LeBel Professor of Electrical Engineering and Computer Science. As part of its work, the advisory group sought input from current and former leaders of Open Learning, members of the Open Learning faculty advisory committees, MIT deans, Open Learning staff, and leaders of online learning initiatives at other universities.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                     “With his exceptional background and deep commitment to MIT, Dimitris is a leader who will get big things done on behalf of Open Learning and all of MIT, in this moment of time when learning technologies are fast evolving and provide enormous opportunities for educational impact,” Boning says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/mit-BERTSIMAS-Dimitris-provost.jpg" length="135421" type="image/jpeg" />
      <pubDate>Thu, 08 Aug 2024 11:45:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/dimitris-bertsimas-named-vice-provost-open-learning-0808</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/mit-BERTSIMAS-Dimitris-provost.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>Precision home robots learn with real-to-sim-to-real</title>
      <link>http://www.titanseo.ai/2024/precision-home-robotics-real-sim-real-0731</link>
      <description>CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    At the top of many automation wish lists is a particularly time-consuming task: chores.&#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    The moonshot of many roboticists is cooking up the proper hardware and software combination so that a machine can learn “generalist” policies (the rules and strategies that guide robot behavior) that work everywhere, under all conditions. Realistically, though, if you have a home robot, you probably don’t care much about it working for your neighbors. MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers decided, with that in mind, to attempt to find a solution to easily train robust robot policies for very specific environments.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    “We aim for robots to perform exceptionally well under disturbances, distractions, varying lighting conditions, and changes in object poses, all within a single environment,” says Marcel Torne Villasevil, MIT CSAIL research assistant in the Improbable AI lab and lead author on a recent &lt;a href="https://arxiv.org/abs/2403.03949"&gt;paper&lt;/a&gt; about the work. “We propose a method to create digital twins on the fly using the latest advances in computer vision. With just their phones, anyone can capture a digital replica of the real world, and the robots can train in a simulated environment much faster than the real world, thanks to GPU parallelization. Our approach eliminates the need for extensive reward engineering by leveraging a few real-world demonstrations to jump-start the training process.”&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;Taking your robot home&lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    The new system, called RialTo, is of course a little more complicated than just a simple wave of a phone and (boom!) home bot at your service. It begins by using your device to scan the target environment with tools like NeRFStudio, ARCode, or Polycam. Once the scene is reconstructed, users can upload it to RialTo’s interface to make detailed adjustments, add necessary joints to the robots, and more.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    The refined scene is exported and brought into the simulator. Here, the aim is to develop a policy based on real-world actions and observations, such as one for grabbing a cup on a counter. These real-world demonstrations are replicated in the simulation, providing some valuable data for reinforcement learning. “This helps in creating a strong policy that works well in both the simulation and the real world. An enhanced algorithm using reinforcement learning helps guide this process, to ensure the policy is effective when applied outside of the simulator,” says Torne.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    Testing showed that RialTo created strong policies for a variety of tasks, whether in controlled lab settings or more unpredictable real-world environments, improving 67 percent over imitation learning with the same number of demonstrations. The tasks involved opening a toaster, placing a book on a shelf, putting a plate on a rack, placing a mug on a shelf, opening a drawer, and opening a cabinet. For each task, the researchers tested the system’s performance under three increasing levels of difficulty: randomizing object poses, adding visual distractors, and applying physical disturbances during task executions. When paired with real-world data, the system outperformed traditional imitation-learning methods, especially in situations with lots of visual distractions or physical disruptions.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    “These experiments show that if we care about being very robust to one particular environment, the best idea is to leverage digital twins instead of trying to obtain robustness with large-scale data collection in diverse environments,” says Pulkit Agrawal, director of Improbable AI Lab, MIT electrical engineering and computer science (EECS) associate professor, MIT CSAIL principal investigator, and senior author on the work.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    As for limitations, RialTo currently takes three days to train fully. To speed this up, the team suggests improving the underlying algorithms and using foundation models. Training in simulation also has its limits: effortless sim-to-real transfer remains difficult, as does simulating deformable objects or liquids.&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
      
    
      The next level
    
  
    
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    So what’s next for RialTo’s journey? Building on previous efforts, the scientists are working on preserving robustness against various disturbances while improving the model’s adaptability to new environments. “Our next endeavor is to use this approach with pre-trained models, accelerating the learning process, minimizing human input, and achieving broader generalization capabilities,” says Torne.
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    “We’re incredibly enthusiastic about our 'on-the-fly' robot programming concept, where robots can autonomously scan their environment and learn how to solve specific tasks in simulation. While our current method has limitations — such as requiring a few initial demonstrations by a human and significant compute time for training these policies (up to three days) — we see it as a significant step towards achieving 'on-the-fly' robot learning and deployment,” says Torne. “This approach moves us closer to a future where robots won’t need a preexisting policy that covers every scenario. Instead, they can rapidly learn new tasks without extensive real-world interaction. In my view, this advancement could expedite the practical application of robotics far sooner than relying solely on a universal, all-encompassing policy.”
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    “To deploy robots in the real world, researchers have traditionally relied on methods such as imitation learning from expert data, which can be expensive, or reinforcement learning, which can be unsafe,” says Zoey Chen, a computer science PhD student at the University of Washington who wasn’t involved in the paper. “RialTo directly addresses both the safety constraints of real-world RL [reinforcement learning], and efficient data constraints for data-driven learning methods, with its novel real-to-sim-to-real pipeline. This novel pipeline not only ensures safe and robust training in simulation before real-world deployment, but also significantly improves the efficiency of data collection. RialTo has the potential to significantly scale up robot learning and allows robots to adapt to complex real-world scenarios much more effectively.”
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    "Simulation has shown impressive capabilities on real robots by providing inexpensive, possibly infinite data for policy learning,” adds Marius Memmel, a computer science PhD student at the University of Washington who wasn’t involved in the work. “However, these methods are limited to a few specific scenarios, and constructing the corresponding simulations is expensive and laborious. RialTo provides an easy-to-use tool to reconstruct real-world environments in minutes instead of hours. Furthermore, it makes extensive use of collected demonstrations during policy learning, minimizing the burden on the operator and reducing the sim2real gap. RialTo demonstrates robustness to object poses and disturbances, showing incredible real-world performance without requiring extensive simulator construction and data collection.”
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    Torne wrote this paper alongside senior authors Abhishek Gupta, assistant professor at the University of Washington, and Agrawal. Four other CSAIL members are also credited: EECS PhD student Anthony Simeonov SM ’22, research assistant Zechu Li, undergraduate student April Chan, and Tao Chen PhD ’24. Improbable AI Lab and WEIRD Lab members also contributed valuable feedback and support in developing this project. 
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    
    
  
    This work was supported, in part, by the Sony Research Award, the U.S. government, and Hyundai Motor Co., with assistance from the WEIRD (Washington Embodied Intelligence and Robotics Development) Lab. The researchers presented their work at the Robotics: Science and Systems (RSS) conference earlier this month.
  

  
                  &#xD;
  &lt;/p&gt;&#xD;
  &lt;p&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-RialTo.png" length="1152472" type="image/png" />
      <pubDate>Wed, 31 Jul 2024 19:45:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/precision-home-robotics-real-sim-real-0731</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-RialTo.png">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>Method prevents an AI model from being overconfident about wrong answers</title>
      <link>http://www.titanseo.ai/2024/thermometer-prevents-ai-model-overconfidence-about-wrong-answers-0731</link>
      <description>More efficient than other approaches, the “Thermometer” technique could help someone know when they should trust a large language model.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    People use large language models for a huge array of tasks, from translating an article to identifying financial fraud. However, despite the incredible capabilities and versatility of these models, they sometimes generate inaccurate responses.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    On top of that problem, the models can be overconfident about wrong answers or underconfident about correct ones, making it tough for a user to know when a model can be trusted.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Researchers typically calibrate a machine-learning model to ensure its level of confidence lines up with its accuracy. A well-calibrated model should have less confidence about an incorrect prediction, and vice versa. But because large language models (LLMs) can be applied to a seemingly endless collection of diverse tasks, traditional calibration methods are ineffective.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
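For readers who want the calibration idea in concrete form, here is a minimal illustrative sketch (not code from the researchers): expected calibration error, a standard metric, bins predictions by confidence and compares each bin's average confidence against its actual accuracy.

```python
# Sketch of expected calibration error (ECE). Illustrative only:
# it quantifies how far stated confidence drifts from observed accuracy.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # which confidence bin
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if b:
            avg_conf = sum(c for c, _ in b) / len(b)
            accuracy = sum(1 for _, ok in b if ok) / len(b)
            ece += (len(b) / total) * abs(avg_conf - accuracy)
    return ece
```

A perfectly calibrated model scores zero; an overconfident model, whose average confidence exceeds its accuracy, scores higher.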
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Now, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a calibration method tailored to large language models. Their method, called 
    
  
  
                    &#xD;
    &lt;a href="https://arxiv.org/pdf/2403.08819" target="_blank"&gt;&#xD;
      
                      
    
    
      Thermometer
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    , involves building a smaller, auxiliary model that runs on top of a large language model to calibrate it.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Thermometer is more efficient than other approaches — requiring less power-hungry computation — while preserving the accuracy of the model and enabling it to produce better-calibrated responses on tasks it has not seen before.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    By enabling efficient calibration of an LLM for a variety of tasks, Thermometer could help users pinpoint situations where a model is overconfident about false predictions, ultimately preventing them from deploying that model in a situation where it may fail.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “With Thermometer, we want to provide the user with a clear signal to tell them whether a model’s response is accurate or inaccurate, in a way that reflects the model’s uncertainty, so they know if that model is reliable,” says Maohao Shen, an electrical engineering and computer science (EECS) graduate student and lead author of a 
    
  
  
                    &#xD;
    &lt;a href="https://arxiv.org/pdf/2403.08819" target="_blank"&gt;&#xD;
      
                      
    
    
      paper on Thermometer
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    .
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Shen is joined on the paper by Gregory Wornell, the Sumitomo Professor of Engineering who leads the Signals, Information, and Algorithms Laboratory in the Research Laboratory for Electronics, and is a member of the MIT-IBM Watson AI Lab; senior author Soumya Ghosh, a research staff member in the MIT-IBM Watson AI Lab; as well as others at MIT and the MIT-IBM Watson AI Lab. The research was recently presented at the International Conference on Machine Learning.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      Universal calibration
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Since traditional machine-learning models are typically designed to perform a single task, calibrating them usually involves one task-specific method. On the other hand, since LLMs have the flexibility to perform many tasks, using a traditional method to calibrate that model for one task might hurt its performance on another task.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Calibrating an LLM often involves sampling from the model multiple times to obtain different predictions and then aggregating these predictions to obtain better-calibrated confidence. However, because these models have billions of parameters, the computational costs of such approaches rapidly add up.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “In a sense, large language models are universal because they can handle various tasks. So, we need a universal calibration method that can also handle many different tasks,” says Shen.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    With Thermometer, the researchers developed a versatile technique that leverages a classical calibration method called temperature scaling to efficiently calibrate an LLM for a new task.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    In this context, a “temperature” is a scaling parameter used to adjust a model’s confidence to be aligned with its prediction accuracy. Traditionally, one determines the right temperature using a labeled validation dataset of task-specific examples.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
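Temperature scaling itself is a one-line transformation. The sketch below is illustrative (assumed details, not the Thermometer implementation): divide the logits by a scalar temperature before the softmax, where values above 1 soften overconfident probabilities and values below 1 sharpen them.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Divide logits by the temperature before the softmax; a temperature
    # above 1 flattens the distribution (less confident), below 1 sharpens it.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Because dividing by a positive temperature preserves the ordering of the logits, the most likely answer never changes; only the confidence attached to it does, which is why the technique leaves accuracy untouched.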
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Since LLMs are often applied to new tasks, labeled datasets can be nearly impossible to acquire. For instance, a user who wants to deploy an LLM to answer customer questions about a new product likely does not have a dataset containing such questions and answers.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Instead of using a labeled dataset, the researchers train an auxiliary model that runs on top of an LLM to automatically predict the temperature needed to calibrate it for this new task.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    They use labeled datasets of a few representative tasks to train the Thermometer model, but once it has been trained, it can generalize to new tasks in a similar category without the need for additional labeled data.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    A Thermometer model trained on a collection of multiple-choice question datasets, perhaps including one with algebra questions and one with medical questions, could be used to calibrate an LLM that will answer questions about geometry or biology, for instance.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “The aspirational goal is for it to work on any task, but we are not quite there yet,” Ghosh says.   
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The Thermometer model only needs to access a small part of the LLM’s inner workings to predict the right temperature that will calibrate its prediction for data points of a specific task. 
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      An efficient approach
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Importantly, the technique does not require multiple training runs and only slightly slows the LLM. Plus, since temperature scaling does not alter a model’s predictions, Thermometer preserves its accuracy.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    When they compared Thermometer to several baselines on multiple tasks, it consistently produced better-calibrated uncertainty measures while requiring much less computation.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “As long as we train a Thermometer model on a sufficiently large number of tasks, it should be able to generalize well across any new task; just like a large language model, it is also a universal model,” Shen adds.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The researchers also found that if they train a Thermometer model for a smaller LLM, it can be directly applied to calibrate a larger LLM within the same family.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    In the future, they want to adapt Thermometer for more complex text-generation tasks and apply the technique to even larger LLMs. The researchers also hope to quantify the diversity and number of labeled datasets one would need to train a Thermometer model so it can generalize to a new task.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    This research was funded, in part, by the MIT-IBM Watson AI Lab.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-LLMThermo-01-press.jpg" length="693030" type="image/jpeg" />
      <pubDate>Wed, 31 Jul 2024 04:00:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/thermometer-prevents-ai-model-overconfidence-about-wrong-answers-0731</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-LLMThermo-01-press.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>Study: When allocating scarce resources with AI, randomization can improve fairness</title>
      <link>http://www.titanseo.ai/2024/study-structured-randomization-ai-can-improve-fairness-0724</link>
      <description>Introducing structured randomization into decisions based on machine-learning model predictions can address inherent uncertainties while maintaining efficiency.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Organizations are increasingly utilizing machine-learning models to allocate scarce resources or opportunities. For instance, such models can help companies screen resumes to choose job interview candidates or aid hospitals in ranking kidney transplant patients based on their likelihood of survival.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    When deploying a model, users typically strive to ensure its predictions are fair by reducing bias. This often involves techniques like adjusting the features a model uses to make decisions or calibrating the scores it generates.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    However, researchers from MIT and Northeastern University argue that these fairness methods are not sufficient to address structural injustices and inherent uncertainties. In a 
    
  
  
                    &#xD;
    &lt;a href="https://arxiv.org/html/2404.08592v1" target="_blank"&gt;&#xD;
      
                      
    
    
      new paper
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    , they show how randomizing a model’s decisions in a structured way can improve fairness in certain situations.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    For example, if multiple companies use the same machine-learning model to rank job interview candidates deterministically — without any randomization — then one deserving individual could be the bottom-ranked candidate for every job, perhaps due to how the model weighs answers provided in an online form. Introducing randomization into a model’s decisions could prevent one worthy person or group from always being denied a scarce resource, like a job interview.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Through their analysis, the researchers found that randomization can be especially beneficial when a model’s decisions involve uncertainty or when the same group consistently receives negative decisions.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    They present a framework one could use to introduce a specific amount of randomization into a model’s decisions by allocating resources through a weighted lottery. This method, which an individual can tailor to fit their situation, can improve fairness without hurting the efficiency or accuracy of a model.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “Even if you could make fair predictions, should you be deciding these social allocations of scarce resources or opportunities strictly off scores or rankings? As things scale, and we see more and more opportunities being decided by these algorithms, the inherent uncertainties in these scores can be amplified. We show that fairness may require some sort of randomization,” says Shomik Jain, a graduate student in the Institute for Data, Systems, and Society (IDSS) and lead author of the paper.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Jain is joined on the paper by Kathleen Creel, assistant professor of philosophy and computer science at Northeastern University; and senior author Ashia Wilson, the Lister Brothers Career Development Professor in the Department of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research will be presented at the International Conference on Machine Learning.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      Considering claims
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    This work builds off a 
    
  
  
                    &#xD;
    &lt;a href="https://dl.acm.org/doi/pdf/10.1145/3630106.3658899" target="_blank"&gt;&#xD;
      
                      
    
    
      previous paper
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
     in which the researchers explored harms that can occur when one uses deterministic systems at scale. They found that using a machine-learning model to deterministically allocate resources can amplify inequalities that exist in training data, which can reinforce bias and systemic inequality. 
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “Randomization is a very useful concept in statistics, and to our delight, satisfies the fairness demands coming from both a systemic and individual point of view,” Wilson says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    In 
    
  
  
                    &#xD;
    &lt;a href="https://openreview.net/pdf?id=44qxX6Ty6F" target="_blank"&gt;&#xD;
      
                      
    
    
      this paper
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    , they explored the question of when randomization can improve fairness. They framed their analysis around the ideas of philosopher John Broome, who wrote about the value of using lotteries to award scarce resources in a way that honors all claims of individuals.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    A person’s claim to a scarce resource, like a kidney transplant, can stem from merit, deservingness, or need. For instance, everyone has a right to life, and their claims on a kidney transplant may stem from that right, Wilson explains.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “When you acknowledge that people have different claims to these scarce resources, fairness is going to require that we respect all claims of individuals. If we always give someone with a stronger claim the resource, is that fair?” Jain says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    That sort of deterministic allocation could cause systemic exclusion or exacerbate patterned inequality, which occurs when receiving one allocation increases an individual’s likelihood of receiving future allocations. In addition, machine-learning models can make mistakes, and a deterministic approach could cause the same mistake to be repeated.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Randomization can overcome these problems, but that doesn’t mean all decisions a model makes should be randomized equally.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      Structured randomization
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The researchers use a weighted lottery to adjust the level of randomization based on the amount of uncertainty involved in the model’s decision-making. A decision that is less certain should incorporate more randomization.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
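One way to picture the weighted lottery is as a softmax over candidate scores whose temperature tracks the uncertainty of those scores. The sketch below is illustrative only; the weighting scheme and the function name are assumptions, not the paper's actual formulation.

```python
import math
import random

def weighted_lottery(scores, uncertainty, k=1, seed=None):
    # Convert scores to lottery weights via a softmax whose temperature
    # grows with uncertainty: near-zero uncertainty approaches a
    # deterministic top choice, high uncertainty approaches a uniform
    # draw. (Assumed weighting, not the paper's exact scheme.)
    rng = random.Random(seed)
    temperature = max(uncertainty, 1e-6)
    m = max(scores)
    weights = [math.exp((s - m) / temperature) for s in scores]
    chosen, pool = [], list(range(len(scores)))
    for _ in range(k):  # draw k distinct winners
        total = sum(weights[i] for i in pool)
        r = rng.uniform(0, total)
        acc = 0.0
        for i in pool:
            acc += weights[i]
            if acc >= r:
                chosen.append(i)
                pool.remove(i)
                break
    return chosen
```

With near-zero uncertainty the draw collapses to the deterministic top-ranked candidate, while high uncertainty pushes it toward a uniform lottery, matching the principle that less certain decisions should incorporate more randomization.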
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “In kidney allocation, usually the planning is around projected lifespan, and that is deeply uncertain. If two patients are only five years apart, it becomes a lot harder to measure. We want to leverage that level of uncertainty to tailor the randomization,” Wilson says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The researchers used statistical uncertainty quantification methods to determine how much randomization is needed in different situations. They show that calibrated randomization can lead to fairer outcomes for individuals without significantly affecting the utility, or effectiveness, of the model.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “There is a balance to be had between overall utility and respecting the rights of the individuals who are receiving a scarce resource, but oftentimes the tradeoff is relatively small,” says Wilson.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    However, the researchers emphasize there are situations where randomizing decisions would not improve fairness and could harm individuals, such as in criminal justice contexts.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    But there could be other areas where randomization can improve fairness, such as college admissions, and the researchers plan to study other use cases in future work. They also want to explore how randomization can affect other factors, such as competition or prices, and how it could be used to improve the robustness of machine-learning models.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “We are hoping our paper is a first move toward illustrating that there might be a benefit to randomization. We are offering randomization as a tool. How much you are going to want to do it is going to be up to all the stakeholders in the allocation to decide. And, of course, how they decide is another research question altogether,” says Wilson.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-Random-Resources-01-press.jpg" length="280181" type="image/jpeg" />
      <pubDate>Wed, 24 Jul 2024 04:00:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/study-structured-randomization-ai-can-improve-fairness-0724</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-Random-Resources-01-press.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>MIT researchers advance automated interpretability in AI models</title>
      <link>http://www.titanseo.ai/2024/mit-researchers-advance-automated-interpretability-ai-models-maia-0723</link>
      <description>MAIA is a multimodal agent that can iteratively design experiments to better understand various components of AI systems.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    As artificial intelligence models become increasingly prevalent and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Interpreting the mechanisms underlying AI models enables us to audit them for safety and biases, with the potential to deepen our understanding of the science behind intelligence itself.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Imagine if we could directly investigate the human brain by manipulating each of its individual neurons to examine their roles in perceiving a particular object. While such an experiment would be prohibitively invasive in the human brain, it is more feasible in another type of neural network: one that is artificial. However, somewhat similar to the human brain, artificial models containing millions of neurons are too large and complex to study by hand, making interpretability at scale a very challenging task. 
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    To address this, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers decided to take an automated approach to interpreting artificial vision models that evaluate different properties of images. They developed “MAIA” (Multimodal Automated Interpretability Agent), a system that automates a variety of neural network interpretability tasks using a vision-language model backbone equipped with tools for experimenting on other AI systems.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “Our goal is to create an AI researcher that can conduct interpretability experiments autonomously. Existing automated interpretability methods merely label or visualize data in a one-shot process. On the other hand, MAIA can generate hypotheses, design experiments to test them, and refine its understanding through iterative analysis,” says Tamar Rott Shaham, an MIT electrical engineering and computer science (EECS) postdoc at CSAIL and co-author on a new 
    
  
  
                    &#xD;
    &lt;a href="https://arxiv.org/pdf/2404.14394.pdf" target="_blank"&gt;&#xD;
      
                      
    
    
      paper about the research
    
  
  
                    &#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    . “By combining a pre-trained vision-language model with a library of interpretability tools, our multimodal method can respond to user queries by composing and running targeted experiments on specific models, continuously refining its approach until it can provide a comprehensive answer.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The automated agent is demonstrated to tackle three key tasks: It labels individual components inside vision models and describes the visual concepts that activate them, it cleans up image classifiers by removing irrelevant features to make them more robust to new situations, and it hunts for hidden biases in AI systems to help uncover potential fairness issues in their outputs. “But a key advantage of a system like MAIA is its flexibility,” says Sarah Schwettmann PhD ’21, a research scientist at CSAIL and co-lead of the research. “We demonstrated MAIA’s usefulness on a few specific tasks, but given that the system is built from a foundation model with broad reasoning capabilities, it can answer many different types of interpretability queries from users, and design experiments on the fly to investigate them.” 
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      Neuron by neuron
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    In one example task, a human user asks MAIA to describe the concepts that a particular neuron inside a vision model is responsible for detecting. To investigate this question, MAIA first uses a tool that retrieves “dataset exemplars” from the ImageNet dataset, which maximally activate the neuron. For this example neuron, those images show people in formal attire and close-ups of their chins and necks. MAIA makes various hypotheses for what drives the neuron’s activity: facial expressions, chins, or neckties. MAIA then uses its tools to design experiments to test each hypothesis individually by generating and editing synthetic images — in one experiment, adding a bow tie to an image of a human face increases the neuron’s response. “This approach allows us to determine the specific cause of the neuron’s activity, much like a real scientific experiment,” says Rott Shaham.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    MAIA’s explanations of neuron behaviors are evaluated in two key ways. First, synthetic systems with known ground-truth behaviors are used to assess the accuracy of MAIA’s interpretations. Second, for “real” neurons inside trained AI systems with no ground-truth descriptions, the authors design a new automated evaluation protocol that measures how well MAIA’s descriptions predict neuron behavior on unseen data.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The CSAIL-led method outperformed baseline methods at describing individual neurons in a variety of vision models, such as ResNet, CLIP, and the vision transformer DINO. MAIA also performed well on the new dataset of synthetic neurons with known ground-truth descriptions. For both the real and synthetic systems, the descriptions were often on par with those written by human experts.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    How are descriptions of AI system components, like individual neurons, useful? “Understanding and localizing behaviors inside large AI systems is a key part of auditing these systems for safety before they’re deployed — in some of our experiments, we show how MAIA can be used to find neurons with unwanted behaviors and remove these behaviors from a model,” says Schwettmann. “We’re building toward a more resilient AI ecosystem where tools for understanding and monitoring AI systems keep pace with system scaling, enabling us to investigate and hopefully understand unforeseen challenges introduced by new models.”
    
  
  
                    &#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    &lt;b&gt;&#xD;
      
                      
    
    
      Peeking inside neural networks
    
  
  
                    &#xD;
    &lt;/b&gt;&#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The nascent field of interpretability is maturing into a distinct research area alongside the rise of “black box” machine learning models. How can researchers crack open these models and understand how they work?
    
  
  
                    &#xD;
    &lt;br/&gt;&#xD;
    &lt;br/&gt;&#xD;
    
                    
  
  
    Current methods for peeking inside tend to be limited either in scale or in the precision of the explanations they can produce. Moreover, existing methods tend to fit a particular model and a specific task. This prompted the researchers to ask: How can we build a generic system that helps users answer interpretability questions about AI models while combining the flexibility of human experimentation with the scalability of automated techniques?
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    One critical area they wanted this system to address was bias. To determine whether image classifiers displayed bias against particular subcategories of images, the team looked at the final layer of the classification stream (in a system designed to sort or label items, much like a machine that identifies whether a photo is of a dog, cat, or bird) and the probability scores of input images (confidence levels that the machine assigns to its guesses). To understand potential biases in image classification, MAIA was asked to find a subset of images in specific classes (for example “labrador retriever”) that were likely to be incorrectly labeled by the system. In this example, MAIA found that images of black labradors were likely to be misclassified, suggesting a bias in the model toward yellow-furred retrievers.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Since MAIA relies on external tools to design experiments, its performance is limited by the quality of those tools; as tools such as image synthesis models improve, so will MAIA. MAIA also shows confirmation bias, sometimes incorrectly confirming its initial hypothesis. To mitigate this, the researchers built an image-to-text tool, which uses a different instance of the language model to summarize experimental results. Another failure mode is overfitting to a particular experiment, where the model sometimes draws premature conclusions from minimal evidence.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “I think a natural next step for our lab is to move beyond artificial systems and apply similar experiments to human perception,” says Rott Shaham. “Testing this has traditionally required manually designing and testing stimuli, which is labor-intensive. With our agent, we can scale up this process, designing and testing numerous stimuli simultaneously. This might also allow us to compare human visual perception with artificial systems.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “Understanding neural networks is difficult for humans because they have hundreds of thousands of neurons, each with complex behavior patterns. MAIA helps to bridge this by developing AI agents that can automatically analyze these neurons and report distilled findings back to humans in a digestible way,” says Jacob Steinhardt, assistant professor at the University of California at Berkeley, who wasn’t involved in the research. “Scaling these methods up could be one of the most important routes to understanding and safely overseeing AI systems.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Rott Shaham and Schwettmann are joined by five fellow CSAIL affiliates on the paper: undergraduate student Franklin Wang; incoming MIT student Achyuta Rajaram; EECS PhD student Evan Hernandez SM ’22; and EECS professors Jacob Andreas and Antonio Torralba. Their work was supported, in part, by the MIT-IBM Watson AI Lab, Open Philanthropy, Hyundai Motor Co., the Army Research Laboratory, Intel, the National Science Foundation, the Zuckerman STEM Leadership Program, and the Viterbi Fellowship. The researchers’ findings will be presented at the International Conference on Machine Learning this week.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-Multimodal-Automated-Interpretability-Agent-00.jpg" length="260833" type="image/jpeg" />
      <pubDate>Tue, 23 Jul 2024 20:00:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/mit-researchers-advance-automated-interpretability-ai-models-maia-0723</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-Multimodal-Automated-Interpretability-Agent-00.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
    <item>
      <title>Proton-conducting materials could enable new green energy technologies</title>
      <link>http://www.titanseo.ai/2024/proton-conducting-materials-could-enable-new-green-energy-technologies-0723</link>
      <description>Analysis and materials identified by MIT engineers could lead to more energy-efficient fuel cells, electrolyzers, batteries, or computing devices.</description>
      <content:encoded>&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    As the name suggests, most electronic devices today work through the movement of electrons. But materials that can efficiently conduct protons — the nuclei of hydrogen atoms — could be key to a number of important technologies for combating global climate change.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Most proton-conducting inorganic materials available now require undesirably high temperatures to achieve sufficiently high conductivity. However, lower-temperature alternatives could enable a variety of technologies, such as more efficient and durable fuel cells to produce clean electricity from hydrogen, electrolyzers to make clean fuels such as hydrogen for transportation, solid-state proton batteries, and even new kinds of computing devices based on iono-electronic effects.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    In order to advance the development of proton conductors, MIT engineers have identified certain traits of materials that give rise to fast proton conduction. Using those traits quantitatively, the team identified a half-dozen new candidates that show promise as fast proton conductors. Simulations suggest these candidates will perform far better than existing materials, although they still need to be confirmed experimentally. In addition to uncovering potential new materials, the research also provides a deeper understanding at the atomic level of how such materials work.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The new findings are 
    
  
  
                    &#xD;
    &lt;a href="https://pubs.rsc.org/en/content/articlelanding/2024/ee/d4ee01219d" target="_blank"&gt;&#xD;
      
                      
    
    
      described in the journal 
      
    
    
                      &#xD;
      &lt;em&gt;&#xD;
        
                        
      
      
        Energy &amp; Environmental Science
      
    
    
                      &#xD;
      &lt;/em&gt;&#xD;
    &lt;/a&gt;&#xD;
    
                    
  
  
    , in a paper by MIT professors Bilge Yildiz and Ju Li, postdocs Pjotrs Zguns and Konstantin Klyukin, and their collaborator Sossina Haile and her students from Northwestern University. Yildiz is the Breene M. Kerr Professor in the departments of Nuclear Science and Engineering, and Materials Science and Engineering.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “Proton conductors are needed in clean energy conversion applications such as fuel cells, where we use hydrogen to produce carbon dioxide-free electricity,” Yildiz explains. “We want to do this process efficiently, and therefore we need materials that can transport protons very fast through such devices.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Present methods of producing hydrogen, for example, steam methane reforming, emit a great deal of carbon dioxide. “One way to eliminate that is to electrochemically produce hydrogen from water vapor, and that needs very good proton conductors,” Yildiz says. Production of other important industrial chemicals and potential fuels, such as ammonia, can also be carried out through efficient electrochemical systems that require good proton conductors.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    But most inorganic materials that conduct protons can only operate at temperatures of 200 to 600 degrees Celsius (roughly 390 to 1,100 degrees Fahrenheit), or even higher. Such temperatures require energy to maintain and can cause degradation of materials. “Going to higher temperatures is not desirable because that makes the whole system more challenging, and the material durability becomes an issue,” Yildiz says. “There is no good inorganic proton conductor at room temperature.” Today, the only known room-temperature proton conductor is a polymeric material that is not practical for applications in computing devices because it can’t easily be scaled down to the nanometer regime, she says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    To tackle the problem, the team first needed to develop a basic and quantitative understanding of exactly how proton conduction works, focusing on a class of inorganic proton conductors called solid acids. “One has to first understand what governs proton conduction in these inorganic compounds,” she says. While looking at the materials’ atomic configurations, the researchers identified a pair of characteristics that directly relate to the materials’ proton-carrying potential.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    As Yildiz explains, proton conduction first involves a proton “hopping from a donor oxygen atom to an acceptor oxygen. And then the environment has to reorganize and take the accepted proton away, so that it can hop to another neighboring acceptor, enabling long-range proton diffusion.” This process happens in many inorganic solids, she says. Figuring out how that last part works — how the atomic lattice gets reorganized to take the accepted proton away from the original donor atom — was a key part of this research, she says.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The researchers used computer simulations to study a class of materials called solid acids that become good proton conductors above 200 degrees Celsius. This class of materials has a substructure called the polyanion group sublattice, and these groups have to rotate and take the proton away from its original site so it can then transfer to other sites. The researchers were able to identify the phonons that contribute to the flexibility of this sublattice, which is essential for proton conduction. Then they used this information to comb through vast databases of theoretically and experimentally possible compounds, in search of better proton-conducting materials.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    As a result, they found solid acid compounds that are promising proton conductors and that have been developed and produced for a variety of different applications but never before studied as proton conductors; these compounds turned out to have just the right characteristics of lattice flexibility. The team then carried out computer simulations of how the specific materials they identified in their initial screening would perform under relevant temperatures, to confirm their suitability as proton conductors for fuel cells or other uses. Sure enough, they found six promising materials, with predicted proton conduction speeds faster than the best existing solid acid proton conductors.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    “There are uncertainties in these simulations,” Yildiz cautions. “I don’t want to say exactly how much higher the conductivity will be, but these look very promising. Hopefully this motivates the experimental field to try to synthesize them in different forms and make use of these compounds as proton conductors.”
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    Translating these theoretical findings into practical devices could take some years, she says. The likely first applications would be for electrochemical cells to produce fuels and chemical feedstocks such as hydrogen and ammonia.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;&#xD;
&lt;div data-rss-type="text"&gt;&#xD;
  &lt;p&gt;&#xD;
    
                    The work was supported by the U.S. Department of Energy, the Wallenberg Foundation, and the U.S. National Science Foundation.
                  &#xD;
  &lt;/p&gt;&#xD;
&lt;/div&gt;</content:encoded>
      <enclosure url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-proton-conduct-01a-PRESS.jpg" length="432615" type="image/jpeg" />
      <pubDate>Tue, 23 Jul 2024 14:30:00 GMT</pubDate>
      <guid>http://www.titanseo.ai/2024/proton-conducting-materials-could-enable-new-green-energy-technologies-0723</guid>
      <g-custom:tags type="string" />
      <media:content medium="image" url="https://irp.cdn-website.com/021d5bda/dms3rep/multi/MIT-proton-conduct-01a-PRESS.jpg">
        <media:description>thumbnail</media:description>
      </media:content>
    </item>
  </channel>
</rss>
