<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Home | Eloi Du Bois - Résumé</title>
    <link>http://localhost:1313/edubois/</link>
      <atom:link href="http://localhost:1313/edubois/index.xml" rel="self" type="application/rss+xml" />
    <description>Home</description>
    <generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Jan 2024 00:00:00 +0000</lastBuildDate>
    <image>
      <url>http://localhost:1313/edubois/media/icon_hu68170e94a17a2a43d6dcb45cf0e8e589_3079_512x512_fill_lanczos_center_3.png</url>
      <title>Home</title>
      <link>http://localhost:1313/edubois/</link>
    </image>
    
    <item>
      <title>Automatic personalized avatar generation from 2d images</title>
      <link>http://localhost:1313/edubois/patents/p2a/</link>
      <pubDate>Fri, 16 Aug 2024 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/patents/p2a/</guid>
      <description>&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;
&lt;p&gt;URL: &lt;a href=&#34;https://patents.google.com/patent/WO2025038916A1/en?inventor=Eloi&amp;#43;du&amp;#43;bois&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://patents.google.com/patent/WO2025038916A1/en?inventor=Eloi+du+bois&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Implementations described herein relate to methods, systems, apparatuses, and computer-readable media to generate personalized avatars for a user based on one or more 2D images of a face. The automatic generation may be facilitated by a generative component deployed at a virtual experience server that is configured to generate a data structure representing a 3D mesh of the personalized avatar responsive to input of the one or more 2D images. The data structure may be converted to a polygonal mesh, and the polygonal mesh may be automatically fit and rigged onto a head portion of an avatar data model.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Diffusion Synthesizer for Efficient Multilingual Speech to Speech Translation</title>
      <link>http://localhost:1313/edubois/publication/hirschkind2024diffusionsynthesizerefficientmultilingual/</link>
      <pubDate>Fri, 14 Jun 2024 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/publication/hirschkind2024diffusionsynthesizerefficientmultilingual/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Robust facial animation from video using neural networks</title>
      <link>http://localhost:1313/edubois/patents/v2c/</link>
      <pubDate>Tue, 04 Jun 2024 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/patents/v2c/</guid>
      <description>&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;
&lt;p&gt;URL: &lt;a href=&#34;https://patents.google.com/patent/US12002139B2&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://patents.google.com/patent/US12002139B2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Implementations described herein relate to methods, systems, and computer-readable media to generate animations for a 3D avatar from input video captured at a client device. A camera may capture video of a face while a trained face detection model and a trained regression model output a set of FACS weights, head poses, and facial landmarks to be translated into the animations of the 3D avatar. Additionally, a higher level of detail may be intelligently selected based upon user preferences and/or computing conditions at the client device.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Voice Toxicity Detection Using Multi-Task Learning</title>
      <link>http://localhost:1313/edubois/publication/voice-safety-2024/</link>
      <pubDate>Sun, 14 Apr 2024 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/publication/voice-safety-2024/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Audiovisual Inputs for Learning Robust, Real-time Facial Animation with Lip Sync</title>
      <link>http://localhost:1313/edubois/publication/real-time-facial-animation-with-lip-sync-2024/</link>
      <pubDate>Wed, 15 Nov 2023 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/publication/real-time-facial-animation-with-lip-sync-2024/</guid>
      <description>&lt;video controls&gt;
  &lt;source src=&#34;http://localhost:1313/edubois/media/audiovisual_face.mp4&#34; type=&#34;video/mp4&#34;&gt;
&lt;/video&gt;
</description>
    </item>
    
    <item>
      <title>Experience</title>
      <link>http://localhost:1313/edubois/experience/</link>
      <pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/experience/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Projects</title>
      <link>http://localhost:1313/edubois/projects/</link>
      <pubDate>Tue, 24 Oct 2023 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/projects/</guid>
      <description></description>
    </item>
    
    <item>
      <title>Systems and methods for animation generation</title>
      <link>http://localhost:1313/edubois/patents/a2c/</link>
      <pubDate>Tue, 10 Jan 2023 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/patents/a2c/</guid>
      <description>&lt;h2 id=&#34;overview&#34;&gt;Overview&lt;/h2&gt;
&lt;p&gt;URL: &lt;a href=&#34;https://patents.google.com/patent/US11551393B2&#34; target=&#34;_blank&#34; rel=&#34;noopener&#34;&gt;https://patents.google.com/patent/US11551393B2&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Systems and methods for animating from audio in accordance with embodiments of the invention are illustrated. One embodiment includes a method for generating animation from audio. The method includes steps for receiving input audio data, generating an embedding for the input audio data, and generating several predictions for several tasks from the generated embedding. The several predictions include at least one of blendshape weights, event detection, and/or voice activity detection. The method includes steps for generating a final prediction from the several predictions, where the final prediction includes a set of blendshape weights, and generating an output based on the generated final prediction.&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Fast Facial Animation from Video</title>
      <link>http://localhost:1313/edubois/publication/fast-facial-animation-from-video-2021/</link>
      <pubDate>Fri, 06 Aug 2021 00:00:00 +0000</pubDate>
      <guid>http://localhost:1313/edubois/publication/fast-facial-animation-from-video-2021/</guid>
      <description></description>
    </item>
    
  </channel>
</rss>
