<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Preprint | Dimitris Tzionas</title><link>https://dtzionas.com/publication-type/preprint/</link><atom:link href="https://dtzionas.com/publication-type/preprint/index.xml" rel="self" type="application/rss+xml"/><description>Preprint</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 22 Apr 2026 00:00:00 +0000</lastBuildDate><image><url>https://dtzionas.com/media/icon_hu1a5c7bda872f6fdb7bdd2f14012a0630_17459_512x512_fill_lanczos_center_3.png</url><title>Preprint</title><link>https://dtzionas.com/publication-type/preprint/</link></image><item><title>LEXIS: LatEnt proXimal Interaction Signatures for 3D HOI from an Image</title><link>https://dtzionas.com/preprint/2026_arxiv_lexis/</link><pubDate>Wed, 22 Apr 2026 00:00:00 +0000</pubDate><guid>https://dtzionas.com/preprint/2026_arxiv_lexis/</guid><description>&lt;!--
&lt;div class="alert alert-note">
&lt;div>
Click the &lt;em>Cite&lt;/em> button above to demo the feature to enable visitors to import publication metadata into their reference management software.
&lt;/div>
&lt;/div>
-->
&lt;p style="text-align: justify;">Reconstructing 3D Human–Object Interaction from an RGB image is essential for perceptive systems. Yet, this remains challenging as it requires capturing the subtle physical coupling between the body and objects. While current methods rely on sparse, binary contact cues, these fail to model the continuous proximity and dense spatial relationships that characterize natural interactions. We address this limitation via InterFields, a representation that encodes dense, continuous proximity across the entire body and object surfaces. However, inferring these fields from single images is inherently ill-posed. To tackle this, our intuition is that interaction patterns are characteristically structured by the action and object geometry. We capture this structure in LEXIS, a novel discrete manifold of interaction signatures learned via a VQ-VAE. We then develop LEXIS-Flow, a diffusion framework that leverages LEXIS signatures to estimate human and object meshes alongside their InterFields. Notably, these InterFields help in a guided refinement that ensures physically-plausible, proximity-aware reconstructions without requiring post-hoc optimization. Evaluation on Open3DHOI and BEHAVE shows that LEXIS-Flow significantly outperforms existing SotA baselines in reconstruction, contact, and proximity quality. Our approach not only improves generalization but also yields reconstructions perceived as more realistic, moving us closer to holistic 3D scene understanding. Code &amp;amp; models will be public at &lt;a href="https://anticdimi.github.io/lexis" target="_blank" rel="noopener">https://anticdimi.github.io/lexis&lt;/a>.&lt;/p></description></item></channel></rss>