Neural Mesh Refinement

College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou 310027, China
Zhejiang Provincial Key Laboratory of Information Processing, Communication and Networking (IPCAN), Hangzhou 310027, China
Frontiers of Information Technology & Electronic Engineering (FITEE), 2025, 26(5) (Cover Article)
Teaser image for Neural Mesh Refinement (NMR)

Neural Mesh Refinement (NMR) performs data-driven nonlinear refinement and demonstrates robust generalization to unseen shapes, unseen poses, and non-isometric deformations. It can also refine coarse non-organic shapes into finer ones with appropriate geometric details, even when trained on organic shapes.

Abstract

Subdivision is a widely used technique for mesh refinement. Classic methods rely on fixed manually defined weighting rules and struggle to generate a finer mesh with appropriate details, while advanced neural subdivision methods achieve data-driven nonlinear subdivision but lack robustness, suffering from limited subdivision levels and artifacts on novel shapes. To address these issues, this paper introduces a neural mesh refinement (NMR) method that uses the geometric structural priors learned from fine meshes to adaptively refine coarse meshes through subdivision, demonstrating robust generalization. Our key insight is that it is necessary to disentangle the network from non-structural information such as scale, rotation, and translation, enabling the network to focus on learning and applying the structural priors of local patches for adaptive refinement. For this purpose, we introduce an intrinsic structure descriptor and a locally adaptive neural filter. The intrinsic structure descriptor excludes the non-structural information to align local patches, thereby stabilizing the input feature space and enabling the network to robustly extract structural priors. The proposed neural filter, using a graph attention mechanism, extracts local structural features and adapts learned priors to local patches. Additionally, we observe that Charbonnier loss can alleviate over-smoothing compared to L2 loss. By combining these design choices, our method gains robust geometric learning and locally adaptive capabilities, enhancing generalization to various situations such as unseen shapes and arbitrary refinement levels. We evaluate our method on a diverse set of complex three-dimensional (3D) shapes, and experimental results show that it outperforms existing subdivision methods in terms of geometry quality.

Method

NMR takes a coarse triangle mesh and produces a sequence of subdivided meshes at increasing levels of detail. Each refinement step runs three modules in sequence (a code sketch of the first two modules follows the list):

  1. Midpoint Subdivision ($\mathcal{M}$): Each face is split into four by inserting a new vertex at the midpoint of every edge and connecting the midpoints with new edges.
  2. Intrinsic Structure Descriptor Construction ($\mathcal{I}$): For each newly inserted vertex, an intrinsic structure descriptor is built by normalizing its one-ring neighborhood. This normalization makes the features invariant to scale, rotation, and translation, allowing the network to focus on structural priors.
  3. Neural Filter ($\mathcal{N}_{\Theta}$): This filter predicts offsets for the new vertices to update their positions. It consists of:
    • Edge Feature Embedding ($\mathcal{E}_{\theta}$): Captures features from the outgoing edges of the central vertex.
    • Graph Attention Aggregation ($\mathcal{G}_{\theta}$): Aggregates edge features at the central vertex using a graph attention mechanism, adaptively emphasizing relevant local structures.
    • Vertex Repositioning ($\mathcal{V}_{\theta}$): Predicts a residual offset from the central vertex feature to refine its position.
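The sketch below illustrates the first two modules under simplifying assumptions: vertices are an (n, 3) numpy array, faces an (m, 3) integer array, and the canonical orientation comes from a PCA frame. The helper names and the exact normalization are illustrative, not the paper's verbatim formulation.

```python
import numpy as np

def midpoint_subdivide(vertices, faces):
    """Midpoint subdivision (M): insert a vertex at each edge midpoint
    and split every triangle into four."""
    edge_mid = {}
    verts = [v for v in vertices]

    def mid(a, b):
        key = (min(a, b), max(a, b))
        if key not in edge_mid:
            edge_mid[key] = len(verts)
            verts.append((vertices[a] + vertices[b]) / 2.0)
        return edge_mid[key]

    new_faces = []
    for i, j, k in faces:
        a, b, c = mid(i, j), mid(j, k), mid(k, i)
        new_faces += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return np.asarray(verts), np.asarray(new_faces)

def intrinsic_descriptor(ring_pts, center):
    """Intrinsic structure descriptor (I): normalize a one-ring patch so
    the network sees features invariant to translation, scale, and
    rotation (rotation removed here via a PCA frame; the sign ambiguity
    of the principal axes is ignored for brevity)."""
    local = ring_pts - center                     # remove translation
    scale = np.linalg.norm(local, axis=1).mean()  # remove scale
    local = local / (scale + 1e-12)
    _, _, vt = np.linalg.svd(local, full_matrices=False)
    return local @ vt.T                           # remove rotation
```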

Pipeline of Neural Mesh Refinement (NMR)

Our training data comprises pairs of coarse and fine meshes (left) with a bijective map $f$ between each pair. During training, we minimize the Charbonnier loss between the ground truth (green) and the output meshes (blue) across levels.
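As a concrete illustration of this objective, here is a minimal PyTorch sketch of the Charbonnier loss and the multi-level (deeply supervised) training loss; `model.refine` and the mesh containers are hypothetical placeholders, not the paper's API.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth approximation of L1. Unlike L2, large
    residuals are penalized roughly linearly, which better preserves
    fine detail and alleviates over-smoothing."""
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def training_loss(coarse_mesh, gt_meshes, model):
    """Deep supervision: refine recursively and accumulate the loss
    between every output level and its ground-truth counterpart."""
    mesh, loss = coarse_mesh, 0.0
    for gt in gt_meshes:           # one ground-truth mesh per level
        mesh = model.refine(mesh)  # one NMR step (hypothetical API)
        loss = loss + charbonnier_loss(mesh.vertices, gt.vertices)
    return loss
```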

Results

The Neural Mesh Refinement (NMR) method shows significant advantages and robust generalization compared to existing methods: it generalizes effectively to unseen shapes, unseen poses, arbitrary refinement levels, and non-isometric deformations, a core strength that follows directly from its design.

Qualitative comparison with existing subdivision methods

NMR does not suffer from the inherent limitations of existing methods, such as volume shrinkage and over-smoothing (Loop), amplification of tessellation artifacts (modified butterfly), or shape damage (neural subdivision). Moreover, it outperforms neural subdivision in generalization across non-isometric deformations, unseen shapes, and unseen refinement levels.

Interactive Demo

We provide an interactive demo to visualize the results of our method and compare them with baseline methods, including Loop, Modified Butterfly, and Neural Subdivision.
Use the mouse to rotate (left-click & drag), pan (right-click & drag), and zoom (scroll wheel).

Gear: Input (SubD 0), Reference, Loop (SubD 3), Modified Butterfly (SubD 3), Neural Subdivision (SubD 3), Ours (SubD 3)

Bunny: Input (SubD 0), Reference, Loop (SubD 3), Modified Butterfly (SubD 3), Neural Subdivision (SubD 3), Ours (SubD 3)

Applications

The strong geometric structure learning capability of NMR allows it to learn rich structural priors from the dataset during training and to adaptively apply those priors according to the geometric structure of the input shape during inference. As a data-driven approach, the refinement characteristics of NMR are inherently linked to the composition of the training dataset; different training datasets give rise to different applications:

  • Adaptive Refinement: When trained on diverse datasets, NMR learns to refine meshes adaptively, adding detail where needed while preserving smoothness elsewhere. This makes NMR highly generalizable to unseen shapes, unseen poses, arbitrary refinement levels, and non-isometric deformations.
  • Style Transfer: When trained on a constrained dataset with limited diversity, NMR tends to overfit and exhibits style-transfer characteristics: it recognizes and replicates the "style" of the training shapes, enabling it to refine an initial coarse mesh into various distinct styles.
Qualitative results on the TOSCA dataset

The refinement characteristics of NMR depend on the composition of the training dataset. When trained on a constrained dataset with limited diversity, NMR exhibits stylized refinement ability (top two rows). When trained on extensive datasets with diverse geometries, NMR exhibits adaptive refinement ability (bottom).

Stylized refinement results when training on single shapes

When trained on a single shape, NMR exhibits style transfer characteristics. Using different shapes in training leads to stylized refinement results (blue) biased towards the training shapes (green).

Conclusion

We propose a data-driven neural mesh refinement (NMR) method that performs adaptive refinement and exhibits robust generalization. Our main contribution is to propose and demonstrate that disentangling the network from non-structural information (e.g., scale, rotation, and translation) allows it to focus on learning and applying structural priors for adaptive refinement. Extensive experiments demonstrate that the proposed method outperforms state-of-the-art (SOTA) mesh subdivision methods. Specifically, NMR is characterized by several innovative contributions:

  • Intrinsic Structure Descriptor: A key component that excludes the non-structural information to align local patches, thereby stabilizing the input feature space and enabling the network to robustly extract structural priors.
  • Locally Adaptive Neural Filter: This filter, using a graph attention mechanism, extracts local structural features and dynamically adapts learned priors to local patches of different objects, emphasizing relevant neighbors (a minimal sketch of the attention aggregation follows this list).
  • Improved Loss Function (Charbonnier Loss): The use of Charbonnier loss, a smooth approximation of L1 loss, helps alleviate over-smoothing often seen with L2 loss, better preserving fine details and aiding convergence.
  • Progressive Upsampling with Deep Supervision: The method recursively refines meshes, and multi-level loss supervision aids in network convergence and allows for outputs at various levels of detail.
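To make the attention aggregation concrete, the sketch below shows one way a graph-attention layer can weight the outgoing-edge features of a newly inserted vertex; the layer sizes and exact structure are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class RingAttentionAggregation(nn.Module):
    """Aggregate one vertex ring's edge features into a single vertex
    feature using learned attention weights (illustrative sizes)."""
    def __init__(self, dim=64):
        super().__init__()
        self.score = nn.Linear(dim, 1)     # attention logit per edge
        self.update = nn.Linear(dim, dim)  # vertex feature update

    def forward(self, edge_feats):         # edge_feats: (ring_size, dim)
        alpha = torch.softmax(self.score(edge_feats), dim=0)
        vertex_feat = (alpha * edge_feats).sum(dim=0)
        return self.update(vertex_feat)
```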

Citation

If you find our work useful in your research, please consider citing:

BibTeX

@article{zhu2025neural,
  title   = {Neural mesh refinement},
  author  = {ZHU, Zhiwei and GAO, Xiang and YU, Lu and LIAO, Yiyi},
  journal = {Frontiers of Information Technology \& Electronic Engineering},
  year    = {2025},
  volume  = {26},
  number  = {5},
  pages   = {695--712},
  doi     = {10.1631/FITEE.2400344}
}

Plaintext

Zhiwei ZHU, Xiang GAO, Lu YU, Yiyi LIAO, 2025. Neural mesh refinement. Frontiers of Information Technology & Electronic Engineering, 26(5):695-712. https://doi.org/10.1631/FITEE.2400344

Acknowledgements

Project supported by the National Natural Science Foundation of China (Nos. 62071427 and U21B2004).