Explainable artificial intelligence for mechanics: physics-informing neural networks for constitutive models. (arXiv:2104.10683v1 [cs.LG])

(Artificial) neural networks have become increasingly popular in mechanics as a
means to accelerate computations with model order reduction techniques and as
universal models for a wide variety of materials. However, the major
disadvantage of neural networks remains: their numerous parameters are
challenging to interpret and explain. Thus, neural networks are often labeled
as black boxes, and their results often elude human interpretation. In
mechanics, the new and active field of physics-informed neural networks
attempts to mitigate this disadvantage by designing deep neural networks on the
basis of mechanical knowledge. By exploiting this a priori knowledge, deeper and
more complex neural networks become feasible, since the underlying mechanical
assumptions can be explained. However, the internal reasoning of such networks
and the meaning of their parameters remain opaque.

Complementary to the physics-informed approach, we propose a first step
towards a physics-informing approach, which explains neural networks trained on
mechanical data a posteriori. This explainable artificial intelligence approach
aims to elucidate the black box of neural networks and their high-dimensional
representations. To this end, principal component analysis decorrelates the
distributed representations in the cell states of recurrent neural networks
(RNNs) and enables a comparison with known, fundamental functions. The approach
is supported by a systematic hyperparameter search strategy that identifies the
best-performing network architectures and training parameters. The findings of
three case studies on fundamental constitutive models (hyperelasticity,
elastoplasticity, and viscoelasticity) imply that the proposed strategy can
help identify numerical and analytical closed-form solutions to characterize
new materials.

Source: https://arxiv.org/abs/2104.10683
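
The abstract does not spell out implementation details, so the following is only a minimal sketch of the central idea, assuming a PyTorch LSTM trained on a toy 1D elastoplasticity problem with linear isotropic hardening. The network size, the synthetic data generation, and the choice of accumulated plastic strain as the reference internal variable are illustrative assumptions, not the authors' setup.

```python
"""Sketch: PCA over LSTM cell states, compared against a known internal variable.

Hypothetical illustration only; sizes, data, and reference variable are assumptions.
"""
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

def elastoplastic_1d(strain, E=1.0, H=0.1, sigma_y=0.05):
    """1D return mapping with linear isotropic hardening.
    Returns stress and accumulated plastic strain (the 'known' internal variable)."""
    eps_p, alpha = 0.0, 0.0
    stress, acc = [], []
    for eps in strain:
        sig = E * (eps - eps_p)                    # elastic trial stress
        f = abs(sig) - (sigma_y + H * alpha)       # yield function
        if f > 0.0:                                # plastic correction
            dgamma = f / (E + H)
            eps_p += dgamma * np.sign(sig)
            alpha += dgamma
            sig = E * (eps - eps_p)
        stress.append(sig)
        acc.append(alpha)
    return np.array(stress), np.array(acc)

# Synthetic random strain paths and their elastoplastic responses
n_seq, n_steps = 64, 100
strain = np.cumsum(rng.normal(0.0, 0.01, size=(n_seq, n_steps)), axis=1)
stress = np.zeros_like(strain)
alpha = np.zeros_like(strain)
for i in range(n_seq):
    stress[i], alpha[i] = elastoplastic_1d(strain[i])

x = torch.tensor(strain, dtype=torch.float32).unsqueeze(-1)   # (N, T, 1)
y = torch.tensor(stress, dtype=torch.float32).unsqueeze(-1)

class StressRNN(nn.Module):
    """Small LSTM that maps strain sequences to stress and exposes its cell states."""
    def __init__(self, hidden=16):
        super().__init__()
        self.hidden = hidden
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        h = x.new_zeros(x.shape[0], self.hidden)
        c = x.new_zeros(x.shape[0], self.hidden)
        outs, cells = [], []
        for t in range(x.shape[1]):
            h, c = self.cell(x[:, t, :], (h, c))
            outs.append(self.out(h))
            cells.append(c)
        return torch.stack(outs, 1), torch.stack(cells, 1)

model = StressRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(200):                          # short training run, illustration only
    pred, _ = model(x)
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Decorrelate the distributed cell-state representation with PCA
with torch.no_grad():
    _, cells = model(x)                           # (N, T, hidden)
states = cells.reshape(-1, model.hidden).numpy()
pca = PCA(n_components=3).fit(states)
scores = pca.transform(states)

# Compare the leading principal component with the known internal variable
corr = np.corrcoef(scores[:, 0], alpha.reshape(-1))[0, 1]
print("explained variance ratios:", pca.explained_variance_ratio_)
print(f"|corr(PC1, accumulated plastic strain)| = {abs(corr):.2f}")
```

If the leading principal component of the cell states correlates strongly with the accumulated plastic strain, the otherwise distributed representation can be read as encoding a known internal variable, which is the kind of a posteriori explanation the abstract describes.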
