TablEye: Seeing small Tables through the Lens of Images. (arXiv:2307.02491v1 [cs.LG])

Exploring few-shot tabular learning is becoming imperative. Tabular data
is a versatile representation that captures diverse information, yet it is not
exempt from limitations, such as the properties of the data and model size.
Labeling extensive tabular data can be challenging, and it may not be feasible
to capture every important feature. Few-shot tabular learning, however, remains
relatively unexplored, primarily due to the scarcity of information shared
among independent datasets and the inherent ambiguity in defining boundaries
within tabular data.
To the best of our knowledge, no meaningful and unrestricted few-shot tabular
learning techniques have been developed without imposing constraints on the
dataset. In this paper, we propose an innovative framework called TablEye,
which aims to overcome the limitation of forming prior knowledge for tabular
data by adopting domain transformation. It facilitates domain transformation by
generating tabular images, which effectively preserve the intrinsic semantics
of the original tabular data. This approach harnesses rigorously tested
few-shot learning algorithms and embedding functions to acquire and apply prior
knowledge. Leveraging a shared data domain allows us to utilize this prior
knowledge, originally learned in the image domain. Specifically, TablEye
demonstrated superior performance, outperforming TabLLM by a maximum of 0.11
AUC in a 4-shot task and STUNT in a 1-shot setting, where it led by 3.17%
accuracy on average.
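To make the idea of a tabular-to-image domain transformation concrete, the following is a minimal sketch of one simple way such a transformation could look: normalizing a row of tabular features and embedding it into a fixed-size single-channel image by zero-padding and reshaping. This is an illustrative assumption, not the exact transformation used by TablEye; the function name `row_to_image` and the 8x8 image size are hypothetical choices for the example.

```python
import numpy as np

def row_to_image(row, size=8):
    """Embed a 1-D tabular feature vector into a size x size
    single-channel image by zero-padding and reshaping.
    Illustrative only; not the paper's actual method."""
    row = np.asarray(row, dtype=np.float32)
    if row.size > size * size:
        raise ValueError("feature vector too long for target image")
    # Min-max normalize features into [0, 1] so they behave like pixel values.
    lo, hi = row.min(), row.max()
    if hi > lo:
        row = (row - lo) / (hi - lo)
    else:
        row = np.zeros_like(row)
    # Place the features at the start of a flat canvas, zero-pad the rest.
    img = np.zeros(size * size, dtype=np.float32)
    img[: row.size] = row
    return img.reshape(size, size)

features = [5.1, 3.5, 1.4, 0.2]  # e.g. one Iris-style row
img = row_to_image(features)
print(img.shape)  # (8, 8)
```

An image produced this way can then be fed to any standard image-domain few-shot learner, which is the kind of reuse of image-domain prior knowledge the abstract describes.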


