Do we really need 300 floats to represent the meaning of a word? Representing words with words: a logical approach to word embedding using a self-supervised Tsetlin Machine Autoencoder.
A new self-supervised machine learning approach captures word meaning with concise logical expressions. These expressions are built from contextual words like "black," "cup," and "hot" to define target words like "coffee," making the embedding human-interpretable. The logical embedding performs competitively on several intrinsic and extrinsic benchmarks, matching pre-trained GloVe embeddings on six downstream classification tasks.
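To make the idea concrete, here is a minimal sketch of what such a logical embedding looks like in spirit. This is not the Tsetlin Machine Autoencoder itself; the clause sets, words, and functions below are hypothetical illustrations of representing a word as conjunctions of context words and comparing words by clause overlap.

```python
# Illustrative only: hand-picked clauses, not learned by a Tsetlin Machine.
# Each word is a list of conjunctive clauses over context words.
clauses = {
    "coffee": [{"black", "hot"}, {"cup", "hot"}],
    "tea":    [{"cup", "hot"}, {"green", "leaf"}],
}

def matching_clauses(word, context):
    """Count clauses that fire: a clause fires when all its
    context words appear in the given context set."""
    return sum(1 for clause in clauses[word] if clause <= context)

def similarity(a, b):
    """Jaccard overlap between two words' clause sets (illustrative)."""
    ca = {frozenset(c) for c in clauses[a]}
    cb = {frozenset(c) for c in clauses[b]}
    return len(ca & cb) / len(ca | cb)

print(matching_clauses("coffee", {"a", "hot", "black", "drink"}))  # → 1
print(similarity("coffee", "tea"))  # shared clause {"cup", "hot"} → 1/3
```

Unlike a 300-dimensional float vector, every part of this representation can be read directly: "coffee" is something that is black and hot, or something hot in a cup.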