
How to store datasets of lexical connections?


I'm investigating how to store the semantic-lexical connections of a word (such as its relationships to other words and phrases, the strength of those relationships, its part of speech, its language, etc.) in order to analyze input text.

I assume this has already been done. If so, to avoid reinventing the wheel, is there an efficient way to store and manage such data in a common format that has already been researched and tested?
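To make the idea concrete, here is roughly the kind of record I have in mind. This is only an illustrative sketch; the field names and values are placeholders, not a format I'm committed to:

```python
# Rough sketch of one stored lexical connection; all names are placeholders.
from dataclasses import dataclass

@dataclass
class LexicalRelation:
    source: str       # the word or phrase this entry describes
    target: str       # a related word or phrase
    relation: str     # e.g. "synonym", "collocation", "dependency"
    strength: float   # how strongly the two items are associated
    pos: str          # part of speech of the source, e.g. "ADJ"
    language: str     # ISO language code, e.g. "en"

# Example: "strong" frequently collocates with "tea" in English.
example = LexicalRelation("strong", "tea", "collocation", 0.72, "ADJ", "en")
```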

2 Answers

If I understand you correctly, you should check out Word2Vec. From Wikipedia:

Word2vec is a group of related models that are used to produce word embeddings. These models are shallow, two-layer neural networks that are trained to reconstruct linguistic contexts of words. Word2vec takes as its input a large corpus of text and produces a high-dimensional space (typically of several hundred dimensions), with each unique word in the corpus being assigned a corresponding vector in the space. Word vectors are positioned in the vector space such that words that share common contexts in the corpus are located in close proximity to one another in the space.
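If you want to experiment with this, here is a minimal sketch using the gensim library (assuming the gensim 4.x API); the toy corpus and parameter values are placeholders, and in practice you would train on a large tokenized corpus:

```python
# Train and query word embeddings with gensim's Word2Vec (gensim 4.x API).
from gensim.models import Word2Vec

# Each "sentence" is a list of tokens; this tiny corpus is only for illustration.
corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "lay", "on", "the", "rug"],
    ["a", "cat", "and", "a", "dog", "played"],
]

# vector_size is the embedding dimension, window the context size,
# min_count the minimum word frequency to keep.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=50)

# The trained vectors live in model.wv; words sharing contexts end up
# close together, so similarity queries return related words.
print(model.wv.most_similar("cat", topn=3))

# Persist the model and reload it later.
model.save("word2vec.model")
reloaded = Word2Vec.load("word2vec.model")
vector = reloaded.wv["cat"]  # the learned embedding for "cat"
```

The embeddings themselves can then be stored and shared in the common word-vector formats that gensim reads and writes, rather than a custom database of hand-built relations.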

