I embrace Sander Robijns' article “The Data Industry Evolution: Structured Data and the ‘JSON-ization’ of Everything”.
https://lnkd.in/dnVBxtzU
Marco Wobben left a great comment on the post above:
“As in the 90s, the trend was to XMLize everything. Next to JSONization, I also see Graphalizing and GPTizing.”
Don’t you think we are chasing our own tail :-) i.e. rediscovering hierarchical and graph databases, along with their corresponding data serialization formats?
Although the evolution of data modeling, data management, data semantics, and data search nowadays clearly points to the adoption of the graph data model, I would argue that we got the network structure right but got stuck in the property (EAV) and triple (SPO) graph data models.
In simpler terms, we’ve adopted a very specific representation of nodes and edges that, while easy for humans to program, isn’t optimal for computer (AI) processing. This is why there’s a trend toward converting human-readable semantic data into vectors, a more efficient and easily digestible representation for AI engines.
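To make this concrete, here is a minimal Python sketch of what such a conversion can look like. The vocabulary is a made-up toy example, and the TransE-style scoring is just one of several knowledge-graph embedding schemes, not a prescription:

```python
import numpy as np

# Toy vocabulary: human-readable terms from a hypothetical graph.
terms = ["Aristotle", "born_in", "Stagira", "Plato"]

# Assign each term a dense vector. Here they are random; in practice
# they are trained so that related terms end up close together.
rng = np.random.default_rng(42)
embedding = {t: rng.normal(size=8) for t in terms}

# TransE-style plausibility: a triple (s, p, o) scores well when
# vector(s) + vector(p) lands near vector(o). The statement becomes
# something an AI engine can compare numerically, not symbolically.
def score(s, p, o):
    return -np.linalg.norm(embedding[s] + embedding[p] - embedding[o])

print(score("Aristotle", "born_in", "Stagira"))
print(score("Plato", "born_in", "Stagira"))
```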
Hence, in my view, the key takeaway is the need to identify and establish fundamental graph data models capable of representing information in a numeric or vectorized format while maintaining a meaningful link to the corresponding human-readable representation.
This was precisely the focus of my research on S3DM, grounded in the authentic Aristotelian semiotic principle of the triangle of reference. Essentially, on a practical level, we must reach a consensus on foundational graph semantics such as entity, attribute, and value, and replace them with numerical equivalents.
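As a purely illustrative sketch of that substitution (my own toy Python scheme, not the actual S3DM encoding), each entity, attribute, and value can be interned as a stable numeric identifier, while a two-way mapping preserves the link back to the human-readable form:

```python
# Two-way link between numeric IDs and human-readable labels
# (a hypothetical scheme for illustration, not the S3DM encoding).
ids = {}     # human-readable term -> numeric ID
labels = {}  # numeric ID -> human-readable term

def intern(term):
    # Assign a stable numeric ID to each new term.
    if term not in ids:
        ids[term] = len(ids)
        labels[ids[term]] = term
    return ids[term]

# Facts are stored purely as integer triples...
facts = [(intern("Aristotle"), intern("born_in"), intern("Stagira"))]

# ...yet the human-readable statement is recoverable on demand.
for s, p, o in facts:
    print(labels[s], labels[p], labels[o])
```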
In conclusion, the tabular and hierarchical formats ought to emerge naturally from the graph representation. Conversely, any serialized data format should be transformed into this foundational graph format.
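To illustrate that second direction, here is a toy Python sketch (my own simplified scheme, using only the standard library) that flattens a serialized JSON document into (parent, edge label, child) graph edges, so the hierarchy becomes just one view over an underlying graph:

```python
import json

# Flatten a JSON document into (parent, edge_label, child) triples.
# A toy scheme for illustration; real graph mappings are richer.
def to_edges(node, parent="root"):
    edges = []
    if isinstance(node, dict):
        for key, value in node.items():
            child = f"{parent}.{key}"
            edges.append((parent, key, child))
            edges.extend(to_edges(value, child))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            child = f"{parent}[{i}]"
            edges.append((parent, f"item_{i}", child))
            edges.extend(to_edges(value, child))
    else:
        # Leaf: attach the literal value to its node.
        edges.append((parent, "value", node))
    return edges

doc = json.loads('{"person": {"name": "Aristotle", "city": "Stagira"}}')
for edge in to_edges(doc):
    print(edge)
```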