More than three quarters of large companies today have a "data-hungry" AI initiative under way: projects involving neural networks or deep-learning systems trained on huge repositories of data. Yet many of the most valuable data sets in organizations are quite small: think kilobytes or megabytes rather than exabytes. Because this data lacks the volume and velocity of big data, it is often overlooked, languishing in PCs and functional databases, unconnected to enterprise-wide IT innovation initiatives.