Crawler is a searchable metadata repository and explorer where your sources, data models, and enterprise objects coexist in the same space. Crawler ships with a set of tools that semi-automate the creation of meaningful mappings from your sources all the way to your enterprise entities. With a semantic layer and AI-driven mappings, your metadata, enterprise models, and schemas are no longer decoupled.
By mapping your source metadata to your enterprise objects, you can start building a common vocabulary across your systems. Crawler can ingest common semantic formats such as XML, RDFS, and OWL, and its built-in MapSuggest engine helps you find implicit relationships. Query data the way you think, not the way you think a search engine thinks.
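Crawler's ingestion pipeline is internal to the product, but the kind of RDFS input it accepts can be sketched with Python's standard library. The schema, class names, and namespace URIs below are made up for illustration:

```python
import xml.etree.ElementTree as ET

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"
RDFS = "{http://www.w3.org/2000/01/rdf-schema#}"

# A minimal RDFS document of the kind Crawler can ingest
# (example.com classes are hypothetical):
doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#">
  <rdfs:Class rdf:about="http://example.com/schema#Customer">
    <rdfs:label>Customer</rdfs:label>
  </rdfs:Class>
  <rdfs:Class rdf:about="http://example.com/schema#PremiumCustomer">
    <rdfs:subClassOf rdf:resource="http://example.com/schema#Customer"/>
  </rdfs:Class>
</rdf:RDF>"""

root = ET.fromstring(doc)

# Collect the explicit subclass relationships declared in the model;
# an engine like MapSuggest would start from facts such as these when
# proposing implicit relationships.
subclasses = {}
for cls in root.findall(f"{RDFS}Class"):
    about = cls.get(f"{RDF}about")
    parent = cls.find(f"{RDFS}subClassOf")
    if parent is not None:
        subclasses[about] = parent.get(f"{RDF}resource")

print(subclasses)
```

Dedicated RDF libraries would handle full Turtle/OWL parsing; the point here is only the shape of the metadata being ingested.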
Explore your data and write semantic queries against a federated, virtual schema. Because nothing is loaded into a physical data warehouse, you always work with fresh, non-duplicated data. Crawler supports the most common SQL and NoSQL sources, and its modular design makes it easy to add new source plugins.
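The idea behind a federated, virtual schema can be sketched in a few lines of Python: query two independent sources on the fly and combine the results in memory, persisting nothing. The sources, tables, and data below are invented stand-ins, not Crawler's actual connector API:

```python
import sqlite3

# Two independent "sources" (stand-ins for, say, a CRM database and a
# billing database); Crawler's real connectors are plugin-based.
crm = sqlite3.connect(":memory:")
crm.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
crm.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Ada"), (2, "Grace")])

billing = sqlite3.connect(":memory:")
billing.execute("CREATE TABLE invoices (customer_id INTEGER, amount REAL)")
billing.executemany("INSERT INTO invoices VALUES (?, ?)", [(1, 120.0), (1, 80.0)])

# A "virtual schema" joins across both sources at query time, with
# nothing copied into a warehouse.
names = dict(crm.execute("SELECT id, name FROM customers"))
totals = {}
for cid, amount in billing.execute("SELECT customer_id, amount FROM invoices"):
    totals[cid] = totals.get(cid, 0.0) + amount

report = {names[cid]: total for cid, total in totals.items()}
print(report)  # prints {'Ada': 200.0}
```

Because the join happens at query time, the answer always reflects the current state of both sources.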
With the automated sync functionality, you discover schema changes as soon as they happen, keeping your copy of the metadata up to date. The history component logs version changes and stores version differences, allowing you to roll back to older states of a source.
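Storing version differences rather than full snapshots, and rolling back by replaying them, can be illustrated with Python's standard `difflib` module. The column definitions are hypothetical, and this is a sketch of the idea, not Crawler's history format:

```python
import difflib

# Two versions of a source schema, as a history component might capture
# them (column definitions are illustrative).
v1 = ["id INTEGER", "name TEXT", "email TEXT"]
v2 = ["id INTEGER", "name TEXT", "email TEXT", "created_at TIMESTAMP"]

# Store only the difference between the two versions...
delta = list(difflib.ndiff(v1, v2))

# ...and roll back to the older state from that difference alone.
restored = list(difflib.restore(delta, 1))
print(restored)  # prints ['id INTEGER', 'name TEXT', 'email TEXT']
```

Keeping deltas instead of full copies keeps the history compact while still allowing any earlier state to be reconstructed.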