As discussed in yesterday's meeting (notes).
In the W3C Recommendation Track Readiness Best Practices one of the strongly recommended "readiness criteria" is about having a clear problem statement:
> Strongly Recommended: The proposal identifies the real-world problem this work would address, and why existing solutions are inadequate.
>
> What are web developers forced to do without this feature being available in a standardized way? What fraction of web sites, hybrid applications, data publishers, etc. are using a similar capability in a non-standardized way? How would users benefit from this feature if standardized?
As far as we could tell in the meeting, we do not have a clear problem statement yet. In the meeting, we collected some initial ideas that should be taken into account when drafting such a problem statement:
- Before the spec, developers had to learn SPARQL, or learn a source-specific API and understand the particular data model of the data source they were reconciling against. SPARQL has a steep learning curve, and SPARQL endpoints are often brittle.
- As a large, prominent case, one might refer to Wikidata, where developers had to learn SPARQL or work with the MediaWiki API before the reconciliation endpoint existed.
- In the GND context, people implemented reconciliation on top of the lobid (non-reconciliation) API. This required considerable additional effort and yielded poorer results than just using the existing reconciliation API would have.
- In NFDI, at least one project (Antelope) implemented reconciliation functionality on their own terms. See also https://metadaten.community/t/antelope-tib/444 (in German).
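To make the first bullet concrete, here is a minimal sketch contrasting the two approaches for matching a label against Wikidata. The reconciliation request shape follows the Reconciliation Service API's `queries` parameter; the endpoint URL, property IDs, and the exact SPARQL are illustrative assumptions, not part of this issue:

```python
import json

# With a reconciliation service, the client only needs a label and an
# optional type identifier -- no knowledge of the underlying data model.
# (Endpoint URL shown for illustration; it may differ in practice.)
RECON_ENDPOINT = "https://wikidata.reconci.link/en/api"  # assumed endpoint
recon_queries = {
    "q0": {"query": "Douglas Adams", "type": "Q5", "limit": 5}
}
# The Reconciliation Service API sends the batch as a JSON-encoded
# "queries" form parameter.
payload = {"queries": json.dumps(recon_queries)}

# Without it, the same lookup requires writing SPARQL and knowing the
# source-specific model (wdt:P31 for "instance of", language-tagged labels):
sparql = """
SELECT ?item WHERE {
  ?item wdt:P31 wd:Q5 ;
        rdfs:label "Douglas Adams"@en .
} LIMIT 5
"""

print(payload["queries"])
```

The point of the contrast: the reconciliation payload is the same for any conforming service, while the SPARQL version has to be rewritten for every data source's vocabulary.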