openapi vs ifc development

hi, as i read the open api specification and the cogital strategy papers, interoperability in the next level of bim (level 4? - the digital ecosystem) is supposed to be based not on ifc, but on open apis. these are being built on rest, just like the bsdd implementation.

is this the trend?
rob

anybody…?

It is not out of the realm of possibility, but what does it matter? Are you worried that APIs would replace IFC?

Question… If one considers email or texting (SMS or such) a replacement for 'snail mail', isn't there still a need for language (e.g. English, German, French, Polish), the semiotic constructs and semantics, in order to communicate, regardless of the technical method? In essence, the schema is still relevant, regardless of whether the information exchange is file-based or streaming, monolithic or transactional, asynchronous or synchronous.

The difference between an 'open' API and an 'Open API' is that the former is just a construct of the target vendor to enable direct connection to their system and enclosed (proprietary) data set - possibly still heavily reliant on their product-specific semantics and constructs - made available to all others (free or not), while the latter is a 'universal' system, construct, and semantic basis that any vendor could use to connect to any other vendor, provided the products have Open API compatibility. In essence, this is what the Open CDE group is working on, starting with foundations and then moving to 'documents'. Eventually, they would be in a position to address model-based information transactions… but through utilizing a standard information schema (IFC) in one of many possible formats (e.g. JSON/XML, HDF5, STP, OWL/RDF, etc.) using RESTful, or possibly other, methods.

Thus IFC is still important, but I think the Technical Roadmap 2020-2025 lays out that, to take advantage of more robust, modern methodologies of information exchange, the form it takes and the way it is used will in fact change.


hi jeff, actually it's not a worry, but a need for clarification.

i've had a long email discussion with alain waha from cogital. they've made a graphic resembling the bew-richards wedge where the last step (level 3) is evolving into a step 4: the digital ecosystem, based on open apis (including web 2.0, nosql, and the like).

for me it's an ecosystem of proprietary to mostly unknown interfaces, because we will have to cope with a world of digital twins with big data streaming in real time, 24/7, between the digital and physical assets. it's surely good for the market, because many applications will have to emerge, but is it the direction that bs int'l subscribes to?

i simply don't see it as a simplification of the data exchange processes, but i might be missing some vital development details. i've welcomed the log-loi decoupling trend, which was supposed to contribute to the simplification process.
do the restful implementations of bsdd and open apis build on similar steps, and offer similar transparency?

my understanding of the data handling evolution was of a few steps, beginning with unstructured data, via partly structured (api calls), then structured (sql queries), through to personalized data (like digital twins use to handle big data). are we going back to the second kind?

The data exchanged between APIs still need (or at least it is very preferable that they do) to be based on a standard.
APIs define the interface to access the data. IFC and APIs go hand in hand.
The Technical Roadmap lays out the buildingSMART strategy for APIs.

if somebody develops an interface for the allplan and revit connection they surely don't need ifc for the data transfer. some other information format might be used that bs int'l has no control over…

With an API to Revit or Allplan, you have on one hand the API protocol (which could be REST, standardised for exchange of JSON strings/fragments). On the other hand, you'd also have to know something about the data scheme of the system serving the API. It could be custom (e.g. talking about project, element, family, worksets in Revit-speak) or based on a standard scheme (e.g. using terms such as IfcProduct, IfcProject and IfcPropertySet).
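To make that contrast concrete, here is a minimal sketch - the endpoints and field names are invented for illustration, not any vendor's actual API - of what the same wall might look like coming back from a product-specific call versus one that adopts IFC terms:

```python
# Hypothetical payloads only; neither Revit nor Allplan exposes exactly this API.
# The point is the contrast in vocabulary, not the endpoints themselves.

# GET https://example-revit-server/api/elements/316  (vendor-specific data scheme)
vendor_style = {
    "elementId": 316,
    "category": "Walls",
    "familyName": "Basic Wall",
    "typeName": "Generic - 200mm",
    "workset": "Shell and Core",
    "parameters": {"Unconnected Height": 3000.0},
}

# GET https://example-openbim-server/api/products/2O2Fr$t4X7Zf8NOew3FLKr  (IFC data scheme)
ifc_style = {
    "type": "IfcWallStandardCase",
    "globalId": "2O2Fr$t4X7Zf8NOew3FLKr",
    "name": "Basic Wall:Generic - 200mm",
    "isDefinedBy": [
        {"type": "IfcPropertySet", "name": "Pset_WallCommon",
         "properties": {"IsExternal": True, "LoadBearing": False}},
    ],
}

# The transport (REST + JSON) could be identical in both cases; only the data scheme differs.
```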

We developed a REST-like API to allow some applications to act as a server (Revit, Archicad, SketchUp and an IFC viewer). The Revit connection uses software-specific terminology, as that makes most sense when directly talking to the system. The Archicad connection uses other terms. You could imagine having a generic wrapper based on the IFC data scheme, but that would not allow you to access all software-specific constructs (such as Schedules, Views, Sections…).

So APIs don't negate the need for agreed, common data schemes, such as IFC. And in fact, developing IFC5 to be more API-based would allow a more flexible and future-proof way to connect systems. You may be sharing JSON strings via web requests and never store an IFC file, yet still be fully adopting IFC as the data scheme in such connections.
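As a rough illustration of that last point - the server and routes below are made up, only the shape of the workflow matters - a client could read and update IFC-schema data purely over HTTP, with no .ifc file ever written to disk:

```python
import requests  # assumes a hypothetical server exposing IFC-schema JSON resources

BASE = "https://cde.example.com/ifc"  # fictitious endpoint

# Fetch one product as an IFC-schema JSON fragment (no file involved).
wall = requests.get(f"{BASE}/products/2O2Fr$t4X7Zf8NOew3FLKr").json()
print(wall["type"], wall["name"])

# Push back a small, transactional change instead of re-exporting a whole model.
patch = {"isDefinedBy": [{"type": "IfcPropertySet", "name": "Pset_WallCommon",
                          "properties": {"FireRating": "REI 60"}}]}
resp = requests.patch(f"{BASE}/products/2O2Fr$t4X7Zf8NOew3FLKr", json=patch)
resp.raise_for_status()

# IFC remains the data scheme throughout; it just never touches the filesystem.
```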

does it mean ifc is not future-proof?

i'm still trying to figure out what direction the bim data exchange standards are heading in. i don't trust autodesk (although i've never used anything from them, not even autocad), so i'm wondering if they will play by the rules. in the past they haven't, when it comes to open bim.

@gester

IFC2x3 TC1 is an LTS (long term support) product, as is IFC4, and soon IFC4.3 will be as well.

IFC5 provides an opportunity for bSI and the community to make it even better as an integral mechanism to openBIM interoperability in the future. The earlier published formats should still be effective for a long time. The marketplace may even develop tools which enable translations between them.

I think you have to remember that the greatest value IFC has is in the semantic schema construct, that is, giving relevant meaning to all the data that can be generated and used across the lifecycle of built environment assets. It also provides a basis for linking to other standards and sources of data (hopefully open as well) in cases where the schema may not be comprehensive enough. The use of the schema through different types of serializations (e.g. file formats) should be flexible enough to enable lots of different kinds of workflows, as I stated earlier. Monolithic, file-based transfers of models are an old paradigm that may still be relevant for some workflows (e.g. archiving, contracted data drops, etc.), but not optimal for other, more fluid use cases. @berlotti is leading efforts by various players in the technical community to address these issues… not to eliminate IFC, but to strengthen it even further.

Autodesk is openly cooperating in all the buildingSMART International Rooms to support many projects. Recently, they announced that they will be using ODA's IFC SDK to enable that interoperability, eventually throughout their entire AEC/Infra product lines. I believe that is a great step forward and should be appreciated by everyone. At first, this may have a bigger impact on the infrastructure realm (as this is where the most urgent current demand is) but will also impact buildings. While no one is perfect, or has a perfect track record, I am giving Autodesk the benefit of the doubt, as I have for a long time as ISG Deputy-Chair, Chair, and Leader, in working toward a shared interoperability goal.

I didn't say it needs to be IFC. I said it needs to define a data schema. You can invent your own, but many who tried eventually turned to IFC due to the global consensus, semantic descriptions, etc.
IFC is not a file format, it is a semantic data standard that makes it easy to automate processes. Defining your own data schema often leads to manual mapping when exchanging. Again, not saying this is wrong; not saying IFC is the only solution; just trying to explain that comparing a semantic data standard with a programming interface is apples and oranges.

i'm not comparing, i'm just asking what the direction is.
i've just wondered whether a step from standardised data semantics to an api call to retrieve this data is not a step back, regarding the evolution of information handling, storing, and evaluating.

i used to be a software engineer for some time in my previous life ;), and i know api-based data channels as closed paths along which information wanders. i'll have to check the openapi specification, though…

Lots of good discussion here. The territory seems well covered. Just two other points:

The OpenAPI Initiative and "open APIs" as a concept are distinct. You can use the OpenAPI Spec to write REST services for closed systems inaccessible to the public and write open APIs based on open data that don't have an OpenAPI spec at all. Confusing terminology, I know :slight_smile: In my experience, adoption of OpenAPI Specifications specifically within the AECO industry is mixed. The OpenAPI Specification is a point in a larger space of service description languages - not the first and not the last. Whether teams adopt it probably depends on their use cases, consumer base, and technical aesthetic.
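For anyone who hasn't met the spec itself: an OpenAPI document is just a machine-readable description of a service - paths, operations, parameters, response schemas. A stripped-down, purely illustrative fragment (hypothetical path and schema, shown here as a Python dict that could be dumped to JSON or YAML) looks roughly like this:

```python
import json

# A minimal, illustrative OpenAPI 3.0 document for a made-up IFC products endpoint.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Example openBIM service", "version": "0.1.0"},
    "paths": {
        "/products/{globalId}": {
            "get": {
                "summary": "Fetch one IFC product as JSON",
                "parameters": [{"name": "globalId", "in": "path",
                                "required": True, "schema": {"type": "string"}}],
                "responses": {
                    "200": {"description": "An IFC product",
                            "content": {"application/json": {
                                "schema": {"$ref": "#/components/schemas/IfcProduct"}}}}
                },
            }
        }
    },
    "components": {"schemas": {"IfcProduct": {
        "type": "object",
        "properties": {"type": {"type": "string"},
                       "globalId": {"type": "string"},
                       "name": {"type": "string"}}}}},
}

# Tooling can generate documentation and client SDKs from a description like this.
print(json.dumps(openapi_doc, indent=2))
```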

There's a lot of fun territory to cover integrating bSI standards with OpenAPI Specs alone. You could, for example, use ifcJSON as the data model of your OpenAPI interface, so your API consumers can speak an open standard while your backend systems remain untouched. Likewise with BCF: you could write a proxy server that adapts legacy "issue trackers" to support a BCF-compatible API. That lets you keep your existing systems-of-record in place while providing a standards-based interface for all public consumers. BCF doesn't have an OpenAPI Spec yet, but it could be a nice project if you're interested (especially because you'd be able to demo auto-generating a client SDK). Things could get really interesting if your HTTP API exposed hypermedia responses (i.e., HATEOAS, which OpenAPI isn't currently that good at describing), because then clients would need no prior knowledge of the API contract to interact with remote services, and could traverse the IFC model graph by link-hopping.
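The proxy idea is simpler than it may sound: translate between the two shapes at the boundary. A minimal sketch - the legacy record layout is invented, and the output uses BCF-style topic fields in simplified form, not the normative BCF API - could be as small as this:

```python
from datetime import datetime, timezone

# A record as it might come out of a legacy, in-house issue tracker (made-up fields).
legacy_issue = {
    "id": 4711,
    "summary": "Clash between duct and beam on level 3",
    "opened_by": "rob",
    "opened_at": "2021-03-02 09:15",
    "status": "open",
}

def to_bcf_like_topic(issue: dict) -> dict:
    """Map a legacy issue onto a BCF-style topic payload (field names simplified)."""
    return {
        "guid": f"legacy-{issue['id']}",  # stable id derived from the old system
        "title": issue["summary"],
        "creation_author": issue["opened_by"],
        "creation_date": datetime.strptime(issue["opened_at"], "%Y-%m-%d %H:%M")
                                 .replace(tzinfo=timezone.utc).isoformat(),
        "topic_status": issue["status"].capitalize(),
    }

# A thin HTTP proxy would serve payloads like this from a BCF-style topics route,
# while the legacy system-of-record stays untouched.
print(to_bcf_like_topic(legacy_issue))
```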


this is exactly what i'm afraid of.

Sidenote… it is funny that the CEO of Autodesk now talks about 'files' and 'file formats'. As a data veteran… But it is human. Maybe buildingSMART should explain better why it isn't? In simple explanations.

Here's one take at it - eager to see others take a stab too. :slight_smile:


IFC standardizes a "map" of all the concepts people normally think about when designing, building, and operating built assets. Every IFC entity, like an IfcWall or an IfcWorkPlan, represents a concept in the domain. Each concept can have relationships with other concepts (an IfcBuildingStorey is part of an IfcBuilding) as well as properties (a building's elevation from sea level). The standard defines all the possible relationships and properties the IFC concept map can have, and the IFC data we create in practice are just specific instances of that map - a big network of domain entities connected together via links that together capture our knowledge of the asset we're working on.
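A toy version of that map - the entity names are borrowed from IFC, but the data structure itself is just an illustration - might look like this:

```python
# A tiny, hand-rolled stand-in for an IFC-style network: entities, relationships,
# and properties. Entity names come from IFC; the structure is illustrative only.
model = {
    "building": {"type": "IfcBuilding", "name": "Office block A"},
    "storey_1": {"type": "IfcBuildingStorey", "name": "Level 1",
                 "properties": {"Elevation": 0.0}},   # metres above project datum
    "storey_2": {"type": "IfcBuildingStorey", "name": "Level 2",
                 "properties": {"Elevation": 3.5}},
}

# Relationships are links between concepts, not data buried inside either one.
relationships = [
    {"type": "IfcRelAggregates", "relating": "building",
     "related": ["storey_1", "storey_2"]},
]

# "Which storeys make up the building, and at what elevation?" is a walk over the links.
for rel in relationships:
    if rel["type"] == "IfcRelAggregates" and rel["relating"] == "building":
        for key in rel["related"]:
            storey = model[key]
            print(storey["name"], "at elevation", storey["properties"]["Elevation"])
```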

You can see this network model directly inside a typical IFC-STP file. Every line of the STP file represents a single instance of an IFC concept (e.g., a specific IfcMaterial). Each instance has an address (e.g., "#78") that uniquely identifies it within the network. Other instances can reference this address to create relationships - to state, for example, that a certain MaterialLayer ("#77") has a certain Material ("#78"). What we typically refer to as "IFC files" are really just freeze-dried networks of domain knowledge defined by the IFC concept map.
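In code terms, a STEP file is close to a dictionary keyed by those "#" addresses, and resolving a relationship is just a lookup. The two entries below mirror the MaterialLayer/Material example, with made-up attribute values:

```python
# A two-entry stand-in for the MaterialLayer/Material lines of an IFC-STP file.
# Keys play the role of the "#" instance addresses; values are the instances.
instances = {
    "#78": {"type": "IfcMaterial", "name": "Concrete"},
    "#77": {"type": "IfcMaterialLayer", "material": "#78", "layer_thickness": 0.2},
}

def resolve(address: str) -> dict:
    """Follow an instance address to the instance it names - that is the whole 'network' trick."""
    return instances[address]

layer = instances["#77"]
print(resolve(layer["material"])["name"])  # -> "Concrete"
```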

The IFC model network doesn't have to be bound to files. If every IFC instance address is a URL instead of a file-specific number, then you can distribute the IFC model across the open web. Every IFC instance becomes its own web "resource" that uses hyperlinks to describe relationships it has to other resources. You can follow the links to explore the network, just like we do in the file, except now visiting a single link might take you to part of the network hosted anywhere in the world. We haven't changed the structure of the IFC network by moving it to the web, we've just made the address space for IFC data available to everyone. So you might think of the web as the world's largest IFC file, or existing IFC files as frozen mini-webs. The fundamental network model is unchanged either way. The future of IFC on the open web can be as bright as we want it to be.
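Carrying the little address sketch forward: swap the file-local "#" addresses for URLs and nothing has to change conceptually - following a relationship becomes an HTTP request instead of a dictionary lookup. The hosts and paths below are invented:

```python
import requests  # the hosts below are fictitious; only the pattern matters

# The same MaterialLayer/Material pair, but every instance now lives at a URL,
# possibly on different servers run by different parties.
layer_url = "https://contractor.example.com/ifc/materiallayers/77"

layer = requests.get(layer_url).json()
# e.g. {"type": "IfcMaterialLayer",
#       "material": "https://supplier.example.org/ifc/materials/78",
#       "layer_thickness": 0.2}

material = requests.get(layer["material"]).json()  # link-hop to another host
print(material["name"])                             # -> "Concrete"
```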


Not sure this is the simplest explanation, but maybe a start.

we know the ifc syntax, but thanks.

this is fine, but how can i, as an architect, use the network to visualize the building model?
i apparently have to retrieve it, say, via some api, into my design software (the same goes for exporting, if we don't use files).

the ifc network is an open book; will it be safe in every api?

re: IFC syntax: that's actually a key point. Because IFC describes a data model, the concrete syntax doesn't matter. IFC-STP just makes the network model easy to see in plain text. The same IFC graph can be serialized as XML, RDF triples, JSON, UML, or any other serialization suited to the use case. Likewise, the transport mechanism is irrelevant (on a floppy disk? over the network? via bluetooth?) so long as the participating users/applications understand and agree to the contract for accessing the model graph.
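A quick way to see the "syntax doesn't matter" point: take one tiny instance and emit it in two different serializations; the information content is identical. (The element and attribute names here are ad hoc, not the published ifcXML or ifcJSON conventions.)

```python
import json
import xml.etree.ElementTree as ET

# One instance from the model graph, held in a neutral in-memory form.
material = {"type": "IfcMaterial", "name": "Concrete"}

# Serialization 1: JSON
as_json = json.dumps(material)

# Serialization 2: a simple XML rendering of the same content
elem = ET.Element(material["type"], attrib={"Name": material["name"]})
as_xml = ET.tostring(elem, encoding="unicode")

print(as_json)  # {"type": "IfcMaterial", "name": "Concrete"}
print(as_xml)   # <IfcMaterial Name="Concrete" />
```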

How IFC data is secured is a critical but orthogonal concern. Like syntax and transport, there's more than one option: OAuth scopes over HTTP? Public key cryptography? A virtual private network? It would be nice if there were more domain-oriented infrastructure for this today, but nothing prevents us from building it up if we have a clear idea of the workflow we're after.
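As one concrete flavour of that - the token, scope, and endpoint are all hypothetical - an OAuth-protected request would carry a bearer token whose granted scopes decide which parts of the model graph the caller may touch, while the model data itself stays unaware of any of it:

```python
import requests

# Hypothetical: a token previously obtained from an OAuth authorization server,
# granted e.g. a "model:read" scope but not "model:write".
access_token = "example-access-token"

resp = requests.get(
    "https://cde.example.com/ifc/products/2O2Fr$t4X7Zf8NOew3FLKr",
    headers={"Authorization": f"Bearer {access_token}"},
)
resp.raise_for_status()  # a missing or under-scoped token would typically return 401/403

# The payload is ordinary IFC-schema JSON; security sat entirely in the transport layer.
print(resp.json()["type"])
```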

The diversity of approaches to serialization, transport, and security of IFC graphs lets us flip the question on its head: How would we like to interact with IFC models in the future? With a clear sense of the workflows we need, it's easier to prune the solution space into a toolkit we can profitably use in practice.

PS: If you haven't tried it before, it can be fun to explore the ifcOWL ontology in Protege.

For as long as I can remember, IFC was always about sharing clumsy, slow and 'unlean' big text files. What are we waiting for to leave this behind?

i don't mean the transfer security, that could be done using distributed ledger technology.

i mean the security of the open model structure, which is prone to any technology bias, not necessarily via open standards.