hi, as i read the open api specification and the cogital strategy papers, interoperability in the next level of bim (level 4? - the digital ecosystem) is supposed to be based not on ifc, but on open apis. they are being developed on rest, just like the bsdd implementation.
It is not out of the realm of possibility, but what does it matter? Are you worried that APIs would replace IFC?
Question… If one considers email or texting (SMS or such) a replacement for "snail mail", isn't there still a need for language (e.g. English, German, French, Polish), the semiotic constructs and semantics, in order to communicate, regardless of the technical method? In essence, the schema is still relevant, regardless of whether the information exchange is file-based or streaming, monolithic or transactional, asynchronous or synchronous.
The difference between an "open" API and an "Open API" is that the former is just a construct of the target vendor to enable direct connection to their system and its enclosed (proprietary) data set, possibly still heavily reliant on their product-specific semantics and constructs, even if available to all others (free or not). The latter is a "universal" system, construct, and semantic basis that any vendor could use to connect to any other vendor, provided the products have Open API compatibility. In essence, this is what the Open CDE group is working on, starting with foundations and then moving to "documents". Eventually, they would be in a position to address model-based information transactions… but through utilizing a standard information schema (IFC) in one of many possible formats (e.g. JSON/XML, HDF5, STP, OWL/RDF, etc.) using RESTful, or possibly other, methods.
Thus IFC is still important, but I think the Technical Roadmap 2020-2025 lays out that, to take advantage of more robust, modern methodologies of information exchange, the form IFC takes and the way it is used will in fact change.
hi jeff, actually it's not a worry, but a need for clarification.
i've had a long email discussion with alain waha from cogital. they've made a graphic resembling the bew-richards wedge where the last step (level 3) is evolving into a step 4: the digital ecosystem, based on open apis (including web 2.0, nosql, and the like).
for me it's an ecosystem of proprietary to mostly unknown interfaces, because we will have to cope with a world of digital twins with big data streaming in real time 24/7 between the digital and physical assets. it's surely good for the market, because many applications will have to emerge, but is it the direction that bs int'l subscribes to?
i simply don't see it as a simplification of the data exchange processes, but i might be missing some vital development details. i've welcomed the log-loi decoupling trend, which was supposed to contribute to the simplification process.
are the restful implementations of bsdd and open apis based on similar steps, and similar transparency?
my understanding of the data handling evolution was of a few steps: beginning with unstructured data, via partly structured (api calls), then structured (sql queries), through to personalized data (like digital twins use to handle big data). are we going back to the second kind?
The data exchanged between APIs still needs (or at least it is very preferable for it) to be based on a standard.
APIs define the interface for accessing the data. IFC and APIs go hand in hand.
The Technical Roadmap states the buildingSMART strategy regarding APIs.
if somebody develops an interface for the allplan-revit connection, they surely don't need ifc for the data transfer. some other information format might be used, which bs int'l has no control over…
With an API to Revit or Allplan, you have on one hand the API protocol (which could be REST, standardised for the exchange of JSON strings/fragments). On the other hand, you'd also have to know something about the data scheme of the system serving the API. It could be custom (e.g. talking about project, element, family, and worksets in Revit-speak) or based on a standard scheme (e.g. using terms such as IfcProduct, IfcProject, and IfcPropertySet).
We developed a REST-like API to allow some applications to act as a server (Revit, Archicad, SketchUp, and an IFC viewer). The Revit connection uses software-specific terminology, as that makes the most sense when talking directly to the system. The Archicad connection uses other terms. You could imagine having a generic wrapper based on the IFC data scheme, but that would not allow you to access all software-specific constructs (such as Schedules, Views, Sections…).
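To make that contrast concrete, here is a minimal sketch; the payloads are invented for illustration and do not come from any real Revit or Archicad API:

```python
# Hypothetical example: the same wall exposed in two API "dialects".
# The transport (JSON over REST) is the same; only the data scheme differs.

revit_speak = {
    "elementId": 316224,
    "category": "Walls",
    "family": "Basic Wall",
    "type": "Generic - 200mm",
    "workset": "Shell",
}

def to_ifc_speak(revit: dict) -> dict:
    """Sketch of the mapping a generic IFC-based wrapper would perform."""
    return {
        "type": "IfcWall",  # assumes a known category -> IFC entity mapping
        "name": f'{revit["family"]}:{revit["type"]}',
    }

print(to_ifc_speak(revit_speak))
# {'type': 'IfcWall', 'name': 'Basic Wall:Generic - 200mm'}
```

Note how the wrapper loses anything (like the workset) that has no IFC counterpart, which is exactly the trade-off described above.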
So APIs don't negate the need for agreed, common data schemes, such as IFC. In fact, developing IFC5 to be more API-based would allow a more flexible and future-proof way to connect systems. You may be sharing JSON strings via web requests and never store an IFC file, yet still be fully adopting IFC as the data scheme in such connections.
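A minimal sketch of that "IFC without files" idea, assuming a hypothetical endpoint; nothing here is a real bSI or vendor API:

```python
# An IFC-typed JSON fragment is pushed over HTTP; no .ifc file is written.
import json
import urllib.request

payload = {
    "type": "IfcWall",
    "globalId": "3vB2YO$MX4xv5uCqZZG05x",  # invented GlobalId
    "name": "Partition wall",
}

req = urllib.request.Request(
    "https://cde.example.com/projects/42/elements",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# with urllib.request.urlopen(req) as resp:  # run only against a real server
#     print(resp.status)
```

The receiving system can still validate the fragment against the IFC schema, so both sides are "speaking IFC" even though no file ever exists.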
i'm still trying to figure out what direction the bim data exchange standards are heading in. i don't trust autodesk (although i've never used anything from them, not even autocad), so i'm wondering if they will play by the rules. in the past they haven't, when it comes to open bim.
IFC2x3 TC1 is an LTS (long-term support) product, as is IFC4, and soon IFC4.3 will be too.
IFC5 provides an opportunity for bSI and the community to make it even better as an integral mechanism to openBIM interoperability in the future. The earlier published formats should still be effective for a long time. The marketplace may even develop tools which enable translations between them.
I think you have to remember that the greatest value IFC has is in the semantic schema construct, which gives relevant meaning to all the data that can be generated and used across the lifecycle of built environment assets. It also provides a basis for linking to other standards and sources of data (hopefully open as well) in cases where the schema may not be comprehensive enough. The use of the schema through different types of serializations (e.g. file formats) should be flexible, to enable lots of different kinds of workflows, as I stated earlier. Monolithic, file-based transfers of models are an old paradigm that may still be relevant for some workflows (e.g. archiving, contracted data drops, etc.), but not optimal for other, more fluid use cases. @berlotti is leading efforts by various players in the technical community to address these issues… not to eliminate IFC, but to strengthen it even further.
Autodesk is openly cooperating in all the buildingSMART International Rooms to support many projects. Recently, they announced that they will be using ODA's IFC SDK to enable that interoperability, eventually throughout their entire AEC/Infra product lines. I believe that is a great step forward and should be appreciated by everyone. At first, this may have a bigger impact on the infrastructure realm (as this is where the most urgent current demand is) but will also impact buildings. While no one is perfect, or has a perfect track record, I am giving Autodesk the benefit of the doubt, as I have for a long time as ISG Deputy-Chair, Chair, and Leader, in working toward a shared interoperability goal.
I didn't say it needs to be IFC. I said it needs to define a data schema. You can invent your own, but many who tried eventually turned to IFC due to the global consensus, semantic descriptions, etc.
IFC is not a file format; it is a semantic data standard that makes it easy to automate processes. Defining your own data schema often leads to manual mapping when exchanging. Again, not saying this is wrong; not saying IFC is the only solution; just trying to explain that comparing a semantic data standard with a programming interface is apples and oranges.
i'm not comparing, i'm just asking what the direction is.
i've just been wondering whether a step from standardised data semantics to api calls to retrieve this data is not a step back, regarding the evolution of information handling, storing, and evaluating.
i used to be a software engineer for some time in my previous life ;), and i know api-based data channels as closed paths along which information wanders. i'll have to check the openapi specification, though…
Lots of good discussion here. The territory seems well covered. Just two other points:
The OpenAPI Initiative and "open APIs" as a concept are distinct. You can use the OpenAPI Spec to write REST services for closed systems inaccessible to the public, and write open APIs based on open data that don't have an OpenAPI spec at all. Confusing terminology, I know. In my experience, adoption of OpenAPI Specifications specifically within the AECO industry is mixed. The OpenAPI Specification is one point in a larger space of service description languages - not the first and not the last. Whether teams adopt it probably depends on their use cases, consumer base, and technical aesthetic.
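To make the distinction tangible, here is a minimal OpenAPI 3.0 document written as a Python dict for a made-up, entirely closed service; the spec describes the interface and says nothing about whether the data behind it is open:

```python
# Minimal OpenAPI 3.0 description of a hypothetical internal service.
openapi_doc = {
    "openapi": "3.0.3",
    "info": {"title": "Internal Element Service", "version": "1.0.0"},
    "paths": {
        "/elements/{id}": {
            "get": {
                "parameters": [
                    {"name": "id", "in": "path", "required": True,
                     "schema": {"type": "string"}},
                ],
                "responses": {
                    "200": {"description": "A single element as JSON"},
                },
            },
        },
    },
}
```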
There's a lot of fun territory to cover integrating bSI standards with OpenAPI Specs alone. You could, for example, use ifcJSON as the data model of your OpenAPI interface, so your API consumers can speak an open standard while your backend system remains untouched. Likewise with BCF: you could write a proxy server that adapts legacy "issue trackers" to support a BCF-compatible API. That lets you keep your existing systems of record in place while providing a standards-based interface for all public consumers. BCF doesn't have an OpenAPI Spec yet, but it could be a nice project if you're interested (especially because you'd be able to demo auto-generating a client SDK). Things could get really interesting if your HTTP API exposed hypermedia responses (i.e., HATEOAS, which OpenAPI isn't currently that good at describing), because then clients would need no prior knowledge of the API contract to interact with remote services, and could traverse the IFC model graph by link-hopping.
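Here is a sketch of that proxy idea, assuming Flask and a made-up legacy tracker; the route shape loosely follows the BCF API convention but is simplified:

```python
# Sketch: adapt a legacy issue tracker to a BCF-style REST API on the fly.
from flask import Flask, jsonify

app = Flask(__name__)

def legacy_fetch_issues(project_id: str) -> list[dict]:
    """Placeholder for whatever client your system of record provides."""
    return [{"id": "184", "summary": "Clash at grid C-4", "state": "open"}]

@app.get("/bcf/2.1/projects/<project_id>/topics")
def topics(project_id: str):
    # Translate legacy records into BCF-topic-shaped JSON.
    return jsonify([
        {"guid": issue["id"],
         "title": issue["summary"],
         "topic_status": issue["state"]}
        for issue in legacy_fetch_issues(project_id)
    ])

if __name__ == "__main__":
    app.run(port=8080)
```

The legacy system stays untouched; consumers only ever see the standards-based surface.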
Sidenote… it is funny that the CEO of Autodesk now talks about "files" and "file formats". As a data veteran… But it is human. Maybe buildingSMART should explain better why IFC isn't a file format? In simple terms.
Here's one take at it - happy if others want to take a stab too.
IFC standardizes a "map" of all the concepts people normally think about when designing, building, and operating built assets. Every IFC entity, like an IfcWall or an IfcWorkPlan, represents a concept in the domain. Each concept can have relationships with other concepts (an IfcBuildingStorey is part of an IfcBuilding) as well as properties (a building's elevation above sea level). The standard defines all the possible relationships and properties the IFC concept map can have, and the IFC data we create in practice are just specific instances of that map: a big network of domain entities connected via links that together capture our knowledge of the asset we're working on.
You can see this network model directly inside a typical IFC-STP file. Every line of the STP file represents a single instance of an IFC concept (e.g., a specific IfcMaterial). Each instance has an address (e.g., "#78") that uniquely identifies it within the network. Other instances can reference this address to create relationships - to state, for example, that a certain MaterialLayer ("#77") has a certain Material ("#78"). What we typically refer to as "IFC files" are really just freeze-dried networks of domain knowledge defined by the IFC concept map.
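A toy sketch of that reference-following, with the STP-style instance lines reduced to a Python dict (the instance data is invented):

```python
# Toy model of a freeze-dried IFC network: addresses map to instances,
# and "#nn" strings inside an instance are references to other addresses.
instances = {
    "#77": {"type": "IfcMaterialLayer", "material": "#78", "thickness": 0.2},
    "#78": {"type": "IfcMaterial", "name": "Concrete"},
}

def resolve(address: str) -> dict:
    """Follow a reference, inlining nested references recursively."""
    node = instances[address]
    return {
        key: resolve(value)
        if isinstance(value, str) and value.startswith("#") else value
        for key, value in node.items()
    }

print(resolve("#77"))
# {'type': 'IfcMaterialLayer',
#  'material': {'type': 'IfcMaterial', 'name': 'Concrete'},
#  'thickness': 0.2}
```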
The IFC model network doesn't have to be bound to files. If every IFC instance address is a URL instead of a file-specific number, then you can distribute the IFC model across the open web. Every IFC instance becomes its own web "resource" that uses hyperlinks to describe the relationships it has to other resources. You can follow the links to explore the network, just as we do in the file, except now visiting a single link might take you to part of the network hosted anywhere in the world. We haven't changed the structure of the IFC network by moving it to the web; we've just made the address space for IFC data available to everyone. So you might think of the web as the world's largest IFC file, or of existing IFC files as frozen mini-webs. The fundamental network model is unchanged either way. The future of IFC on the open web can be as bright as we want it to be.
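Under that framing, a client needs nothing but a starting URL. A sketch of one hop, with invented hosts and an invented JSON shape:

```python
# Sketch of traversing a web-distributed IFC network by following links.
import json
import urllib.request

def fetch(url: str) -> dict:
    """GET a single IFC instance resource as JSON (hypothetical shape)."""
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def hop(resource: dict, relation: str) -> dict:
    """Follow one named relationship to wherever on the web it is hosted."""
    return fetch(resource[relation])

# Usage against an imagined deployment:
# layer = fetch("https://materials.example.org/ifc/layers/77")
# -> {"type": "IfcMaterialLayer",
#     "material": "https://vendor.example.com/ifc/materials/78"}
# material = hop(layer, "material")  # may resolve to a different host
```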
Not sure this is the simplest explanation, but maybe a start.
this is fine, but how can i, as an architect, use the network to visualize the building model?
i apparently have to retrieve it, say, via some api, into my design software (the same goes for exporting, if we don't use files).
the ifc network is an open book - will it be safe in every api?
re: IFC syntax: that's actually a key point. Because IFC describes a data model, the concrete syntax doesn't matter. IFC-STP just makes the network model easy to see in plain text. The same IFC graph can be serialized as XML, RDF triples, JSON, UML, or any other serialization suited to the use case. Likewise, the transport mechanism is irrelevant (on a floppy disk? over the network? via bluetooth?) so long as the participating users/applications understand and agree to the contract for accessing the model graph.
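For instance, the same toy wall could be flattened into more than one of those syntaxes; a sketch with invented values and simplified predicate names:

```python
# One IFC instance, two serializations: the graph content is identical,
# only the notation changes.
import json

wall = {
    "type": "IfcWall",
    "globalId": "0xScRe4drECQ4DMSqUjd6d",  # invented GlobalId
    "name": "Partition wall",
}

# JSON serialization
print(json.dumps(wall))

# RDF-style triples (subject, predicate, object); predicates simplified,
# not the exact ifcOWL property names.
triples = [
    (wall["globalId"], "rdf:type", "ifc:IfcWall"),
    (wall["globalId"], "ifc:name", wall["name"]),
]
for s, p, o in triples:
    print(s, p, o)
```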
How IFC data is secured is a critical but orthogonal concern. Like syntax and transport, there's more than one option: OAuth scopes over HTTP? Public key cryptography? A virtual private network? It would be nice if there were more domain-oriented infrastructure for this today, but nothing prevents us from building it up once we have a clear idea of the workflow we're after.
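As one example, scoping read access over HTTP might look like the following; the token, scope name, and endpoint are all invented:

```python
# Sketch: an OAuth2 bearer token gating access to a slice of the IFC graph.
import urllib.request

req = urllib.request.Request(
    "https://cde.example.com/projects/42/elements/78",  # placeholder URL
    headers={"Authorization": "Bearer <access-token>"},
)
# The server would verify the token and check its scopes (e.g. "model:read")
# before serving this resource; the IFC content itself is unchanged.
```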
The diversity of approaches to serialization, transport, and security of IFC graphs lets us flip the question on its head: How would we like to interact with IFC models in the future? With a clear sense of the workflows we need, it's easier to prune the solution space into a toolkit we can profitably use in practice.
PS: If you haven't tried it before, it can be fun to explore the ifcOWL ontology in Protege.