buildingSMART Forums

IFC Modularisation: current status and request for feedback

Modularisation already exists, and it is called Model View Definitions. It seems Modularisation is being used as an enabler for poor software engineering without proper systems analysis, and for poor contractual behavior. This so-called Modularisation would prevent proper standard information exchanges from flourishing globally. It is known that folks in parts of Europe are creating their own information exchanges due to software inability, forcing them in and out of software not built to handle the rubbish, and this Modularisation is catering to this improper behavior.

Modularisation as presented and discussed herein will prevent proper testing of information against contractual agreements, which is what MVDs are meant to satisfy. If this Modularisation comes into existence, we will not be able to share contracted information deliverables in a standard way, because people globally will be doing what they feel like doing instead of following standard information exchanges that can be contractually agreed and objectively tested in the same manner no matter the location. We fight this now with folks forcing information into coordination models instead of agreeing on standard MVDs we can all utilize.

Millions of US taxpayer dollars went into creating a foundation to battle this problem, with the USACE ERDC conducting the Life Cycle information exchange Project, resulting in several standard contractable MVDs which bSI has to date ignored. It hurts to see such waste, and a proposal that would only add fuel to the fire that folks worked so hard, and put so much time and money into, to put out. It is really sad to see the new proposed direction that has become clear over these past days. I hope the focus goes back to contractable information exchanges that can be objectively tested, where if the data has not met the standard then folks simply will not get paid.

Too much nonsense is going on in the construction sector with information modeling and data wrangling, where no one actually knows if the job has been done correctly. In the days when we drew by hand, no one could put crap all over my drawings and demand to be paid. However, now this is the case with “BIM”. We have an opportunity to do this properly, so let’s not create a mechanism where we allow garbage in and out of software and pat each other on the back and say well done!

Until we have MVDs that are testable against contracted information exchanges, we shall continue to suffer in industry as a whole. Modularisation will not solve this. In fact, we already have Modularisation, called an MVD.


I don’t follow you here. What is “this modularization” you are talking about? I have not yet seen any actual examples of what this modularization is going to be, so it is way too early to say, for example, that it can’t be objectively tested.

I agree that modularization sounds like something that should be solved by MVD. But people should still be allowed to discuss other ideas as well. And maybe the outcome of the discussion is some features that are currently missing in MVD, so maybe modularization really is MVD 2.0 or something.

The problem with this discussion is still that no one knows what this modularization is and why it is wanted/needed. And that “why” part has to be properly established first. We cannot discuss or invent modularization if there are no commonly understood reasons for its existence.

Still waiting on @jwouellette and @berlotti to further explain the problems that need to be solved by modularization.


Thanks @Helland. That is indeed the major lesson from this: the cause/motive needs clarification. Working on that right now!


If that’s the only part you do not follow, then I’m sure we are on the same page! :slight_smile:

One in which MVD as we know it is abolished?

Waiting to see if that’s part of the “motive” and an effect of the “cause”.


Thanks Leon!

I think the only way we can show together what we have in mind is to work on different ideas and let the industry choose which solutions are better.

And I think the majority have started this approach

My approach, as I said before, is to add an IDM/MVD Configurator inside the software and give software users the ability to do anything “inside the software”.

From IFD, IFC, IDM, and MVD settings, to UC, to ER, to MVD, to anything: the whole workflow would be flexible and under the “users’” control.

Yes, definitely a new topic. But just to reply briefly: I look at IfcDoc through the lens of documentation generation and schema editing more than just validation. But sure, Schematron is definitely worth considering as an addition for schema validation, maybe adding it to every publication in computer-interpretable listings.



Coming in late, I saw good ideas, e.g. from @nn1, @DrShawnOKeeffe, @sergej and @Helland. On the other hand, the topic is complex and multifaceted, so there is a danger of mixing up the different topics: modularization vs. MVD vs. certification vs. base modelling languages (UML, SysML, etc.). Here is my view:

Modularization (of a schema): to me it mainly means that I cluster the overall IFC schema into certain parts in order to manage each (rather) independently of the other parts (by different persons, etc.). To some degree that was always the case in IFC development (see the architecture diagram, born back in the late 1990s). But when it came to packaging it all for implementation in software, everything was combined into a single schema file, first EXPRESS, later also XSD.

Model view definitions: from a software perspective, obviously no software can support data against the complete schema. To avoid the sender and the receiver using/expecting data written against different parts of the schema, there was a need to limit the scope of the schema (creating a subschema) that more precisely describes the common subset of the expected exchange. This is what an MVD is: a subset of the total IFC schema, plus additional constraints imposed on that subset, that restricts the scope of the data exchange for a set of exchange scenarios (e.g. all exchanges that rely on linking of discipline models, a.k.a. the Reference View).

Exchange requirements: from a user perspective, the IDM method was born (early 2000s) to describe the data exchange needs between different processes or disciplines in terms of expected data deliveries. Later something similar came up as part of the LOD (D for definition, not detail) discussions: the LOI (level of information). [Side note: to me, someone should at some point define the commonalities and differences between the two, IDM and LOIN (which is the new term in ISO 19650); right now there is big confusion.] Unfortunately the terms “Model View Definition” and “Exchange Requirements” are continuously mixed up, confusing everyone.

E.g. there are 100+ different exchange requirements dealing with different attributes assigned to model elements, whereas in IFC this is one concept called “property set”. So in principle it requires only one MVD (containing property sets) to satisfy 100+ exchange requirements.
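A minimal sketch of this point: the MVD side contributes one generic mechanism (property sets), and each exchange requirement is just a different parameterization of it. All names and the dict-based element representation here are invented for illustration; they are not from any real IFC toolkit.

```python
# Hypothetical sketch: one generic "property set" checker (the MVD side)
# can serve many different exchange requirements (the ER side).

def check_property_requirements(element, requirements):
    """Return the required properties missing from an element.

    `element` maps pset names to {property: value} dicts;
    `requirements` is a list of (pset_name, property_name) pairs
    taken from one exchange requirement.
    """
    missing = []
    for pset_name, prop_name in requirements:
        pset = element.get(pset_name, {})
        if prop_name not in pset or pset[prop_name] is None:
            missing.append((pset_name, prop_name))
    return missing

# Two different ERs, both satisfied by the single property-set mechanism:
er_fire = [("Pset_WallCommon", "FireRating")]
er_energy = [("Pset_WallCommon", "ThermalTransmittance")]

wall = {"Pset_WallCommon": {"FireRating": "REI60"}}
assert check_property_requirements(wall, er_fire) == []
assert check_property_requirements(wall, er_energy) == [
    ("Pset_WallCommon", "ThermalTransmittance")
]
```

The checker itself never changes; only the requirement list does, which is the sense in which one MVD can satisfy many exchange requirements.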

mvdXML - the specification was created for three (separate) reasons:

  1. to make MVD subsetting computer readable and executable (before, it was a spreadsheet with an x behind classes)
  2. to guide the documentation process - i.e. to generate the IFC specification (see the differences between IFC2x3 and IFC4)
  3. to add constraints and check rules

Modularization and MVD are not the same, MVD and exchange requirements are not the same, and by choosing UML as the main language to define the schema (and some tool to actually handle the UML diagrams, like Enterprise Architect) those questions are not answered either.

Still, I see the need to move away from ISO/EXPRESS to a more mainstream language/tool such as UML, and the need to use standard tools for modeling. This is a route to go (and yes, as @jwouellette said, there are many discussions on whether to use straight UML or SysML - e.g. the STEP community decided to move to SysML).

I am not so sure about Modularization versus MVD - subdividing the total IFC schema into (large) packages and using those as subsets for implementation/certification lacks, in my view, the flexibility of the MVD approach (in particular the use of concept templates). And as @nn1 said, there is hardly any domain-specific class. For me, it should not be differentiated by domain, but rather by:

  • fundamental type of geometry to be supported (simple triangulation, NURBS, CSG solids, procedural geometry) - and geometry is not domain-specific
  • fundamental types of model representations (volumetric models, surface-based models (e.g. thermal boundaries for energy calculations), idealized geometry (e.g. for structural analysis), 4D (adding work schedules, tasks, etc. connected to model elements), 5D (adding cost schedules), etc.)

I will focus on the modularization part in my reply since other topics can be discussed elsewhere.

If the objective is easier maintenance of the schema I guess it would make sense. But is that currently really a pressing issue?

So, the current subschemas focusing on domains are not the proper way, and as you suggest, “fundamental type of geometry” and “fundamental type of model representations” would be better. In that case, will there be other people working on maintaining these modules, or will it still be the same handful of people that currently do this? If the latter, are the current “maintainers” complaining about the schema not being modular, or where does this initiative come from? As I stated in my previous post, I think focusing the scarce technical resources on IfcDoc (or an alternative), mvdXML, and GitHub (adding this one; in my mind it is in the IfcDoc domain) seems more reasonable to me.

If I understand your proposal with geometry and model representation modules correctly, would this just mean removing the domains from the resource layer (leaving the ones that are connected to model representations, e.g. the structural analysis domain), splitting up the geometry model resource, and also adding some new ones (or maybe combining existing ones?) for schedules, cost, etc.? This could be achieved with the existing system, just reorganizing it a bit. Evolution over revolution. Who benefits? Or, as I asked before, who is the initializer? Just strapping different names onto existing concepts does not change the idea behind them.

I apologize if this sounds dismissive again; that is not my intention. But yes, I am still trying to understand what the benefits would be and who would benefit from this. I am all for simplifying things (it makes my job easier), but this still seems like complicating things again. My motives are selfish because, in the end, I will have to use outdated tools in our standardization efforts and in the bSI harmonization and deployment projects next year, hacking around it myself when I encounter a bug (@jonm, appreciate all the help from you!).


Hi @sergej

maybe I was not clear in my writing. What I want to bring forward is my personal belief that (1) Modularization and (2) MVDs as implementation subsets are not the same.

(1) Modularization could still basically follow the IFC architecture diagram with resource, core and interoperability layers, but (2) generating MVDs should follow concept templates (or units of functionality) that are guided by the fundamental geometry types, the representation types, etc.

In my view (to be challenged) we should not move away from MVD, but put MVD back on its feet. Meaning: decouple it from the exchange requirements and restrict it to at most 5(?) very basic IFC subsets, which are still broad enough to cover dynamic exchange requirements (a.k.a. LOIN).


Hi @TLiebich,

oh no, you were clear. But I was agreeing with all that (the other bold points in your original) and just wanted to contribute to the modularization discussion the way I understood the original proposal. :slight_smile: What the community knows as MVDs was renamed to “modules”, so everyone was confused. The presentation by @berlotti even suggested that “Certification based on use cases (MVDs) does not scale”. We still don’t know what this means and what the modules are supposed to solve in this respect.

The point I was making in one of my previous posts, explaining the idea of concept templates that can be configured (even with exchange requirements, but let’s not go there again :slight_smile: ), is completely in line with what you are saying. Maybe I just went into too much detail. So yes, that all sounds reasonable and productive. :+1:

Interesting! :slight_smile:

Hi Thomas, always great to hear what you have to say on these matters.

Can you please provide 5 examples?

Can you please elaborate on “very basic IFC subsets, which are still broad enough to cover dynamic exchange requirements” ?

Should not MVDs be very concise and explicit for verification and validation purposes? (I guess more the validation part, since verification simply checks the format of the data and has no way of knowing whether the information exchanged is what was required in the contract.)

If it’s basic and broad, how will one know whether the requirements have been met? And how would one contract such broad and basic exchanges? Pretty vague in my opinion. I would not want to be the receiver of such model views, especially if I’m the owner; e.g. how do I know I’m getting what I paid for?

Hi @DrShawnOKeeffe - always good to discuss with you.

Actually, your answer goes to the heart of the necessary clarification: what do we understand by “verification, validation and (also) certification process”? Here (in my view) different people understand the same word very differently, depending on the context in which it is used.

Here my understanding:

MVD is a software development topic: it is to define the IFC subset, to implement it in software (as IFC interfaces today, maybe as OpenAPI in future), and to quality-assure the implementation. Validation (as part of a certification or software QA process) is to ensure that sender and receiver of the information (according to the IFC subset) have the same understanding and that there is no data loss or misinterpretation. But an MVD is not concerned with the completeness of the content of the information.

Validation of the information exchanges is a user topic: it is to ensure that the creator of the submitted model has provided all the necessary information for which he was contracted. So it is connected to the use case (or exchange requirement) for which the information exchange is required.

Of course, the underlying MVD is the boundary within which a successful information exchange can be performed (in a way, the eye of the needle). But the same MVD can enable multiple exchange requirements. So, to answer your question: it is the formal (and computer-readable) exchange requirement (based on an MVD) that has to be referenced in a contract. I would phrase it like this in a contract:

  • the party “A” is requested to deliver a BIM model according to the exchange requirement “abc”. The format of the delivery is “ifc mvd 1”.

This would separate the requirements on the content (the exchange requirement) from the requirements on the format by which it is delivered.
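The separation described in the contract clause above can be sketched as a data structure in which the content requirement and the delivery format are two independent fields. The class and identifier names here are invented for illustration and match the placeholder names ("A", "abc", "ifc mvd 1") used in the example clause.

```python
# Sketch: a contracted deliverable that references the content
# requirement (ER) and the delivery format (MVD) independently.
from dataclasses import dataclass


@dataclass(frozen=True)
class Deliverable:
    party: str                 # who must deliver
    exchange_requirement: str  # what content must be present (ER)
    format_mvd: str            # which IFC subset carries it (MVD)


d1 = Deliverable(party="A", exchange_requirement="abc", format_mvd="ifc mvd 1")

# The same ER could later be delivered against a different MVD without
# touching the content requirement, and vice versa:
d2 = Deliverable(party="A", exchange_requirement="abc", format_mvd="ifc mvd 2")
assert d1.exchange_requirement == d2.exchange_requirement
assert d1.format_mvd != d2.format_mvd
```

The point of the design is exactly the decoupling the post argues for: changing the carrier format never silently changes what content is owed.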


Thanks for your explanations. Just a thought:

Others and I use a concrete cylinder as an example, i.e. the cylinder is the MVD and the contents (the concrete) are what is to be exchanged. BUT:

How do we know if the concrete is good? We test it to assure it was created per the specification/standard, so we know we can consistently pour it on site.

In your case, Thomas, I do not see how testing of the concrete would be possible, at least not in a standardised way. That is what we want, right?

I’m happy you clarified this.

I’m not sure whether I understood your view about this or not.

In my view (to be challenged) we should not move away from MVD, but put MVD back on its feet. Meaning: decouple it from the exchange requirements and restrict it to at most 5(?) very basic IFC subsets, which are still broad enough to cover dynamic exchange requirements (a.k.a. LOIN).

Maybe I’m wrong, but the current IDM Toolkit/Configurator follows this view, which I think is good.

In my approach we will have just the whole model, based on the highest LoD(s), because everything happens inside the software, and everything is automatic as well as “programmable”

[quote="TLiebich, post:97, topic:2054"]
Validation of the information exchanges is a user topic, it is to ensure that the creator of the submitted model has provided all necessary information for what he was contracted. So it is connected to the use case (or exchange requirement) for which the information exchange is required.
[/quote]

Is this not the current problem?

Currently there are not many validation methods, nor automatic verification tools.

The IDM/MVD should include this explicitly, with an objectively tested and approved standardised method, right?

Also, I think this is incomplete for practical use. It does not specify the verified MVD so folks can actually use it, e.g. “it passed the verification tool testing with NO errors, so we know machines/programs will read it”. And it does not specify the validation method by which the user shall assure that the information in, e.g., the IFC-SPF (it could be a different format, which should be specified too if that’s the case) is in fact what was contracted. We cannot leave this open to subjectivity. I believe the big issue we are all facing is the need to reduce/remove subjectivity by creating 100% objective methods wherever possible. It is 100% possible to do this with most design and construction deliverables, and where information cannot be mapped to the IFC data model, the IDM simply says so and states where that warranted information lies.


  • the party “A” is requested to deliver a BIM model according to the exchange requirement “abc”. The format of the delivery is “ifc mvd 1”.

Here is a real example from a contract where I know this process/method I’m speaking of works. I can literally let files go out the door knowing the machine will read them first time and that the information within them is correct and validated by the person who actually created it. This latter part is most important, but we can revisit it once Modularisation, as spoken of here, can assure these other issues are sorted first.

"ALL COBie data in every COBie file generated must be verified, which means the data must be in the correct format as per the COBie standard specification. 100% error free (Zero Errors – See Table 1) COBie QC Reports for Design, Construction, and Handover, as per the COBie standard specification and as required by the EIR and AIR, shall be submitted to assure the Employer this is in fact the case with each and every COBie deliverable specified.

COBie MVD example…
“ALL COBie files, drawings, and models shall be validated. For example, the information on the drawings, in-particular the drawing schedules (e.g. door schedules, valve schedules), must match the COBie data output. The COBie Design data output must be derived from the BIM model. Participants shall not copy and paste schedules on to drawing. ALL drawing schedules must be generated parametrically from the BIM model parameters in the design software. In short, the data in the drawing schedules must come from within the model elements’ parameters in the design software to assure that the needed COBie data in the COBie files are in fact generated by the Revit® COBie Extension.”

"NO XLSX formatted COBie files shall be filled in by hand in Excel unless approved by Contractor’s Project Information Manager.

A field validation check of the AIM and its COBie counterpart deliverable shall be conducted prior to handover, i.e. the Participant is required to check that serial numbers, warranty dates, etc. of the components, as per the AIR, installed on site match the AIM and COBie data handed over.

NOTE: For help with COBie QC see:

The COBie QC for Design and Construction Handover appears as follows (as per NBIMS US V3 Section 4.2 COBie Version 2.4)"

"COBie QC for Design:

For the COBie Design Rule, the Participants must satisfy ALL COBie requirements by ensuring the following fields are populated as per the EIR and the associated Employer’s AIR – Contacts, Facility, Floor, Space, Zone, Type, Component, System, and Attribute. Document is optional, and Spare, Resource, and Job are N/A.

COBie QC for Construction:

For the COBie Construction Rule, the Participants must satisfy ALL COBie requirements by ensuring the following fields are populated as per the EIR and the associated Employer’s AIR – Contacts, Facility, Floor, Space, Zone, Type, Component, System, Spare, Resource, Job, Document, and Attribute."

"Handover – Tests on Completion - Contacts, Facility, Floor, Space, Zone, Type, Component, System, Spare, Resource, Job, Document, and Attribute.

NB: Participants’ attention is drawn to note that the following shall be included as ‘Tests on Completion’

a. COBie – XLSX for Design - 100% QC with zero errors

b. COBie – XLSX with Construction - 100% QC with zero errors

c. Field Validation Checklist demonstrating data provided matches to field installation

d. Model IFC2x3 CV2.0 MVD file and native (Revit) formatted models with matching As-Built Drawings as PDF demonstrating model matches COBie data and record drawings."

"COBie Handover Inclusions and Exclusions:

COBie scope inclusions shall be as set out in COBie Standard - Chapter 4.2 NBIMS US V3 pg. 219-220; exclusions for COBie, Chapter 4.2 NBIMS US V3 COBie Annex A pg. 71, and the COBie Responsibility Matrix.

See Appendix 9 below for pgs 219-220 of NBIMS V3.

Quality Assurance Matrix that shall be followed is detailed in Appendix 10"

I see a few issues I would fix in there, but overall this person did very well with the knowledge they have. The files created for a Water Treatment Facility in Ireland using this method in the contract were very successful. The files were read first time by the CMMS, and even IBM in Ireland said they were the best files they had seen to date. To be clear about what they meant: the files were the best in relation to verification, not validation; IBM could not possibly know whether the information in the files sufficed for what was actually contracted.

I would also like to suggest that we be very clear about Verification vs Validation, and one can see this clarity in ISO 9001. E.g. for this MVD in particular, it looks like this for Design: “Verification: data in the correct format, e.g. ISO Date and Time used in title blocks etc.

Validation: Data = Drawings (rule of thumb), e.g. scheduled item volts on the drawings match that same data in the corresponding COBie file.”
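The verification/validation split above can be sketched in a few lines: verification checks form (is the date syntactically ISO?), while validation checks content against another source of truth (does the COBie value match the drawing schedule?). The function names and sample values are invented for illustration.

```python
# Hedged sketch of the ISO 9001-style distinction: verification checks
# format, validation checks content against the drawings.
import re

ISO_DATE = re.compile(r"^\d{4}-\d{2}-\d{2}$")


def verify_date_format(value):
    """Verification: the field is syntactically an ISO 8601 calendar date."""
    return bool(ISO_DATE.match(value))


def validate_against_schedule(cobie_value, drawing_value):
    """Validation: the delivered data matches the drawing schedule value."""
    return cobie_value == drawing_value


assert verify_date_format("2019-11-05")
assert not verify_date_format("05/11/2019")
# A well-formed value can still fail validation, which is the whole point:
assert not validate_against_schedule("230V", "110V")
```

This is why passing the verification tool with zero errors says nothing about whether the owner got the information they contracted for.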

Of course, in the contract it should be even more specific. I guess at that level of specificity it should be given, as Thomas says, by the user, e.g. in a BIM Implementation/Execution Plan saying how they will assure the deliverables are correct so they can actually get paid. At the end of the day, all of this is about two things: 1. assurance that the Owner got what they paid for, and, on the other side of that coin, 2. that folks get paid. One should not be paid for a service where there is no proof the service was done properly; hence the concrete testing example. Surely we would not allow the concrete for the bridge to be poured if there were no field testing conducted, e.g. slump test, temperature, breaking of the cylinders, etc.

IDM/MVD should be no different from this, and any new technology brought to industry by bSI MUST adhere to these simple needs to which the construction industry has conformed; in the case of concrete, 200 years of standardised testing.

It has been proven that we can and should do the same with data using IDM/MVD. I mentioned this earlier in the posts above.

As for “This would separate the requirements on the content (the exchange requirement) from the requirement on the format by which it is delivered”:

I’m not sure I agree with this in all cases of data/information exchange, especially where the format is critical for the success of the delivery as per the contract, e.g. where the format must be only a certain type of file, for a particular purpose, and to assure that the file works it must be tested in that singular format.

Contracts and IDM/MVDs/Standards cannot be vague.

Some countries have started Integrated Digital Delivery (IDD) projects nationally, so the “someday” approach that bSI and other standards bodies were suggesting is no longer “effective”.

ISO and bSI just provide “bases”, but countries and experts/leaders for sure won’t limit solutions to what ISO and bSI suggest.

What you are all talking about “again” was good for BIM Level 2 projects; today a “few” work on BIM Level 3 and “real” Integrated Digital Systems --> GIS+BIM+PLM.

bSI has started to cover GIS+BIM, and somewhat PLM, when they work on PDTs as part of PDM systems.

However, everything is “dependent” on “national” projects, strategies and solutions.

Hi, I really see it as a two stage approach:

  • MVD is the first (or base) stage - few, international, general-purpose, focussing on the software capability to guarantee loss-free exchange
  • ER is the second (dynamic) stage - more local, project-oriented, specific, focussing on the capability of the design team to deliver the right information

Here is an example of what I would anticipate in terms of validation/testing (as there is a long debate on correct color exchange on the forum). Assume one wants to state in a contracted delivery that all transverse reinforcing bars should be delivered as red in a BIM delivery.

  • MVD for colors

    1. Ensure that all software interprets the color definitions in IFC the same way (via concept templates or the like)
    2. Ensure that there is precise implementation guidance (e.g. about the precedence of geometry color over material color)
    3. Have unit test cases comprising all possible variations of IFC color definitions to enable full software tests
    4. Have a certification programme that oversees the above = this shall guarantee that the software the design team is using is able to carry the correct color information from the native software over into the neutral exchange file - *.ifc.
  • ER for colors

    1. formally define that each instance of IfcReinforcingBar classified as transverse shall have the color “red” assigned (this should be part of the exchange requirement definitions)
    2. have a way to express this in a formal rule (today mvdXML [attention: the name is misleading, as the rule-checking part is actually checking against exchange requirements])
    3. have checking software that can read the formal rules and execute them, flagging all transverse reinforcing bars that are not red as “non-complying” = this shall guarantee that the design team has delivered the contracted information exchanges according to the contract; as a prerequisite, they shall use software that passed the MVD certification as described above.
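The ER-level rule in step 3 can be sketched as a tiny checker. The plain-dict element representation and all field names are invented for illustration; a real checker would read mvdXML rules and an IFC file instead.

```python
# Minimal sketch of the ER rule described above: every IfcReinforcingBar
# classified as transverse must carry the color red.

RED = (1.0, 0.0, 0.0)  # illustrative RGB triple


def non_complying_bars(elements):
    """Return elements that violate the 'transverse bars are red' rule."""
    return [
        e for e in elements
        if e["type"] == "IfcReinforcingBar"
        and e.get("classification") == "transverse"
        and e.get("color") != RED
    ]


bars = [
    {"type": "IfcReinforcingBar", "classification": "transverse",
     "color": RED},                       # complies
    {"type": "IfcReinforcingBar", "classification": "transverse",
     "color": (0.0, 0.0, 1.0)},           # violates: blue, not red
    {"type": "IfcReinforcingBar", "classification": "longitudinal",
     "color": (0.0, 0.0, 1.0)},           # out of scope of the rule
]
assert len(non_complying_bars(bars)) == 1
```

Note the division of labour: the MVD certification (steps 1-4 above) guarantees the color attribute survives the export, so this checker only has to evaluate the contracted rule, not second-guess the software.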

Exactly. The first level is that the concept of a brep (the idea of that kind of geometric representation, i.e. in terms of mvdXML concept templates) gets harmonized between software products. The second is that I, as an end user, can say that e.g. I want only brep on roofs and either brep or extrusion on walls. The same principle works for any other concept. First, software products need to agree on how a property is written into SPF; then, as an end user, I can say that I want Pset_WallCommon.FireRating on my partitioning walls.

@DrShawnOKeeffe you are talking about this second level. The points you bring up are very important if data exchange is supposed to be meaningful. One thing that mvdxml should be able to do with ERs (my view) is to be able to pull the data from the authoring system. The authoring system should be able to deliver that kind of data if it is certified according to the official MVD that was used to configure the ERs. This way we could move away from exclusively Excel and PDF (not replacing as there is still a need for legal documents) based ERs and into the digital realm.