
“A man’s got to know his limitations” – Dirty Harry

"All limits are self-imposed” – Icarus

Scalability is the ability of a model to grow in size and complexity, using the same design, without sacrificing efficiency. It is always desirable. It is not always attainable. The point is not that you always have to make sure a model will scale. The point is that it is important to realize just how scalable your model is, and to measure that against how scalable you anticipate it needing to be.

When I approach a large, complex model, I’ll build a proof of concept first, just to prove that the solution will actually do what I need it to do. Minimal characteristics, tiny tables, simple functions; just enough to be sure I don’t get the thing half coded and then discover that some crucial piece of functionality won’t behave the way I need it to. Sort of probing where I suspect the kinks will be before actually working them out.

Then, once all the little pieces seem to be playing well together, I’ll start building the model out to life-size. Full-size tables, all the constraints and dependencies, everything needed to actually make the model work “in real life”.

Along the way I gave thought to how different things would be once the “real life” version was in place. I knew the approximate size of my tables. I guesstimated the number and complexity of dependencies. I projected the number of characteristics.

In other words, I anticipated scaling my model from the proof of concept up to the “real life” version.  I kind of knew where it was going.  I knew taking a table from 5 rows to 500 wouldn’t be a problem.  The same was true for other objects.  Everything would stay within the scalability limits of the design.

Importantly, I also had the ability to finish building the model and to test performance before deploying it to general use in Production. If any issues were discovered along the way, I had the ability to tweak the design, changing tables and rewriting constraints as necessary.

Once a model is in Production, it’s a lot harder to make changes.  Changes that might have taken a day in development now take much longer and involve Change Management, process re-design, and user re-education.  Much better to get it right the first time.

My point here is that evaluating the scalability of a design isn’t just important when projecting the scale change from proof of concept to real-life size. It is also important to project any scaling that might take place over the life of the product being modeled. Scaling over the product’s lifespan is actually the more important of the two, because of the increased cost of design changes post-production.

You probably know when you’re first modeling a configurable product whether you’re tackling the whole product or just a subset. Perhaps you’re doing this because the whole product seems too complex. This is exactly the time to think about the scalability of your design. If you have success with your original model, I guarantee that people will want you to expand it.

For example: if you have a 10x10 grid, with descriptions for each cell and calculations for each row and column, many designs will handle it. Some will be more efficient to code and develop, and some will be more efficient to maintain. Some will scale better, and some worse.

If you know that 10x10 is the largest the grid will ever grow, then which design to choose is simply a matter of preference. A straightforward, blunt-force design with no custom coding will work just fine: make lots of characteristics, and write dependencies to do everything for every combination.
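To make the contrast concrete, here is a minimal sketch of what that blunt-force approach looks like, written in Python with hypothetical names rather than actual characteristics and dependencies. Every cell is its own named object, and every total is spelled out by hand.

```python
# Blunt-force design: every cell and every total is written out explicitly.
# (Hypothetical names; a sketch of the approach, not the author's actual model.)

# One named value per cell, declared one at a time.
cell_r1_c1 = 4
cell_r1_c2 = 7
# ... 98 more cell declarations for the full 10x10 grid ...

# One explicit calculation per row total and per column total.
row_1_total = cell_r1_c1 + cell_r1_c2   # + the other eight cells in row 1
col_1_total = cell_r1_c1                # + the other nine cells in column 1
# ... 18 more total calculations ...

# Adding an 11th row or column means declaring ten more cells and
# rewriting every affected total by hand.
```

For a grid that will never grow past 10x10, this is perfectly serviceable; the cost only shows up when the grid changes size.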

If, however, you suspect that the design might grow by adding rows and columns, it’s time to do a little fortune-telling and see where those changes in scale might lead you. With this growth in mind it might be more desirable to anticipate the necessary maintenance before choosing the design. Then, when those changes in scale occur, they can be handled elegantly, with a minimum of changes required in Production. There might be more effort up front. It might require custom coding, use of non-intuitive logic, and database tables. But this work is easier (and a better investment) when done in development before deployment.
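By way of contrast, here is a hedged sketch of a table-driven version of the same grid, again in Python with hypothetical names. The cells live in one data structure and the totals are computed generically, so growing from 10x10 to 15x20 is a data change rather than a logic change.

```python
from collections import defaultdict

# Table-driven design: cells are rows of data, not individually named objects.
# (Hypothetical structure; a sketch of the idea, not a specific product model.)
cells = {
    (1, 1): 4,
    (1, 2): 7,
    (2, 1): 3,
    # ...in practice loaded from a table, however many rows and columns exist...
}

def grid_totals(cells):
    """Compute row and column totals for whatever grid size the data holds."""
    row_totals = defaultdict(int)
    col_totals = defaultdict(int)
    for (row, col), value in cells.items():
        row_totals[row] += value
        col_totals[col] += value
    return dict(row_totals), dict(col_totals)

row_totals, col_totals = grid_totals(cells)
# Adding rows or columns only adds data; the calculation logic never changes.
```

The up-front investment is in the generic logic; the payoff is that later changes in scale never touch it.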

Another example: a table with 900 rows. That scale is easily handled by a Variant Table. If that table is expected to grow significantly, say by 50-100 rows a month, it will eventually start to impact performance. Anticipating that scaling and making it a database table right off the bat would be the prudent move.
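A quick back-of-the-envelope projection shows why. The sketch below uses the growth rates from the example, but the time horizons are assumptions for illustration only.

```python
# Rough growth projection for the 900-row table example.
# (Assumed horizons; a sketch for illustration, not a performance benchmark.)
start_rows = 900

for rows_per_month in (50, 100):
    for months in (12, 24, 36):
        projected = start_rows + rows_per_month * months
        print(f"{rows_per_month} rows/month after {months} months: {projected} rows")

# At 100 rows/month the table more than doubles within a year and reaches
# 4,500 rows in three years; a design chosen for 900 rows may not hold up.
```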

Even when you take all these considerations into account, and make your best guess as to the scalability of a specific design before implementation, it is still possible to get it wrong.

If you do get in a situation where the original design isn’t scaling well, it’s important to understand the situation.  Sometimes minor tweaks can make a big difference.  Obviously, examine that route first.  Revisiting a single assumption in the original design (or 2 or 3) might give you enough of a boost to get over the current problem.  Other times you’re just filling sandbags against the tidal wave.  Analyze the situation and know the difference. 

If you have decided that it’s the tidal wave, stop beating a dead horse and change mounts.  Recognize when an eventual failure is inevitable and start thinking about how to fix it.  Plan your changes before the situation becomes dire.  And this time keep the scalability requirement in mind. 

The takeaway here is that some perfectly designed models can suffer from scalability problems. Ideally, you will be able to prevent them. If you can’t prevent problems, mitigate them. At the very least you need to understand that it is a scaling problem, so you don’t keep banging your head against the immovable wall.
