It seems easy for this to be either sublinear or superlinear growth, depending on context.
If we imagine the space of the complex AI as split into two parts--a context model and a content model (that is, information and structure expected to be shared across entries vs. information and structure local to particular entries)--then expanding the source material doesn't require much additional work on the context model. Whether the additional piece of the content model is larger or smaller than the existing piece depends on how connected the new material is to the old material.
That is, one of the reasons Watson takes many times the space of its source material is that it stores links between objects, and the number of possible links grows roughly as n squared. If there are many links between the old and new material, we should expect the model to roughly quadruple in size rather than double; if the old and new material are mostly unconnected but similar in topology, we should expect it to roughly double; and if the new material is mostly unconnected both to the old material and to itself, we should expect the model to not grow by much.
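To make that arithmetic concrete, here is a toy calculation (not Watson's actual data model; the entry counts and per-pair link probabilities are invented for illustration) showing how the three connectivity assumptions play out if link count dominates the content model's size:

```python
def expected_links(n_old, n_new, p_old, p_new, p_cross):
    """Expected number of pairwise links, given per-pair link probabilities
    within the old material, within the new material, and across the two."""
    old = p_old * n_old * (n_old - 1) / 2
    new = p_new * n_new * (n_new - 1) / 2
    cross = p_cross * n_old * n_new
    return old + new + cross

n = 1000   # hypothetical number of existing entries
p = 0.01   # hypothetical per-pair link probability in the old material

baseline = expected_links(n, 0, p, 0, 0)

# New material linked to the old as densely as the old is to itself.
densely_connected = expected_links(n, n, p, p, p)

# New material similar in topology to the old, but unconnected to it.
parallel = expected_links(n, n, p, p, 0)

# New material mostly unconnected to the old and sparsely linked internally.
sparse = expected_links(n, n, p, p / 100, 0)

for label, total in [("baseline", baseline),
                     ("densely connected", densely_connected),
                     ("parallel", parallel),
                     ("sparse", sparse)]:
    print(f"{label:>18}: {total:10.0f} links ({total / baseline:.2f}x baseline)")
```

With these made-up numbers, the densely connected case comes out to about 4x the baseline link count, the parallel case to about 2x, and the sparse case to barely above 1x, matching the rough estimates above.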