How to design and build a DBC system using C#, and NVelocity.
Design by Contract
Thursday, August 04, 2005

DBC in use

I am finally getting around to making use of Aabs.Dbc in my new open source project Aabs.Norm. I figured that the best proving ground for a thing like this is a thing like that: a highly complex, high-performance application framework that maintains state in a variety of complex ways, that uses polymorphism and .NET in sophisticated ways, and that will be adversely affected if the runtime performance of Aabs.Dbc is poor. Here's an example of the interface to the core object - the persistence broker.
namespace Aabs.Norm.Core
{
    public interface IPersistenceBroker
    {
        [Requires("criteria != null")]
        PersistentObject RetrieveObject(Criteria criteria);

        [Requires("persistentObject != null")]
        [Ensures("$before(persistentObject) == $after(persistentObject)")]
        [Ensures("persistentObject.IsPersistent == true")]
        [Ensures("(persistentObject.TimeStamp - DateTime.Now) < new TimeSpan($time_ms)")]
        [Ensures("$result != null")]
        [Ensures("$result.Count == 0")]
        IList SaveObject(PersistentObject persistentObject);

        [Requires("persistentObject != null")]
        [Ensures("$before(persistentObject) == $after(persistentObject)")]
        [Ensures("$result != null")]
        [Ensures("$result.Count == 0")]
        [Ensures("persistentObject.IsPersistent == false")]
        IList DeleteObject(PersistentObject persistentObject);

        [Requires("criteria != null")]
        [Requires("criteria.ForClass != null")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList ProcessCriteria(Criteria criteria);

        [Requires("persistentObject != null")]
        [Ensures("$before(persistentObject) == $after(persistentObject)")]
        [Ensures("$result != null")]
        [Ensures("$result.Name != persistentObject.GetType().Name")]
        ClassMap GetClassMapFor(PersistentObject persistentObject);

        [Requires("tableMap != null")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList GetReferrersTo(TableMap tableMap);

        [Requires("persistentObject != null")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList GetLinkedClassMaps(PersistentObject persistentObject);

        [Requires("classMap != null")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList GetLinkedClassMaps(ClassMap classMap);

        [Requires("classMap != null")]
        [Requires("propertyName != null")]
        [Requires("propertyName.Length > 0")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList IncomingConnectionsToAttribute(ClassMap classMap, string propertyName);

        [Requires("procedureName != null")]
        [Requires("procedureName.Length > 0")]
        [Requires("type != null")]
        [Requires("databaseName != null")]
        [Requires("databaseName.Length > 0")]
        [Requires("parameters != null")]
        [Requires("parameters.Count >= 0")]
        [Ensures("$result != null")]
        [Ensures("$result.Count >= 0")]
        IList ProcessStoredProcedure(string procedureName, Type type,
            string databaseName, NameValueCollection parameters);

        [Requires("procedureName != null")]
        [Requires("procedureName.Length > 0")]
        [Requires("type != null")]
        [Requires("databaseName != null")]
        [Requires("databaseName.Length > 0")]
        [Requires("parameters != null")]
        [Requires("parameters.Count >= 0")]
        [Ensures("$result != null")]
        string ProcessStoredProcedureRaw(string procedureName, Type type,
            string databaseName, NameValueCollection parameters);

        bool ProcessTransaction();
    }
}
Thursday, July 07, 2005

The big day has arrived

Well, it's all out there in the public domain now. Yayyy!!!!! I've created a project up on SourceForge, advertised there for help with documentation, installers and general app development, uploaded an initial source code release - and configured my PayPal settings. I can't wait to see how it is received by the public. Pay the site a visit, and let me know what YOU think. And, of course, if you want to make a donation... ;o) Here's the SourceForge link.

Here goes...

I have taken this stuff to the point where it is not too scruffy to release to the world. What I need to do now is publicise it to the world at large. Any ideas about the best forums for that? I think SourceForge and GotDotNet will help, but I can also imagine that sharptools and USENET might be good avenues to publicise it through. How will I manage the project when I open the code up to the open source community? The SourceForge project location is here. The GotDotNet project location is here.
Sunday, June 12, 2005

Almost ready to go open source

I have been tweaking and fiddling with the framework in readiness for making it open source. There are still quite a few things I ought to do before making any attempt to publicise it. I thought that since I've been very remiss in not posting to this blog lately, I could do some posting - and how better than to list what I've got to do before the system is unveiled to the public? I know I'm not exactly going about this in the true open-source way, but I guess I'm a little wary about releasing my code to the public without having crossed as many 't's and dotted as many 'i's as I can find. I'm sure there are plenty of improvements that people will find when I release it, but I don't want any of them to be dumb or outmoded errors! Anyway, enough of such insecurities. Here's what I have to do before D-day:
  • See if I can create a WIX installer that works. Currently I seem to be having problems with adding assemblies to the GAC. My best guess is that WIX is not able to add .NET 2.0 assemblies to the GAC because it's trying to use some sort of reflection on an incompatible assembly.
  • Create a user tutorial. I'll probably do that in this blog. It should be a pretty easy task, since I've gone to great lengths to make the framework as transparent as possible.
  • Document the API using NDoc. As ever, I've been slack in commenting my code. I guess I'll start with the main APIs and then keep at it in the months after release.
  • Run it through FxCop and make sure there are no obvious errors.
  • Run regression tests on the static code generation system! This is potentially of less value than the dynamic code generation system so I have ignored it for a while.
  • Make sure it still compiles on .NET 1.1! And how about Mono? I haven't seen any major commercial projects requiring Mono, but that may change...
  • Make sure that the compilation with NAnt works as well as the VS.NET 2005 beta 2 builds, especially if WIX is not an option. I need to make targets that will create binary and source distributions for release, which can then be used to upload nightly snapshots to the web.
  • Convert the system to work with CruiseControl.NET. I have (so far) had around 5 interviews with ThoughtWorks, so I really ought to be as obsequious as possible to boost my chances with them! ;-)

Once these tasks have been performed, I ought to post it online and start trying to publicise it a bit more. I can then think a bit more about using the framework in earnest. I want this system because I have an object-relational mapping (ORM) system that is just short of being releasable, and I would like to augment it with DBC to see whether I can productise it that way. For more information on ORMs, google for Scott Ambler's original postings on a design for a robust persistence layer. I also want to make the second release of the framework self-hosting - that is, I want to use DBC within the DBC framework itself. How much more could I practice what I preach, eh?

May was a quiet month

It's been almost a month since the last time I posted, and I have made a few changes to the DBC framework since then. I'm making changes now in readiness for release to GotDotNet or Sourceforge. So I'm just making sure everything is as logical and consistent as I can make it. Firstly, I have completely rewritten the NVelocity template assembly to simplify it even more (as if that were possible!). Now you can use it like this:
public void TestDefaultCreationOfTemplate()
{
  ITemplate tpl = TemplateManager.CreateTemplate("Hello ${otherpart}");
  tpl["otherpart"] = "world";
  // MSTest's Assert.AreEqual takes (expected, actual)
  Assert.AreEqual("Hello world", tpl.ProcessedOutput,
    "strings didn't match");
}
The observant amongst you will also notice that I'm now using unit test cases from Visual Studio 2005's unit testing framework - why MS couldn't just use NUnit I don't know (well, I do, and it doesn't reflect well on them), especially since their framework seems excessively complicated when using autogenerated tests. Anyway, the template class is now just a very thin veneer over the Velocity framework. I am still able, should the need arise, to swap out NVelocity and replace it with something else, but the changeover mostly involved recompiling NVelocity for .NET 2.0 and adding a strong name so it can be deployed to the GAC. I've placed the code generation templates into the Core assembly as resources, to allow easier deployment, to make them tamper-proof, and to get rid of the needless configuration settings that let the system find the templates. I've broken out the predicate attributes and the exception types into a separate assembly, so development can be done with DBC attributes even if the DBC system is not in use. It also means the attributes can be put to other uses, like documentation or validation. Right now, I'm mostly in performance-tweaking and refactoring mode. Here's an example of one of the tests I'm using that shows how the system is to be used now:
[TestMethod]
public void TestTimeToCreateAThousandProxyObjectPairs()
{
  ArrayList al = new ArrayList();
  DbcProxyDynRTCF cf = new DbcProxyDynRTCF();
  int totalToCreate = 10000;
  cf.CreateProxyFor(typeof(TestObjectWithEvents));
  DateTime start = DateTime.Now;
  for (int i = 0; i < totalToCreate; i++)
  {
    al.Add(cf.CreateProxyFor(typeof(TestObjectWithEvents)));
  }
  TimeSpan durn = DateTime.Now - start;
  double d = durn.TotalMilliseconds / ((double)totalToCreate);
  Debug.WriteLine("Average time to create a proxy-object pair" +
    " (ms): " + d.ToString());
  Assert.IsTrue(durn.TotalMilliseconds < ((double)
    totalToCreate) * .17);
}
Currently the average time needed to instantiate a proxy with a target object is around 0.2 milliseconds. Interestingly, it was around 0.16ms on .NET 1.1, but now that I'm working with .NET 2.0 the performance is roughly 25% slower. It may be that some of the techniques I am using can be performed in a better way now, but that remains to be seen - the only deprecated feature that I have not updated is my use of CreateCompiler to compile the generated code, and that piece of code gets called only once, before the timer is turned on, so it couldn't affect the measured performance unduly. So where is this much-publicised performance boost that we are supposed to get from .NET 2.0?
public CompilerResults GenerateAssembly()
{
  Assert(SourceCollection != null && SourceCollection.Count > 0);
  AssertCodeGenParamsAreOK();
  CSharpCodeProvider cscp = new CSharpCodeProvider();
  ICodeCompiler compiler = cscp.CreateCompiler();
  CompilerParameters cp = ConstructCompilerParameters();
  string[] classes = new string[SourceCollection.Count];
  SourceCollection.CopyTo(classes, 0);
  CompilerResults cr = compiler.CompileAssemblyFromSourceBatch(cp, classes);

  if (cr.Errors.HasErrors)
  {
    foreach (CompilerError ce in cr.Errors)
    {
      Log(ce.ErrorText);
    }
  }
  return cr;
}
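Incidentally, the non-deprecated .NET 2.0 route is to call the compile methods directly on the provider instead of going through CreateCompiler()/ICodeCompiler. A rough sketch of what the updated version would look like (names here are illustrative, not the shipping Aabs.Dbc code):

```csharp
using System.CodeDom.Compiler;
using Microsoft.CSharp;

public class Net2Compile
{
    // Sketch only: compiles a batch of C# sources using the .NET 2.0
    // CodeDomProvider API, avoiding the deprecated ICodeCompiler path.
    public static CompilerResults Compile(string[] sources, bool inMemory)
    {
        CSharpCodeProvider provider = new CSharpCodeProvider();
        CompilerParameters cp = new CompilerParameters();
        cp.GenerateInMemory = inMemory;   // true for dynamic proxy generation
        cp.GenerateExecutable = false;    // we want a class library
        cp.ReferencedAssemblies.Add("System.dll");
        // .NET 2.0: the provider itself exposes the compile methods
        return provider.CompileAssemblyFromSource(cp, sources);
    }
}
```

Whether switching accounts for any of the performance difference I don't know, but at least it silences the deprecation warnings.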
Tuesday, May 10, 2005

DBC & AOP don't mix

This time I'm going to explore some of the issues I've found in converting the prototype over to work in a JIT environment. If you've seen any of my previous posts, you'll know that I have a working prototype that uses static code generation to produce proxies to wrap objects that identify themselves with the DbcAttribute. These proxies are produced solely for those objects, so the developer has to know which items have predicates on them in order to instantiate the proxies rather than the target objects. If the programmer forgets, or doesn't know, that an object has predicates, he may create the target object directly and not benefit from the DbcProxy capabilities. I've therefore been looking at ways to guarantee that the proxy gets generated if the library designer has used DBC. There are a lot of ways to do this, and having not explored all of them, I want to find a way to construct the proxies that will allow any of these approaches to be employed at run time. That naturally makes me think of abstract class factories and abstraction frameworks. It is nice to allow the library designer to define predicates on interfaces, but that may not fit the design they are working on, so I don't want to mandate it. The solution I've been looking at uses the code injection system that comes with the CLR's ContextBoundObject infrastructure (the relevant types live in System.Runtime.Remoting.Proxies). This framework guarantees that an object gets a proxy if it has an attribute that derives from ProxyAttribute. I would derive my DbcAttribute from ProxyAttribute and get an automatic request to create a proxy. Sounds perfect? Yes! But as you will see, there is a gotcha that will prevent this approach. Read on to find out more...
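For the record, here's roughly what that derivation looks like - a sketch of the standard ProxyAttribute/RealProxy pattern with names of my own invention, not working Aabs.Dbc code:

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Messaging;
using System.Runtime.Remoting.Proxies;

// Deriving from ProxyAttribute lets the CLR ask *us* for the proxy
// whenever a ContextBoundObject subclass marked with this attribute is
// instantiated - no factory discipline required of the caller.
public class DbcProxyAttribute : ProxyAttribute
{
    public override MarshalByRefObject CreateInstance(Type serverType)
    {
        MarshalByRefObject target = base.CreateInstance(serverType);
        RealProxy rp = new DbcRealProxy(serverType, target);
        return (MarshalByRefObject)rp.GetTransparentProxy();
    }
}

public class DbcRealProxy : RealProxy
{
    private readonly MarshalByRefObject target;

    public DbcRealProxy(Type type, MarshalByRefObject target) : base(type)
    {
        this.target = target;
    }

    public override IMessage Invoke(IMessage msg)
    {
        // Here is the rub: all we get is a property bag of parameter
        // name/value pairs - a dispatch interface - rather than a
        // reference we can use to snapshot the target's state for
        // $before()/$after() comparisons.
        IMethodCallMessage call = (IMethodCallMessage)msg;
        return RemotingServices.ExecuteMessage(target, call);
    }
}
```

Note that this only works for types deriving from ContextBoundObject, which is already a heavy constraint to impose on a library designer.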

Status of the prototype

The prototype uses a Windows console application that is invoked as part of the build process to create the proxies for an assembly. The program is invoked with an application config file that provides it with the details of the target assembly, where the resulting proxy assembly is to be stored, and what it's to be called. It has a few other config settings to define how it will name the proxy for a target object, and what namespace the proxy will live in. So far, I have not added any guidance on how to handle assemblies in a non-static way; I haven't implemented any dispenser to intercede in the object creation process. In this post I'm going to take a look at what config settings I need to add to control the choice between dynamic and static proxy generation. I'm using the Microsoft Enterprise Application Blocks to handle configuration, and I'm using log4net to take care of logging. NVelocity is being used to construct the source code for the proxy. None of that will change in the refactoring that is to follow. In fact, very little has to change in the event model to turn the system into a dual static/dynamic proxy generator. What changes is how to handle the predicates when you are using a dispatch interface. More on that to follow.

static code gen via CSharp compiler

The static code generation process has been described in quite a bit of detail in previous posts, but in essence it works by scanning through all of the types in an assembly, looking for ones that are adorned with the DbcAttribute. For each of those, it scans through each of the InvariantAttributes, and then through each of the members of the class to get their details and any predicates that are attached to them. It then creates a proxy class that imitates the target type, except that prior to each invocation it checks a set of preconditions, and afterwards checks postconditions. This proxy type is kept around until all of the assembly's types have been scanned, and then a new proxy assembly is compiled and saved to disk. This proxy assembly is then used instead of the target type. The C# compiler provides a set of options to let you keep the source code used to create the assembly, and to create debug information tying the assembly to the source code at run-time. This is useful for debugging the DBC system itself, but should not be necessary when using the framework in the field: if errors occur they will be on either side of the proxy (assuming I debug the proxies properly), and the user shouldn't want to see the source for the proxy itself. We therefore need to be able to turn this off, along with debug information. Another feature of the compiler is the ability to dispense an in-memory-only assembly that is available for the lifetime of the process. This is just what we need for dynamic proxy generation.
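The scan itself is plain reflection. A minimal sketch - the attribute stubs and names here are simplified stand-ins for the real Aabs.Dbc types, and the real scanner raises events rather than collecting strings:

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Minimal stand-ins for the attributes used throughout these posts.
[AttributeUsage(AttributeTargets.Class)]
public class DbcAttribute : Attribute { }

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class RequiresAttribute : Attribute
{
    public readonly string Predicate;
    public RequiresAttribute(string predicate) { Predicate = predicate; }
}

[AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
public class EnsuresAttribute : Attribute
{
    public readonly string Predicate;
    public EnsuresAttribute(string predicate) { Predicate = predicate; }
}

public class AssemblyScanner
{
    // Collected predicate descriptions, one per attribute found.
    public readonly List<string> Found = new List<string>();

    // Walk every type in the target assembly, pick out the ones marked
    // [Dbc], and visit their members looking for predicate attributes.
    public void Scan(Assembly assembly)
    {
        foreach (Type type in assembly.GetTypes())
        {
            if (!type.IsDefined(typeof(DbcAttribute), false)) continue;
            foreach (MethodInfo m in type.GetMethods())
            {
                foreach (RequiresAttribute r in
                    m.GetCustomAttributes(typeof(RequiresAttribute), false))
                    Found.Add(type.Name + "." + m.Name + " requires " + r.Predicate);
                foreach (EnsuresAttribute e in
                    m.GetCustomAttributes(typeof(EnsuresAttribute), false))
                    Found.Add(type.Name + "." + m.Name + " ensures " + e.Predicate);
            }
        }
    }
}
```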

New library for dynamic proxies

One refactoring that is inevitable is that we now need two front-ends to the DBC system: a command-line tool for static proxy generation that can be invoked from VS.NET or NAnt, and something that can be used in-proc to dispense dynamic proxies on demand. The latter can't be another process unless you want to use remoting to access your objects. You may want to do that, but you need to have some control over that situation, and it would be a little intrusive to have that decision taken away from you.

Dynamic proxy code generation pattern

The process for the dynamic proxy is much the same as for the static proxy; it just tweaks the compiler settings a little, and provides some speed-ups for handling the storage and retrieval of assemblies for types that have already been code generated. The procedure is as follows:
  • The client requests an object of type X.
  • The creation process is intercepted by the CLR, because the DbcAttribute is declared on the object or one of its ancestors.
  • The DbcAttribute provides an object that can be used to intercept calls to the object, and it passes the creation of that object off to the dynamic proxy code generator.
  • The dynamic proxy CG checks in a Hashtable whether the target type has been requested previously; if so, it instantiates the generated proxy and returns.
  • If no proxy has been generated for this type, it creates a scanner and a code generator as per the static proxy CG, and requests the scanner to scan the type.
  • After the type has been scanned, there will be exactly one element in the ProcessedTypes context variable in the CG.
  • The dynamic proxy CG then requests that the CG generate an assembly for the type, specifying that the assembly must be in-memory only.
  • The assembly is generated, and it is stored along with the target type in the Hashtable for future reference.
  • The new proxy type is dispensed by the DbcAttribute, and the target object creation commences as normal.
  • The next time a call on the target object is performed, the proxy object is allowed to intercept the call and run its checks.
  • Voila!
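The caching part of that procedure can be sketched roughly like this. DbcProxyDynRTCF from the earlier test plays this role in the real framework; the names here are illustrative, and the real GenerateProxyType would run the scanner/code-generator/in-memory-compile pipeline rather than the placeholder shown:

```csharp
using System;
using System.Collections;

public class DynamicProxyDispenser
{
    // Generated proxy types, cached per target type, so the
    // scan/generate/compile cost is paid once per type per process.
    private readonly Hashtable proxyTypes = new Hashtable();
    private readonly object sync = new object();

    public object CreateProxyFor(Type targetType)
    {
        Type proxyType = (Type)proxyTypes[targetType];
        if (proxyType == null)
        {
            lock (sync)
            {
                // Double-checked: another thread may have generated it.
                proxyType = (Type)proxyTypes[targetType];
                if (proxyType == null)
                {
                    proxyType = GenerateProxyType(targetType);
                    proxyTypes[targetType] = proxyType;
                }
            }
        }
        return Activator.CreateInstance(proxyType);
    }

    private Type GenerateProxyType(Type targetType)
    {
        // Placeholder for the scanner/code-generator/compiler pipeline;
        // here we just return the target type itself so the sketch runs.
        return targetType;
    }
}
```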
This seems almost exactly how I want the dynamic proxy to work. The client doesn't have an option but to use it if it is available. This convenience comes at a price, though: detachment from the state of the target object. If you want to perform any interesting kind of test on the current state of the target object, you need reference access to the various public members of the type. Without access to the target instance you can't make pre, post, or invariant checks, because you can't access the state to see how it has changed or take snapshots of the initial state. It is, in effect, unusable. So does that mean that the dynamic proxy is incompatible with DBC? The problem here is not in the idea of a dynamically generated proxy; it is in the kind of proxy we are using, and in its relationship to the target object. It is not connected to the target object in any way - it could even be on a different machine from the target. Even if it were directly connected (within the enabling framework) it wouldn't be of much use, since it is designed to deal in IMessage objects, not target object instances. The IMessage instance is a property bag that contains name/value pairs for each of the parameters that is passed into a single method called "Invoke", which then processes the parameters. This pattern will be familiar to those who ever wrote COM components that needed to be compatible with VB - it is a dispatch interface. This sort of thing is great if you want to log the values of parameters, or to perform rudimentary checks on the incoming parameters. We want to go a lot further than that. You can't make many assertions about how an object behaves just by describing its input and output values. And that's the problem that I have with standard interfaces - apart from a name, which may or may not mean anything, they only have a signature. A signature is just a list of the parameter and return types.
Nice to have, obviously, but not a lot of value when it comes time to work out what happens inside the implementation(s) of the signature. So, we need direct access to the target object to be able to go any further. Does that mean that the context-bound object approach is dead in the water? I'm not sure - this is a fairly sparsely documented area of the framework, so I haven't been able to track down an answer to that question yet. Answers on a postcard if you know a definitive one. I am going to assume so for the time being. This hasn't been wasted time - we've learned a useful fact about the limitations of dynamic proxies: namely, that they are not compatible with aspect-oriented programming paradigms. At best they can be the end-point of an AOP proxy chain, but they cannot sit anywhere in the middle of a chain of responsibility. It's a depressing fact of life - but useful to know. What else can we deduce from this? We know that any kind of DBC proxy is going to need access to state that is somehow deterministic. The key concept here is state. Anything that transforms the state of the target object also changes the semantics. Imagine, for example, that the DBC proxy sat at the far end of an AOP chain - it will have a reference to a target object, but that target object will delegate all the way down to the real target object at the other end of the chain. All of the state inquiry methods are present, but are converted into IMessage objects and passed through the chain of proxies before arriving at the real target object. The AOP chain can modify the data going into the real target, and modify the results coming out of it. It can also change the timing characteristics, making a call asynchronous where previously the interface was synchronous, or vice versa. It could transform XML documents or add and subtract values from parameters.
The point is, when you are out of touch with the target object, you have no guarantee even that the parameters you are making tests against are the same as those received by the real target object. You consequently face a situation where the semantics are similar but potentially subtly modified, and you have to track that in your assertions. This may sound like a rather academic discussion of the potential to modify an interface to have different semantics, but that is the essence of the adapter design pattern, and it is an occupational hazard with SOA architectures. Don't forget that when you invoke an API on a third-party software system, you think you know what is happening, but you often don't. You generally have some documentation that acts as a contract, saying what goes on inside the service when it gets invoked. We all know that such documentation often gets out of date, and we also know that there is little you can do to stop a well-meaning fool from breaking your system. That's Murphy's law at work again: if something can go wrong, it generally will, and normally it is introduced by someone who thought they were solving some other problem at the time. So published interfaces aren't worth the paper they are written on, especially if you're playing a game of Chinese whispers through layers of other people's code. We need a system that is direct and compulsory, with no middlemen to dilute the strength of the assertions that you make on a component. Next time I will describe how I solve this problem - probably by paying homage to the Gang of Four...
Tuesday, April 26, 2005

Predicates and Expressions - Part 2

Last time I started discussing the expression language used to define predicates on types and members. My conclusions were:
  • C# expression syntax is used, since the predicates must be transformed into C# code in the end, and the more similar the two expression languages are, the easier the transformation will be.
  • No side-effects. Side effects within the predicates complicate the task of determining whether a temporal comparison is actually effective. If the snapshots can't be guaranteed to be faithful representations of the state before and after the test, then you can't trust the tests, and you might as well not have bothered with them in the first place.
  • Pragmas, or pre-processor directives, are used in expressions to add a temporal ability to the predicates. $before(foo) allows a snapshot of the variable foo to be taken for later comparison with the exit state.
  • Pragmas may map to identity in some circumstances. A precondition can have no $after(), since we haven't reached the exit state yet; likewise there is no notion of a $before() state, since the precondition is the before state, and we can't take a snapshot of the state before the before state (unless it's on some enclosing method). Therefore, in a precondition it should be illegal to insert an $after() pragma into a predicate expression. In other cases, such as $before(foo) in a precondition and $after(bar) in a postcondition, the pragma maps onto the current value of the variable; i.e. in a postcondition $after(bar) can be transformed into bar.
The algorithm for converting predicates to guard code is shown below.

foreach assembly in assembly list
  foreach type in assembly
    clear list of invariance predicates
    clear list of processed methods
    foreach invariant attribute on type
      transform pragmas on predicate
      add pre/post analysis objs to list
      generate code for predicate
      append code to processedInvariants list
    end foreach invariant
    foreach member in type
      clear list of requires attrs
      foreach requires attr on member
        transform pragmas on predicate
        add pre/post analysis objs to list
        generate code for predicate
        append code to processedRequires list
      end foreach requires attr
      clear list of ensures attrs
      foreach ensures attr on member
        transform pragmas on predicate
        extract list of vars to be snapshot
        add to list processedSnapshots
        add pre/post analysis objs to list
        generate code for predicate
        append code to processedEnsures list
      end foreach ensures attr
      insert snapshots into code template
      insert pre/post analysis objs to template
      insert code for requires guards into template
      insert code for ensures guards into template
      generate code
      add generated code to processed methods list
    end foreach member
    insert code for members into type template
    clear list of invariance predicates
    generate code for type
    append code to processed types list
  end foreach type
  setup c# compiler with config params
  compile all types in processed types list
  store assembly as per instructions
end foreach assembly 
This algorithm conveniently glosses over the complexities of the depth-first-search event propagation mechanism described in previous posts, but the outcome is ultimately the same.
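The "transform pragmas on predicate" step is, at heart, string surgery. Something like the following sketch - a simplification, with names of my own choosing; the real transformer also has to emit the snapshot declarations and handle invariants:

```csharp
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

public class PragmaTransformer
{
    // Variables whose entry values must be snapshotted before the call.
    public readonly List<string> Snapshots = new List<string>();

    // Rewrites "$before(x) == $after(x) - 1" into C# that compares a
    // saved snapshot against the current value: "__before_x == x - 1".
    public string TransformPostcondition(string predicate)
    {
        string result = Regex.Replace(predicate, @"\$before\(\s*(\w+)\s*\)",
            delegate(Match m)
            {
                string var = m.Groups[1].Value;
                if (!Snapshots.Contains(var)) Snapshots.Add(var);
                return "__before_" + var;
            });
        // In a postcondition, $after(x) is just x's current value.
        result = Regex.Replace(result, @"\$after\(\s*(\w+)\s*\)", "$1");
        return result;
    }
}
```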
Sunday, April 24, 2005

Predicates and Expressions - Part 1

The predicates that are used to specify the in, out, and inv conditions on a component need to follow certain rules. The first and least obvious is that they must follow the syntax and semantic rules of the C# language. I say that it is the least obvious, since it seems to be conventional to invent a new predicate language for these purposes. I can see the point of this, since it would allow a tailor-made language to be used, and extended beyond the capabilities of a normal C# expression. I intend to start off with the syntax of C#, because the predicates will have to end up as C# expressions anyway, and I want to intervene in the simplest way possible - via regex replacements where I can. It is essential, though, to make one significant enhancement to the expression language of C#: it needs to be temporal. It needs to be able to make statements about the value of a variable before and after a method has been invoked. In a temporal logic that might be represented by a prime (i.e. x'). That could get rather confusing when we are making comparisons with characters that need to be enclosed by single quotes; at the very least it would be hard to read. Likewise, we could represent it as a function made available through a supertype or a utility object. An expression might resemble this: "before(x) == after(x) - 1". The problem is that this pollutes the method namespace, so we would have to have something like "base.before(x)" in the expressions - which is already looking ugly, and it also forces the user to inherit from a supertype that may wreck their application design. It seems that we need pragmas - preprocessor directives that can be used as clues to the framework. Since we are using the syntax of C#, we can be sure that neither type nor member names will begin with a dollar sign, so it seems safe to begin pragmas with a dollar sign, like so: "$before(x) == $after(x) + 1".
It should not be confused with an interpreted scripting language, though; this is input for a pre-processor that outputs code, which is subsequently compiled. That applies whether we are performing static or dynamic code generation. The sharp-eyed among you will notice that in various kinds of predicate, $before(x) and $after(x) are synonymous with x itself. For example, it makes no sense to refer to $after(x) in a requires predicate. Likewise, there isn't much use for an $after() pragma in an ensures predicate, since we are inserting code into the method of a target object: x will already embody the value of $after(x), so the $after() will just be getting in the way. So we only really need a $before(x) pragma, to take a snapshot of the value of the variable at the beginning of the method for later comparison with what the variable becomes. There are some rules that predicates must obey, and we just saw one of them: in a precondition, $before(x) and x are synonymous, and therefore $before() is meaningless and shouldn't be found there. This sort of rule can be detected and converted into harmless code by the preprocessor. Another hard and fast rule is that predicates must never have side-effects. That is, the state before testing the predicate should be exactly the same as the state after the test. There are circumstances in which that is untrue: if a predicate detects a bug, it may throw an exception, which may cause the stack to unwind, which will have a side effect. But that is a desired side effect that is part of the semantics of design by contract and error detection, as well as being permissible in a constant method (assuming such a thing were possible in C#). What I mean is undesired side-effects - side effects that are not implicit in the error-checking framework, and which are not testable by the predicates themselves. There is a need to avoid harmful code that is harder to spot. The predicates should be without side-effects.
It's easy enough to spot when someone accidentally inserts a "=" instead of a "==", but if they code a helper method such as "IsOkToDoIt(this)" that happens to change a variable as it runs, then the predicates change from being an external test of correctness into another kind of source code. We need to prevent this - we are interested in making statements about the state of the class and how it changes from moment to moment; we don't need to be concerned with making changes to it, since that is what the component code is for. Unfortunately, there is no way to enforce that predicates have no side effects. Full stop! Surprisingly, C# doesn't support the ability to define constant methods. Constant methods are a powerful feature in the armoury of the C++ library writer: they make it possible to assert that a method has no side effects. In C# you are not able to make that kind of assertion, so we will just have to make it known to the DBC-driven programmer that coding predicates with side-effects is asking for misery.
Wednesday, April 20, 2005

An interesting proposition

As you will recall from the previous post, I have been wondering about how to implement the functionality for dynamic proxies. Well I saw this great article, by Viji Sarathy, on the web, and think that this might be the way to go. It uses Context Bound Objects to ensure the interception of all calls by the CLR without any explicit framework setup by the user. We can guarantee that the contracts get followed, whether the user tries to evade them or not. The only misgiving I have is over the performance characteristics of the thing. Any thoughts?
