An aggregation of all the Rock Solid Knowledge Blogs
Sometimes you have to wonder if this subject will ever go away …
A few weeks ago I posted on using OO constructs in your DataContracts. It’s one of those things that is understandable when .NET developers first start building message based systems. Another issue that raises its head over and over again goes along the lines of “I’m trying to send a DataSet back to my client and it just isn’t working properly”. This reminds me of the old joke:
Patient: Doctor, doctor it hurts when I do this (patient raises
his arm into a strange position over his head)
Doctor: Well don’t do it then
So what is the issue with using DataSets as parameters or return types in your service operations?
Let’s start off with interoperability – or the total lack thereof. If we are talking about interoperability we have to think about what goes into the WSDL for the DataSet – after all, it is a .NET type. In fact DataSets, by default, serialize as XML, so surely it must be OK! Here’s what a DataSet looks like in terms of XML Schema in the WSDL:
<xs:element ref="xs:schema" />
In other words: I’m going to send you some … XML – you work out what to do with it. But hold on – if I use Add Service Reference, it *knows* it’s a DataSet, so maybe it is OK. Well, WCF cheats; just above that element is another piece of XML:
<ActualType Name="DataSet"
            Namespace="http://schemas.datacontract.org/2004/07/System.Data"
            xmlns="http://schemas.microsoft.com/2003/10/Serialization/" />
So WCF cheats by putting an annotation only it understands into the schema so it knows to use a DataSet. If you really do want to pass back arbitrary XML as part of a message then use an XElement.
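If arbitrary XML really is what you want on the wire, the operation can be declared in terms of XElement instead. A minimal sketch – the contract and operation names here are my own invention:

```csharp
using System.ServiceModel;
using System.Xml.Linq;

// Hypothetical contract: the operation returns untyped XML as an
// XElement, which serializes as plain XML in the message rather than
// relying on a .NET-only annotation in the schema.
[ServiceContract]
public interface IReportService
{
    [OperationContract]
    XElement GetReport(string reportName);
}
```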
So how about if I have WCF on both ends of the wire? Well then you’ve picked a really inefficient way to transfer the data around. You have to remember how highly functional a DataSet actually is. It’s not just the data in the tables that supports that functionality; there is also: change-tracking data recording which rows have been added, updated and removed since the DataSet was filled; relationship data between tables; and a schema describing itself. DataSets are there to support disconnected processing of tabular data, not to act as a general-purpose data transfer mechanism.
Then you may say – “hey, we’re running on an intranet – the extra data is unimportant”. So the final issue you get with a DataSet is tight coupling of the service and the consumer. Changes to the structure of the data on one side of the wire cascade to the other side, to someone who may not be expecting them. Admittedly, not all changes will be breaking ones, but are you sure you know which ones will be and which ones won’t? As long as the data you actually want to pass isn’t changing, why inflict this instability on the other party in the message exchange? The likelihood that you will have to make unnecessary changes to, say, the client when the service changes is increased with DataSets.
So what am I suggesting instead? Model the data that you actually want to pass around as a DataContract (or an XElement if you truly need to pass untyped XML). Does this mean you have to translate the data from a DataSet to this DataContract when you want to send it? Yes it does, but that code can be isolated in a single place. When you receive the data as a DataContract and want to process it as a DataSet, does this mean you have to recreate a DataSet programmatically? Yes it does, but again you can isolate this code in a single place.
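As a sketch of what that isolated translation might look like – the Customer type, table name and column names here are all assumptions for illustration:

```csharp
using System.Collections.Generic;
using System.Data;
using System.Linq;
using System.Runtime.Serialization;

// Hypothetical DataContract modelling only the data we want on the wire.
[DataContract]
public class Customer
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
}

// The DataSet-to-DataContract translation, isolated in one place.
public static class CustomerTranslator
{
    public static List<Customer> FromDataSet(DataSet ds)
    {
        return ds.Tables["Customers"].Rows
                 .Cast<DataRow>()
                 .Select(r => new Customer
                 {
                     Id   = (int)r["Id"],
                     Name = (string)r["Name"]
                 })
                 .ToList();
    }
}
```

Only the Customer instances cross the wire; the change tracking, relations and schema baggage of the DataSet stay on the service side.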
So what does doing this actually buy you if you do that work? You get something which is potentially interoperable, that only passes the required data across the wire and that decouples the service and the consumer.
I’ve been writing a lab on Workflow Services in 4.0 recently. Part of what I was showing was the new data-oriented correlation (the same kind of mechanism that BizTalk uses for correlation). So I wanted to have two operations that were correlated to the same workflow instance based on data in the message (rather than a smuggled context id, as 3.5 does it). As I was writing this lab I suddenly started getting an InvalidOperationException stating “DispatchOperation requires Invoker” every time I brought the .xamlx file up in a browser. It appeared that others had seen this as well but not really solved it. So I dug around looking at the XAML (workflow services can be written fully declaratively now) and the config file, and could see no issues there. I asked around, but no one I asked knew the reason.
So I created a simple default Declarative Workflow Service project and that worked ok. I compared my lab workflow and the default one and it suddenly dawned on me what was wrong. The default project has just one operation on the contract and has a ServiceOperationActivity to implement it. My contract had my two operations but I had, so far, only bound one ServiceOperationActivity. So in other words I had not implemented the contract. This is obviously an issue and looking back I’m annoyed I didn’t see it sooner.
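The shape of the problem can be sketched as a contract – the names here are mine, not from the lab:

```csharp
using System.ServiceModel;

// Hypothetical two-operation contract. In a 4.0 workflow service,
// every operation must be bound to a Receive/ServiceOperationActivity;
// leaving one unbound produces "DispatchOperation requires Invoker"
// at runtime rather than a compile-time validation error.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string SubmitOrder(string orderDetails);   // bound in the workflow

    [OperationContract(IsOneWay = true)]
    void CancelOrder(string orderId);          // unbound -> runtime error
}
```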
However, the problem is that this is a change in behavior between 3.5 and 4.0. In 3.5, if I didn’t bind a ReceiveActivity to every operation I got a validation warning but could still retrieve metadata; in 4.0 you get a fatal error. It’s not hugely surprising that the behavior has changed – after all, the whole infrastructure has been rewritten.
On the whole it’s a good thing that implementation of the contract is enforced – although it would be nice if the validation infrastructure caught this at compile time rather than it being a runtime failure.
I'm trying to use ActiveRecord outside of Rails and with Ruby 1.9. I kept getting an error:
/opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/active_record/base.rb:394:in `<class:Base>': undefined method `cattr_accessor' for ActiveRecord::Base:Class (NoMethodError)
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/active_record/base.rb:391:in `<module:ActiveRecord>'
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/active_record/base.rb:4:in `<top (required)>'
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/active_record.rb:34:in `require'
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/active_record.rb:34:in `<top (required)>'
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/activerecord.rb:1:in `require'
	from /opt/local/lib/ruby1.9/gems/1.9.1/gems/activerecord-2.2.2/lib/activerecord.rb:1:in `<top (required)>'
	from SimpleTest.rb:2:in `require'
	from SimpleTest.rb:2:in `
in other words, an undefined method cattr_accessor on ActiveRecord::Base.
Trying the same code in 1.8 worked fine. After much searching I discovered that ActiveSupport had to be installed. This looked like it was installed when I installed ActiveRecord, but it possibly wasn't a complete install.
So after a
sudo gem install activesupport
I'm good to go.
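With the gem in place, requiring ActiveSupport explicitly before ActiveRecord avoids the missing cattr_accessor. A sketch of the top of SimpleTest.rb – the SQLite adapter and database name are my assumptions:

```ruby
require 'rubygems'
require 'active_support'   # load first on 1.9 so cattr_accessor exists
require 'active_record'

# Hypothetical connection; swap in whatever adapter you actually use.
ActiveRecord::Base.establish_connection(
  :adapter  => 'sqlite3',
  :database => 'simple_test.db'
)
```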
By this I mean running the latest gem version, not Edge. In config/environment.rb there is a line that says something like
RAILS_GEM_VERSION = '2.1.0' unless defined? RAILS_GEM_VERSION
You can either set this to a specific version or, if you want to run off the latest installed gem version, simply comment out the line.