Category name: .NET

InternalsVisibleTo and PublicKey or PublicKeyToken

It has been a long time since my previous blog post, as nowadays I often tweet my ramblings, but this one does not fit in a tweet 🙂

Sometimes you work with strong-named assemblies, and when you have unit tests that need access to internals, you have to use the InternalsVisibleTo assembly attribute. To discover the public key token I ran "sn.exe -tp project.publickey", which prints the public key (long) and the public key token (short).


Microsoft (R) .NET Framework Strong Name Utility  Version 4.0.30319.1
Copyright (c) Microsoft Corporation.  All rights reserved.

Public key is

Public key token is 67178dccc283ce39


So I used the following attribute:


[assembly: InternalsVisibleTo("My.Project.Tests, PublicKeyToken=67178dccc283ce39")]


And got this nice compiler error:

Friend assembly reference is invalid. Strong-name signed assemblies must specify a public key in their InternalsVisibleTo declarations.

Then I pasted the long variant into the InternalsVisibleTo attribute and it compiled, but I was sure the short version had to work. After investigation, there appear to be two ways to pass the required strong-name public key information: the whole public key or the public key token.

But when both assemblies are signed, you need to pass the full PublicKey.
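For completeness: the attribute that finally compiled passes the full public key as one hex string (the value below is a placeholder, not a real key; substitute the key that sn.exe -tp printed for your own key file):

```csharp
// The full public key is one long hex string (typically 320 characters);
// <full-public-key-hex> stands in for the sn.exe -tp output.
[assembly: InternalsVisibleTo("My.Project.Tests, PublicKey=<full-public-key-hex>")]
```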

Cool finalizer assert trick

I just saw a cool trick done in the finalizer of a class. When a class implements IDisposable, its creator needs to call Dispose when it is finished with it. Lots of developers forget this, and that usually results in system resources staying locked until the garbage collector decides it is time to do its work.

The code construction I saw put an assert in the finalizer.


I had never thought of doing this, but it makes sense to add an assert to the finalizer so you get notified when you didn't dispose the object. If you dispose it properly, the finalizer will never be called, because of the GC.SuppressFinalize(this) statement that should be executed when IDisposable.Dispose is called on the object.

If this fires in a service, an assert dialog doesn't make any sense; in that case you could just log an error instead.
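Reconstructed from the description above (my own sketch, not the original code), the trick looks like this:

```csharp
using System;
using System.Diagnostics;

public class ResourceHolder : IDisposable
{
    public bool Disposed { get; private set; }

    public void Dispose()
    {
        Disposed = true;
        // A properly disposed object never reaches its finalizer.
        GC.SuppressFinalize(this);
    }

    // If this finalizer ever runs, somebody forgot to call Dispose.
    ~ResourceHolder()
    {
        Debug.Assert(false, "ResourceHolder was not disposed!");
    }
}
```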

ForEach method exceptions and events

I just read this article by DigiMortal about List<T>.ForEach and exceptions. His assumption was that if an exception occurs while processing one of the items, the next item would still be processed, which is not the case. Maybe he was aware of this (well, he is now!) and somebody simply did not add exception handling to the method being called (which, by the way, is not that method's responsibility IMHO).

Turns out that I had a similar problem a long time ago with events.

class Test
{
    public event EventHandler MyEvent;

    public void DoMyEvent()
    {
        MyEvent(this, EventArgs.Empty);
    }
}
This example allows subscriptions to the MyEvent event. At a certain point I experienced weird application behaviour, and the cause was that one subscriber raised an exception, which prevented the other subscribers from receiving my precious event! As this implementation could not know how stable the subscribers were, it needed a redesign, which resulted in the following:

class Test
{
    private readonly List<EventHandler> _myEvent = new List<EventHandler>();

    public event EventHandler MyEvent
    {
        add { _myEvent.Add(value); }
        remove { _myEvent.Remove(value); }
    }

    public void DoMyEvent()
    {
        foreach (var subscriber in _myEvent)
        {
            try
            {
                subscriber.Invoke(this, EventArgs.Empty);
            }
            catch (Exception)
            {
                // Do some logging here or just BAN the subscription!
            }
        }
    }
}
This could contain some errors as I wrote it in Notepad, but you get the idea. The cool thing is that you can rewrite the DoMyEvent method to invoke all subscriptions simultaneously, which can be very neat if they do expensive remote calls.
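That rewrite could look something like this (a sketch of the idea using Parallel.ForEach from .NET 4; the class and member names are mine):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelPublisher
{
    private readonly List<EventHandler> _subscribers = new List<EventHandler>();

    public void Subscribe(EventHandler handler)
    {
        _subscribers.Add(handler);
    }

    public void RaiseAll()
    {
        // Each subscriber runs on the thread pool; a throwing or slow
        // subscriber no longer affects the others.
        Parallel.ForEach(_subscribers, subscriber =>
        {
            try
            {
                subscriber.Invoke(this, EventArgs.Empty);
            }
            catch (Exception)
            {
                // Log or ban the subscription here.
            }
        });
    }
}
```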

Get the physical path of a path that uses a subst drive

My previous employer used a tool to attach databases that were located on a substituted (subst) path. I needed this conversion logic in a different kind of environment and used the almighty Google. I hit an article on Avner Kashtan's blog titled Query SUBST information. His code sample proved very useful, but it didn't support relative paths on the current drive, so I altered it a bit. Below is my version; I put it here for other people to use and abuse.

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;

public static class Subst
{
    [DllImport("kernel32.dll")]
    static extern uint QueryDosDevice(string lpDeviceName, StringBuilder lpTargetPath, int ucchMax);

    public static string GetRealPath(string path)
    {
        const string MSG_PATH_INVALID = "The path/file specified does not exist.";
        const int BUFFER_LENGTH = 1024;

        if (path == null) throw new ArgumentNullException("path");

        if (Directory.Exists(path))
            path = new DirectoryInfo(path).FullName;
        else if (File.Exists(path))
            path = new FileInfo(path).FullName;
        else
            throw new ArgumentException(MSG_PATH_INVALID, "path");

        string realPath = path;
        StringBuilder pathInformation = new StringBuilder(BUFFER_LENGTH);
        string driveLetter = Path.GetPathRoot(realPath).Replace(@"\", string.Empty);
        QueryDosDevice(driveLetter, pathInformation, BUFFER_LENGTH);

        // If the drive is substed, the result is in the format @"\??\C:\RealPath".
        // Strip the @"\??\" prefix and combine the paths.
        if (pathInformation.ToString().StartsWith(@"\??\"))
        {
            string realRoot = pathInformation.ToString(4, pathInformation.Length - 4);
            realPath = realRoot + realPath.Substring(2); // Remove the drive letter
        }

        return realPath;
    }
}

I tried to use Manoli's C# to HTML formatter, but it screws up, so no syntax colouring this time.

Why read uncommitted data?

I just read Dennis' post about his adventures in isolation-level land. He says he does not know a good reason to read uncommitted data, because of dirty reads.

Well, he should rephrase that as: using the read uncommitted isolation level could result in dirty reads.

Reading uncommitted data can be very useful, and not only because it lets you read data that is not modified at all but is inaccessible because of a lock. Think of an event log table, or a table that contains statistics about page requests.

A side benefit concerns the locks a SELECT itself takes. Because a single query adheres to the ACID rules, a SELECT will lock data while it is running. Sometimes you want non-blocking reads, as Sahil Malik, for example, writes. Let's say you have a log table in your database and you know you only do inserts; each insert takes locks. Now add some SELECT queries that result in a table scan. You really don't want a table lock while such a scan is running, because the application would not be able to add new rows to the table in the meantime. And you know in advance that you will never read dirty data, because you only do inserts. That reminds me of a post of my own, Change mssql isolation levels to read uncommitted data, from some months ago.

So when to use this? When locking a table would stall other operations and reading uncommitted data is acceptable. You use this sort of query mostly for reporting functionality, or plain read operations. As in reading! Not read-then-update: for that you need optimistic concurrency control, which requires either a timestamp column to validate or a full record compare. With a timestamp you MUST be sure that the record data you read in the first place is not dirty, so you can't use read uncommitted there. The nice thing about the full record compare is that you can read uncommitted data, because that OCC variant relies on the data itself and not on a timestamp. So it IS possible, but in most environments you see timestamps, because comparing one timestamp column is cheap.
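From .NET code you can request this isolation level per operation via System.Transactions; below is a minimal sketch (the helper names are mine; any connection opened inside the scope enlists with READ UNCOMMITTED):

```csharp
using System;
using System.Transactions;

public static class ReportingQueries
{
    // Options for a read-only reporting query that must not block,
    // or be blocked by, concurrent inserts.
    public static TransactionOptions DirtyReadOptions()
    {
        return new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadUncommitted
        };
    }

    public static void RunReport(Action query)
    {
        using (var scope = new TransactionScope(
            TransactionScopeOption.Required, DirtyReadOptions()))
        {
            // SELECTs issued through connections opened here
            // take no shared locks.
            query();
            scope.Complete();
        }
    }
}
```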

VS2005: Solution to ‘Where is that damn exception dialog?’

Today I had a very annoying issue. I was debugging an application in a VS2005 instance on a computer that I had not prepared or installed myself. I wanted the "break on all exceptions" behaviour in the debugger. Normally you just go to the menu and select Debug -> Exceptions, but I was quite surprised to find the Exceptions dialog menu item wasn't available. Luckily the shortcut key CTRL+E still worked and I was able to continue debugging.

But I still wanted to know why the option was missing, and after some brainstorming I figured it had to be IDE customization. To see whether it was caused by the 'default' view of this instance, I went to Tools -> Import and Export Settings. It was indeed the view: it stated "Business Intelligence settings", probably caused by the SQL 2005 installation.

But that's not all! The behaviour of the VS2005 Exceptions dialog differs from VS2003. In VS2003 you can make a selection of exceptions on which the debugger should break, but sometimes you want to do just a few runs in 'break on all exceptions' mode and then return to your 'normal' settings. This is possible in VS2003 but not in VS2005!

Tool : Add string as resource

I found this really nice little tool to add a string as a resource. You just select a string in the editor, right-click to get the context menu and choose to add it as a resource. You then get a dialog in which you can specify the key value and the resource file the string should be added to.

In most code I produce I use the string resource generator tool. This little tool is a very nice addition, although I prefer the type safety that the string resource generator provides.

Don’t use viewstate compression! Use http compression instead!

Just google a bit with the keywords compression plus viewstate and lots of articles turn up about this subject, like this one. The problem is that in most situations programmers are doing something the webserver can already do for you. Not only can it compress the viewstate, it compresses all (dynamic) documents you configure it to. It gets even worse when the web application compresses its viewstate and the webserver then compresses the rendered page output *again*. Seems a bit inefficient to me 🙂

So let the webserver do what it is good at: serving/streaming rendered documents to the browser in the most efficient manner. Just configure it so that it also compresses dynamic pages. If you don't know how to do this, just google for http compression IIS and you'll find lots of articles that explain how to achieve it. It really doesn't need an extra article from my hand 😉

One situation in which viewstate compression could be of interest is when a request comes from an HTTP 1.0 browser that does not support gzip compression; there, compressing the viewstate can achieve nice results. But the article mentioned at the beginning of this post states that the viewstate would sometimes be larger than 1 megabyte. Well, if that occurs in your application, the viewstate data must be *really* expensive to calculate or read from a store for it to be that important to keep in viewstate. In most applications retrieving data from a data source actually performs very well, although it always depends on the total number of page requests, the number of webservers in a farm and the number of databases, as well as the kind of state required for the functionality.

The && and the & operators

Today a colleague had me flabbergasted by using the & operator in a boolean comparison, like the following example.

return x & y;

Both x and y were booleans, and I said: "Hey, you just forgot an &! A single & is a bitwise operator." He responded that this works, but that a single & is not lazily evaluated (no short-circuiting).

My knowledge of the C# language is quite good, as is my knowledge of the .NET Framework, so this was quite a surprise!

The funny thing is that the MSDN documentation about the & operator doesn't say anything about lazy evaluation, while the documentation about the && operator does mention the & operator with boolean operands.
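A quick sketch to demonstrate the difference (my own example): both operators yield the same boolean result, but & always evaluates its right-hand operand, while && skips it when the left-hand side is already false.

```csharp
using System;

public static class OperatorDemo
{
    public static int Evaluations;

    static bool RightSide()
    {
        Evaluations++; // counts how often the right-hand operand runs
        return true;
    }

    public static bool NonShortCircuit(bool left)
    {
        return left & RightSide();  // RightSide() always runs
    }

    public static bool ShortCircuit(bool left)
    {
        return left && RightSide(); // RightSide() is skipped when left is false
    }
}
```

With left set to false, NonShortCircuit still calls RightSide once; ShortCircuit never does.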

Failed for 70-300 exam

I failed the Microsoft 70-300 exam this afternoon. I am disappointed in the result because I expected to pass; when I saw the score it felt very weird to see I had failed. I really don't have a clue which questions I got wrong… so it is almost impossible to improve my knowledge for the next attempt.

The exam had three cases and each case had eleven questions. The cases themselves were very easy and consisted of interviews with people in certain roles.

The hardest part of this exam was the English language, I think. There were several questions I could answer, but I just didn't see the (multiple-choice) options I expected. I chatted with some colleagues afterwards, and they told me they had the exact same problem and succeeded at the second attempt without studying, just by getting other cases.

Too bad I did not have the time to give it a second go :-). I started at 13:00 and finished two hours later, too late for our office hours.
