Categories: Uncategorized

Fine Tuning an LCD Monitor Connected to an Analog Port

Quite a while ago, I discovered the joys and productivity of using multiple monitors. On my current system, I have two identical Samsung 19″ LCD monitors connected to an ATI Radeon 9800 Pro. Unfortunately, the video card only has a single DVI connector, so my second monitor is hooked up through the analog SVGA port.
When I first hooked it up, I couldn’t get over how bad the monitor looked on the analog port. I knew it would look worse, but it looked defective, as if the resolution were not set to the LCD’s native resolution. Also, I love ClearType, but on the monitor connected via SVGA it looked horrible no matter how much I used the ClearType tuner.
This is what I learned…

For a while, I just hit auto adjust whenever things looked particularly poor. It usually helped a bit, but the poor quality invariably showed up again in spades in some other context. At one point, I hit auto adjust when the only thing displayed on the monitor was a solid blue background. The next time I dragged a window across, it looked like a car had hit my screen. That’s when I suspected the feature relies on feedback from the signal generated by whatever image is on screen to adjust itself correctly.
With my newfound realization, I started hunting for an image complex enough to give the electronics something difficult to chew on. My first attempt was to see if the old Windows 3.1 desktop patterns had survived into Windows XP; the black-and-white checkerboard gray-scale pattern seemed ideal for helping the monitor adjust itself. No dice: Microsoft finally put a stake through that particular feature after Windows 2000.
So I put together this quick little utility to assist my monitor with its auto-adjusting woes. All it does is display various background patterns on the monitor you want to adjust. I am happy to say that with the white-and-black pixel checkerboard pattern on screen, the auto-adjust function on my Samsung 930B produced an image rivaling the DVI-connected 930B. ClearType looks beautiful, there are no more strange shimmering or jittering effects, and I can be productive on both of my monitors again!
Just to make sure I wasn’t imagining things, I pressed the auto adjust button on the monitor with a solid blue background again. When I dragged the checkerboard window across, it looked horrendous and ClearType was in shambles. Then I hit auto adjust with the checkerboard up and watched the monitor figure out how to display it perfectly again.
I’ve attached the utility as well as the source code. It is mind-numbingly trivial, but I’ve already thought of more patterns (and color patterns) to add based on what I learned about LCD technology. Right now I am just using some embedded resources and tiling the image…
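If you want to roll your own, the pattern is simple enough to generate in code instead of tiling an embedded bitmap. Here is a minimal WinForms sketch (not the attached utility; the class name is made up) that paints a single-pixel black-and-white checkerboard across a window you can then maximize on the analog-connected monitor:

using System;
using System.Drawing;
using System.Windows.Forms;

public class CheckerboardForm : Form
{
	public CheckerboardForm()
	{
		Text = "Checkerboard";
		// Repaint on resize and double-buffer to avoid flicker while the monitor re-syncs.
		SetStyle(ControlStyles.AllPaintingInWmPaint |
			ControlStyles.OptimizedDoubleBuffer |
			ControlStyles.ResizeRedraw |
			ControlStyles.UserPaint, true);
	}

	protected override void OnPaint(PaintEventArgs e)
	{
		// A 2x2 bitmap holds one full cycle of the single-pixel checkerboard.
		using (Bitmap tile = new Bitmap(2, 2))
		{
			tile.SetPixel(0, 0, Color.Black);
			tile.SetPixel(1, 0, Color.White);
			tile.SetPixel(0, 1, Color.White);
			tile.SetPixel(1, 1, Color.Black);

			// Tile the bitmap across the entire client area.
			using (TextureBrush brush = new TextureBrush(tile))
			{
				e.Graphics.FillRectangle(brush, ClientRectangle);
			}
		}
	}

	[STAThread]
	static void Main()
	{
		Application.Run(new CheckerboardForm());
	}
}

Maximize it, press auto adjust, and the monitor has a worst-case signal to lock onto.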
Usage is pretty straight-forward:

  1. Run the program
  2. Maximize the window on the LCD monitor to adjust (connected to an analog port)
  3. The checkerboard pattern should be all you need, but I added a couple of other patterns, available via the context menu

Video Patterns
Video Patterns Source Code

Categories: Programming

Exception Handling 101

I hate buggy software. However, as a developer, I know how difficult it is to write bug-free software, so I am always looking for new ways to write better software. One of those ways is exception-based programming. Sadly, exceptions are often glossed over in samples and books, so exception anti-patterns tend to propagate.
Take a look at the following code…

public void HorrifyingMethod()
{
	try
	{
		Cursor preCursor = Cursor.Current;
		Cursor.Current = Cursors.WaitCursor;
		DoSomething();
		DoSomethingElse();
		try
		{
			Log.Write("I did something");
		}
		catch {}
		Cursor.Current = preCursor;
	}
	catch (Exception ex)
	{
		if (ex.Message == "Failed")
			throw ex;
		else
			Log("Something failed", ex);
	}
}

There’s so much wrong with this method it makes my skin crawl. Let’s start with the obvious:
Catching the base exception type
There aren’t a lot of cases where this is ever justified. When you catch an exception, you are effectively stating that you understand the nature of the failure and that you are going to resolve the problem (logging, by the way, is not resolving the problem). Our nested try/catch block may as well say “if the server caught fire, I’m just going to ignore it.” When you write an exception handler, it is helpful to say to yourself, “The managed runtime exploded, therefore I am …” and finish the sentence with what your catch block does. If you find yourself saying something like “The managed runtime exploded, therefore I am returning the default value,” you can see how problematic this really is.
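To make the “therefore I am …” test concrete, here is a sketch where the catch block actually resolves the failure it claims to understand. The Settings class, its methods, and the file name are all made up for illustration:

public Settings LoadSettings()
{
	try
	{
		return Settings.LoadFrom("settings.xml");
	}
	catch (FileNotFoundException)
	{
		// "The settings file is missing, therefore I am creating and saving
		// the defaults." That sentence makes sense, so catching this
		// specific exception type here is justified.
		Settings defaults = Settings.CreateDefault();
		defaults.SaveTo("settings.xml");
		return defaults;
	}
}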
Discriminating on exception data rather than exception type
Exceptions should describe the nature of the problem, not where it came from; every exception already carries a stack trace that tells us where it came from. A good exception is something like “TimeoutException” rather than “MailComponentException”. If you find yourself routinely digging through an exception’s data to determine what exactly went wrong, you are using a poorly designed library. If you are throwing exceptions, use an existing exception class if it fits the problem, or write a new class that describes the problem. The exception type itself is the filter used for catching, so it’s important for exceptions to describe the nature of the failure.
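Put another way, let the catch clause itself do the filtering. A sketch, where the mail sender and the helper methods are hypothetical:

try
{
	mailSender.Send(message);
}
catch (TimeoutException)
{
	// The type alone says what went wrong; there is no need to parse a Message string.
	ScheduleRetry(message);
}
catch (FormatException)
{
	// A different failure gets a different, specific response.
	ReportBadAddress(message);
}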
Re-throwing a caught exception from a new catch block
There are times when you might catch an exception and, after doing some programmatic investigation, decide that you can’t actually handle it and need to rethrow it. Never rethrow the same instance you caught (as HorrifyingMethod() does with “throw ex;”) or you’ll wipe out the original stack trace. The correct way to rethrow a caught exception is a single “throw;” statement with no variable.

public void CorrectRethrow()
{
	try
	{
		SomeMethod();
	}
	catch (Exception ex)
	{
		// SomeProperty stands in for whatever programmatic investigation
		// tells you this catch block can actually handle the failure.
		if (!ex.SomeProperty)
			throw;	// no variable: the original stack trace is preserved
	}
}

Eating the exception
This is by far one of the worst offenders. Exceptions work well because they are an opt-out method of detecting abnormalities, rather than opt-in methods of the past such as error codes, so eating them is rarely correct. If you are developing a library for use by other developers, you should not be making decisions for them with regard to exceptions; let the exception propagate to the library user so that she can handle it as she sees fit. The HorrifyingMethod() code shows two exception-eating problems. The inner try/catch block catches both managed and unmanaged exceptions and completely hides the fact that anything went wrong; this is all too common, and it contributes to bugs, failures, and strange side effects with causes that are nearly impossible to trace. The outer block logs the exception, making a token gesture of “handling” it. Imagine this method being called from a button click event: the user keeps smashing the button, the program keeps failing to execute the procedure, and somewhere an obscure log file is recording all the details. Logging an exception is not handling it.
(Note that if you have more than one try/catch/finally in a method, your design is probably screaming for an Extract Method refactoring; your method is almost certainly too large if it needs more than one.)
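For contrast, here is a sketch of that button click scenario done the other way around: the library code swallows nothing, and the UI layer, which can actually make a decision, tells the user what happened. The handler name, the exception type, and the helper methods are illustrative:

private void runButton_Click(object sender, EventArgs e)
{
	try
	{
		DoSomething();
		DoSomethingElse();
	}
	catch (InvalidOperationException ex)
	{
		// Handling means making a decision the user can see,
		// not quietly writing to an obscure log file.
		MessageBox.Show(this, "The operation failed: " + ex.Message,
			"Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
	}
}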
Lack of a finally block to ensure consistency of changes in the method
There should be far more try/finally blocks in your code than try/catch blocks. To write exception-safe code, anything changed in a try block must be reverted or completed by a finally block so the application is left in a consistent state. When an exception was thrown in Visio, it would bring up a dialog box saying that the application state was inconsistent and advising you to restart the application. That is decidedly not okay in the .NET world.
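Going back to HorrifyingMethod(), the cursor change is exactly the kind of state a finally block should protect. A sketch, reusing the same hypothetical helpers from the example above:

public void LessHorrifyingMethod()
{
	Cursor preCursor = Cursor.Current;
	Cursor.Current = Cursors.WaitCursor;
	try
	{
		DoSomething();
		DoSomethingElse();
	}
	finally
	{
		// Runs whether the calls above succeed or throw, so the cursor
		// (and the application state) ends up consistent either way.
		Cursor.Current = preCursor;
	}
}

Any exception still propagates to the caller, which is exactly what we want; the finally block only guarantees the cursor is put back.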

Categories: Software

Eye Nutrition

A few years ago, I got a Mac. It started when I read about Apple jettisoning their entire operating system core and starting over with the UNIX-based NeXT operating system. Every day after work, I stopped by CompUSA and explored all the OS X systems there. This was in late 2002, and at the time there was nothing else that looked anything like OS X… it was incredibly gorgeous. When Apple announced Jaguar, I decided to jump on the train and buy an iBook. It was relatively inexpensive, and a good way to step into the world of OS X.
Today I am using Vista and Ubuntu and exploring the depths of WPF, and the unique shine of OS X doesn’t seem so unique anymore. Now that everyone has jumped on the “eye candy” bandwagon, I’ve been thinking about, and observing, how many of them miss the point.
The thing that Apple did so well with OS X aesthetically was to use the full power of the hardware and software to create a beautiful and usable system. There are many subtle touches in OS X that employ alpha channels, stencils, and color to great effect, but I don’t call it eye candy; in most cases, it’s eye nutrition. For example, look at the System Preferences search in Tiger. It not only lists the relevant results immediately, but through visual design it also gives you instant feedback on where you might be most interested to look.

OS X showing some eye nutrition in a System Preferences search

Using the alpha channel this way really improves usability and communicates with the user. Of course, the typical use of an alpha channel in most systems is to make semi-transparent, difficult-to-read windows.
OS X also uses the alpha channel to display a shadow under each window. The topmost window has the deepest shadow, giving the user an instant visual cue about the arrangement of the windows on the screen.
None of this is to say that OS X doesn’t have eye candy as well, but it’s a lot sweeter when there’s nutrition to go with it.