Tuesday, January 22, 2013

PowerShell as a Testing Environment


In developing ASP.NET web applications, I got tired of the cycle: compile, wait for the JIT compile and app domain load, click [click|type, click|type...], validate. Test harnesses are useful for validating output from things like string parsing and XSL transformations, and any non-trivial application should have some form of automated testing. But I don't like the ridiculous amount of overhead generated by strict TDD (Test-Driven Development) models, and for small methods a dedicated test harness can be so much extra work that it hardly seems worth the effort just to see the effect of a change. The same goes for exploring an API that you know can produce the desired output, if only you can find the right methods and properties.

However, there is an alternative: Windows PowerShell. Because it is built on the .NET Framework, you can work with .NET types directly on the command line, much as you would in C#, and experiment with and test different objects with essentially no overhead. Even better, the core framework assemblies are already loaded, so their types can be referenced directly without importing anything, and anything else in the GAC is a single Add-Type -AssemblyName away.
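For example, one of those XSL transformations can be exercised interactively with no harness at all. A minimal sketch, assuming a hypothetical stylesheet and input file under C:\temp:

# Hypothetical paths -- point these at the stylesheet and input you want to test.
$xslt = New-Object System.Xml.Xsl.XslCompiledTransform
$xslt.Load("C:\temp\transform.xslt")
$xslt.Transform("C:\temp\input.xml", "C:\temp\output.xml")
Get-Content "C:\temp\output.xml"

Tweak the stylesheet, re-run the last two lines, and the effect is visible immediately.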

Here's a session from my prototyping of some code to parse a URI:

PS C:\Users\Darton Williams.IOTA-CORP> [System.Uri]::TryCreate("dkw.mce.dev", [System.UriKind]::Absolute, [ref] $url)
False
PS C:\Users\Darton Williams.IOTA-CORP> [System.Uri]::TryCreate("http://dkw.mce.dev", [System.UriKind]::Absolute, [ref] $url)
True
PS C:\Users\Darton Williams.IOTA-CORP> Write-Host $url.AbsoluteUri
http://dkw.mce.dev/
PS C:\Users\Darton Williams.IOTA-CORP> [System.Uri]::TryCreate("http://dkw.mce.dev/abc", [System.UriKind]::Absolute, [ref] $url)
True
PS C:\Users\Darton Williams.IOTA-CORP> Write-Host $url.AbsoluteUri
http://dkw.mce.dev/abc
PS C:\Users\Darton Williams.IOTA-CORP> $urltext = $url.GetComponents([System.UriComponents]::SchemeAndServer, [System.UriFormat]::SafeUnescaped)
PS C:\Users\Darton Williams.IOTA-CORP> [System.Uri]::TryCreate($urltext, [System.UriKind]::Absolute, [ref] $url)
True
PS C:\Users\Darton Williams.IOTA-CORP> Write-Host $url.AbsoluteUri
http://dkw.mce.dev/
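Once the right calls are found, they drop straight into real code or a quick reusable helper. Here's a sketch of the same logic wrapped into a small function (the function name is my own, not anything built in):

function Get-SchemeAndServer([string]$text) {
    # Return the scheme-and-server portion of an absolute URI,
    # or $null if the text cannot be parsed as an absolute URI.
    $url = $null
    if ([System.Uri]::TryCreate($text, [System.UriKind]::Absolute, [ref] $url)) {
        return $url.GetComponents([System.UriComponents]::SchemeAndServer, [System.UriFormat]::SafeUnescaped)
    }
    return $null
}

Get-SchemeAndServer "http://dkw.mce.dev/abc"   # returns http://dkw.mce.dev
Get-SchemeAndServer "dkw.mce.dev"              # returns $null (not absolute)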

Tuesday, September 4, 2012

Editing Visual Studio Templates the Easy Way

One of the double-edged swords of modern IDEs like MS Visual Studio and Eclipse is that they generate code. In Visual Studio, items (classes, web files, etc.) created within a project or solution, as well as project and solution files themselves, are based on templates which generate skeleton code using the filename and other metadata as parameters. Some of the most common generated elements are namespace imports (C# using statements), default namespace, and default constructors for classes. By changing the templates, we can change the generated code.

I won't delve into the reasons or technical aspects of changing the templates; if you're reading this you probably already have a use case and have found the basics at http://msdn.microsoft.com/en-us/library/6db0hwky%28v=vs.100%29 (for VS 2010). My purpose is to show how to make change testing easier and avoid the ridiculously long wait for the devenv /installvstemplates command.

The actual templates are stored as zip files in %DevEnvDir%ItemTemplates (from the VS command prompt). They are also cached, unzipped, in %DevEnvDir%ItemTemplatesCache. Generally, the procedure given for editing the templates is to unzip the files from ItemTemplates, edit the item, then either refresh the cache (the long-running command mentioned above) or copy the files, unzipped, to the cache directory. The important thing to understand is that the cache directory is actually what VS uses when it creates a new item, and changes made to files in that directory will take effect immediately. So why not edit directly from the cache? This way, changes made to the template can be tested without even restarting Visual Studio. Once the changes are tested and validated, the files can be zipped and backported to the ItemTemplates directory.

In short, by doing things backwards (editing the cached files and then saving the changes back to the actual template directory), we eliminate the need to refresh the cache. A future installation change or extension may rebuild the cache from the zipped templates automatically, wiping out edits made only to the cache, so it is important to save the changes back to the zip file; but there is no need to run devenv /installvstemplates in this scenario.
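Here's a rough sketch of that round trip in PowerShell, assuming a Visual Studio command prompt environment where DevEnvDir is defined; the template paths and file names below are only examples, and the ZipFile class requires .NET 4.5 or later:

# Paths are illustrative -- point them at the template you are actually editing.
$cacheFolder = Join-Path $env:DevEnvDir "ItemTemplatesCache\CSharp\Code\1033\Class.zip"
$templateZip = Join-Path $env:DevEnvDir "ItemTemplates\CSharp\Code\1033\Class.zip"

# 1. Edit the unzipped copy in the cache; new items created in VS pick up the change immediately.
notepad (Join-Path $cacheFolder "Class.cs")

# 2. Once the change is validated, zip the cache folder back over the original template.
Add-Type -AssemblyName System.IO.Compression.FileSystem
Remove-Item $templateZip
[System.IO.Compression.ZipFile]::CreateFromDirectory($cacheFolder, $templateZip)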

Thursday, June 14, 2012

Code# Concepts: Frameworks

A New Perspective

These articles are based on 14 years of experience, research, and lessons learned in the technology industry. Since getting back to writing code full-time (as of Jan 2012), I've learned and re-learned a lot of things and gained an entirely new understanding of others. I thought now would be a good time to start distilling knowledge and discussing concepts and theories, as I'm currently neck-deep in a multisite (portal-style) C#/ASP.NET 4.0 application backed by SQL Server 2008: MyClassEvaluation. I want to know what other developers and IT pros think of these ideas too. Leave a comment or contact me on Google+.

It's All About the Framework

What are we really doing when we write software? How is the code I write today to access a database and manipulate the data on a web page any different from the code I wrote 10 years ago to access a different database and manipulate that data? When you think about it, other than taking advantage of new language features and framework libraries, the vast majority of the work we do as developers consists of repeating essentially the same tasks with slight variations.

In fact, it could almost be said that there are no new algorithms. Cryptography and cutting-edge research aside, the basic problems of analyzing, sorting and filtering data have been solved many times over. In frameworks such as .NET, Java, and even PHP, the breadth and depth of functionality built into the base libraries is staggering. Chances are that for any feature you need to implement or task you need to perform, the framework contains something that will get you about 90% of the way there. If not, a little searching will usually turn up a download that solves the problem (setting licensing concerns aside for the sake of brevity). We ignore this axiom at our own peril. I have seen massive amounts of time wasted on custom code that could instead have leveraged classes built into the base .NET libraries.

This leads us to one of the rules that I always follow for software development:

Never re-invent the wheel. Always ask the question, "Has anyone solved this problem before?"

It may be tempting to build that socket server or caching mechanism from scratch, perhaps imagining benefits over framework libraries such as reduced complexity or better performance. But such implementations are usually a mistake, with consequences extending far beyond the time invested up front in writing the code. When we make our own wheels, we immediately introduce problems:
  1. More lines of code means more possible defects
  2. Maintenance becomes more costly
  3. Re-use becomes more difficult, as lesser-used code paths may hide broken compatibility
If we maximize utilization of what the framework gives us for free, we write far less code to achieve the same or better results: the old "code less, do more" adage. I believe that one measure of software quality is the number of lines of code written to meet the requirement - and fewer is almost always better, for the reasons listed above and more.

Think about this: if Microsoft engineers spent months or years to design, build and test a library that does almost everything you need, why would you throw away that effort just because it isn't exactly what you wanted? The key factor here is testing. I can trust that the framework libraries I use are generally bug-free and performant; can I afford to invest that much time and effort in making my own code as robust? Incidentally, there is a huge difference between what one wants and what one needs in software development, even more so than most other things in life. A library that meets my functional requirements (what I need) but doesn't provide a certain interface or accept certain method parameters that I am already using (what I want) is usually a sign that I am misusing something else, and may point to an architectural or design flaw.

Every language and framework I use was developed by people far more qualified than I to make decisions on what is necessary and useful. But because all software is designed by people, they may make decisions that I disagree with or have a hard time understanding. Despite the possible friction, one of the most important lessons I've learned in my career, and one of my strictest rules, is:


Never fight the framework. Use it as it was designed and intended to be used, whether you agree with/like it or not.

In ASP.NET, I've seen many cases where a developer reverted to familiar patterns that matched their aesthetics and experience, such as rendering a table by concatenating static HTML strings while iterating over a recordset. While this pattern works, and may even execute faster than some alternatives, the resulting code is a nightmare to maintain compared with simply databinding to a GridView. If your recordset doesn't facilitate that, it's time to rewrite the query or do some more advanced databinding. In the end, even if you dislike a programming paradigm that is used throughout a framework, you will appreciate it when you reap the benefits of sticking to it - notably less frustration and a simpler, more maintainable implementation.


Update (07/21/2012): I recently caught myself breaking this rule, where I persisted in using a custom base class and UserControl I had previously designed for handling data entry/edit forms. It had an event model to notify the form of an edit or update on its database object, but due to the rendering and event order of .NET controls and containers, I was unable to hook the form's events to a RadGrid (Telerik's specialized .NET GridView) for use as an edit form. I wasted several hours trying to work around the problem, and finally solved it (with less code) by just using the form template functionality built into the RadGrid and giving up on the custom class.

Wednesday, June 13, 2012

A Lesson in Shooting Yourself in the Foot

Sometimes I forget everything I know. Take, for instance, a Fedora 14 configuration issue on one of my old machines:

Last week Nautilus was suddenly unable to access special GVFS URIs like Computer: or Trash:. Kind of annoying, but not a killer bug. I forgot about it for a few days, until I plugged in a USB drive and found that it didn't automount. Okay, more annoying, I thought, and probably related to the same GVFS error. Time for some Googling on the error message.


The first page of results revealed the problem to be my own fault: I had compiled Anjuta from source, and it required updated versions of GIO and GVFS, among many other GTK+ libraries. Without thinking, I had installed Anjuta and the new libs in /usr/local. Everything ran fine until I rebooted. I only do that once every few months on this machine, so by then the error seemed disconnected from my earlier actions. Once I read the explanation, it made sense. I renamed /usr/local/lib, lib64, etc., rebooted, and Nautilus was fixed.

So this hiccup sparked a conversation between a colleague and me: Why was it so easy to screw up my system, and what could or should be done about it? This, of course, led to a more philosophical discussion on the nature of an open-source OS, general UNIX principles inherent in Linux, and the ability to have one's cake and eat it too.

The basic UNIX principle that I failed to acknowledge when installing a new GIO/GVFS was the multi-user system. Users can be either remote (network) or local (physically at the console), and the system loads libraries, determines paths, and does a lot of other setup depending on what kind of user you are. This is accomplished by a simple but ingenious mechanism: directory inheritance and overriding via the PATH environment variable.

This principle extends to the individual user, where files or directories starting with "." in a home directory (~) can override system or local defaults. It is one of the features that makes Linux so infinitely customizable, meshing perfectly with the open-source philosophy. But it bit me this time; judging by the search for my simple error, this same oversight has bitten other users for years and will likely continue to do so. With root/administrative privileges, it is very easy to render any OS unstable or unbootable. I've seen more than one instance of users accidentally deleting their Windows system directories. Everything works until they reboot...

In a nutshell, the lesson learned is that almost any user installation or customization can and should be performed without root privileges. I should have installed the new libraries, and possibly Anjuta, in my home directory by specifying the prefix and libdirs during the build. So should anything be done to protect me from shooting myself in the foot again? Probably not. When one is constantly installing new development libraries and dependencies to hack and build the latest shiny version of application foo, some of them will inevitably conflict with application bar. The safest way to install, of course, is to use only the distribution's package manager and repositories. However, I can envision SELinux providing an extra layer of security by protecting distribution files.

Wednesday, May 4, 2011

"Incompatible Browser" on FAFSA

I visited the FAFSA website recently (today, actually) and was greeted with the following message on Firefox:

"We have redesigned FAFSA on The Web with you in mind! All your FAFSA options can be accessed by clicking Start Here.

Click Close to continue.

We hope you enjoy the new look and features!"

Naturally, I clicked Close to continue, then clicked Start Here to access all my FAFSA options. Immediately I was greeted by an Incompatible Browser error (https://fafsa.ed.gov/FAFSA/app/errors?page=incompatibleBrowser) from the website. The error page doesn't actually take any action based on the browser; it just prints out some information. In case that link dies, the gist of it is this:

Supported Mozilla Firefox Browsers:
Windows XP - Mozilla Firefox 3.5.x and 3.6.x
Windows Vista - Mozilla Firefox 3.5.x and 3.6.x
Windows 7 - Mozilla Firefox 3.5.x and 3.6.x
Macintosh Operating System 10.5 - Mozilla Firefox 3.5.x and 3.6.x
Macintosh Operating System 10.4 - Mozilla Firefox 3.6 


There's the problem: Firefox 3.6.16 on Linux is not supported. Supported by what, you might ask? One answer is that a script or application calling itself BrowserDetectService doesn't like it (from the same error page):

[Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.16) Gecko/20110322 Fedora/3.6.16-1.fc14 Firefox/3.6.16]
[BrowserDetectService:OS[UNKNOWN] browser[FIREFOX] browser version[FIREFOX3_6] AppName[FAFSA1112] detection status [BLOCKED]]

The first line is the User Agent (UA) string sent by my browser. The second line is the output from this erroneous BrowserDetectService. I shudder to think that this may be an enterprise web service, accessible to Education Department applications beyond just FAFSA. Not one to give up easily, I changed Firefox's UA string and logged on. Then, just to be a smartass, I changed the UA on SeaMonkey and successfully navigated the site with that. I went to the same error page and got the following:


[Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)]
[BrowserDetectService:OS[WIN7] browser[IE] browser version[IE8] AppName[FAFSA1112] detection status [SUPPORTED]]


Note the detection status of "SUPPORTED" vs. "BLOCKED" in the BrowserDetectService line; the only crucial difference is the reported operating system. Clearly my browser handles the site just fine, since I was able to use it. However, I should not have to perform a hack like that to access a public government website. Artificial barriers like this one are deceptive, and they discourage the adoption of free/OSS software by creating a false perception of incompatibility.

At the end of the error page we see this little irony:

*Compliant Web Browser - Standard
For the past few years, every major Web browser released has been built around a set of open standards designated by the World Wide Web Consortium, a non-profit organization charged with overseeing the continuing development of the Web. What this means is that one piece of code now looks the same on every modern browser, whether it be Internet Explorer, Firefox, Safari, Opera, or others. The majority of our users are using these modern browsers, so we can present content which is optimized for them.


That's kind of the point, isn't it? Firefox is Firefox, regardless of the operating system. So are Opera, and Safari, and Chrome. Yes, there are slight rendering differences between platforms, and a lot of them can probably be attributed to the often-incorrect assumptions many of us in the tech world make. Installed fonts (I don't have a font named "Arial"), screen resolution, browser settings and many other factors can affect spacing, shift elements and break the functionality of even the most carefully planned layout or application.

Let's talk standards then, since that is their defense. First let's check compliance with the W3C XHTML standard: http://validator.w3.org/check?verbose=1&uri=https%3A%2F%2Ffafsa.ed.gov%2F (11 errors in the markup). Next, the W3C CSS2 standard: http://jigsaw.w3.org/css-validator/validator?profile=css21&warning=0&uri=https%3A%2F%2Ffafsa.ed.gov%2F (18 errors in the stylesheets). Clearly, standards compliance is a flimsy excuse.

So I've described the problem; what about the solution? Don't use UA detection, for one thing. Correct browser detection boils down to coding with two simple principles in mind:

1. Target the standards and not particular browsers
2. Use feature-oriented object detection

These recommendations are repeated all over the web, but probably most succinctly at https://developer.mozilla.org/en/Browser_Detection_and_Cross_Browser_Support. Many methods exist to address these problems and let the application degrade gracefully. JavaScript libraries such as jQuery point the way by encouraging detection of the specific features an application actually needs.

In fairness, I must note that 11 markup errors and 18 CSS errors actually represent an admirable level of compliance for the FAFSA site, as you will see if you run the same checks against this very blog and many, many other sites. Nice job, Google!

See also: http://jeffhoogland.blogspot.com/2011/03/no-fafsa-for-foss-users.html

Friday, February 19, 2010

IDE-Experienced

Since I've been working on a lot of Drupal sites lately, my primary Integrated/Idiosyncratic Development Environment (IDE) has been Eclipse. The PHP editor, debugger and other tools (part of the PDT)  as well as Subversion, CVS and a bunch of other plugins make Eclipse a good choice for cross-platform development teams working on a wide range of project types. It's usually easy to set up and extend, but I did experience one problem on my Linux (FC12) laptop: Eclipse Update did not work at all, so the usual simple installing/updating of plugins by checking them off a list wasn't an option.

After some googling, I found that the problem was a known bug (reported in many places), which may or may not have been fixed on certain GNU/Linux distributions or with certain versions of Eclipse. The commonly suggested workaround (a shell script that sets GDK_NATIVE_WINDOWS=true before launching Eclipse) was the quickest fix, with one small improvement: I renamed the "eclipse" executable to "runeclipse" and named the shell script "eclipse", so the script calls the renamed executable. This way, all references to the original executable (menus, .desktop files, Eclipse restarting itself) run the shell script with GDK_NATIVE_WINDOWS=true set. So, Eclipse problems solved. Subversion integration worked without a hitch, and I was up and running.

The next project I'm working on is a C#/ASP.NET web application, with a heavy API of its own and plenty of library goodness to develop and integrate. The app will run on Windows/IIS, but I want to use my native desktop environment (GNOME/Linux) for .NET development. Plus, I'd really like to hack some GNOME apps, many of which run on Mono. Rather than attempting to make Eclipse run everything, I thought it was time to try out the obvious choice, MonoDevelop. Browsing the Fedora repositories (including RPM Fusion), I found the newest package available was 2.1. Since MonoDevelop doesn't maintain a Fedora package (hmm, I'm starting to see a pattern here) and I wanted 2.22, I embarked upon a good old-fashioned installation from source.

Things got interesting right away. Using the instructions from the README file in the source directory, I typed:

./configure --prefix=`pkg-config --variable=prefix mono`

First error:

checking for MONO_ADDINS... configure: error: Package requirements  (mono-addins >= 0.4) were not met:

No package 'mono-addins' found

Consider adjusting the PKG_CONFIG_PATH environment variable if you installed software in a non-standard prefix.

Alternatively, you may set the environment variables MONO_ADDINS_CFLAGS and MONO_ADDINS_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.

I already knew mono-addins was installed; I had done this earlier because I (almost always) RTFM. However, I had done it the nice, graphical Fedora/GNOME way: using gpackagekit to search for "mono-addins", selecting it and clicking "Apply". I checked just in case:

[Screenshot: gpackagekit confirming mono-addins is installed]


Yes, I actually make it a point to manage my system using the GUI tools provided with GNOME. After several years of remote Linux and FreeBSD systems administration using only SSH terminal sessions, I thought it was time to see how much functionality I could get out of a free desktop like a normal Windows or Mac user. It's the ultimate test for me and the feature set; I only resort to terminals when the GUI malfunctions or is missing a feature. Kind of like fisticuffs with one hand tied behind your back. More on that later.

Okay, then: man pkg-config tells me that pkg-config is a utility that returns metadata (compile and link flags) about installed libraries to callers such as the configure script above. Running pkg-config --libs mono-addins returned:

Package mono-addins was not found in the pkg-config search path.
Perhaps you should add the directory containing `mono-addins.pc' to the PKG_CONFIG_PATH environment variable
No package 'mono-addins' found 

A quick check of the search path, and sure enough there was no PKG_CONFIG_PATH environment variable set. Yet pkg-config --list-all showed a long list of registered packages. Searching for *.pc, the configuration files turned up in both /usr/lib/pkgconfig and /usr/share/pkgconfig, but no mono-addins.pc. I had installed mono-addins from a Fedora repository, and confirmed that the libraries were in their correct locations. Then I checked some of the other libraries, such as monodoc:

[Screenshot: the contents of the monodoc-devel package]


What a surprise - the monodoc-devel package literally is just the .pc file. In fact, many of the devel packages for mono are just pkg-config files. Why package a 207-byte file separately? Who knows. I installed the devel packages for mono-addins and all the other mono and *-sharp packages, and configuration succeeded.

After this, I ran make and make install, which gave plenty of warnings but finished without error. It even installed a menu item in my Applications -> Programming list, and MonoDevelop started up quickly on the first launch. Hopefully this experience will help someone else, and I'll post more as I continue to use MonoDevelop.

Wednesday, December 9, 2009

Google Chrome comes to Linux (and other browser comparisons)

Just in time to save my Fedora experience from buggy plugin support and bad font rendering, along comes Google Chrome in its first public beta for Linux. First impression: this browser is so impressive I will probably leave Firefox for it.

I've used Chrome on Windows since the first public betas, and while it has been fast, simple and powerful, I never saw a compelling reason to switch from Firefox, of which I have been a faithful user and proponent since early betas as well. Until I decided to eat my own dog food and switch to Linux (specifically Fedora 12).

A picture is worth 1000 words, so without further ado check out the differences in rendering (click images to see full size). I'll update this post at some point with a full review of Linux browser options.


Google Chrome rendering Blogger



Firefox 3.5 rendering Blogger

Of course, the standard developer tools are present and accounted for:

[Screenshot: Google Chrome developer tools]



For the fun of it, I took screenshots of some other Linux browsers. They look a lot like Firefox, even though two of them use different rendering engines. Epiphany (bottom), and perhaps WebKit in general, seems to do a little better at subpixel hinting.


Galeon, a good Gecko-based (like Firefox) browser for GNOME

Konqueror, a full-featured KDE browser using KHTML

Epiphany, a WebKit-based (like Safari and Chrome) browser