Wednesday, November 19, 2014

How to use Excel and tSQL to help you Generate Code


It's amazing to me the reactions I get when I show people seemingly simple tricks for automating code and script generation. Whenever I see a pattern, that means there is a way to simplify or automate it. Here are some very simple tips and tricks to help you get some of that mundane and repetitive work out of the way quickly.

Using Excel's Concatenate Function

The CONCATENATE function is a nice hidden gem that many people apparently don't know about. It has helped me get many things done quickly, and because Excel is tabular, it is very easy to work with and hone in on specifics using formulas. The best way to understand how to use it is by example, of course.

Simple Example

If you were not aware, everything in Excel runs on VBA - you can even write your own VBA modules (static classes) and use them in your sheets - so some of the syntax may feel strange or familiar to you depending on how much legacy coding you have done. This example shows three very simple ways to use the CONCATENATE function.

The formulas used for Columns C through E are as follows:

' Column C: Concatenate the names into "First, Last" format
=CONCATENATE($A2, ", ", $B2)

' Column D: Concatenate the names into "Last, First" format
=CONCATENATE($B2, ", ", $A2)

' Column E: Concatenate the names into "First Last" format, but only use the first letter of each to produce initials
=CONCATENATE(LEFT($A2, 1), LEFT($B2, 1))
This is a very simple example of how to use the CONCATENATE function; as you can see, it simply allows you to append (aka concatenate) cell contents. If you have never seen this before, let your imagination run wild with what you can do with it. Imagine being able to export data from SQL Server into an Excel sheet and playing with the data like this. Go nuts.

SQL Scripting Examples

Now that you know how to concatenate cells together, think about how much easier it is to manipulate data in a database when you are trying to clean up your data or insert data for a first time.

Inserting New Data

Let's assume that there is a Person table you are creating. It has 4 columns (ID, First, Last, Age), and you got your data from someone in an Excel sheet or managed to put it in an Excel sheet yourself. Once you are happy with your column names and data, you can generate the insert statements.

Column E is hidden on purpose for this example
The formula used for Column F is:

' Column F: Creating an Insert Statement for sheet data
=CONCATENATE("INSERT INTO Person (First, Last, Age) VALUES ('",B2, "', '", C2, "', ", D2, ");")

As you can see, that formula is a little long and hard to read, but the pattern is simple: you are stringing the columns together and formatting them according to data type, just like you would if you were writing the statement by hand in SSMS. Please note that the ID column (Column A) is assumed to be sequential integers and that the Person table has an auto-incrementing identity column named "ID", which is why ID is not part of the insert statement.
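If you ever outgrow the spreadsheet, the same trick translates to any scripting language. Here is a rough Python sketch of the exact pattern above - the Person table and its (First, Last, Age) columns are the ones assumed in this example, and the sample rows are made up for illustration:

```python
# Sketch: generate INSERT statements the same way the Excel formula does.
# The Person table and (First, Last, Age) columns are assumed from the example above;
# the rows below are hypothetical sample data.
rows = [
    ("John", "Smith", 25),
    ("Jane", "Doe", 30),
]

statements = [
    "INSERT INTO Person (First, Last, Age) VALUES ('{0}', '{1}', {2});".format(first, last, age)
    for (first, last, age) in rows
]

for statement in statements:
    print(statement)
```

Just like in Excel, the string values are wrapped in single quotes and the numeric values are not.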

Updating Existing Data

Let's say you made a mistake with some of the data upon insert and now you need to update it. In the image below you will see that Column D is the old data and Column E is the corrected data. I kept Column D in the shot so you can see the differences.

Column F is hidden on purpose for this example
The formula used for Column G is:

' Column G: Creating an Update Statement for sheet data
=CONCATENATE("UPDATE Person SET Age = ", E2, " WHERE ID = ", A2,";")

As you can see, you can easily fix your data now by mass producing these update statements.

General Precautions

Be very careful when producing these kinds of statements and make sure to triple check them before executing them. Work with a single statement to make sure it works before applying the formula to an entire column or group of cells. There have been times where I didn't realize I had referenced the wrong cell or shifted a cell down by accident, which leads to trouble.

Keep in mind these are single hits; this is not intended for large volumes, as your DBA would probably kill you if he found out you were executing several hundred statements in this format. This is for smaller sets of data, where writing complex SQL statements just isn't worth the hassle. Most of all, this methodology allows you to bypass syntax restrictions and annoying table joins.

Excel Knowledge Milestone

If you have not seen this before, then I hope your mind is blown and you can go nuts with it. I showed SQL generation examples, but this can be used for literally anything. I have generated classes and HTML markup using this methodology. Make sure to explore the other functions in Excel that will help you along with your coding.

On an Aside: Soft Quotes in Formulas

This is a rather annoying thing to find out the hard way: if you need to include a soft quote ["] in your formula as text, then you must escape it.

In order to escape soft quotes you need to double up on them. In other words, for every soft quote you want treated as a literal, you must add another soft quote. Here are the formulas for the above:

' Row 2
=CONCATENATE("I am totally """, A2, """ to your complaints.")

' Row 3
=CONCATENATE("I do not """, A3, """ about your cat photos.")

' Row 4
=CONCATENATE("I need these Jennifer Lawrence photos for """, A4, """.")

Using tSQL's Concatenation Operator

This methodology is very useful when all of the data you want to manipulate is already in SQL Server; instead of using Excel, you can work directly in SSMS using tSQL to get your work done. The difference here is that instead of using a function or method, we will be using a tSQL operator: the plus sign "+".

This operator of course will do different things depending on the context. tSQL is not forgiving (by design) in this respect, whereas in Excel it didn't matter what your data was - in Excel your data is converted to a string automatically and appended together into text. Therefore keep in mind that in tSQL you must convert your non-string values into strings in order to concatenate them. Ok well let's dive right into this shall we?

Simple Example

For the sake of this example let's assume the Person table exists in a Dummy database. This Person table is an exact match from the previous section. In fact, I used those insert statements to insert the data. Here is the SQL for the Person table:

CREATE TABLE [dbo].[Person](
 [ID] [int] IDENTITY(1,1) NOT NULL,
 [First] [varchar](50) NOT NULL,
 [Last] [varchar](50) NOT NULL,
 [Age] [int] NOT NULL
)

Since the table has data in it, I will go ahead and re-perform the simple example from the previous section, but using tSQL this time:

Execution results of the query below
This virtual table was achieved by executing the following SQL:

SELECT
 FirstLast = First + ', ' + Last,
 LastFirst = Last + ', ' + First,
 Initials = LEFT(First, 1) + LEFT(Last, 1)
FROM Person

Notice that this example is basically exactly the same as the simple example in the previous section.

SQL Scripting Example

Since the data is already inserted I am going to skip that as an example and go right into the update scenario. Let's assume that you want to age everyone by one year, then you can do the following:

Execution results of the query below
This virtual table was achieved by executing the following SQL:

SELECT
 sqlUpdate = 'UPDATE Person SET Age = ' + cast((Age + 1) as varchar(10)) + ' WHERE ID = ' + cast(ID as varchar(10))
FROM Person

Notice how the integers have to be cast as varchar in order to be concatenated properly. This isn't exactly the best example, because realistically you can just write that update statement directly in SSMS and execute it in one shot, but it is a demonstration to open up the topic. If you have not seen this before, it should open up several doors for you. Some people just don't think to do this is all.

Generating an Enumeration from a Lookup Table Example

This is a stronger example of how to use tSQL to generate something more useful for you. Let's assume that you have a PersonType lookup table in your Dummy database as follows:

CREATE TABLE [dbo].[PersonType](
 [PersonTypeId] [int] NOT NULL,
 [Description] [varchar](50) NOT NULL
)

This table describes different types of people, but you want to code against it as an enumeration. In that case, why bother writing the enumeration by hand when you already have a lookup table? The following screenshot shows the data and the enumeration members written out already:

Execution results of the query below
This is accomplished by running the following SQL:

SELECT
 Enum = Description + ' = ' + cast(PersonTypeId as varchar(10)) + ', '
FROM PersonType

The only thing you have to do now is copy and paste the output into Visual Studio and surround it with enumeration syntax. Don't forget to get rid of that last comma.
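That last copy, paste, and trim step can be scripted too. As a rough Python sketch (the PersonType rows shown are hypothetical sample data), joining the members instead of appending a comma to each one sidesteps the trailing comma entirely:

```python
# Sketch: turn lookup rows into an enum body with no trailing comma.
# The rows are hypothetical PersonType data, not from a real database.
rows = [(1, "Employee"), (2, "Contractor"), (3, "Vendor")]

# join() puts commas only BETWEEN members, so there is no last comma to remove
members = ",\n".join("    {0} = {1}".format(description, type_id) for (type_id, description) in rows)
enum_code = "public enum PersonType\n{\n" + members + "\n}"
print(enum_code)
```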

Bonus Tip (If you didn't know already that is)

You can display a lot of information about a table by highlighting its name and pressing "Alt + F1". This is by far one of the most useful keyboard shortcuts in SSMS. You can quickly remind yourself of a table's structure or column sizes without having to use the Object Explorer, and it keeps your hands on the keyboard, which is good for productivity.

Example using the Person table:

tSQL Knowledge Milestone

Once again, if you did not know about any of this, I hope it helps you get the mundane portions of your work done more quickly. Honestly, there is no point in writing out really boring repetitive stuff when you don't have to. Explore more ways to work with tSQL concatenation - Microsoft provides a good number of functions for string manipulation in tSQL.


Work smart not hard. These are all tricks I have picked up over the years from other people and things I have discovered on my own. Many of these things are what have compelled me to work on the Simple Class Creator that I produced, because once again, "If there is a pattern - then you can automate it!"

Tuesday, November 18, 2014

TSQL: Passing objects to stored procedures via XML


This is a subject I have heard about repeatedly, but I never bothered looking into it until recently. I was told that writing hard coded insert and update statements is bad practice, to which I responded, "Okay, I am not using an ORM here, so how should I be doing this then?" I was then informed of what I knew already - "Use XML." Sigh... it has finally caught up with me, so I will just learn how to do this then. I am not a big fan of XML because it is bloated and not very friendly to work with regardless of context, which is why I have avoided using it for so long. Traversing XML is always a big to-do as well - no one does it exactly the same way, so why should it follow any basic methodology in tSQL? It doesn't... Looking on the bright side, now I know how to do it. I will not claim to be an expert on the subject, but this article will show you some basics to at least get started.

I don't have time to read all of this crap!

Trust me I understand that feeling and I feel it all too often when trying to solve a problem under the gun, so I will say that if you are trying to pass an object to a stored procedure, then view these sections:
  1. Passing an Object from Code to a Stored Procedure
  2. Consuming the XML

Using XML with tSQL

Whenever you look this subject up, everyone shows you the equivalent of a hello world example - which is all fine and dandy until you need to do some real work. I will show you the Mickey Mouse example, then move on to a REAL and USEFUL example with some additional stuff no one else has covered properly. All of the articles I have read hint at passing your objects as XML to a stored procedure, but no one provides the exact mechanisms to get it done. Well, I am changing that here and now.

Reading in a list of elements (The Mickey Mouse Example)

This example is working with a list of scalars.

DECLARE @XMLList XML = '<list><i>2</i><i>4</i><i>6</i><i>8</i><i>10</i><i>15</i><i>17</i><i>21</i></list>'


SELECT r.c.value('.','int') AS IDs
FROM @XMLList.nodes('/list/i') AS r ( c )

As you can see, a list of integers is being set into an XML parameter. Using the nodes method with its XQuery string parameter, the XML is translated into a rowset, and each row's single column is converted to an integer using the value method. This is as basic as it gets; it is useful to play around with, but not useful if you want to pass a multidimensional object (an object with multiple properties), aka a matrix. This example only shows how to convert a list of scalar values - which is definitely not the case most of the time.

Output in SSMS
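If you want to sanity check this kind of XML outside of SSMS, the same scalar list can be pulled apart with Python's standard library. This is only an analogue for comparison, not part of the tSQL workflow:

```python
# Sketch: read the same <list><i>...</i></list> scalar XML with Python's stdlib,
# mirroring what @XMLList.nodes('/list/i') plus value('.','int') does in tSQL.
import xml.etree.ElementTree as ET

xml_list = "<list><i>2</i><i>4</i><i>6</i><i>8</i><i>10</i><i>15</i><i>17</i><i>21</i></list>"
root = ET.fromstring(xml_list)

# Each <i> element's text is cast to int, like value('.','int')
ids = [int(i.text) for i in root.findall("i")]
print(ids)  # [2, 4, 6, 8, 10, 15, 17, 21]
```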

Passing in an object (More realistic and less "hello worldy")

This example is working with a single matrix (object).

As I pointed out, the previous example is just working with scalars. When dealing with objects you are dealing with a matrix - an object with multiple properties. Therefore this example shows you how to read in XML that represents a simple object.

-- Simple object example
DECLARE @xml XML = '<obj><a>John</a><b>Smith</b><c>10</c></obj>'


-- Method 1: Full path to each value
SELECT
 a = @xml.value('(/obj/a)[1]', 'varchar(10)'),
 b = @xml.value('(/obj/b)[1]', 'varchar(10)'),
 c = @xml.value('(/obj/c)[1]', 'int')

-- Method 2: Selecting the base node
SELECT
 a = r.c.value('(./a)[1]', 'varchar(10)'),
 b = r.c.value('(./b)[1]', 'varchar(10)'),
 c = r.c.value('(./c)[1]', 'int')
FROM @xml.nodes('/obj') AS r(c)

-- Method 3: Individual hits
SELECT r.c.value('.','varchar(10)') FROM @xml.nodes('/obj/a') AS r(c)
SELECT r.c.value('.','varchar(10)') FROM @xml.nodes('/obj/b') AS r(c)
SELECT r.c.value('.','int') FROM @xml.nodes('/obj/c') AS r(c)

The above is demonstrating three different ways to accomplish the same end result (for the most part). Let's explore each of these methods (the SSMS output is below this list):

  1. Full path to each value - Notice there is no FROM clause; this is because the XQuery in the value method points directly to a node using a full path. Also notice the "[1]" after each path - this isn't a mistake, it selects the first element found at that path.
  2. Selecting the base node - Notice that there is a FROM clause this time. The reason for this is to declare the path being queried using the nodes method a single time instead of repeatedly declaring it in the value method. This can be more useful for longer paths. The syntax "r(c)" denotes row [r] and column [c] - you can use any letters you want, some people use x and y - I just think r and c makes more sense.
  3. Individual hits - this is the method used in the Mickey Mouse example above. Notice how inefficient this is with respect to traversing an object - this method requires three queries instead of one, and it returns three separate virtual tables, which isn't terribly useful for querying. This is why showing someone how to select a list of integers isn't an optimal way of explaining how to use XML with tSQL.
Output in SSMS

Passing an Object from Code to a Stored Procedure

Taking the examples shown above into consideration, you can easily perform inserts and updates in a stored procedure without having to declare every single property as a parameter in the procedure's signature (which is tedious, annoying, and pointless work).

Writing your XQueries

Before I move on to passing objects from code into stored procedures, here is a query that will help you reduce the amount of time you spend writing out XQuery strings:

SELECT
 xQueryPath = 'r.c.value(''(./' + column_name + ')[1]'',''' + data_type + '(' + CAST(ISNULL(Character_maximum_length, 0) as varchar(10)) + ')''),'
from YourDatabaseNameHere.information_schema.columns

This query will produce XQueries for each column in every table in your database. The output is just starter code - you still need to tweak it to make it work properly (for example, types without a length, such as int, will come out as "int(0)"). It is still better than writing it all by hand.
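The nested quote escaping in that formula is the hard part to read. For reference, here is what one generated line should look like for a hypothetical First varchar(50) column, sketched in Python so the string building is explicit:

```python
# Sketch: build the same starter XQuery line the tSQL above generates,
# for a hypothetical column named First of type varchar(50).
column_name = "First"
data_type = "varchar"
max_length = 50

x_query_path = "r.c.value('(./" + column_name + ")[1]','" + data_type + "(" + str(max_length) + ")'),"
print(x_query_path)  # r.c.value('(./First)[1]','varchar(50)'),
```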

The Hard Coding Dilemma and Trade Off at the Cost of Speed

Whenever people would argue with me about hard coding my insert and update queries as in-line SQL (assuming the absence of an ORM or generated service layer), they always fell back to saying, "Hey, just use XML." I would always throw back, "Okay, so you want me to hard code XML instead? How is that better?" - to which I always just get a broken and distant look. The point is that you are going to be hard coding something UNLESS you use XML serialization or some other form of custom serialization - all of which inevitably use reflection.

Reflection will ALWAYS make things slower - however, it is far more convenient, almost maintenance free, and safer than hard coding XML. That convenience comes at the cost of a small performance decrease. But if the point here was to stop the taboo of hard coding SQL in code, then you cannot be a hypocrite and hard code XML instead - that's just pointless and stupid.

Code for Executing a Non-Query by passing a Target Object as XML

This is code that I wrote for passing a target object as XML to a target stored procedure, regardless of what that procedure does. The target object is serialized into XML using magic - unless you read the next section to understand how it is actually done.

//Using blocks required
using System.IO;
using System.Xml;
using System.Xml.Serialization;

protected int ExecuteNonQuery<T>(T anyObject, string storedProcedureName)
{
 SQLText = ConstructXmlArguments(anyObject, storedProcedureName);

 //This method is not provided to you in this example - use a basic Execute Non Query method here
 return ExecuteNonQuery();
}

protected int ExecuteInsert<T>(T anyObject, string storedProcedureName)
{
 SQLText = ConstructXmlArguments(anyObject, storedProcedureName);

 //This method is not provided to you in this example - use a basic Execute Insert or Execute Scalar method here
 return ExecuteScalar().ConvertToInt32();
}

//This method is fully functional on its own and does not need to be modified
public static string SerializeToXmlString<T>(T anyObject)
{
 string strXML = string.Empty;

 XmlSerializer ser = new XmlSerializer(typeof(T));

 //Setting the namespace to nothing on purpose
 XmlSerializerNamespaces ns = new XmlSerializerNamespaces();
 ns.Add(string.Empty, string.Empty);

 //Setting the rest of the xml writer settings to remove inconvenient parts for tSQL
 XmlWriterSettings settings = new XmlWriterSettings();
 settings.Indent = false;
 settings.NewLineHandling = NewLineHandling.None;
 settings.OmitXmlDeclaration = true;

 //Write everything into memory
 using (MemoryStream ms = new MemoryStream())
 {
  //Write the XML to memory
  using (XmlWriter xmlWriter = XmlWriter.Create(ms, settings))
  {
   ser.Serialize(xmlWriter, anyObject, ns);
  }

  ms.Position = 0;

  //Read the stream into a string
  using (StreamReader sr = new StreamReader(ms))
  {
   strXML = sr.ReadToEnd();
  }
 }

 return strXML;
}
Please note that this is a partial implementation and that this code was taken from a DAL that I wrote. You need to retrofit it to work with the DAL you intend to use it with; the above code is for the basic idea and reference only.

SerializeToXmlString Method

This method is geared specifically toward serializing an object into XML that is acceptable for use with tSQL. What I mean is that the output does not include (list of gotchas!):
  • The default XML Namespace - it is blank on purpose
  • Removed indents
  • Removed new lines
  • Removed the XML Document tag
If any of the above things were present in the XML you were passing to your stored procedure, it would break at run time. Those things are not allowed, period. So when someone tells you, "Just use XML" - look at them and quiz them on what they are talking about - none of this stuff is obvious.

  • As usual though, if you try to serialize an object that is not serializable, you will end up having a run time error. This is not out of the norm.
  • If someone removes or renames a class property, they have to be conscious enough to know that they must now go alter the accompanying Stored Procedures. This is a definite con when relying on a Serializer. Had the XML been hard coded, this would not matter at all. If the XML is hard coded then:
    • In the case of removing a property - you would not be able to compile the code unless you modified that XML as well - in which case you know you need to update the Stored Procedure(s)
    • In the case of renaming a property - you could rename the property and forget to update the XML without worrying about causing any run time errors. I am not saying this is proper, I am just saying it wouldn't cause a problem.
These are examples of invalid and valid XML for tSQL usage/consumption. Make sure to compare and contrast the two examples to see how they relate to the bullet points listed above. Each one of these details matters when trying to pass or utilize XML in tSQL.

Invalid XML for tSQL Usage (Gotcha!)
This is what the default output would be for a "Bob" object if you did not configure the serialization manually. None of this is okay for tSQL usage: the XML document tag shows up, the namespaces (xmlns:{xxx}) are listed, and there are indentations (tab characters) and new lines (carriage return and line feed characters).

<?xml version="1.0" encoding="utf-16"?>
<Bob xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <Name>Bob the 5th</Name>
</Bob>

Valid XML for tSQL Usage
The main difference to note here is how this example adheres to every bullet point in the list above. The XML is all on one line and it is very simplistic.
<Bob><Name>Bob the 5th</Name><MaryInstance><IsMary>true</IsMary></MaryInstance><MaryInstance2><IsMary>false</IsMary></MaryInstance2><Age>62</Age></Bob>
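For comparison, Python's standard library happens to default to exactly this shape: ElementTree's tostring emits no XML declaration and no indentation unless you ask for them. This is only an analogue of the C# method above, sketched to show the constraints, not a replacement for it:

```python
# Sketch: produce tSQL-friendly single-line XML in Python for comparison.
# ET.tostring emits no declaration and no indentation unless requested.
import xml.etree.ElementTree as ET

bob = ET.Element("Bob")
ET.SubElement(bob, "Name").text = "Bob the 5th"
ET.SubElement(bob, "Age").text = "62"

xml_text = ET.tostring(bob, encoding="unicode")
print(xml_text)  # <Bob><Name>Bob the 5th</Name><Age>62</Age></Bob>
```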

Consuming the XML

Using a similar example to the previous section, we are going to serialize a "Bob" object and insert it into the Bob table. The Bob class was modified to be flat, like a rudimentary DTO with some basic mundane properties. Pay close attention to the DateTime property and how it has to be dealt with; more explanation to follow.

The Bob class (Bob.cs if you will)
This is a very flat class purely for demonstration purposes. An instance of this class is serialized into XML.

public class Bob
{
 public string Name { get; set; }
 public string Occupation { get; set; }
 public string Sex { get; set; }
 public string Mood { get; set; }
 public int Age { get; set; }
 public DateTime CreatedOn { get; set; }
}

Bob Object Serialized
I am showing the Bob object formatted here for readability - no one wants to read this crap on a single line. Just keep in mind that during run time it will be on a single line for all of the reasons listed above. Note the format of the CreatedOn node value. This is what gets passed to the stored procedure.

<Bob>
 <Name>Bob the 5th</Name>
 <Occupation>Fast Food Engineer</Occupation>
 <Sex>None Yet</Sex>
 <Mood>Resentful of others</Mood>
 <Age>62</Age>
 <CreatedOn>2014-11-17T21:41:28.331</CreatedOn>
</Bob>

Bob Insert Stored Procedure
As you can see in the tSQL below, the Bob object is being inserted directly into the Bob table; there are no intermediary transfer parameters. (An update statement would look a little different, I would imagine.) Note how the CreatedOn node is being handled.

CREATE PROCEDURE [dbo].[InsertBob]
 @parameters XML
AS
BEGIN
 INSERT INTO [dbo].[Bob]
 SELECT
  r.c.value('(./Name)[1]','varchar(50)'),
  r.c.value('(./Occupation)[1]','varchar(50)'),
  r.c.value('(./Sex)[1]','varchar(50)'),
  r.c.value('(./Mood)[1]','varchar(50)'),
  r.c.value('(./Age)[1]','int'),
  CAST(r.c.value('(./CreatedOn)[1]','varchar(35)') as datetime2)
 FROM @parameters.nodes('/Bob') AS r(c)
END

XML to tSQL Gotchas!

There are two very important things to note when working with XML and tSQL (that I have found to be very obscure so far):
  • Casing matters! When you are writing your XQueries, the node names are case sensitive. If your class properties begin with an upper case letter, then your node names are going to begin with an upper case letter, and so on. If you run into a weird parsing error where a column cannot be found, it is more than likely because of the casing.
  • There is no simple way to cast an XML serialized DateTime node value into a datetime2 sql parameter - you must do as I have shown above, which is basically a double cast: first extract the value as a varchar(35), then cast that string to a datetime2. I have looked into changing how the XmlSerializer formats DateTime objects, but it is far more trouble than it is worth - this is a much easier approach.


Yeah... so "Just use XML" huh? So simple... I appreciate the journey that I went through learning about all of the trouble of just using XML, but it does feel like a bit much just to avoid using in-line SQL. The good news is once you are finished writing the code, it isn't a big deal anymore, but this took a few hours for me to understand (hence the article as documentation). I cannot say this was simple. Lastly there is definitely the trade off of using an XML Serializer.

I honestly don't think in-line SQL is too bad; I think it gets a bad rap because people just don't implement it properly. It depends on the usage scenario. For example, I think SELECT statements should always live in stored procedures regardless of their simplicity - but if a SELECT statement is being constructed on the fly using Dynamic SQL in a stored procedure, I would rather do that in code, depending on the complexity, because it is much easier to work with in code than in a stored procedure. Not to mention that business logic in stored procedures is a worse taboo than in-line SQL. The reason is simple: now you have to maintain logic in two places instead of one - this is NEVER okay and leads to bad programming practices and spaghetti code.

Dynamic SQL in stored procedures is abused horribly by a lot of developers. I have seen some very bad practices in stored procedures, mostly involving a lot of copying and pasting of queries with minor differences in the where clause, or joining on the dogs table if the cat bit is set to zero or null. My point is that Dynamic SQL in stored procedures is more of a detriment than a help if you have the wrong people working with it. Any time someone introduces duplication, they are writing bad code, aka spaghetti code. If you are having a very hard time representing something in SQL, you should probably consider NOT writing it in SQL and just doing it in code instead... you know, where the business logic lives.

People like to argue that you should use parameterized queries in code instead of in-line SQL and I strongly disagree. The only attractiveness that parameterized queries have is protection against SQL Injection - other than that - the amount of overhead of writing the code for this is ridiculous and I would rather not. This is not me being lazy, this is me saying I would rather just use a stored procedure if I am going to go through the trouble of writing one in code, which is basically what parameterized queries are. Other than protection against SQL Injection, the only usefulness I have found in parameterized queries is the insert and update operations for blobs (byte arrays). That's it. Frankly that is all I will use it for - other than that it is inflexible and overkill to use.

If your code is not written in a way that protects you from SQL Injection attacks, then you deserve to be hacked and lose your data. Stop being lazy and do it right.

Monday, November 17, 2014

Dyslexic Apps Updated

I have updated the Dyslexic Apps page with all of the software that I have available on Google Code. Compiled versions of the executables and/or installers are available via Dropbox.

Check it out here

Saturday, October 25, 2014

PlanetSide 2 All Access Members Station Cash Trip Wire

SOE All Access Gotcha

SOE's Middle Finger to you - the customer (view link below)
Membership Link

FAQ Questions 9 - 12 in Graphic above

This post is more of a good-to-know for people who pay for their PlanetSide 2 account. You would think, with all of the damn rewards that they offer you to be a member, they would automatically load your 500 SC directly into your account like they used to do when I first signed up, but some jaggoff (or group of jaggoffs, I am sure) decided that members have to actually sign in to get their SC. That's bullcrap as far as I am concerned. SOE apparently doesn't care about its adult professional players/gamers out there, because I sometimes go 3 or more months without playing any video games when I am busy working on a huge project at work. I'm lucky if I can sleep, let alone log into my PS2 account so that I can click on a "claim" button.

It wasn't apparent to me that my SC balance hadn't increased until I actually had time to play again, and by then it had been about 4 months. SOE screwed me out of 2000 SC that I PAID for because they decided to change their account policies.

This is equivalent to telling an employee that unless they ask for their paycheck each month, they won't get one - even though they are signed up for direct deposit. To make matters worse, in this case you pay your employer to work there.

Therefore I have every right to point at SOE and call them a bunch of assholes for pulling this. You guys are dicks. Thanks for the so called encouragement to play more, but no thanks - I will play the goddamn game when I feel like it BECAUSE IT IS A VIDEO GAME NOT A JOB.

This is the first time I have paid for membership for a game. Granted I love playing the game, but I do not think a PAYING member should be CHASTISED and PUNISHED for not playing the game. That is a load of crap that some idiot in SOE's business department came up with to increase profit margins. That's dirty.

Sunday, October 5, 2014

Recycle Application Pool using WMI via Powershell


I handle the builds for a particular piece of software at my place of work. I am responsible for deploying any new versions of code to all environments; including but not limited to: Development (Dev), Quality Assurance (QA), Staging aka User Acceptance Testing (UAT) and most importantly Production (Prod). When I was given this responsibility there was no existing structure or procedure for getting the job done. A build could take an hour or longer to execute because files were being copied in the worst possible way imaginable - by hand... In this day and age there is no reason for anyone, let alone a developer, to be doing any type of repetitive zero brain function task manually. Therefore I did what I always do when I see a pattern and a way to make it faster, I wrote a program. What used to take over an hour to do I can now do in less than a minute. I'm not exaggerating.

Windows Management Instrumentation (WMI)

The goal here for me was to copy a published version of code to multiple destinations and then recycle the application pool for each site. I never had to do remote IIS operations before - enter WMI. WMI, from what I understand, is both loved and hated by all. It can be buggy and sometimes it works just fine. In my particular case it worked wonderfully and continues to work very well to this day. If you want to read up on it take a look here.


  1. WMI is only as powerful as what you have access to. Just like you need permissions to alter files and folders on a remote system via UNC file paths, this is no different: WMI will do nothing for you if you do not have the proper access on the target system.
  2. WMI must be enabled on your system (host running the script) and the target system (web server) in order for WMI to work. WMI runs as a service.
  3. The script I am sharing will only work with IIS7 and greater. IIS6 uses something different and older obviously. I strongly suggest moving to IIS7.5 at this point. For IIS6 view the example here.

Powershell Script Functions

I have three functions to share and they are as follows:
  1. GetApplicationPools - literally get all application pools on a target web server
  2. RecycleApplicationPools - recycle a specific list of application pools on a target web server
  3. ApplicationPoolStateDescription - translate the application pool state unsigned integer into a human readable string
You will see below that the Authentication argument is hardcoded to the magic number 6. I don't accept magic numbers, but in this case I am giving it a pass since this is a script. The 6 apparently represents PacketPrivacy (see the documentation for Get-WmiObject -Authentication). The credentials used for the WMI execution are purely based on who is running the script: if you start Powershell as an Administrator, then that session will have Administrative privileges and your identity will be passed along for all commands executed.

Get All Available Application Pools

#Get all application pool names for the specified server name
#Returns the names as an Array
function GetApplicationPools
{
 param([string]$server)

 $lst = Get-WmiObject -Namespace 'root\webadministration' `
      -Class ApplicationPool `
      -ComputerName $server `
      -Authentication 6
 $arrNames = @();

 #optional loop for additional filtering
 foreach($p in $lst)
 {
  #if($p.Name.ToLower().Contains("pattern") -eq $true)
  $arrNames = $arrNames + @($p.Name)
 }

 return $arrNames
}

Recycle Specified List of Application Pools

#Recycle a specific list of application pools on the specified server
function RecycleApplicationPools
{
 param([string]$server, [Array]$poolNames)

 if($poolNames.Length -gt 0)
 {
  (write-host ("Restarting Application Pools") -foregroundcolor "yellow" -backgroundcolor "black")

  [string]$strState = [string]::Empty;
  $pool = $null

  foreach($p in $poolNames)
  {
   #Using ApplicationPool WMI object
   $pool = Get-WmiObject -Namespace 'root\webadministration' `
        -Class ApplicationPool `
        -ComputerName $server `
        -Authentication 6 `
        -Filter ('Name="' + $p + '"')

   #Recycle the pool, then report its current state
   $pool.Recycle() | Out-Null

   $strState = ApplicationPoolStateDescription -state $pool.GetState().ReturnValue
   (write-host ("`t" + $pool.Name + " = " + $strState) -foregroundcolor "cyan" -backgroundcolor "black")
  }
 }
 else
 {
  (write-host ("No application pools were restarted because the provided list was empty") -foregroundcolor "red" -backgroundcolor "black")
 }
}

Translate Application Pool State Unsigned Integer into Human Readable String

#Get the application pool's state's description
#Converts UInt to a string equivalent
#Returns the string
function ApplicationPoolStateDescription
{
 param([uint32]$state)

 [string]$strState = [string]::Empty;

 #GetState() Method's output as integer to string
 switch($state)
 {
  0 { $strState = "starting" }
  1 { $strState = "started" }
  2 { $strState = "stopping" }
  3 { $strState = "stopped" }
  4 { $strState = "unknown" }
  default { $strState = "unknown2: " + $state.ToString() }
 }

 return ,$strState;
}


Don't work harder than you have to. If you can automate a repetitive process, you should - doing it manually over and over gains you nothing but muscle memory and a large loss of time. My deployments take less than a minute now. Work smart - not hard.


Sunday, August 24, 2014

If you use Blogger with Google+ for comments you will lose the ability to review comments

This is just a quick FYI - I have turned off Google+ for my blog because I cannot moderate the comments properly using Google+. I was hoping that Google would fix this sooner rather than later and it seems like it isn't a priority for them. The reason this is a problem for me is that when someone leaves me a comment in Google+ I have no idea unless I am looking at my Google+ feed. I never look at my feed because I don't like social media platforms.

The old Blogger comments system lets me know if I have comments pending and I can review them all in one place without having to go to each post individually. With Google+ I have to visit each of my posts individually to find unanswered/unreviewed comments; I refuse to fish for my pending comments.

Anyhow - sorry for any inconvenience this may have caused for people who have left Google+ comments in my past posts, but I cannot continue using Google+ until Google wises up and provides me a proper way to review comments. I was reading that there is a way to make each comment system run side by side, but I need more time to look into this claim.


Saturday, August 23, 2014

How I got Trovi on my computer and why you should never install CDisplayEx


I wanted to view some comics a friend recently gave me. If you haven't heard of Berserk, you should go check it out. The anime does the manga justice, but the manga, just like all original sources, is way better because it doesn't leave anything out. Anyhow, getting to the problem: in all of my excitement to start reading the manga, I downloaded the first comic viewer I found because I vaguely remembered the name, and I never would have suspected what happened next.


Before I start ranting, I just need to say that I almost never present opinion or uncertainties on my blog because I don't like putting misinformation out on the internet. I am slamming CDisplayEx because the installer offered on their site gave me malware. Not just one piece of malware - about 20-25 pieces (objects) of it landed on my machine, and I never agreed to any of it while installing the software. I am usually very careful about this kind of thing and I don't install a lot of programs without recommendation or research... usually. So I will put my foot in my mouth right now: the one time I let my guard down, because I was excited to read my manga, I got hit with a program acting as a Trojan horse for 20-25 pieces of malware.

Hence I am telling anyone reading this right now to NOT INSTALL CDisplayEx EVER - it harbors bullshit in it. If you check out the site I mentioned above, there is no mention anywhere that you are installing a bunch of other crap with CDisplayEx. The EULA says you are taking the program as-is; usually that means just that program - not that you are going to get hit with a bunch of malware.

Before I ran Malwarebytes, I uninstalled Mezza and Search Protect manually from my PC. I got hit with the following list of malware, which luckily I was able to remove using Malwarebytes. Thank goodness for free antivirus scans and removal tools - and shame on Microsoft's native antivirus, Windows Defender, for failing horribly on Windows 8. I ran a scan with Windows Defender and it found nothing! One of those pieces of malware was a search engine hijacker named Trovi. I will get into why this is bad news for you if you use Chrome.
Registry Keys: 2
  1. PUP.Optional.SearchProtect.A, HKU\S-1-5-21-3800025958-4139502185-2390362187-1001-{ED1FC765-E35E-4C3D-BF15-2C2B11260CE4}-0\SOFTWARE\MICROSOFT\INTERNET EXPLORER\SEARCHSCOPES\{014DB5FA-EAFB-4592-A95B-F44D3EE87FA9}, , [03992d9c95e6da5c83c63b35768c2dd3], 
  2. PUP.Optional.Mezza, HKLM\SYSTEM\CURRENTCONTROLSET\CONTROL\SAFEBOOT\NETWORK\MZA, , [712b52775526d85e260bf5f24bb757a9], 
Folders: 4
  1. PUP.Optional.OpenCandy, C:\Users\[UserName]\AppData\Roaming\OpenCandy, , [25776267c8b34de9c0231aa27989619f], 
  2. PUP.Optional.OpenCandy, C:\Users\[UserName]\AppData\Roaming\OpenCandy\F2DCE73D13D54943ACEEE0596DA783E6, , [25776267c8b34de9c0231aa27989619f], 
  3. PUP.Optional.Extutil.A, C:\Users\[UserName]\AppData\Local\Temp\D7ADFCCA-EE7E-442C-9999-C4D14FEF360B, , [861620a94f2c231331dd0dc836cc817f], 
  4. PUP.Optional.Managera.A, C:\Users\[UserName]\AppData\Local\Temp\38fdaae5-8e0e-493c-88ec-e05c3be06e42, , [f6a6d1f8a7d4e254729d1cb99b6728d8], 
Files: 13
  1. PUP.Optional.OpenCandy.A, C:\Users\[UserName]\AppData\Roaming\OpenCandy\F2DCE73D13D54943ACEEE0596DA783E6\dlm.exe, , [c8d49732f68523134220ce5d748deb15], 
  2. PUP.Optional.Mezza, C:\Users\[UserName]\AppData\Roaming\OpenCandy\F2DCE73D13D54943ACEEE0596DA783E6\MZAAppSetupx30001.exe, , [712b488196e596a036748f1ae41d6997], 
  3. PUP.Optional.SearchProtect.A, C:\Users\[UserName]\AppData\Local\Temp\nsjC834.tmp, , [1785e6e3b8c30b2bf5865e3a09f8e31d], 
  4. PUP.Optional.Conduit.A, C:\Users\[UserName]\AppData\Local\Temp\nskFDC0.exe, , [316b0cbd1b605adc90c4b9d5d829cc34], 
  5. PUP.Optional.Conduit.A, C:\Users\[UserName]\AppData\Local\Temp\nsl11C.exe, , [5d3f3e8b82f9e3532f25e7a71ae7b749], 
  6. PUP.Optional.Conduit.A, C:\Users\[UserName]\AppData\Local\Temp\nsrDD92.exe, , [c5d7ad1ce3980e281044e0aec23fee12], 
  7. PUP.Optional.Conduit.A, C:\Users\[UserName]\AppData\Local\Temp\nst949F.exe, , [aeee329795e65ed8490b7f0f3ac7e11f], 
  8. PUP.Optional.Conduit.A, C:\Users\[UserName]\AppData\Local\Temp\nsvDAF1.exe, , [fba10ebb7efd82b4aba9eea040c1c43c], 
  9. PUP.Optional.Extutil.A, C:\Users\[UserName]\AppData\Local\Temp\D7ADFCCA-EE7E-442C-9999-C4D14FEF360B\bk.js, , [861620a94f2c231331dd0dc836cc817f], 
  10. PUP.Optional.Extutil.A, C:\Users\[UserName]\AppData\Local\Temp\D7ADFCCA-EE7E-442C-9999-C4D14FEF360B\cs.js, , [861620a94f2c231331dd0dc836cc817f], 
  11. PUP.Optional.Extutil.A, C:\Users\[UserName]\AppData\Local\Temp\D7ADFCCA-EE7E-442C-9999-C4D14FEF360B\manifest.json, , [861620a94f2c231331dd0dc836cc817f], 
  12. PUP.Optional.Managera.A, C:\Users\[UserName]\AppData\Local\Temp\38fdaae5-8e0e-493c-88ec-e05c3be06e42\cs.js, , [f6a6d1f8a7d4e254729d1cb99b6728d8], 
  13. PUP.Optional.Managera.A, C:\Users\[UserName]\AppData\Local\Temp\38fdaae5-8e0e-493c-88ec-e05c3be06e42\manifest.json, , [f6a6d1f8a7d4e254729d1cb99b6728d8], 

How does this affect Chrome?

I have Chrome installed on all of my computers, which is about five PCs right now. Two of those PCs are my work PCs. Imagine my psychotic rage when I found a browser hijacker on my work PC. I don't take risks with my work machine EVER. How the hell did Trovi find its way onto my work PC? Well, the one thing that I LOVE about Chrome - syncing my settings across machines - is a double-edged sword in this situation. My default search engine was switched to Trovi on all Chrome instances on all five PCs. This made it ultra confusing to find the culprit, which turned out to be my Windows 8 Surface Pro 2, because that is where I installed that nefarious piece of software, if it can even be called software now, CDisplayEx.

I had a different experience removing Trovi from each PC. I wasn't recording what I was doing while panicking to remove this pest from my machines; I was more concerned about eliminating the threat than writing a blog article. In the end it pretty much came down to one thing:
  1. Uninstall Chrome completely
  2. When it asks you if you want to delete your data, to be on the safe side say Yes
  3. Reinstall Chrome - reconnect your Google account etc etc etc
This helped me get rid of Trovi. Unfortunately with viruses and malware, you are never 100% sure if you got rid of it. Just because Malwarebytes and Windows Defender are saying that I don't have any threats detected doesn't mean that something isn't still lurking around.

Your host file is altered without your permission

This may not mean much to a lot of people, but it is a big deal if you actually use your hosts file. I use mine heavily, and the only reason I even noticed this happened was because I have my hosts file open on my work PC all of the time. What is scary is that this happened on a PC that only had its search engine changed to Trovi, with Chrome as the delivery method. That's pretty screwed up. I am still scratching my head on that one.

Anyhow, if you want to check your hosts file, go to this directory:
C:\Windows\System32\drivers\etc      - the file is named "hosts" with no extension

I had two different experiences:
  1. My hosts file was deleted outright
  2. My hosts file was backed up for me (gee thanks!) - which is still kind of like being deleted

Actions taken

This pissed me off enough to report these bastards to the FBI. Now I know you had to do a double take at what you just read: yes, you can submit a complaint to the IC3 in order to report internet and cyber crimes. This definitely falls under that category because programs were installed on my computer with malicious intent and without my consent. The IC3 works - I have reported sites before for different reasons and had bad sites investigated and eventually shut down. Let's see how far I can take this.

Update 11/14/2015

I was looking for a new comic book reader to use and I stumbled across this article by LifeHacker:

I am adding this to my post to defend my position on CDisplayEx and why you shouldn't install it. Call this reinforcement of my argument. I don't care if it was a good program, the point is it isn't anymore because of the Malware that is obviously present.

Here is an excerpt from the article dated 12/28/14 8:00am:
Earlier this week we asked you to tell us which comic book readers you thought were the best, since our previous picks were getting a little out of date (and our previous champion, CDisplayEx, apparently is bundled with a boatload of malware that many of you have written in to complain about.) You offered up tons of great nominations—and defenses of CDisplayEx—but we only have room for your top five. Here they are, in no particular order:
Therefore, once again, those of you who want to criticize this post are just being belligerent, because I am presenting facts, not opinion. In other words, even if you disagree with me, you are wrong.


Don't just install programs that look okay, even when you are excited about reading your favorite manga of all time.

Friday, June 13, 2014

Service Unavailable: HTTP Error 503. The service is unavailable. - Your Application + SSRS + Reports folder = obscure mess


Wow... another error message that tells you NOTHING!
This error message reads:
Service Unavailable: HTTP Error 503. The service is unavailable.
I need to start off by saying that 503 errors can happen for a potpourri of reasons. This is by no means a silver bullet or an all-encompassing solution; it is for a very specific scenario which is vague, frustrating, infuriating and both an obscure problem and a gotcha! This error is going into my "Application Errors Hall of Shame". When you have a 503 error, chances are your application isn't even going to break on a breakpoint. Sometimes this is due to code blowing up prematurely before Page_Load is even reached; other times it can be Windows taking a crap, in which case a restart can fix it. Then there are problems like this one.

I was scouring Google for an answer to this problem when I found a striking similarity between my problem and a post I found. We had the exact same problem, and the folder name involved was not a coincidence - it was too exact. On to the stupid problem!

The Stupid Problem

When I would try to access my reports folder from my application I was met with the above 503 error for no obvious reason I could think of. Example: http://myApplication/Reports/someReport.aspx
Keep your focus on the fact that I have a folder named "Reports". I tried the following until I found the solution:

  • Added a dummy file (test.txt) to the directory and tried to access it - did not work
  • Reset permissions on the folder - did not work
  • Removed site and application pool from IIS - did not work
  • Made a copy of the directory and changed the name - it worked??? Why? This indicates that there is a conflict on that virtual directory. When I checked IIS, I did not see this directory listed anywhere. WTF?

The Jaw Dropping Answer to the Stupid Problem

So... Do you have SSRS installed on the machine where you are having this problem? If you do have SSRS installed on your machine, move to the next section. If you do not - I can't promise I have an answer for you, but I can give you some hints - go to the last section.

I have SSRS installed

Oh, you are in for a treat. It turns out that SSRS's default virtual directory is also named "Reports". If you have the SSRS service turned off, you will get a 503 error. If you do not have it turned off, you will be prompted to log in - which should be confusing, because it is out of place. Therefore, in order to fix this specific 503 error, you must do the following:
  1. Open up the "Reporting Services Configuration Manager". You can navigate here via: "Start > Microsoft SQL Server 2008 R2 > Configuration Tools" - I'm assuming you are using SQL Server 2008 R2, even if you are not, it should be the same idea. [Figure 1]
  2. Turn on the SSRS service if it is not started already. This is temporary, you can turn it off again when finished.
  3. Click on "Report Manager URL" on the side bar. Sometimes it takes a little while to load, just wait. [Figure 2]
  4. Take this moment to be upset, because I know I was - you will see that the Report Manager URL clearly says it is "Reports". This is not a coincidence; this is what is causing the conflict, regardless of whether the service is running or not. [Figure 2]
  5. Change the name to something else. I changed mine to "ReportsForMfingSSRS_StupidConflict" - that way this never happens again.
  6. Optional - stop the service again. I keep mine off for the most part.
  7. Rejoice
Figure 1: Opening up the Reporting Services Configuration Manager
Figure 2: SSRS Virtual Directory is named Reports!

I do not have SSRS installed

If you do not have SSRS installed then your problem is being caused by something else obviously. I have some suggestions, but once again these suggestions may not work - but it is worth a shot:
  1. Put a break point in your Global.asax.cs on the "Application_Error" event - this can sometimes help. This event is triggered when you do not handle your exceptions on your page; this is when you would get a "Yellow Screen of Death". If you never reach this point, either the exception is being handled or your application is never being hit, which means there is an external problem such as the above example with SSRS. There is interference.
  2. An internal Windows service has stopped running properly and is causing bogus ass problems. Restart your computer. If that doesn't work then it could still be code.
  3. Referring to the page that is giving you a problem, look at your global variables on that page. If you have global variables being instantiated outside of the "Page_Load" event or another method, then those objects could be the cause of your problems. I had a problem where the constructor of one of my objects, which happened to be a global variable, was blowing up on instantiation. This wouldn't even let me get to Page_Load; I would just get the 503 error. However - my Global.asax.cs would catch the error! Go to suggestion 1 for more debugging.
If none of those suggestions help, then you need to keep digging - I'm sorry. Just try to be patient and work through it. The SSRS bug took me 3 hours to figure out.


My search brought up a lot of articles, but really only one post worked for me. The answer is at the bottom and there were no instructions.

Sunday, May 25, 2014

Simple Class Creator and DTO Generator


A long time ago, around the middle to end of 2012, I decided to create a console application that would help me get very mundane, repetitive work out of the way. I dubbed it the "Simple Class Creator" - it would simply take a query or table name and turn it into a concrete class. Nothing more, nothing less. I tried submitting my code and article to Code Project, but it was rejected for being too awesome to handle. I went on to just ignore the Code Project administrators and do my own thing as I usually do. I published my code to (update 11/14/2015, moved to, but I hadn't updated the SVN repository in a very long time. I finally got around to it now.


New Link for code:

Simple Class Creator

I have updated the Simple Class Creator tremendously. It looks nothing like it used to, and I gave it a GUI written in WPF. I made it as flexible as possible, but it still needs work obviously. I only work on it when I get a chance to - usually out of need. However, my offer still stands: if anyone needs a tool like what I am providing, by all means have at it. The code is out there, do whatever you want with it.

I want to update it more to remove some of the stub code, but like I said before I just haven't had a chance.

DTO Generator

I ran into an annoying problem recently where I found myself having to write Data Transfer Objects (DTOs) by hand. This, just like the Simple Class Creator, was a feature I created to avoid mundane, repetitive, fruitless work which can be super time consuming - especially if you have a class with over 100 properties in it. So just like everything else in my career, when I don't like something, I fix it. I tried using AutoMapper, but the performance hit was huuuuuuge and it didn't generate DTOs for me, which is what I really needed. So nothing against AutoMapper, which is a great tool, but it just wouldn't work for me speed wise.

The code is far from finished, but the core concept is indeed finished and works very well. Simply put, you provide a target dll, you provide the fully qualified path to the class that you want to generate a DTO for and hit the Generate button.

What license are you using?

I am using a standard issue IDGAFF Copy Left CPOL FUintheA license. In other words, I couldn't care less. If you want to use my code, go ahead, but please cite me at the very least so I get some credit.

WCF Security: Requiring Authorization to lock down your Exposed Services


WCF Security is definitely not something that can be picked up easily. It isn't plug and play or something that can simply be turned on or off, and it most definitely is not easy to understand or programmer friendly. I have been banging my head against a wall trying to understand WCF Security for a long time now, with many unsuccessful attempts. I am stubborn though, and I keep coming back to it whether I get a black eye or not. When something becomes necessary to understand, the urgency goes up; it doesn't matter how many times you get a black eye - you figure it out. Sink or swim. In other words, I struggled with this topic a lot and I finally figured it out. My first mistake was that I was going about the topic all wrong. You need to ask yourself a very important question when you are working on service security in general: what are you trying to achieve? Do you want authentication, authorization or both?


To make sure we are all on the same page, you have to make sure you know what the difference is between authentication and authorization.
  • Authorization - giving permission or authority. Are you allowed to be here? Yes, then welcome - No, then hit the road jack, come back when you are given permission.
  • Authentication - confirming the truth of an attribute of a single piece of data or entity. Are you who you say you are? Let's check your username and password. If it checks out, then come on in. If it does not check out, halt - you are not allowed any further!
Once you understand the difference between those two aspects, you need to take a step back and realize that maybe you were looking at the problem incorrectly - trying to do both at the same time when they are completely different concerns. When I realized this was my situation, I started phrasing my Google searches differently, because googling for "WCF Security" gives you tremendously different results from "WCF Authorization".

Inevitably you will see the term "Security Certificate" - don't worry, we don't need that here, although it is important and recommended that you use SSL when you can. On your own this is more difficult to deal with because, assuming you are using IIS, you need to set up IIS properly to use your certificate - this alone impacts which WCF endpoint bindings you can use, for example BasicHttp (HTTP) vs. WSHttp (HTTPS).

After discovering all of this I realized I needed to implement Authorization into my WCF Service, it made perfect sense for my needs. My scenario is simple: I have internal Web Services (WS) that need to talk to each other - no third parties involved here. If you need to deal with third parties, then this solution is going to be too rigid, however with some creativity you can make this work for you.

Let's work with Authorization


This may be a bit much to take in at first, but just keep an open mind and read carefully and try your best to understand along the way. There are several steps involved with getting Authorization to work and this requires cooperation from the client and from the server (which is why working with third parties would be difficult). Those steps are:
  1. Build your Custom Message Header - these message headers are NOT to be confused with Http Headers, they are specifically SOAP message headers when using the basicHttpBinding or wsHttpBinding. This introduces a problem that I will explain below.
  2. Build your Message Inspectors
    1. Dispatch Message Inspector: IDispatchMessageInspector
    2. Client Message Inspector: IClientMessageInspector
  3. Build your Custom Behavior
    1. Behavior Attribute itself - wcf endpoint behavior definition
    2. Behavior Extension Element - required for app/web config, let's assume web.config moving forward
  4. Modify your server side web.config
  5. Modify your client side web.config
This may seem like a lot of steps, but really it isn't. The good news is that once you implement the code the first time, all you have to do in the future is modify your configs - which is pretty simple as well. I strongly recommend putting all of this code into a single project. This project will be referenced by both the server and client sides. That way you are 100% ensured uniformity, especially if you have to modify the code - there is absolutely no reason to implement this code more than once.

Build your Custom Message Header

As mentioned earlier, these are SOAP message headers, not to be confused with HTTP headers - they are not the same! The message header in question here is what will actually carry the authorization arguments from the client over to the server. The client will inspect the outgoing message before it is sent and inject this message header into the message. The server will inspect each incoming message for 4 parts:
  1. Header Name - the name of the message header
  2. Header Namespace - the namespace of the message header
  3. Token Name - the name of our authorization node
  4. Token Value - the value of the authorization node
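The server-side check boils down to comparing those four parts against what the server expects. Here is a minimal illustrative sketch of that comparison in Python - the real implementation is C#/WCF, and every name and value below is invented for this example:

```python
# Illustrative sketch only: the real inspector is C#/WCF; all names and
# values here are made up. The server rejects a request unless all four
# parts of the incoming authorization header match its configured values.
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthorizationHeader:
    name: str         # header name
    namespace: str    # header namespace
    token_name: str   # name of the authorization node
    token_value: str  # value of the authorization node

def is_authorized(incoming, expected):
    # All four parts must match exactly; any mismatch means the caller
    # gets the equivalent of an UnauthorizedAccessException.
    return incoming == expected

expected = AuthorizationHeader("hX91q", "urn:zK44p", "tQ7w", "vN3rT9")
print(is_authorized(AuthorizationHeader("hX91q", "urn:zK44p", "tQ7w", "vN3rT9"), expected))  # True
print(is_authorized(AuthorizationHeader("hX91q", "urn:zK44p", "tQ7w", "wrong"), expected))   # False
```

Making the names resemble passwords, as described below, costs nothing here because the comparison never cares what the strings look like.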
The actual Custom Message Header is a class you have to write, and this class must inherit from the MessageHeader class. I have named my class AuthorizationHeader. You have to override a few elements as demonstrated in the code. The most important thing to keep in mind here is serializing the message contents in the OnWriteHeaderContents(...) method. For my design I made the header name, header namespace, token name and token value all dynamic. I did this for two reasons:
  1. I did not want to hardcode any strings into this class so it will remain fully configurable from the web.config
  2. Just in case the values of the aforementioned properties became compromised, they could be changed immediately without having to recompile anything
I am extra paranoid about people getting values through packet sniffers and other means, so I figured instead of making the header properties and values human readable why not make them all resemble passwords? Here is an example of a Message Header:

Example of a SOAP message's message headers section containing a custom message header
Please note that the Token Name and Value are properties I came up with - this is not required and they can be named anything else, but for my design I called it a Token. This token is the contents of the message. More contents could have been added, but since this has to be checked on both ends for every message, keeping it small is better.

Code: AuthorizationHeader.cs

Build your Message Inspectors

The hardest part was building the Custom Message Header. Believe it or not this part is very easy. Your message inspectors are what are going to:
  1. Inject the authorization Header into each outgoing message. This is done by the Client message inspector using the "BeforeSendRequest(...)" method.
  2. Inspect each incoming message to make sure it contains an Authorization Header with the 4 appropriate aforementioned values. This is done by the Dispatch (server) message inspector using the "AfterReceiveRequest(...)" method. If those values exist and match, then the requester is authorized; if they do not exist or do not match, then the requester is not authorized and will be issued an UnauthorizedAccessException.
Example of an end to end transaction involving a message inspector
In the above example we are assuming that there is a web method in this WCF Service (I am calling it the AwesomeService) that will add numbers via the "AddNumbers(...)" method. However we don't want to let anyone add numbers, only consumers with the proper Authorization are allowed to do this.

For the following three sections I recommend looking at the code while reading along:

Dispatch Message Inspector

The Dispatch Message Inspector can inspect messages after arrival (AfterReceiveRequest(...)) and before sending a response (BeforeSendReply(...)). For my implementation purposes I only defined the "AfterReceiveRequest" method, as that is all I required at the moment. The general idea for this method is simple and can be described in the following questions:

  1. Is the expected Authorization Header present?
  2. Can the header be deserialized from XML into an AuthorizationHeader object?
  3. Does the Authorization Token match what the server is expecting?

Message Inspectors and REST

One of the problems I ran into here was that I was trying to satisfy REST calls and SOAP calls. This was very tricky, and at that point it was very clear why WCF is not the greatest platform for REST. The message inspectors and message headers are geared towards non-webHttpBinding usage. In other words - how do you append a message header in a REST call? Let me be clear - I am not talking about HTTP headers, I am talking about Message Headers; in the context of the basicHttpBinding or wsHttpBinding that means a SOAP header. You can't use the Client Message Inspector for REST; it seems to only be triggered for bindings other than webHttpBinding.

Therefore, in order to combat this problem, I came up with a not so wonderful way of handling it. I cannot condone what I did, but it works. Once again, I won't say this is the right way to do it - it is just a workaround. In order to get this to work with REST I had to use the query string in an unsavoury manner. I came up with an arbitrary scheme for passing the AuthorizationHeader information over to the message inspector, and it looks like this:

So basically the scheme I made up is: ?AuthorizationHeaderName=AuthorizationHeaderNameSpace&AuthorizationToken=AuthorizationTokenValue

This is picked up by the message inspector first, if it is present then it is evaluated, if not then the SOAP headers are inspected following the same rules as usual.
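To make the workaround concrete, here is a hedged sketch in Python of how a dispatch-side check could parse that query-string scheme; the real code is C#/WCF, and the URL and all names below are invented:

```python
# Illustrative sketch only (the actual implementation is C#/WCF).
# Per the scheme above, the header name and token name are used as
# query-string keys; the namespace and token value are their values.
from urllib.parse import urlparse, parse_qs

def rest_request_is_authorized(url, header_name, header_namespace,
                               token_name, token_value):
    query = parse_qs(urlparse(url).query)
    # Both expected keys must be present with exactly the expected values
    return (query.get(header_name) == [header_namespace]
            and query.get(token_name) == [token_value])

# Hypothetical service URL following the scheme:
url = "http://myserver/AwesomeService/AddNumbers?hX91q=urn:zK44p&tQ7w=vN3rT9"
print(rest_request_is_authorized(url, "hX91q", "urn:zK44p", "tQ7w", "vN3rT9"))  # True
```

If the expected keys are absent, the inspector falls back to checking the SOAP headers as usual.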

Client Message Inspector

The Client Message Inspector can inspect messages before sending a request (BeforeSendRequest(...)) and after receiving a reply (AfterReceiveReply(...)). For my implementation purposes I only defined the "BeforeSendRequest" method, as that is all I required at the moment. The general idea here is very straight forward and obvious:

  1. Create a new AuthorizationHeader with the appropriate AuthorizationTokenValue
  2. Add this AuthorizationHeader to the current request's Headers collection
There isn't much more to the client message inspector than that.

Build your Custom Behavior

This is a more interesting step to implement - even though it is literally a copy/paste step, you are actually creating an avenue for implementing your Message Inspector via the web.config. This is done by defining a behavior which can be implemented by your WCF Service. This behavior is required by both the server side and client side because this is what causes the message inspection to actually happen, hence it being a behavior. Your client and server will behave according to the Message Inspector definitions set for each inspector respectively.

Behavior Attribute

The Behavior Attribute implements the IEndpointBehavior interface, which has four methods, but we are only interested in two of the four:
  • ApplyClientBehavior - use the client message inspector logic (BeforeSendRequest)
  • ApplyDispatchBehavior - use the dispatch message inspector logic (AfterReceiveRequest)
These behaviors are triggered depending on where they are registered. There are examples below, but before we get to the examples there is just one more piece to this puzzle and that is one more copy/paste step which is the Behavior Extension Element.
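
Wired together, the behavior attribute looks roughly like this - treat it as a skeleton rather than the exact source, and note that the two inspector class names are placeholders for your own inspector implementations:

```csharp
using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class AuthorizationBehavior : Attribute, IEndpointBehavior
{
    // Client side: register the client message inspector (BeforeSendRequest).
    public void ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new AuthorizationClientMessageInspector());
    }

    // Server side: register the dispatch message inspector (AfterReceiveRequest).
    public void ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    {
        endpointDispatcher.DispatchRuntime.MessageInspectors.Add(new AuthorizationMessageInspector());
    }

    // The remaining two interface methods are not needed for this scenario.
    public void AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters) { }
    public void Validate(ServiceEndpoint endpoint) { }
}
```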

Behavior Extension Element

This class is literally just a necessity in order to expose the Custom Behavior. This will make more sense once we review the changes required in the web.config. The behavior extension element is registered in the "extensions" node: <system.serviceModel><extensions /></system.serviceModel>.
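
The copy/paste in question is tiny - here is a sketch of the extension element (class names assumed to match the behavior above):

```csharp
using System;
using System.ServiceModel.Configuration;

public class AuthorizationBehaviorExtensionElement : BehaviorExtensionElement
{
    // The behavior type this config element exposes.
    public override Type BehaviorType
    {
        get { return typeof(AuthorizationBehavior); }
    }

    // Called by WCF when it parses the config section.
    protected override object CreateBehavior()
    {
        return new AuthorizationBehavior();
    }
}
```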

Server Side Web.Config (Dispatch Message Inspector)

Here is a server side (Dispatch Message Inspector) web.config example that should tie everything together nicely and give you more clarity. It is a large screenshot because one of the attributes cannot be broken up onto multiple lines.

Click on the image to expand it. Cross reference it with the code and understand what is happening here.
Basically, the Behavior Extension Element is the gateway to leveraging the Message Inspector code. Starting from the top, you can see that the Behavior Extension Element must be defined explicitly for use in the web.config. You are defining a new XML node named "authorizationBehavior" in the behaviorExtensions section. That node is backed by the AuthorizationBehaviorExtensionElement class, which points to the AuthorizationBehavior class, which in turn points to the AuthorizationMessageInspector class. In this context, since it is registered for the Services section of the web.config, it will use the Dispatch Message Inspector method: "AfterReceiveRequest".
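
In case the screenshot is hard to read, here is an approximate, hand-typed equivalent of the server side config. Service name, address, and contract are placeholders, and remember that in a real config the type attribute must stay on a single line:

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <!-- type attribute wrapped here for readability only -->
      <add name="authorizationBehavior"
           type="WCFServicesSecurity.AuthorizationBehaviorExtensionElement, WCFServicesSecurity, Version=, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="serverAuthorizationBehavior">
        <authorizationBehavior />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <services>
    <service name="[Your Service Class]">
      <endpoint address=""
                binding="basicHttpBinding"
                contract="[Your Service Contract]"
                behaviorConfiguration="serverAuthorizationBehavior" />
    </service>
  </services>
</system.serviceModel>
```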

Client Side Web.Config (Client Message Inspector)

This is the client side web.config example, and it isn't different from the server side example except for the obvious fact that the context is the client and not the server. Therefore the only change in the story is that the Client Message Inspector method "BeforeSendRequest" will be used instead.
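
Roughly, the client side of the config looks like this (again, names and the address are placeholders):

```xml
<system.serviceModel>
  <extensions>
    <behaviorExtensions>
      <add name="authorizationBehavior"
           type="WCFServicesSecurity.AuthorizationBehaviorExtensionElement, WCFServicesSecurity, Version=, Culture=neutral, PublicKeyToken=null" />
    </behaviorExtensions>
  </extensions>
  <behaviors>
    <endpointBehaviors>
      <behavior name="clientAuthorizationBehavior">
        <authorizationBehavior />
      </behavior>
    </endpointBehaviors>
  </behaviors>
  <client>
    <endpoint address="http://localhost/YourService.svc"
              binding="basicHttpBinding"
              contract="[Your Service Contract]"
              behaviorConfiguration="clientAuthorizationBehavior" />
  </client>
</system.serviceModel>
```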

Click on the image to expand it

App.Config/Web.Config - Configuration

Config Examples: Web.Config/App.Config

The provided configuration file is for reference only - it purposely excludes the binding section because that doesn't really matter for this article. The config is broken up into three parts:
  1. Application Settings
  2. Server Side Settings
  3. Client Side Settings
Here are a few suggestions for when you are configuring your Web.Config and App.Config. If you already have a massive config file, I suggest taking a look at this link. Consider taking different config sections and breaking them out into external files. It makes it a lot easier to navigate your main config and even to deploy changes. You can work on an individual file as opposed to working on your main config and watching it grow ridiculously large.

When you are describing the "WCFServiceSecurity" dll in the web.config (or your own copy of this dll - feel free to change it however you want), keep in mind that if you make any mistakes or put any line breaks in the "type" attribute, your service will fail at runtime. Also make sure the dll is present - as stupid as that sounds, that is usually the problem, aside from misconfiguration.

<add name="authorizationBehavior"
     type="WCFServicesSecurity.AuthorizationBehaviorExtensionElement, WCFServicesSecurity, Version=, Culture=neutral, PublicKeyToken=null" />

<add name="[Name of the Endpoint Behavior Node]"
     type="[DLL Name].[Name of Behavior Extension Element Class], [DLL Name], Version=, Culture=neutral, PublicKeyToken=null" />

The way I designed this assembly is to use shared keys for the client and the server. If the keys aren't the same on both sides, the client will not be authorized - so make sure those keys stay in sync.

If for whatever reason you need to disable Authorization, just remove the "behaviorConfiguration" attribute from your bindings.

Credit and Sources

I got 95% of my information and source code from the following articles written by Paolo Pialorsi. His articles are very well written. I just happened to take two of them and combine them into one solid solution for myself. This stuff is slightly complicated, but once you get it - you get it.
The rest of my information came from MSDN documentation and StackOverflow - which really just all led me back to Paolo's articles.

Sunday, May 18, 2014

Make a transfer between Bank of America and another bank without paying a fee

The Stupid Problem

Bank of America is greedy and nickel-and-dimes you to death for everything. I can't transfer money from my BOA account to an external non-BOA account without being charged a fee. This is stupid. Chase doesn't do that, so why does BOA?

The Stupid work around to the Stupid Problem

Well, I'm smarter than BOA. I write myself a check from my BOA account and deposit it into my Chase account using my phone. It takes more effort than a few clicks, but I still don't have to go to the bank or pay a fee. I have been doing this for a while; I just never shared it with anyone because I thought it was an obvious workaround.


Bank of America is stupid.

Sunday, March 16, 2014

Nexus 4 on Android 4.4.2 Loses Bluetooth Connection Repeatedly in Car

The Very Unfair Problem
If you want the work around - skip to the next section - I am just describing and ranting about the problem here...

After I upgraded my phone from Android 4.4.1 to 4.4.2, my Bluetooth stopped working properly. I would be in my car (2012 Hyundai Elantra), connected via Bluetooth on a call, and about 1 or 2 minutes in my call drops. I thought that was strange and possibly just a hiccup, because stuff happens... Whatever, so Bluetooth reconnects, the call continues, drops again after 10 seconds, reconnects, drops again after 10 seconds, reconnects, drops again... this continued indefinitely until I turned off Bluetooth. Wow - that's very strange... Restarted my phone, tried again, same result. Instant outrage, because I knew immediately this was tied to the Android OS upgrade I had performed the night before!

"Okay... ", I thought to myself, "Google isn't Apple, so they wouldn't treat us like a bunch of suckers, would they? Noooo, I'm sure they will admit they made a mistake and fix the problem promptly - I mean hell, this is their well known flagship product that I bought for $320 - they will make this right."

I couldn't be more wrong...

I started researching the issue and this is what I came up with:
Issue 62854
Issue 63056 - Issue 62854 was eventually merged into 63056

I tried emailing Google Play support. That went nowhere - they wouldn't acknowledge the problem's existence and basically blamed me, saying I installed an application that caused the problem. I told them they were wrong and that all the problems started happening after I upgraded to 4.4. Then they told me to do a factory reset; I just stopped talking to them after that. I was then prompted to leave them a customer service review - I gave them a very big piece of my mind.

Want to try them yourself? Here is the contact info:

1. Click on Contact Us
2. Click on Email on the bottom left
3. Write your rant and submit

Long story short - Google refuses to acknowledge the problem at the support level, and it feels like the Android developers don't care about it either, since it was marked as a small priority... Nice...

People can't use Bluetooth? Meh - small priority, who needs Bluetooth anyhow?

The Stupid Work Around to the Very Stupid and Unfair Problem
I really hate this solution because it causes other obvious problems. To work around this issue, simply turn off your WiFi when you are using Bluetooth. *Sigh* Not the greatest solution, but at least it works. I have seen other users report this as a solution, and I have been doing this for weeks now. It works consistently.

Sadly, I have noticed different problems with different Bluetooth devices - especially volume control on wireless Bluetooth speakers, headphones, etc...

Well here is the good news, I made a meme for the problem - hooray!

I take very big issue with paying a lot of money for a product, only to have the product's main features made useless by the manufacturer. Then, to top it off, they won't acknowledge and/or fix the problem. I think I am going to abandon Android altogether; this really just pushed me over the edge. Developing for Android is a total pain in the ass anyhow.

People have told me, "Dude, just downgrade to Jellybean 4.3" or "Just root your phone and install CyanogenMod". Yeah, I know I can do those things, but I didn't dick up my phone - Google did. This is Google's fault; Google needs to fix the problem or else I don't want to be an Android consumer anymore. Push comes to shove, I will just flash my phone, but blehhhhhh... I don't wanna! I just need my phone to work!