No reason not to use automated deployments with Hudson or Jenkins

by MikeHogg 17. June 2012 19:31

 

 

Just a discussion, not really specific to Jenkins, but related to building and deploying team projects. Feel free to add.

Build and Deploy vs BuildDeploy

You can use a batch file xcopy or other build steps to deploy to an environment as part of your build, or you can use the Promote plugin and put your deployment in the promote steps rather than the build steps. I use both. Every build automatically copies to a 'dev server', and then Stage deployment is an extra step. Jenkins is so configurable that your imagination is probably the only limit. I like having a dev server so I or other devs can see the latest, while leaving the Stage server alone for Account Services to review until they want to see the latest version. You can implement Prod as an additional Promote step, using the Access Control for Approvers, or you can have a separate .ps1 file or .ftp hosts file that the admin manually drops into place before running a (not really) Stage promotion.

Prod Access

You can just use the Promote Approver Access control, and/or you can have the prod ftp hosts set up in System Config but not in any Project Configs. The admin, being the only person who can change the Project Configs, can go in and make that change to the ftp host in the Project Config, promote/build, and then change it back (I've actually seen enterprise teams do something like this). Or you can just use your ftp client manually and favorite the Jenkins artifacts directory, but then you still have to take your time not to ftp the wrong version by mistake, and avoiding that risk is the whole point of automating deployments.

Library Projects

I used a simple one-project web application to try out one particular build and deployment approach for library projects. I wanted to stay true to DRY and leverage a shared library, even without the benefit of automated unit testing on a build server. There are many ways to do similar things; this is just the approach I'm used to. My project had a dependency on the EuroMVC library. I left that as a separate project in SVN/Jenkins that built its dlls into an SVN directory, and then copied those dlls into my project. Development of the EuroMVC library may continue, and my web app can remain unaffected, keeping the same dll version it was tested with. I left it up to the project leader of the web app to go back and get an updated version of the dlls if they ever wanted. Visual Studio lets you seamlessly debug into your library code from the dlls if you also have the symbols and library source (which we have in this scheme).

Database Projects

New to VS 2010, I have found, is a database project type that deploys incremental changes to different DB environments. This has made my DB deployments one click as well, although I have not yet wired them into the Jenkins command line. It has taken away the management of so many .sql scripts and the maintenance of those scripts through the development process. I really like it.

Approach: ConfigurationManager, Batch scripts, and Different Web Apps for the same Sln

My approach: I set my MSBuilds to use an OutDir under a publish directory, sibling to my project .sln file, separated into a directory for each environment/.sln configuration. Then I create a post-build step to archive artifacts, which copies the whole publish directory off to ../build/archive/BuildNumber/ directories. This does two things. First, you can retrieve these versions even as the project moves on; you can always go back to the version that was built and on prod last month and revert/ftp that. Second, Jenkins automatically keeps track of these for Promote steps, so you don't even have to revert manually; you can just re-promote whichever build you like in the list, anytime. Between the MSBuild arguments and batch file build steps you should be able to nail down an Artifact Publish tailored to each environment you use. They can be time consuming to script at first, but once you nail it down you don't worry about it ever again. I've already been using the Configuration Manager settings for .sln files to get distinct artifact directories. I wonder how hard it would be to set up different configurations for a .sln file to output a Windows service and two distinct web apps for Jenkins to promote; it's probably much the same. And if you can't remove directories from the publish through the file/folder Properties in VS, a simple batch script step will remove them from the artifact directories.

 

Jenkins can be found here:

http://jenkins-ci.org/ Their wiki and documentation are some of the best I've seen, but I will try to match them here for our interests.

History/Jenkins vs Hudson

http://en.wikipedia.org/wiki/Jenkins_(software) The short version: Kawaguchi worked for Sun. He authored Hudson as an open source Java program and won awards. Sun was bought by Oracle. Oracle had issues with the direction of the project. Kawaguchi and the devs renamed the project Jenkins. Oracle continued work on the original trunk, so now we have two branches: the original devs on Jenkins, and Oracle on Hudson. Both projects are still active. Hudson just released a new major version fall 2012, and Jenkins had three minor releases in the first week of 2012 alone.

Installing

I installed using the regular stable windows installer from the first link above. I changed Path To Install to something from root like c:\Jenkins because we are going to refer to the Jenkins Home path a lot.

 

(Requires nothing if you use the installer. The setup.exe will handle installation of .net 2.0 if necessary (to run as a service), and the msi bundles a JVM (the only real requirement).) After installation, browse to the site. It installs by default on port 8080, although you can change that if you need to in %HOME%/jenkins.xml. The first thing I did was check the Enable Security checkbox, and under Security Realm choose the Jenkins' Own User Database option. Make sure under Authorization you have Allow Users To Do Anything selected until the next step. Save that, then Sign Up. That's it. Proceed to the Manage Users step to lock down the access.

In the teams I've been on, devs were set up to do anything except promote to prod, and one dev was the assigned Admin if there wasn't a Change Control team. Here is my Authorization section in the Configure Hudson screen. Note that Hudson does not have roles, although there is a special token user 'authenticated' you can use. Note that the Promote plugin has its own Access Control, so this is for Build rights, not necessarily Deployment rights. See the note at the bottom of this screen. There is also an option to choose Matrix Access like you see here, but for each project individually. This could be all you need: an Admin, and allow all auth'd users to read and build. If so, then continue to Configure your First Project.

Creating a new project is two steps: name it, choose Free Style (or copy from an existing project), and click Create to go to the configure screen. The important parts of the configure screen: 1. Set your Source Control to Subversion and enter your Repo Url. You should get a helpful validation error that prompts you to enter credentials, which will then be saved behind the scenes. 2. Click Add Build Step (MSBuild). If you have not yet added the MSBuild plugin, then add it to your system configuration. Find your path to the .sln file for this field: Jenkins will download the solution automatically from SVN into something like C:/Jenkins/jobs/JenkinsProjectName/workspace/

Command Line Arguments. This is the most complicated part so far, and it depends on your strategy and requirements. For me they usually look like this. Note there are some Jenkins variables you can use here, like ${BUILD_TAG}, to put each build in its own separate directory. With the new Jenkins I found this unnecessary, but the option remains for more complicated scenarios. Here I am also doing a Configuration= for each of my web.config transforms, and putting each of those into a separate directory, so my workspace structure looks like this:

 

Jenkins automatically creates all of these directories. All you decide is the HOME directory, C:\Jenkins. Jenkins creates the workspace, builds, and promotions directories. The workspace directory is where the SVN source goes and where Jenkins will build the project. The builds and promotions directories are mostly just logging (builds is also where Jenkins archives artifacts, but you don't need to know that). I want MSBuild to publish to an output directory that I can archive. The publish directory location and the artifacts placed in it come from my approach using the MSBuild Configuration parameters. In Hudson, I was doing this manually with batch scripts and my own directory structures, but Jenkins is more advanced and handles that automatically if we follow a couple of conventions. So I put my Publish directory here under workspace, because the Artifact Copy (a later step) root directory is the workspace directory. My MSBuild command line that works for web apps: /t:Rebuild /p:OutDir=..\publish\Debug\;Configuration=Debug;UseWPP_CopyWebApplication=True;PipelineDependsOnBuild=False Just below the Build Steps you see here, I add a Post Build Step to Archive the Artifacts. This approach is discussed here

3. Click Save. And That's It! "Where's my prod deployment", you ask? Note the two different build steps you added above. That means for each build you run, you will get a directory of artifacts (the Publish) of your project, one transformed for each build step you specify. So when you want to move to prod, just copy from publish/Release for that build number. That means you can continue committing and building, and when an older version passes User Testing, you can copy that specific build version to prod. There is tons more you can do. Move on to the Promote and FTP plugins for one-click deployments.

Promote builds is a way to add a step after the build. This is how I achieved post-build deployments. Install it from the plugins page, and then watch this one-line checkbox for the Promote section sneak into the Project Configure screen.

Here you see how I set up Approvers

As you see here, I use this in conjunction with Send Build Artifacts over FTP

Download and install the FTP plugin from the System Manage Plugins page. Note: there are two. You want the one specifically called "Publish over FTP". Unfortunately, in Hudson at the time, their FTP plugin was not great, and I settled on a combo xcopy and PowerShell FTP script, so I don't have experience setting up this ftp plugin, but looking at the documentation, it has all the features included that I had to script in the old version. Actually, the new plugin works great. Everything I wished for six months ago. Set your hosts up in System Config:

Then set up your Promote Step in Project Config to use that host. I found these settings worked for my case:

 

 

This was the old way I set up FTP in Hudson, before a plugin existed. I leave it here as an example of the power of the PowerShell plugin: I used two promote actions with my script. First, an xcopy:

xcopy ..\publish\Stage\hudson-%PROMOTED_JOB_NAME%-%PROMOTED_NUMBER%\_PublishedWebsites c:\Web\DEPLOYED /ICERY
rem this is to setup the powershell script next, because powershell plugin doesn't recognize %PROMOTED_JOB_NAME% etc

Then the PowerShell script is called with parameters for Stage. The script is attached.

& 'C:\Users\mhogg\.hudson\jobs\CE.Ohio\workspace\Ohio\promote.ps1' "C:\Web\DEPLOYED\Ohio" "switchtoconstellation.discoverydev.com" "switchtoconstellation-d
Param(
	[parameter(Mandatory=$true)]
	[alias("d")]
	$deploymentpath,
	[parameter(Mandatory=$true)]
	[alias("s")]
	$server,
	[parameter(Mandatory=$true)]
	[alias("u")]
	$username,
	[parameter(Mandatory=$true)]
	[alias("p")]
	$password,
	[parameter(Mandatory=$true)]
	[alias("r")]
	$remotepath)
#$deploymentpath = "C:\Web\Deployed\Ohio" 
#$server = "switchtoconstellation.discoverydev.com"
#$username = "switchtoconstellation-dev"
#$password = 'w8b%duu#9r'
#$remotepath = "www"
$ftpfile = "temp.ftp"
$currftppwd = $remotepath
function AddItem($path){
    foreach($f in Get-ChildItem($path))
    {
        #Write-Host "testing $f" 
        if ($f.PSIsContainer -eq $True)
        {
            #Write-Host "recursing $f"
            AddItem($f.PSPath);
        }
        else 
        {
            $filename = $f.fullname
            #Write-Host "writing $filename to $ftpfile" 
            $parentpath = $f.Directory.fullname.Replace($deploymentpath, "")
            if ($currftppwd -ne "\$remotepath$parentpath"){
                AppendFtpCmd("MKDIR \$remotepath$parentpath")  
                AppendFtpCmd("CD \$remotepath$parentpath") 
                $currftppwd = "\$remotepath$parentpath"
            }
            AppendFtpCmd("PUT $filename")
        }
    }
}
 
# need encoding: .net prepends null char for some reason 
function AppendFtpCmd($ftpcmd){
    #$ftpfile = "temp.ftp"
    $ftpcmd | out-file -filepath $ftpfile -encoding "ASCII" -append
}    
 
"OPEN $server" | out-file -filepath $ftpfile -encoding "ASCII" 
AppendFtpCmd("USER $username")
AppendFtpCmd("$password")
AppendFtpCmd("CD $remotepath")
AppendFtpCmd("LCD $deploymentpath")
AddItem("$deploymentpath")
AppendFtpCmd("DISCONNECT")
AppendFtpCmd("BYE")
ftp -n -i -s:$ftpfile

Tags:

Automation

Encryption

by MikeHogg 31. May 2012 09:50

A really interesting project had me implementing encryption algorithms for a Point Of Sale vendor interface.  It was the closest thing I’ve done to ‘computer science’ and I was fascinated by manipulating integers that were one thousand digits long.  The vendor used a symmetric encryption wrapped in an asymmetric method, plus an additional byte manipulation algorithm, making it a few layers deep.  I used a proven big integer implementation, and some of the MS encryption libraries for certain steps of the algorithm, but a lot of it was byte-level manipulation.

In one of my favorite parts of the algorithm, I used a bit shift operator.  Never found a use for that in Business Intelligence!

        private static byte[] ApplyOddParity(byte[] key)
        {
            for (var i = 0; i < key.Length; ++i)
            {
                int keyByte = key[i] & 0xFE; // clear the low (parity) bit with the 0xFE mask
                var parity = 0;
                for (var b = keyByte; b != 0; b >>= 1) parity ^= b & 1; // shift right until empty, xor-ing each bit into parity
                key[i] = (byte)(keyByte | (parity == 0 ? 1 : 0)); // set the low bit so the total count of 1 bits is odd
            }
            return key;
        }
        public static string EncryptEAN(string eanhex, string decryptedmwkhex)
        {
            byte[] decryptedmwk = ConvertHexStringToByteArray(decryptedmwkhex);            
            byte[] asciiean = Encoding.ASCII.GetBytes(eanhex.PadRight(8, ' '));   
            
            TripleDESCryptoServiceProvider p = new TripleDESCryptoServiceProvider();
            p.Padding = PaddingMode.None;
            p.IV = new byte[8];
            // p.Mode = CipherMode.CBC; //  default 
            byte[] random = p.Key;     // testing: random = FDCrypt.ConvertHexStringToByteArray("95:e4:d7:7c:6d:6c:6c")         
            byte checksum = GetCheckSum(asciiean);            
            byte[] eanblock = new byte[16];
            Array.Copy(random, 0, eanblock, 0, 7);
            eanblock[7] = checksum;
            Array.Copy(asciiean, 0, eanblock, 8, 8);   // BitConverter.ToString(eanblock)
            p.Key = decryptedmwk;
            ICryptoTransform e = p.CreateEncryptor();
            
            byte[] result = e.TransformFinalBlock(eanblock, 0, 16);
            return BitConverter.ToString(result, 0).Replace("-",String.Empty);
        }
 
        public static string GetEncryptedMWK(string decryptedmwkhex, byte[] kek)
        {
            byte[] decryptedmwk = FDCrypt.ConvertHexStringToByteArray(decryptedmwkhex);
            TripleDESCryptoServiceProvider p = new TripleDESCryptoServiceProvider();
            p.Padding = PaddingMode.None;
            p.IV = new byte[8];
            // p.Mode = CipherMode.CBC; //  default 
            byte[] random = p.Key;     //random = FDCrypt.ConvertHexStringToByteArray("e7:11:ea:ff:a0:ca:c3:ba")
            p.Key = decryptedmwk;  // BitConverter.ToString(decryptedmwk)
            ICryptoTransform e = p.CreateEncryptor();
            byte[] checkvalue = e.TransformFinalBlock(new byte[8], 0, 8);       // BitConverter.ToString(checkvalue)   
            byte[] keyblock = new byte[40];
            Array.Copy(random, keyblock, 8);  
            Array.Copy(decryptedmwk, 0, keyblock, 8, 24);
            Array.Copy(checkvalue, 0, keyblock, 32, 8);   // BitConverter.ToString(keyblock)
             
            p.Key = kek;              
            e = p.CreateEncryptor();
            byte[] encryptedkeyblock = e.TransformFinalBlock(keyblock, 0, 40);   
            string result = BitConverter.ToString(encryptedkeyblock,0, 40);            
            return result.Replace("-",String.Empty); // should be 81 bytes inc null term?
        }
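
The methods above also call two helpers that aren't shown: ConvertHexStringToByteArray and GetCheckSum. Here is a minimal sketch of what they might look like; the hex parser just handles the colon-separated strings seen in the comments, and the checksum is a plain XOR placeholder, since the vendor's actual checksum algorithm isn't reproduced here.

        // These would sit in the same static class as EncryptEAN and GetEncryptedMWK.
        // Parses hex strings like "95:e4:d7:7c:6d:6c:6c" (with or without separators) into bytes.
        public static byte[] ConvertHexStringToByteArray(string hex)
        {
            string clean = hex.Replace(":", String.Empty).Replace("-", String.Empty);
            byte[] result = new byte[clean.Length / 2];
            for (int i = 0; i < result.Length; i++)
                result[i] = Convert.ToByte(clean.Substring(i * 2, 2), 16);
            return result;
        }
        // Placeholder only: a simple XOR over the padded EAN bytes. The real checksum is
        // defined by the vendor spec and is not shown in this post.
        private static byte GetCheckSum(byte[] data)
        {
            byte checksum = 0;
            foreach (byte b in data) checksum ^= b;
            return checksum;
        }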

 

For testing, I built a UI in WPF.  Here you see how I wanted to encapsulate all the encryption stuff in a separate library (later to be used in a web site), yet needed a UI stub to go through the lengthy 18-step, two-month-long testing and certification process with the vendor.  I knew that the UI could leverage my experience with the MVVM pattern in WPF to expose over 20 fields and half a dozen steps in fast iterations as we went through the vetting process, and the WPF UI became more of a helpful tool than a code maintenance drain like most UIs.
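
Here is a rough sketch of the shape that harness took: one bound property and one command per certification step, assuming the encryption methods live in a static FDCrypt class as the snippets above suggest. The class and property names are illustrative, not the actual harness.

    using System;
    using System.ComponentModel;
    using System.Windows.Input;

    // Minimal ICommand so buttons in the XAML can bind straight to ViewModel methods.
    public class RelayCommand : ICommand
    {
        private readonly Action _execute;
        public RelayCommand(Action execute) { _execute = execute; }
        public bool CanExecute(object parameter) { return true; }
        public void Execute(object parameter) { _execute(); }
        public event EventHandler CanExecuteChanged { add { } remove { } }
    }

    // One field and one step of the certification harness, exposed for binding.
    public class EncryptEanViewModel : INotifyPropertyChanged
    {
        private string _eanHex;
        private string _encryptedEan;

        public string EanHex
        {
            get { return _eanHex; }
            set { _eanHex = value; OnPropertyChanged("EanHex"); }
        }

        // Pasted in from the vendor handshake during a test run.
        public string DecryptedMwkHex { get; set; }

        public string EncryptedEan
        {
            get { return _encryptedEan; }
            private set { _encryptedEan = value; OnPropertyChanged("EncryptedEan"); }
        }

        public ICommand EncryptEanCommand { get; private set; }

        public EncryptEanViewModel()
        {
            EncryptEanCommand = new RelayCommand(
                () => EncryptedEan = FDCrypt.EncryptEAN(EanHex, DecryptedMwkHex));
        }

        public event PropertyChangedEventHandler PropertyChanged;
        private void OnPropertyChanged(string name)
        {
            var handler = PropertyChanged;
            if (handler != null) handler(this, new PropertyChangedEventArgs(name));
        }
    }

A TextBox bound to EanHex and a Button bound to EncryptEanCommand cover one step in the XAML; repeating that pattern per field and step is what kept this kind of harness cheap to iterate on.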


Tags:

WPF | C# | Encryption

WCF vs MVC REST API

by MikeHogg 28. May 2012 15:25

 

What is this REST API that I keep hearing about?  I have been using WCF for years, but now the new buzzword is REST API for web services.

First, a good background found on this page: http://www.codeproject.com/Articles/255684/Create-and-Consume-RESTFul-Service-in-NET-Framewor

What is REST & RESTful?

Representational State Transfer (REST) was introduced by Roy Fielding in 2000; it is an architectural style for large-scale networked software that takes advantage of the technologies and protocols of the World Wide Web. REST describes how data objects, or resources, can be defined and addressed, stressing the easy exchange of information and scalability.

In 2000, Roy Fielding, one of the primary authors of the HTTP specification, wrote a doctoral dissertation titled Architectural Styles and the Design of Network-based Software Architectures.

REST, an architectural style for building distributed hypermedia driven applications, involves building Resource-Oriented Architecture (ROA) by defining resources that implement uniform interfaces using standard HTTP verbs (GET, POST, PUT, and DELETE), and that can be located/identified by a Uniform Resource Identifier (URI).

REST is not tied to any particular technology or platform – it’s simply a way to design things to work like the Web. People often refer to services that follow this philosophy as “RESTful services.”

My current use case asked for three clients served by one codebase (one WPF client and two web site clients), and so I figured WCF was the best way to go. But I wanted to see what new tech MS has for us...

I saw many examples of REST Controller actions in MVC, but they were using REST architecture over Http, without typed endpoints and instant clients from WSDL, which was the main reason why WCF would have been so good for my case.  WCF is so mature now that you rarely have to do more than click a few times and add some properties to a project config before you have strongly typed client behaviors.  What do I get with this new REST stuff?  A lot of manual work and no strongly typed objects.  It sounds like a step backwards to me.
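
To make that concrete, here is a rough sketch of the difference circa 2012; the service address, contract, and JSON shape are made up for illustration. The WCF channel call is checked by the compiler, while the hand-rolled REST call is all strings and manual parsing.

    using System;
    using System.Collections.Generic;
    using System.Net;
    using System.ServiceModel;
    using System.Web.Script.Serialization;

    // The typed WCF route: a shared contract means the compiler checks every call.
    [ServiceContract]
    public interface IQuoteService
    {
        [OperationContract]
        decimal GetPrice(string symbol);
    }

    class TypedVersusManual
    {
        static void Main()
        {
            // WCF: point a ChannelFactory at the endpoint and you get a strongly typed proxy.
            var factory = new ChannelFactory<IQuoteService>(
                new BasicHttpBinding(), "http://example.com/QuoteService.svc");
            IQuoteService svc = factory.CreateChannel();
            decimal price = svc.GetPrice("MSFT");

            // Hand-rolled REST: build the URL, download the text, deserialize it yourself.
            // Nothing breaks at compile time if the contract drifts.
            using (var web = new WebClient())
            {
                string json = web.DownloadString("http://example.com/api/quotes/MSFT");
                var map = new JavaScriptSerializer().Deserialize<Dictionary<string, object>>(json);
                decimal restPrice = Convert.ToDecimal(map["price"]);
            }
        }
    }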

Phil Haack agreed with me...

http://haacked.com/archive/2009/08/17/rest-for-mvc.aspx

"When your service is intended to serve multiple clients (not just your one application) or hit large scale usage, then moving to a real services layer such as WCF may be more appropriate." 

I finally found what I was looking for (the background I linked to above) in the WCF REST Starter Kit built on 4.0. It has strong typing and automated client creation. It built REST on top of WCF and added some attributes you could decorate your WCF project with to work over a new WebHttpEndpoint. http://www.codeproject.com/Articles/255684/Create-and-Consume-RESTFul-Service-in-NET-Framewor

This was what I was looking for, but since it built ON TOP of WCF I didn't see the point. To my point, Sam Meacham warned in Sep 2011 not to use WCF REST Starter Kit in the discussion on that page:

http://www.codeproject.com/Articles/255684/Create-and-Consume-RESTFul-Service-in-NET-Framewor?fid=1652761&df=90&mpp=50&noise=3&prof=False&sort=Position&view=Quick&fr=51#xx0xx

"The WCF REST Starter kit is abandoned, and will no longer be developed. WCF was designed to be protocol agnostic. REST services are generally built on the HTTP protocol, using all of the richness of http for your rest semantics. So WCF as it existed was actually a really bad choice for building rest services. You basically had to factor back in all of the http-ness that wcf had just factored out.

Glenn Block at Microsoft, who (with the community) developed the Managed Extensibility Framework (MEF) was reassigned to work on the WCF REST story at MS going forward. They are currently developing the WCF WEB API[^], which will be the new way to create REST services with WCF.

Also, keep in mind that REST has no service description language like WSDL or anything, so things like service location and automatic client generation don't exist. WCF certainly isn't your only chance for creating REST services in .NET. I created the RestCake library for creating REST services based on IHttpHandler. Also, IHttpHandler is a very simple interface for creating REST services. A lot of people prefer to use MVC 3."

So, I conclude WCF is not going away and is the appropriate tool for this case.  The WCF Web API that I heard rumors about appears to still be in development, coming with MVC4.

I will look at that for a future project but not this one... http://wcf.codeplex.com/wikipage?title=WCF%20HTTP

 

----

PS

Time passed, and I found myself playing with some Android development and wanting to hook up to a WCF service, when I found out what is probably one of the big reasons why REST adoption is so strong: Android Java libraries don't support SOAP well at all, even with third party libraries!

Tags:

Architecture | REST | WCF

An example of one of my most favorite projects

by MikeHogg 21. May 2012 18:58

One time I inherited a system of sorts that supported a single user, with her third party data warehouse application.  We didn’t support the warehouse, but we were supposed to get the data extracts that she imported into the warehouse at monthly intervals.  The existing IT process was very manual, and very time intensive.  As well as involving data from 4 different sources and the queries or processes to get them, it involved a dozen files per run, sometimes up to three people from different departments, with several runs per month, taking four to eight hours each run, and no history or state tracking except to keep the files in folders forever. 

 

The initial attempt to automate this also left behind a number of files and processes to maintain, and it had been running for over a year with 60 monthly man hours of IT dedicated to it and now several hundred files, folders, and processes in assorted locations.

 

This is one of my favorite jobs.  People put a mess in front of me and I turn it into something easy to use that saves time.  One of the things that bugged me about the existing process was that there was no history and it took too long.  I expanded our small database to include tables for each of our entities, and started automating the extracts in a nightly process.  This made the user’s request time drop from several hours for the complicated queries to almost instant, since we were now caching the data ourselves, as well as providing an easy way for IT to hook into historic runs.

 

Another thing that I wanted to change was to streamline the steps.  The existing process exported from data sources, inserted into databases, extracted into files, joined with other datasources, imported into databases again.  So I built an SSIS package that did the data transformations on our Oracle database and inserted the data directly into the warehouse MSSQL server.  This removed the need for the files and a whole staging process, and made the whole process easier to maintain from an IT perspective.

 

Another thing that I wanted to change was to remove the IT resource component.  I don’t believe IT needs to be involved for day to day business operation requests, unless something breaks.  So I built a simple WPF click-once intranet application with a handful of features, enabling the user to run the whole extract/import instantly for any date they choose, and even view the data by Excel export if they want.  I like that it not only provided so much convenience for the user, but that it dropped the IT cost to maintain from an average of 60 monthly man hours to almost zero.

Tags:

Automation | Me

An example of one of my least favorite projects

by MikeHogg 16. May 2012 14:36

One of my least favorite projects where I had control over the outcome was my first WPF project. I had been doing aspnet web apps and winform apps for a few years. I hadn’t really learned a lot about patterns or architecture, but I was exposed to a senior consultant who had a particular effect on me. Under his influence, I started to open my eyes to new technology. I realized that I needed to accelerate my learning or my career was not going to go anywhere.

So among other things, I tried WPF for my next project instead of Winforms. The problem was, that I applied the event driven, static design of Winforms to WPF and it was not built for that.

Once I had invested enough time in the initial design and started to hit my first roadblocks, I realized that WPF was built to work against a pattern called MVVM, and I didn’t want to learn a new pattern on top of a new framework. I kept hitting roadblocks in UI development and each time I found solutions were always in MVVM and so they were not available to me. I ended up writing lots of hacks and disorganized code instead of learning about MVVM.

I delivered in nine months but it was a long nine months. My immediate next opportunity was a small deliverable, and I did that in WPF while learning MVVM, and realized my mistake. I was amazed at how easy it was if I used the correct pattern.  New technologies are as much, if not more, about patterns as they are about the nuts and bolts.

Tags:

Architecture | Me

Password hashing

by MikeHogg 11. May 2012 15:08

After some research this year (the last time I had to write any password system was in 2006 or 2007), I am under the impression that the BCrypt library is the de facto standard for password hashing, and it is available in C#.  The main point in BCrypt's favor is that it has a difficulty (work) factor built in.  This prevents powerful hardware from brute forcing attempts at sub-millisecond rates if it ever gets hold of your hashes, and so limits dictionary attacks.

 

Using it is simple.  I drop this BCrypt file into each of my projects.  BTW in it you will find the header with links to the project doc and license info.

BCrypt.cs (34.97 kb)

 

Now, your Membership provider just needs to store passwords BCrypted, like so

       public static bool SavePassword(string username, string newPassword)
        {
            string salt = lib.BCrypt.GenerateSalt(6);
            string hash = lib.BCrypt.HashPassword(newPassword, salt);
 
            return lib.DatabaseHelper.SavePassword(username, hash);
        }

(the parameter passed to GenerateSalt is the work factor; it gets baked into the salt and the resulting hash, and each increment roughly doubles the work, as the timing sketch further down shows)

... and use the BCrypt library to test passwords with its Verify() method, like this

public override bool ValidateUser(string username, string password)
{
    string hash = GetPassword(username, null);
    if (hash.Equals(string.Empty)) return false;
    return lib.BCrypt.Verify(password, hash);
}
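
To see the work factor in action, here is a quick console sketch using the same lib.BCrypt file as above (the password and factors are arbitrary). Each increment of the factor doubles the number of BCrypt rounds, so the hash time roughly doubles, which is exactly what slows a brute force attempt down.

    using System;
    using System.Diagnostics;

    class BCryptWorkFactorDemo
    {
        static void Main()
        {
            foreach (int factor in new[] { 6, 8, 10, 12 })
            {
                var sw = Stopwatch.StartNew();
                string salt = lib.BCrypt.GenerateSalt(factor);           // the factor is baked into the salt
                string hash = lib.BCrypt.HashPassword("correct horse", salt);
                sw.Stop();
                Console.WriteLine("factor {0}: {1} ms", factor, sw.ElapsedMilliseconds);
            }
        }
    }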

Tags:

C# | Encryption

Logging From Day One (and Exception Handling)

by MikeHogg 9. May 2012 09:50

NLog is so easy to use, it really is like plug and play. Or drag and drop. Add the dll to your References and add this to your web.config, using either a file or a db table (what I use) as the target. Then, in any class where you want to use the logger, just add a line for the static instance:

    public class HomeController : MH.Controllers.AController
    {
        private static NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger(); 

 

 

And then to use it:

 

    logger.Info("Some mess");
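
The other levels work the same way off that one static logger. A small sketch (the class and method names are just illustrative):

    using System;

    public class ImportJob
    {
        private static NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();

        public void Run(string accountId)
        {
            logger.Debug("Starting import for " + accountId);
            try
            {
                // ... the actual work ...
                logger.Info("Import finished for " + accountId);
            }
            catch (Exception ex)
            {
                // ex.ToString() carries the stack trace into the Message column of the Log table
                logger.Error("Import failed for " + accountId + ": " + ex);
                throw;
            }
        }
    }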

 

No reason not to have logging available in every web app from the start. I usually use a Log table with the columns you can see in the insert statement in my web.config here:


<configuration>
  <configSections>
    <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/>...  </configSections>
...  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" >
    <targets> 
      <target name="db" xsi:type="Database" connectionStringName="CONN"
              commandText="insert into Log(Level, Source, Message, Audit_Date) values(@level, @logger, @message, @time_stamp);">
        <parameter name="@time_stamp" layout="${date}"/>
        <parameter name="@level" layout="${level}"/>
        <parameter name="@logger" layout="${logger}"/>
        <parameter name="@message" layout="${message}"/>
      </target> 
    </targets>
 
    <rules>
      <logger name="*"  writeTo="db"></logger> 
    </rules>
  
  </nlog>

If you can't get it to start working, try using a log file target first, or you can add attributes like in this example to turn on NLog's internal logging:
  <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        internalLogFile="c:\mike.log" internalLogToConsole="true" throwExceptions="true">
    <targets>
      <target xsi:type="File" name="file" fileName="${basedir}/n.log" />

Oh, and while we're here, ELMAH is always in my projects, even before NLog.  It's just as easy, and actually comes with more features.  I use it with the DB table and automatic emails.  This is all you need to get up and running...

<configuration>
  <configSections>
    <sectionGroup name="elmah">
      <section name="security" requirePermission="false" type="Elmah.SecuritySectionHandler, Elmah" />
      <section name="errorLog" requirePermission="false" type="Elmah.ErrorLogSectionHandler, Elmah" />
      <section name="errorMail" requirePermission="false" type="Elmah.ErrorMailSectionHandler, Elmah" />
      <section name="errorFilter" requirePermission="false" type="Elmah.ErrorFilterSectionHandler, Elmah" />
    </sectionGroup>
  </configSections>...
 
    <httpModules>
      <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" />
      <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" />
      <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" />
    </httpModules>
...  <system.webServer>
    <validation validateIntegratedModeConfiguration="false"/>
    <modules runAllManagedModulesForAllRequests="true"> 
        <add name="ErrorLog" type="Elmah.ErrorLogModule, Elmah" preCondition="managedHandler" />
        <add name="ErrorMail" type="Elmah.ErrorMailModule, Elmah" preCondition="managedHandler" />
        <add name="ErrorFilter" type="Elmah.ErrorFilterModule, Elmah" preCondition="managedHandler" />
    </modules> 
  </system.webServer>... and 
  <elmah>
    <!--
        See http://code.google.com/p/elmah/wiki/SecuringErrorLogPages for 
        more information on remote access and securing ELMAH.   -->
    <security allowRemoteAccess="true" />
    <errorLog type="Elmah.SqlErrorLog, Elmah" connectionStringName="CONN"   >
    </errorLog>
    <errorMail
       to="mike.hogg@havasdiscovery.com"
       subject="[ELMAH] ACMT_Web Exception"  >
    </errorMail> 
    
  </elmah>
  <location path="elmah.axd" inheritInChildApplications="false">
    <system.web>
      <httpHandlers>
        <add verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
      </httpHandlers>
      <!-- 
        See http://code.google.com/p/elmah/wiki/SecuringErrorLogPages for 
        more information on using ASP.NET authorization securing ELMAH.      -->
      <authorization>
        <allow roles="Admin" />
        <deny users="*" />
      </authorization>
    </system.web>
    <system.webServer>
      <handlers>
        <add name="ELMAH" verb="POST,GET,HEAD" path="elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" preCondition="integratedMode" />
      </handlers>
    </system.webServer>
  </location>
</configuration> 

There's a db script to create the necessaries. I think that's it.  It automatically comes with an Admin area and a dashboard app; if you set up authorization in your web.config then you should be able to see it with the Admin role and no further configuration.  ELMAH is good for catching all uncaught exceptions.  It has replaced my standard libraries and error handling methods in global.asax.

 

I also set up my own ErrorController, and some views, for my handled (known) errors.

public class ErrorController : AController
    {
        public ActionResult Index()
        { 
            Models.Error e = GetError();
            e.Title = "Error!";
            e.Message = "We are sorry.  An error has occurred.  Please try again or contact support";
 
            return View(e);
        }
 
        public ActionResult NotFound()
        {
            Models.Error e = GetError();
            e.Title = "Page Could Not Be Found";
            e.Message = "Sorry, that page could not be found";
 
            return View(e);
        }
 
        private Models.Error GetError()
        {
            Models.Error result = new Models.Error();
            Exception ex = null;
 
            try
            {
                ex = (Exception)HttpContext.Application[Request.UserHostAddress.ToString()];
            }
            catch { }
 
            if (ex != null) result.Exception = ex;
            
            return result;
        }
    }
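
The Models.Error type referenced above isn't shown in the post; going by the properties the controller sets, a minimal version only needs this:

    namespace Models
    {
        // Just the members the ErrorController above touches.
        public class Error
        {
            public string Title { get; set; }
            public string Message { get; set; }
            public System.Exception Exception { get; set; }
        }
    }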

If you want to manually log errors in your app using ELMAH, just do this (wrapped in my lib/logger library):

 

 

public static void LogWebException(Exception ex)
        {
            try
            {
                Elmah.ErrorSignal.FromCurrentContext().Raise(ex, System.Web.HttpContext.Current);
            }
            catch { }
        }

 

Or... add an exception filter, and in that hook tell ELMAH to log handled exceptions. Now all of your handled exceptions will be logged as well.

namespace MH.Web.Mvc3.Controllers
{
    public class ElmahHandledErrorLoggerFilter : IExceptionFilter
    {
        public void OnException(ExceptionContext context)
        {
            // Log only handled exceptions, because all other will be caught by ELMAH anyway.
            if (context.ExceptionHandled)
                Elmah.ErrorSignal.FromCurrentContext().Raise(context.Exception);
        }
 
        // ADD THIS TO GLOBAL ASAX
        ///public static void RegisterGlobalFilters (GlobalFilterCollection filters)
        //{
        //    filters.Add(new ElmahHandledErrorLoggerFilter());
        //    filters.Add(new HandleErrorAttribute());
        //}
    }
}

 

 

 

ELMAH has a habit of becoming bothersome with all the 404s for robots.txt and favicon.ico.  Put this in your web.config (inside the elmah section) to stop them:

 

 

    <errorFilter>
      <test>
        <or>
          <and>
            <equal binding="HttpStatusCode" value="404" type="Int32" />
            <equal binding="Context.Request.Path" value="/favicon.ico" type="string" />
          </and>
          <and>
            <equal binding="HttpStatusCode" value="404" type="Int32" />
            <equal binding="Context.Request.Path" value="/robots.txt" type="string" />
          </and>
        </or>
      </test>
    </errorFilter>
    
  </elmah>

You can Depend on it

by MikeHogg 12. April 2012 10:09

Imagine for a second that you wrote an entirely (well, mostly) self contained application.  Let's say it has its own home built web server, uses file based persistence, and is entirely compiled into one executable.  All it needs is a particular OS and filesystem to run. It doesn't happen like that, but we are pretending for a minute. Now, let's say the list of web server features to implement in the next release has grown so long that you don't know how you are going to deliver them all, and someone suggests looking at the third party web servers out there: pretend IIS and Apache, or Tomcat. And now you invite IIS into your little ecosystem, for better or for worse, til death do you part (or whenever the rewrite comes along), tied not only to your language compiler, OS, HTML interpretations, and NTFS filesystem, but to this IIS application as well.

Of course it sounds ludicrous, because we are so used to having Apache or IIS. It is a standard requirement.

It's everywhere now and this is good. Your GoDaddy account has two options, Apache or IIS, and nobody thinks twice about it. Same with libraries.

You never think about it but even using .net 1.1 or .net 4.0 is a dependency.

It is something that becomes a responsibility to manage for the life of your application.

 

I used to work on an 8 year old legacy .net web application, created by contractors who had long gone and whose names nobody remembered any longer.  It was a mess, sure (few codebases exist that don't look like messes to anyone but their authors), but it was 8 years old and running just fine.  I worked on this job next to a contractor superstar, one of those famously notorious contractors who sell you the latest and greatest and the moon on top of that, in half the time you wanted, and are not around three months after delivery to answer questions about the behemoth application they delivered held together by shoestrings.

 

Mind you- I learned a ton from this guy.  I used to be partial to old technology, to the point of mistrusting anything new.  Prejudicial, even.  I used to rationalize it as being loyal to the old team... trusting only the tried and true, and being cool, a real nerd who only used obscure old command line tools and found fault with anything that tried to take away my manual control.  In retrospect, I wonder how much of it was just a simple fear of learning new things.  Anyway, this contractor showed me what it was like to be on fire about new technology.  He was always a couple versions ahead in everything.  We would be discussing some new feature that we just found out about in c# 2.0 and he would tell us to just wait and see how fast we get along when we finally get to c# 3.0.  We'd be discussing the upcoming 3.0 framework and he'd be talking about the 3.5 update and the new features in 4.0.  He'd bolt on Application Blocks by the six pack, dotnetnukes and bootstrapper libraries on a whim it seemed.  And he made a lot of money.  He was an independent contractor, made his own hours, drew his own contracts, worked hard, and made a lot of money.  But his answer to any of my questions about the inner workings of some framework call or how to expose a property properly would usually be that I should download and add some application to my codebase.

 

Of course, the way I paint it, it doesn't sound all that good.  And you can guess how the story will end.  But I did learn how to overcome my fear of the new tech.  Learn or be run over by developers that are learning.  But maybe with a bit of moderation.  The story continues... Our team eventually had to upgrade the servers that our project was hosted on from Server 2000 to 2008, and we took the opportunity to upgrade from 1.1 to 3.5.  Along with the one or two library application blocks we depended on from 2002, we had 50 or 60 projects in our web application, and it was fairly painless to upgrade everything, and that application, I am fairly confident, is still chugging along today, with new features being added to it even at 11 years old...

 

A couple years later I was asked to get into a project to fix some application that wasn't working.  Turns out it was that contractor's application from the story.  Most of the eight or ten feature sets/third party application blocks that composed the total web application had stopped working within the first three to six months after release, long after he had gone, and the users had just not been able to find resources to fix it.

 

Most of the standard dependencies, the web servers, the language frameworks, the operating systems, the filesystems, they all are pretty low maintenance.

I can imagine most applications lasting 8 or 10 years before being absolutely forced to think about these dependencies.

But they are there, and every time you add one more to the pot, you're adding to that total cost. You're adding to the time and effort some developer (likely not you) will have to spend poring through your code five years from now looking for conflicts or compile errors against a new framework, or hunting down obscure bugs that happened when someone decided to just upgrade and see if it all works. You're adding to that. There is the possibility that business decisions are made to rewrite the application, or buy a replacement from a vendor, or spend time and money to fix it. There is a possibility that your pride and joy might die an early death at the first sign of trouble instead of fading away into the sunset several generations down the road.

So someone explains all the benefits of relational databases, and you find that to be a Good Thing, and add that to your application. You weigh the odds, make a judgment call, and add a third party library to your project: logging. And another: jQuery. Add some new architecture pattern: AWS. Add a new framework: MVC4. And now the future is not as stable as before. What is your application's life expectancy now? What is your confidence in that number?

How will you approach the decision to add the next dependency to your application?

Tags:

Architecture

Hacking up a WCF Client for a nonstandard SOAP service

by MikeHogg 12. March 2012 21:11

I set up a console shell application and framework for a team that collected data from hundreds of sources, but most of the data came from web scrapes and web services.

Setting up clients in VS usually requires just a few clicks, but when the servers are third party, and they are not using Microsoft technologies, this sometimes doesn’t work.  The VS tool will just error out with some vague message about not loading the WSDL.  Using the command line will give you some hints and sometimes you can download their WSDL to your local, and make a couple of edits, and then SvcUtil your client.

In one case in particular, even this didn’t work for me.  I was already resorting to writing custom XML requests and inspecting the responses with Fiddler to get my requests right.  I think it was some Java JBoss server, and apparently they are known for not serving a standard SOAP format.  I forget the details why...  But I knew that I could write my own DataContract and OperationContract classes and even write custom channel parsers if I had to.  They were serving lots and lots of datatypes and methods, though, and I didn’t need but a few of them.  I had to dissect their huge wsdl file, pulling out just the data objects I needed and writing them by hand instead of using svcutil, and then running tests to find what I was missing.  I had to use XmlSerializerFormat instead of DataContractSerializer attributes for some obscure reason.

Here was my client, constructed in c# to get the requests just right:

    class MPRClient : ClientBase<IMPR>, IMPR
    {
        public MPRClient()
            : base()
        {
            System.ServiceModel.BasicHttpBinding binding = new BasicHttpBinding();
            binding.Security.Mode = BasicHttpSecurityMode.Transport;
            binding.Security.Transport.Realm = "eMPR Authentication";
            binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;
            CustomBinding cbinding = new CustomBinding(binding);  // need to set keepalive=false or we get 505 after auth, this is one way
            foreach (BindingElement be in cbinding.Elements)
            {
                if (be is HttpsTransportBindingElement) ((HttpsTransportBindingElement)be).KeepAliveEnabled = false;
            }
            Endpoint.Binding = cbinding;
        }
        public queryResponse query(queryRequest request)
        {
            queryResponse result = Channel.query(request);
            return result;
        }

Here are some of my data classes that I figured out from that testing. You will see my request and response objects; take note of how I constructed the child objects, as arrays were the only way to get the serialization to line up just right…

    /// <summary>
    /// some's empr query web service
    /// </summary>
    [XmlSerializerFormat]
    [ServiceContract(Name = "empr", Namespace = "http://empr.some.com/mpr/xml")]
    interface IMPR
    {
        /// <summary>
        /// 
        /// </summary>
        /// <param name="queryRequest">
        /// query takes two parms- CompanyName(LSE) and Day
        /// </param>
        /// <returns>
        /// sample data you can get from this service:
        /// <PeakLoadSummarySet>
        ///   <PeakLoadSummary Day="2012-01-23">
        ///     <LSE>NEV</LSE>
        ///     <ZoneName>AECO</ZoneName>
        ///     <AreaName>AECO</AreaName>
        ///     <UploadedMW>70.4</UploadedMW>
        ///     <ObligationPeakLoadMW>70.064</ObligationPeakLoadMW>
        ///     <ScalingFactor>0.99523</ScalingFactor>
        ///   </PeakLoadSummary>
        /// </PeakLoadSummarySet>
        /// </returns>
        [XmlSerializerFormat]
        [OperationContract(Action = "/mpr/xml/query")]
        queryResponse query(queryRequest queryRequest);
    }
    [MessageContract(WrapperName = "QueryRequest", WrapperNamespace = "http://empr.some.com/mpr/xml", IsWrapped = true)]
    [XmlSerializerFormat]
    public class queryRequest
    {
        [MessageBodyMember(Namespace = "http://empr.some.com/mpr/xml", Order = 0)]
        [System.Xml.Serialization.XmlElement("QueryPeakLoadSummary")]
        QueryPeakLoadSummary[] Items;
        public queryRequest() { }
        public queryRequest(QueryPeakLoadSummary[] items)
        {
            Items = items;
        }
    }
    [XmlSerializerFormat]
    [System.Xml.Serialization.XmlType(AnonymousType = true, Namespace = "http://empr.some.com/mpr/xml")]
    public class QueryPeakLoadSummary
    {
        [System.Xml.Serialization.XmlAttribute]
        public string CompanyName;
        [System.Xml.Serialization.XmlAttribute]
        public string Day;
        public QueryPeakLoadSummary() { }
    }
    [MessageContract(WrapperName = "QueryResponse", WrapperNamespace = "http://empr.some.com/mpr/xml", IsWrapped = true)]
    [XmlSerializerFormat]
    public class queryResponse
    {
        [MessageBodyMember(Namespace = "http://empr.some.com/mpr/xml", Order = 0)]
        [System.Xml.Serialization.XmlElement("PeakLoadSummarySet")]
        public PeakLoadSummarySet[] Items;
        public queryResponse() { }
        public queryResponse(PeakLoadSummarySet[] Items)
        {
            this.Items = Items;
        }
    }
    [XmlSerializerFormat]
    [System.Xml.Serialization.XmlTypeAttribute(AnonymousType = true, Namespace = "http://empr.some.com/mpr/xml")]
    public class PeakLoadSummarySet
    {
        [System.Xml.Serialization.XmlElement("PeakLoadSummary", Order = 0)]
        public PeakLoadSummary[] PeakLoadSummary;
    }
    [XmlSerializerFormat]
    [System.Xml.Serialization.XmlType(AnonymousType = true, Namespace = "http://empr.some.com/mpr/xml")]
    public class PeakLoadSummary
    {
        [System.Xml.Serialization.XmlElement(Order = 0)]
        public string LSE;
        [System.Xml.Serialization.XmlElement(Order = 1)]
        public string ZoneName;
        [System.Xml.Serialization.XmlElement(Order = 2)]
        public string AreaName;
        [System.Xml.Serialization.XmlElement(Order = 3)]
        public string UploadedMW;
        [System.Xml.Serialization.XmlElement(Order = 4)]
        public string ObligationPeakLoadMW;
        [System.Xml.Serialization.XmlElement(Order = 5)]
        public double ScalingFactor;
        [System.Xml.Serialization.XmlAttribute]
        public DateTime Day;
        public PeakLoadSummary() { }
    }

 

My client config was just a one-line endpoint, since the option to set KeepAliveEnabled was not available in the config, so I put that in the c# initialization:

  <system.serviceModel>
    <bindings>
      
      <netTcpBinding> 
        <binding name="pooledInstanceNetTcpEP_something else
      </netTcpBinding>
      
      <basicHttpBinding>
        <binding name="OperatorInterfaceSoap" closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288" maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384"/>
          <security mode="Transport">
            <transport clientCredentialType="Basic"/> 
          </security>              
        </binding>
          
      </basicHttpBinding>
 
    </bindings>
 
    
    <client> 
      
      <endpoint address="net.tcp://Tosomethingelse
        </identity>
      </endpoint>
      <endpoint address="https://b2bsomething else
      </endpoint>
      <endpoint address="https://rpm.pjm.com/erpm/services/query"  binding="basicHttpBinding"
                contract="jobs.IRPM" >
      </endpoint>
 
      
    </client>     
  </system.serviceModel>

And then I could write business code just like normal:

        private List<queryResponse> GetResponses(List<Account> accounts, DateTime date)
        {
            List<queryResponse> result = new List<queryResponse>();
            foreach (Account account in accounts)
            {
                MPRClient r = new MPRClient();
                r.ClientCredentials.UserName.UserName = account.Username;
                r.ClientCredentials.UserName.Password = account.Password;
                result.Add(r.query(new queryRequest(
                    new QueryPeakLoadSummary[] { 
                         new QueryPeakLoadSummary{ CompanyName = account.Company, Day = date.ToString("yyyy-MM-dd") }
                     }
                    )));  // day must be two digits, hence the yyyy-MM-dd format
            }
            return result;
        }

Tags:

WCF

Loading log files into Oracle

by MikeHogg 8. March 2012 17:51

One of my last Oracle projects was pretty neat, because I started working with the new 11g feature, external tables.  This allowed Oracle to mount a file as a table, and was incredibly fast compared to using sqlloader, which was what we had been doing for years. 

In this case I was loading unix log files on the order of millions of rows for each daily file, by loading the external table and then processing that table into our permanent logging table.  The data sets involved here were pretty big, so the usual manipulation, like inserts of millions of rows, would take hours and hours.  Changing from SQL*Loader to external tables saved a lot of time, but I still had a lot of inserts to make, so I added some tweaks, like dropping indices and recreating them afterwards, and then updating stats on the new indices for Oracle’s query optimizer.

Once I had the files shared on a network location accessible to this Oracle unix server, I loaded them with this proc:

  procedure LoadExtTable(filedate varchar2) is
  begin
    -- note: no trailing semicolon inside the dynamic DDL string, or execute immediate raises ORA-00911
    execute immediate 'create table mdf_meta_activity_dump ( IP_ADDRESS VARCHAR2(255), PID NUMBER,' ||
                      'SYMBOL VARCHAR2(255), USER_ID VARCHAR2(50), APPLICATION VARCHAR2(60),' ||
                      'HOSTNAME VARCHAR2(60), SYMBOL_MESSAGE VARCHAR2(255), SYMBOL_FORMAT VARCHAR2(255),' ||
                      'SCRIPT_NAME VARCHAR2(255), PROCMON_PROCESS VARCHAR2(255), TIME_STAMP DATE ) ' ||
                      'organization external (type oracle_loader default directory exttabdir access parameters ' ||
                      '(RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED by ''|'') ' ||
                      'LOCATION (''\someplace\somedb\udpserver\udp.txt''))';
  end;

I would process the dump with this proc, which also updated two other tables and was written to be re-runnable, so that, in case of failure or just manual mistake, running the same file of millions of rows would not result in a mess of a million duplicates. 

You will also see here Oracle bulk statements, and logging, which allowed someone to monitor the process in real time, as it usually took some minutes or hours.

  procedure ProcessActivityDump is
    
    cursor c_log(p_file_date date) is 
           select s.id, d.user_id, d.symbol_message, d.time_stamp, p_file_date, trunc(d.time_stamp), to_char(d.time_stamp,'M')
            from mdf_meta_symbol s
            join mdf_meta_activity_dump d
              on s.name = d.symbol
              ;
              
  type t_activity is table of c_log%rowtype;
  r_activity t_activity;
  v_count number; 
  
  v_file_date date;
  
  begin
    -- PROCS
    merge into mdf_meta_proc p
    using (select distinct procmon_process, script_name from mdf_meta_activity_dump) d
    on (p.procmonjob = d.procmon_process and p.script = d.script_name)    
    when not matched then 
      insert (id, procmonjob, script, active_fg, insert_date, audit_date, audit_user)
      values(seq_mdf_id.nextval, procmon_process, script_name, 1, sysdate, sysdate, 'PKG_META');
    
    Log_This('PKG_META.ProcessActivityDump','MDF_META_PROC new rows inserted: ' || sql%rowcount ,'INFO');
    
    -- SYMBOL, rerunnable
    merge into mdf_meta_symbol s
    using (select distinct symbol, p.id from mdf_meta_activity_dump join mdf_meta_proc p on procmon_process = procmonjob and script_name = script) d
    on (s.name = d.symbol)
    when not matched then 
      insert(id, name, proc_id) values (seq_mdf_id.nextval, symbol, d.id);
    Log_This('PKG_META.ProcessActivityDump','MDF_META_SYMBOL new rows inserted: ' || sql%rowcount ,'INFO');    
    
    -- ACTIVITY
    select file_date into v_file_date from (
                     select trunc(time_stamp) file_date, count(*) 
                       from mdf_meta_activity_dump 
                      group by trunc(time_stamp) 
                      order by count(*) desc) where rownum = 1;
                        
    -- delete existing activity for this day, to make rerunnable   
    delete from mdf_meta_activity where file_date = v_file_date; 
    Log_This('PKG_META.ProcessActivityDump','Dump_Date: ' || v_file_date || ' rows deleted in preparation for new load: ' || sql%rowcount ,'INFO');
        
    -- now add the activity, logging only every 200k or so
    -- maybe need to drop idx and recreate after
    -- create index IDX_MDF_META_ACT_SYMID on MDF_META_ACTIVITY (SYMBOL_ID)
    open c_log(v_file_date);    
    v_count := 0;
    loop 
    fetch c_log bulk collect into r_activity limit 1000;
    exit when r_activity.count = 0;
    
      forall idx in 1..r_activity.count
        insert into mdf_meta_activity
        values   r_activity(idx);
            
      v_count := v_count + r_activity.count;
      if Mod(v_count, 200000) = 0  then
        Log_This('PKG_META.ProcessActivityDump','Cumulative insert now at ' || v_count || ' rows','INFO');
      end if;
          
    end loop; 
    close c_log;
   
    RebuildIndices;
    GatherStats;
    
  end ProcessActivityDump;
  

And that’s it.

Tags:

Oracle | Automation
