Quartz Scheduler

by MikeHogg, 26 October 2013 13:46


Quartz Scheduler is a very robust and mature job scheduler originally created in Java.  I’ve only just found out about it and find it useful enough to make a note here for future use.  You can find lots of tutorials, samples, and users online for it.  It is ported to .NET as Quartz.NET, and adding it to your project is as easy as a NuGet package.  Simple scheduling works right out of the box, and it handles complicated cases too, just about every scheduling case I've seen.  JobDetails, Jobs, and Triggers are all classes you can mix and match in different combinations.  It also handles persistence if you want to hook up an ADO store (several SQL and NoSQL options are available), and logging is built in through another NuGet package, Common.Logging.
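Out of the box it really is that simple; here is a minimal sketch using the Quartz.NET 2.x fluent API (HelloJob, the identity names, and the thirty-second schedule are made-up examples, not from any real project):

using System;
using Quartz;
using Quartz.Impl;

// A made-up job for illustration; any class implementing IJob works the same way.
public class HelloJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        Console.WriteLine("Hello from Quartz at {0}", DateTime.Now);
    }
}

public static class SchedulerDemo
{
    public static void Start()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<HelloJob>()
            .WithIdentity("helloJob", "demo")
            .Build();

        // fire every thirty seconds, forever
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("helloTrigger", "demo")
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInSeconds(30).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}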

 

If you want to inject services into your IJobs, though, you will need to create your own IJobFactory and start the scheduler that way:

Dim schFactory As ISchedulerFactory = _container.Resolve(Of ISchedulerFactory)()
_scheduler = schFactory.GetScheduler()
_scheduler.JobFactory = New ScheduleJobFactory(_container)

Your factory implements NewJob (and, with Quartz 2.2, ReturnJob, which can do nothing unless you have dispose requirements), and this is where you create the job with your container of choice (Autofac here) so the container can inject what the job needs...

public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
        {
            var jobtype = bundle.JobDetail.JobType;
            try
            {
                var schjobtype = typeof(MyJob<>).MakeGenericType(jobtype);
                var schjob = (IJob)Activator.CreateInstance(schjobtype, _container);
                return schjob;
            }
            catch (Exception e)
            {
                using (var l = _container.BeginLifetimeScope())
                {
                    // resolve the logger from the scope we just opened, not the root container
                    var logger = l.Resolve<ILogger>();
                    logger.Error(e);
                }
            }
            return new NoOpJob();
        }

... Your JobFactory gets the container from its constructor:

public class ScheduleJobFactory : IJobFactory
    {
        ILifetimeScope _container;
        public ScheduleJobFactory(ILifetimeScope container)
        {
            _container = container;
        }
        // NewJob (shown above) and ReturnJob also live in this class
    }

I wouldn't have figured this part out on my own: your JobFactory points to a generic job class MyJob<T> that does the container resolve of T inside its Execute, a clever pattern.

public class MyJob<T> : IJob where T : IJob
    {
        ILifetimeScope _container;
        public MyJob(ILifetimeScope container)
        {
            _container = container;
        }
        public void Execute(IJobExecutionContext context)
        {
            using (var lscope = _container.BeginLifetimeScope())
            {
                var someJob = lscope.Resolve<T>();
                someJob.Execute(context);
            }
        }
    }

 

Your jobs still just implement IJob and don't reference the generic wrapper... register them like normal, and specify these concrete classes, in your quartz.jobs.xml:

 

...
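For future reference, a job-and-trigger entry in the Quartz.NET 2.x XML schema looks roughly like this (the job and trigger names and the MyApp type are made up):

<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://quartznet.sourceforge.net/JobSchedulingData" version="2.0">
  <schedule>
    <job>
      <name>nightlyCleanup</name>
      <group>default</group>
      <job-type>MyApp.Jobs.NightlyCleanupJob, MyApp</job-type>
      <durable>true</durable>
      <recover>false</recover>
    </job>
    <trigger>
      <cron>
        <name>nightlyCleanupTrigger</name>
        <group>default</group>
        <job-name>nightlyCleanup</job-name>
        <job-group>default</job-group>
        <cron-expression>0 0 2 * * ?</cron-expression>
      </cron>
    </trigger>
  </schedule>
</job-scheduling-data>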

 

Diagnostics: if you have trouble getting your job XML files to work (or are trying to use the deprecated 1.0-version XML with upgraded 2.0 Quartz) and want to see the logging, you can add Common.Logging config to your app.config.  The section definition:

 

<sectionGroup name="common">
<section name="logging" type="Common.Logging.ConfigurationSectionHandler, Common.Logging" />
</sectionGroup>

The config-

<common>
<logging>
<factoryAdapter type="Common.Logging.Simple.ConsoleOutLoggerFactoryAdapter, Common.Logging">
<arg key="level" value="DEBUG" />
<arg key="showLogName" value="true" />
<arg key="showDataTime" value="true" />
<arg key="dateTimeFormat" value="yyyy/MM/dd HH:mm:ss:fff" />
</factoryAdapter>
</logging>
</common>

 

 

I've seen lots of online users hook up their common.logging to log4net to monitor the scheduler process, and there is a FactoryAdapter for log4net if you want to go that route.
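Swapping in the log4net adapter is just a different factoryAdapter entry, something roughly like this (the Common.Logging.Log4Net package and assembly name have varied across versions, so treat this as a sketch):

<common>
  <logging>
    <!-- requires the Common.Logging.Log4Net package; "INLINE" means use the log4net config already in this file -->
    <factoryAdapter type="Common.Logging.Log4Net.Log4NetLoggerFactoryAdapter, Common.Logging.Log4Net">
      <arg key="configType" value="INLINE" />
    </factoryAdapter>
  </logging>
</common>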

 

While we are in config, note that Quartz looks for configuration in several places: first in a Java-style quartz.config file, then in app.config, then in a few other places.  If you want to remove the quartz.config file and use just the .NET style, this is a sample app.config section definition-

<section name="quartz" type="System.Configuration.NameValueSectionHandler, System, Version=1.0.5000.0,Culture=neutral, PublicKeyToken=b77a5c561934e089" />

 

and config-

<quartz>
<add key="quartz.scheduler.instanceName" value="_scheduler" /> <!-- whatfor-->
<!--     Configure Thread Pool -->
<add key="quartz.threadPool.type" value="Quartz.Simpl.SimpleThreadPool, Quartz" />
<add key="quartz.threadPool.threadCount" value="10" />
<add key="quartz.threadPool.threadPriority" value="Normal" />
<!--     Configure Job Store --><!--
    <add key="quartz.jobStore.type" value="Quartz.Simpl.RAMJobStore, Quartz" />
      <add key="quartz.plugin.xml.type" value="Quartz.Plugin.Xml.JobInitializationPlugin, Quartz" />-->
<add key="quartz.plugin.xml.fileNames" value="~/config/ScheduledJobs.config" />
<!--<add key="quartz.plugin.xml.scanInterval" value="10" />-->
<add key="quartz.plugin.xml.type" value="Quartz.Plugin.Xml.XMLSchedulingDataProcessorPlugin, Quartz" />
</quartz>

That uses the defaults.  The XMLSchedulingDataProcessorPlugin, I believe, is the included jobs.xml reader, and it is required if you use XML, as the default is something else.
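Related, since persistence came up earlier: switching from the in-memory RAMJobStore to the SQL Server AdoJobStore is also just config keys.  Roughly, from memory of the Quartz.NET 2.x key names (the connection string is a placeholder):

<!-- sketch only: AdoJobStore keys for Quartz.NET 2.x; adjust delegate/provider for your database -->
<add key="quartz.jobStore.type" value="Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" />
<add key="quartz.jobStore.driverDelegateType" value="Quartz.Impl.AdoJobStore.SqlServerDelegate, Quartz" />
<add key="quartz.jobStore.tablePrefix" value="QRTZ_" />
<add key="quartz.jobStore.dataSource" value="default" />
<add key="quartz.dataSource.default.provider" value="SqlServer-20" />
<add key="quartz.dataSource.default.connectionString" value="Server=...;Database=...;Trusted_Connection=True;" />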

NHibernate midstream

by MikeHogg, 7 March 2013 14:30

NHibernate is an active ORM product, and has been for some time.  Coming to it as I have at version 3, it can be difficult to digest, especially as you are soaking up new syntax, even if the patterns are not unfamiliar.  There are many options and support libraries to choose from, and the core library has changed significantly from version one to two and again to three.

So blindly googling lines of code and quickstarts can easily get you lost in dark woods unless you pay attention to the dates of the posts and understand how NH evolved over time.

First, the patterns.  NH works with a Repository pattern by using db sessions.  Inject the NH SessionFactory into your repositories and then your repos can run all the NH query goodness instead of you writing a whole bunch of db-layer code.  In its simplest form, you can use an NHHelper class to statically create a SessionFactory inside your repo, or you can use constructor injection and let your injection library do that for you automatically, like here with Ninject:

public class NHibernateSessionFactory
    {
        public ISessionFactory GetSessionFactory()
        { // config'd later in this post }
    }

public class NHibernateSessionFactoryProvider : Ninject.Activation.Provider<ISessionFactory>
    {
        protected override ISessionFactory CreateInstance(Ninject.Activation.IContext context)
        {
            var sessionFactory = new NHibernateSessionFactory();
            return sessionFactory.GetSessionFactory();
        }
    }
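For completeness, here is a rough sketch of how that provider and the session might be bound in a Ninject module so the constructor injection below works; the module name and scoping are assumptions, not from the original project:

using Ninject;
using Ninject.Modules;
using NHibernate;

public class DataModule : NinjectModule
{
    public override void Load()
    {
        // one SessionFactory for the app, built by the provider above
        Bind<ISessionFactory>()
            .ToProvider<NHibernateSessionFactoryProvider>()
            .InSingletonScope();

        // hand each repository a session opened from that factory
        // (per-request scoping would be typical in a web app; kept simple here)
        Bind<ISession>()
            .ToMethod(ctx => ctx.Kernel.Get<ISessionFactory>().OpenSession());
    }
}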

 

then your Repo looks something like this:

 

public class Repository<T> : IRepository<T> where T : class
    {
        private readonly ISession session;
        public Repository(ISession s)
        {
            session = s;
        }
    }
 

So GetSessionFactory is where all the NH code is config'd.

Here are your options for configuring...

Originally, everything was XML and config based, so you would have the base new Configuration() call in your code, and your web.config would have a section with the required and optional properties in it.

 

 
public ISessionFactory GetSessionFactory()
        {
            var config = new Configuration();
            return config.BuildSessionFactory();
        }

 

Then your web.config would have all the hookup:

 

 <configSections>
<section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate" />
</configSections>
<connectionStrings>
<add name="CONN" connectionString="blahblah" providerName="System.Data.SqlClient" />
</connectionStrings>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<session-factory>
<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="connection.driver_class">NHibernate.Driver.SqlClientDriver</property>
<property name="dialect">NHibernate.Dialect.MsSql2008Dialect</property>
<property name="connection.connection_string_name">CONN</property>
<property name="show_sql">true</property>
</session-factory>
</hibernate-configuration>
 

Then came Fluent NHibernate, and config moved into code.  So, if you use that library, you no longer need the web.config section and you can just use this in your SessionFactory...

 

        
public ISessionFactory GetSessionFactory()
        {
            // fluent style config
            FluentNHibernate.Cfg.FluentConfiguration fluentConfiguration = FluentNHibernate.Cfg.Fluently.Configure()
                                                   .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2008.ShowSql()
                                                    .ConnectionString(c => c.FromConnectionStringWithKey("CONN")))
                                                   // only need one FromAssembly to point to all the maps; these are Fluent maps inheriting from ClassMap
                                                   .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Maps.EventMap>())
                                                   .ExposeConfiguration(cfg => cfg.SetProperty("adonet.batch_size", "20"))
                                                   .ExposeConfiguration(c => c.SetProperty("generate_statistics", "true"));
                                                   //.ExposeConfiguration(BuildSchema)
            return fluentConfiguration.BuildSessionFactory();
        }

In addition to config, when setting up your NH environment you need to init your maps.  If you already have database tables, the first order of business is to get NMG, the NHibernate Mapping Generator.  It reads the db and outputs entity class files and class maps, and it is a huge help.  Note in the screenshot below, in the upper right-hand side of the preferences, how you can choose which format of mapping files you want.

 

Originally, these would be XML files, named by convention something.hbm.xml.  We no longer need to do that.  You also see FluentNHibernate and the newer Loquacious format.  FNH uses the ClassMap parent class, and I believe that is the convention that allows AddFromAssemblyOf<>() to work.  Loquacious uses the ClassMapping parent, and I believe it offers more flexibility in filtering which types from the assembly to map.
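To make the two formats concrete, here is the same made-up Event entity mapped both ways; the table and property names are purely illustrative:

public class Event
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
}

// Fluent NHibernate style: inherits ClassMap<T>
public class EventMap : FluentNHibernate.Mapping.ClassMap<Event>
{
    public EventMap()
    {
        Table("EVENTS");
        Id(x => x.Id);
        Map(x => x.Name);
    }
}

// Loquacious / mapping-by-code style: inherits ClassMapping<T>
public class EventMapByCode : NHibernate.Mapping.ByCode.Conformist.ClassMapping<Event>
{
    public EventMapByCode()
    {
        Table("EVENTS");
        Id(x => x.Id, m => m.Generator(NHibernate.Mapping.ByCode.Generators.Identity));
        Property(x => x.Name);
    }
}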

With NH 3 the project merged in the Loquacious (mapping-by-code) codebase that was previously a separate library, but it is not very mature yet, so most of the posts I have seen use it for simpler project requirements or in tandem with a large base of existing Fluent mappings.  I could not figure out a way to add MBC maps to FluentMappings, so in case you ever want to do that, here is code that initializes and maps both ways, using a base NH Configuration (with web.config) for the MBC maps and then building your FluentConfiguration from that base, adding the Fluent maps in that process...

 
public ISessionFactory GetSessionFactory()
        {
 
 // old style config 
            var mm = new NHibernate.Mapping.ByCode.ModelMapper();
            Type[] mappingTypes = typeof(Maps.EventMap).Assembly.GetExportedTypes().Where(t => t.Name.EndsWith("Map")).ToArray();
            mm.AddMappings(mappingTypes);
            var config = new Configuration();
            config.AddMapping(mm.CompileMappingForAllExplicitlyAddedEntities());
 // fluent style config.  We use both to apply loquacious mapping by code to old style config, and then create FluentConfiguration with it
            FluentNHibernate.Cfg.FluentConfiguration fluentConfiguration = FluentNHibernate.Cfg.Fluently.Configure(config)
                                                   .Database(FluentNHibernate.Cfg.Db.MsSqlConfiguration.MsSql2008.ShowSql()
                                                    .ConnectionString(c => c.FromConnectionStringWithKey("CONN")))
                                                   // only need one FromAssembly to point to all the maps; these are Fluent maps inheriting from ClassMap
                                                   .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Maps.EventMap>()) 
                                                   .ExposeConfiguration(cfg => cfg.SetProperty("adonet.batch_size", "20"))
                                                   .ExposeConfiguration(c => c.SetProperty("generate_statistics", "true"));
 //.ExposeConfiguration(BuildSchema) 
 return fluentConfiguration.BuildSessionFactory();
        }

Static Repositories (vs Instance)

by MikeHogg, 30 July 2012 10:10

I find that for small projects I always used to create static repositories.  It can be a speedy mechanism to develop with, and I probably took the lead from MS's static treatment of its Membership classes, plus years of working against Oracle databases when no .NET ORM existed that would work against Oracle...

 

I like static because I use repositories to return typed collections, and they, with LINQ where necessary, provide easy one-liners in most cases where I interface with data access and don't need to cache or watch my calls for performance, like in web sites.  In web cases, note that on each POST or GET you are recreating all objects anyway, so I would just add predicate calls to the static repo.  Here's what many of my repo Getxxx methods look like:

 

 

internal static IEnumerable<Models.FileModel> GetFileModels(string username = null)
        {
            List<SqlParameter> parms = new List<SqlParameter>();
            parms.Add(new SqlParameter("@EmailAddress", username));
            IDataReader r = DatabaseHelper.GetDataReader("sp_GetFileModels", parms);
            DataTable t = new DataTable();
            t.Load(r);
            var result = from DataRow row in t.Rows
                         select new Models.FileModel
                         {
                             Id = Convert.ToInt32(row["Id"]),
                             RequestDate = DateTime.Parse(row["RequestDate"].ToString()),
                             TitleName = row["TitleName"].ToString(),
                             SomethingId = Convert.ToInt16(row["SomethingId"]),
                             SomethingTypeId = Convert.ToInt16(row["SomethingTypeId"]),
                             FileName = row["Name"].ToString(),
                             FileTypeId =  Convert.ToInt32(row["FileTypeId"]),
                              FileTypeName = row["FileTypeName"].ToString(),
                              ContentEncoding = row["ContentEncoding"].ToString(),
                              ContentLength = Convert.ToInt32(row["ContentLength"]),
                             ActiveFlag = Convert.ToBoolean(row["ActiveFlag"]),
                             EnabledFlag = Convert.ToBoolean(row["EnabledFlag"]),
                             SomeCompanyName = row["SomeCompanyName"].ToString(),
                             SomeNumber = row["SomeNumber"].ToString(),
                             EstimatedNumberOfSomethings = MH.lib.DatabaseHelper.ConvertDBNullsToNInt(row["EstimatedNumberOfSomethings"]),
                             EstimatedDeadline = Convert.IsDBNull(row["EstimatedDeadline"]) ? new DateTime() : DateTime.Parse(row["EstimatedDeadline"].ToString()),
                             FinalDeadline = DateTime.Parse(row["FinalDeadline"].ToString()),
                             ActivityDate = DateTime.Parse(row["ActivityDate"].ToString()), 
                         };
 return result;
        }

 

 

And then in my business layer I simply call Getxxx, and if I need a single item or a filtered set I just add a LINQ predicate.

 

 

model = lib.Repository.GetFileModels(UserNameOrNullIfAdmin()).FirstOrDefault(f =>
                    f.SomethingId == somethingid &&
                    (f.FileTypeId == filetypeid || ( filetypeid == 0 && f.SomethingTypeId == lib.CONST.ACONSTANTID &&// in case of xyz filetypeid arg is zero
                                                        (f.FileTypeId == lib.CONST.SOMECONSTANTID || f.FileTypeId == lib.CONST.ANOTHERCONSTANTID)))
                    && f.ActiveFlag == true);
 if (model == null)
                {
                    Models.Something s = lib.Repository.GetSomethings(UserNameOrNullIfAdmin(), somethingid).FirstOrDefault();
                    model = new Models.FileModel
                  {
                      SomethingId = somethingid,
etc., etc.

 

I know this isn't pushing my predicate all the way down to the database like Entity Framework would, so it is something to watch for.  In my experience, most small cases grow to the order of several hundred business objects and only require light attention to performance.  And this handles most non-business-related applications.

 

But most of the MVC framework codebases and examples I have seen shared or posted on blogs happen to use the instance repository pattern... and I hate not knowing why something is the way it is, so... what are the reasons NOT to use static repos?

 

I searched for a while, but I only found maybe one informative post, with a couple good answers.

http://stackoverflow.com/questions/5622592/why-arent-data-repositories-static

 

It makes a lot of sense.  Instantiated repositories can be mocked, and your front layers tested.  That is the number one reason.  I'm not sure that instantiation is a requirement for caching, but it makes sense if you are getting into a site that large.  The second reason, and for me this is a big one, is that you can use inheritance and interfaces to practice DRY and the Repository pattern and not have to write all of your boilerplate data access code in each repository.
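A rough sketch of that shape, with purely illustrative names: the interface is the seam that lets tests hand the business layer a fake, and the generic base is where the shared boilerplate lives once.

using System.Collections.Generic;
using System.Linq;

// The interface is what makes the front layers testable.
public interface IRepository<T> where T : class
{
    IEnumerable<T> GetAll();
    void Add(T item);
}

// A generic base class keeps shared plumbing (connections, logging,
// mapping helpers) in one place, so concrete repositories stay thin.
public abstract class RepositoryBase<T> : IRepository<T> where T : class
{
    public abstract IEnumerable<T> GetAll();
    public abstract void Add(T item);
}

// An in-memory implementation doubles as a cheap test fake.
public class InMemoryRepository<T> : RepositoryBase<T> where T : class
{
    private readonly List<T> _items = new List<T>();
    public override IEnumerable<T> GetAll() { return _items.ToList(); }
    public override void Add(T item) { _items.Add(item); }
}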

 

The first point speaks to testing, and I haven't seen or heard of any local developers who work at a place that actually promotes unit tests.  But I do write them on occasion in some of my projects, and it helps my writing style.  Food for thought...

Encryption

by MikeHogg, 31 May 2012 09:50

A really interesting project had me implementing encryption algorithms for a Point Of Sale vendor interface.  It was the closest thing I’ve done to ‘computer science’ and I was fascinated at manipulating integers that were one thousand digits long.  The vendor used a symmetric encryption wrapped in an asymmetric method, plus an additional byte manipulation algorithm, making it a few layers deep.  I used a proven Big Integer implementation, and some of the MS encryption libraries for certain steps of the algorithm, but a lot of it was byte level manipulation. 

In one of my favorite parts of the algorithm, I used a bit shift operator.  Never found a use for that in Business Intelligence!

private static byte[] ApplyOddParity(byte[] key)
        {
            for (var i = 0; i < key.Length; ++i)
            {
                int keyByte = key[i] & 0xFE; // clear the low (parity) bit with the 0xFE (254) mask
                var parity = 0;
                for (var b = keyByte; b != 0; b >>= 1) parity ^= b & 1; // shift right until empty, XORing to get the parity of the remaining bits
                key[i] = (byte)(keyByte | (parity == 0 ? 1 : 0)); // set the low bit so the byte ends up with an odd number of 1 bits (DES key parity)
            }
            return key;
        }
public static string EncryptEAN(string eanhex, string decryptedmwkhex)
        {
 byte[] decryptedmwk = ConvertHexStringToByteArray(decryptedmwkhex);
 byte[] asciiean = Encoding.ASCII.GetBytes(eanhex.PadRight(8, ' '));
 
            TripleDESCryptoServiceProvider p = new TripleDESCryptoServiceProvider();
            p.Padding = PaddingMode.None;
            p.IV = new byte[8];
 // p.Mode = CipherMode.CBC; //  default 
 byte[] random = p.Key;// testing: random = FDCrypt.ConvertHexStringToByteArray("95:e4:d7:7c:6d:6c:6c") 
 byte checksum = GetCheckSum(asciiean);
 byte[] eanblock = new byte[16];
            Array.Copy(random, 0, eanblock, 0, 7);
            eanblock[7] = checksum;
            Array.Copy(asciiean, 0, eanblock, 8, 8);// BitConverter.ToString(eanblock)
            p.Key = decryptedmwk;
            ICryptoTransform e = p.CreateEncryptor();
 
 byte[] result = e.TransformFinalBlock(eanblock, 0, 16);
 return BitConverter.ToString(result, 0).Replace("-",String.Empty);
        }
public static string GetEncryptedMWK(string decryptedmwkhex, byte[] kek)
        {
 byte[] decryptedmwk = FDCrypt.ConvertHexStringToByteArray(decryptedmwkhex);
            TripleDESCryptoServiceProvider p = new TripleDESCryptoServiceProvider();
            p.Padding = PaddingMode.None;
            p.IV = new byte[8];
 // p.Mode = CipherMode.CBC; //  default 
 byte[] random = p.Key;//random = FDCrypt.ConvertHexStringToByteArray("e7:11:ea:ff:a0:ca:c3:ba")
            p.Key = decryptedmwk;// BitConverter.ToString(decryptedmwk)
            ICryptoTransform e = p.CreateEncryptor();
 byte[] checkvalue = e.TransformFinalBlock(new byte[8], 0, 8);// BitConverter.ToString(checkvalue) 
 byte[] keyblock = new byte[40];
            Array.Copy(random, keyblock, 8);
            Array.Copy(decryptedmwk, 0, keyblock, 8, 24);
            Array.Copy(checkvalue, 0, keyblock, 32, 8);// BitConverter.ToString(keyblock)
 
            p.Key = kek;
            e = p.CreateEncryptor();
 byte[] encryptedkeyblock = e.TransformFinalBlock(keyblock, 0, 40);
 string result = BitConverter.ToString(encryptedkeyblock,0, 40);
 return result.Replace("-",String.Empty); // should be 81 bytes inc null term?
        }
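The snippets above lean on ConvertHexStringToByteArray and GetCheckSum helpers that aren't shown; as a rough idea, the hex helper probably looks something like this (the colon-separated input format is assumed from the test values in the comments):

// Sketch of the hex helper assumed above; accepts "AB:CD:EF" or "ABCDEF" style strings.
private static byte[] ConvertHexStringToByteArray(string hex)
{
    string clean = hex.Replace(":", string.Empty).Replace("-", string.Empty);
    byte[] result = new byte[clean.Length / 2];
    for (int i = 0; i < result.Length; i++)
    {
        result[i] = Convert.ToByte(clean.Substring(i * 2, 2), 16);
    }
    return result;
}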

 

For testing, I built a UI in WPF.  Here you see how I wanted to encapsulate all the encryption stuff in a separate library (later to be used in a web site), yet needed a UI stub to go through the lengthy 18-step, two-month-long testing and certification process with the vendor.  I knew I could leverage my experience with the MVVM pattern in WPF to expose over 20 fields and half a dozen steps in fast iterations as we went through the vetting process, and the WPF UI became more of a helpful tool than a code maintenance drain like most UIs.


Tags:

WPF | C# | Encryption

Password hashing

by MikeHogg, 11 May 2012 15:08

After some research this year (the last time I had to write any password system was in 2006 or 2007), I am under the impression that BCrypt is the de facto standard for password hashing, and it is available in C#.  The main point in BCrypt's favor is that it has a difficulty (work) factor built in.  This prevents specialized hardware from brute-forcing attempts at sub-millisecond speeds if it ever gets hold of your hashes, and so limits dictionary attacks.

 

Using it is simple.  I drop this BCrypt.cs file into each of my projects.  BTW, in its header you will find links to the project docs and license info.

BCrypt.cs (34.97 kb)

 

Now, your Membership provider just needs to store passwords BCrypted, like so

public static bool SavePassword(string username, string newPassword)
        {
 string salt = lib.BCrypt.GenerateSalt(6);
 string hash = lib.BCrypt.HashPassword(newPassword, salt);
 return lib.DatabaseHelper.SavePassword(username, hash);
        }

(the 6 passed to GenerateSalt is the work/difficulty factor; the salt itself is random and gets embedded in the resulting hash)

... and use the BCrypt library to test passwords with its Verify() method, like this:

public override bool ValidateUser(string username, string password)
        {
            string hash = GetPassword(username, null);
            if (hash.Equals(string.Empty)) return false;
            return lib.BCrypt.Verify(password, hash);
        }

Tags:

C# | Encryption

Miscellaneous code to tweak IE access rights on random servers

by MikeHogg, 23 January 2012 21:33

I set up a framework where developers could write data parsers for a variety of sources that could be scheduled, normalized, logged, and archived for various purposes.  There were hundreds of data sources and jobs, so a lot of them had enough similarity that a framework with standard libraries made sense.  A large portion of these were web scrapes (I used WatiN for those, a great browser automation solution for our purposes).  I would run into problems because our scheduler ran on a server farm of hundreds of machines, with new machines/instances being created at intervals, so we basically had to write everything into our jobs.  Here are some of the hacks I put into a Web library to get the usual https sites working with the web scrapes, where I had a variety of popup problems (I needed to turn off IE’s built-in popup blocker) and file download problems (add the domain to trusted sites)...  They are all probably obsolete with Server 2012 now, but they might come in handy for the next few years still...

private static bool IsPopupEnabled(string domain)
        {
 string keylocation = @"Software\Microsoft\Internet Explorer\New Windows\Allow";
            Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
            Microsoft.Win32.RegistryKey parent = cu.OpenSubKey(keylocation);
 if (parent != null) return parent.GetValue(domain) != null;
 else return false;
        }
private static bool EnablePopup(string domain)
        {
            string keylocation = @"Software\Microsoft\Internet Explorer\New Windows\Allow";
            Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
            Microsoft.Win32.RegistryKey parentkey = cu.CreateSubKey(keylocation);
            // the Allow list stores an empty REG_BINARY value named after the domain
            parentkey.SetValue(domain, new byte[0], Microsoft.Win32.RegistryValueKind.Binary);
            return IsPopupEnabled(domain);
        }
private static bool TrustedSiteAddition(string domain)
        {
 const string domainsKeyLocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains";
            Microsoft.Win32.RegistryKey currentUserKey = Microsoft.Win32.Registry.CurrentUser;
            Microsoft.Win32.RegistryKey parentKey = currentUserKey.OpenSubKey(domainsKeyLocation, true);
            Microsoft.Win32.RegistryKey key = parentKey.CreateSubKey(domain);
 object objSubDomainValue = key.GetValue("http");
 if (objSubDomainValue == null || Convert.ToInt32(objSubDomainValue) != 0x02)
            {
                key.SetValue("http", 0x02, Microsoft.Win32.RegistryValueKind.DWord);
            }
            objSubDomainValue = key.GetValue("https");
 if (objSubDomainValue == null || Convert.ToInt32(objSubDomainValue) != 0x02)
            {
                key.SetValue("https", 0x02, Microsoft.Win32.RegistryValueKind.DWord);
            }
 return IsTrusted(domain);
        }
private static bool IsTrusted(string domain)
        {
 const string domainsKeyLocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains";
 string keyLocation = string.Format(@"{0}\{1}", domainsKeyLocation, domain);
            Microsoft.Win32.RegistryKey currentUserKey = Microsoft.Win32.Registry.CurrentUser;
            Microsoft.Win32.RegistryKey foundRegistryKey = currentUserKey.OpenSubKey(keyLocation, false);
 if (foundRegistryKey != null)
 return true;
 else
 return false;
        }
private static bool CheckFileDownloadEnabled()
        { 
 string keylocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\2";
            Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
            Microsoft.Win32.RegistryKey parent = cu.OpenSubKey(keylocation,true);
 if (parent != null && (int)parent.GetValue(@"2200") != 0x00)
            {
                parent.SetValue(@"2200", 0x00);
 return true;
            }
 else if (parent != null && (int)parent.GetValue(@"2200") == 0x00)
            {
 return true;
            }
 else return false;
        }
public static bool IEFullyEnabled(string domain)
        {
 if (!IsPopupEnabled(domain) && !EnablePopup(domain)) return false;
 if (!IsTrusted(domain) && !TrustedSiteAddition(domain)) return false;
 if (!CheckFileDownloadEnabled()) return false;
 return true;
        }

Tags:

Automation | C#

Using Moq to write Unit tests in VS2008

by MikeHogg, 5 October 2011 18:26

I’ve read about Test Driven Development, and after trying it out myself I have noticed a difference in the code that I write, even if I am not using it for a particular project.  My classes have become simpler, my functions more atomic.  I write more for the Open/Closed Principle, that my classes are closed for modification but open for extension, and I find my projects are more organized and development even seems quicker: the base classes get written faster, and then additional features drop right into place as if they were expected, rather than the spaghetti code I used to eventually end up with.

There is a difference between unit tests and other kinds of tests.  I have heard other developers say that they can’t test because most of their code is dependent upon some specific data layer or web UI, and I think they might not know that integration testing is different from unit testing.  This is what I meant about changing my writing style: TDD prompted me to write smaller, more abstract classes with unit-testable atomic functions, and then to write application-specific interfaces and more specific inherited classes.

In some cases, though, your tests cross the line, and you need integration testing, but there are several Mocking libraries out there now that make it easy to do this.  The main ones all appear to be similar.  Here is a simple project where I used Moq to fake my Data Service.  The Project had a few components.  The end user UI was WPF, and was built with MVVM. 

Because these UI-related elements are decoupled from the actual UI in MVVM, I can unit test them.  Here are some tests for the MainViewModel, which takes a user via constructor dependency injection.

        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void CanRestartServiceTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            Assert.IsTrue(target.CanRestartService());
            target = new MainViewModel_Accessor(ReadonlyUser);
            Assert.IsFalse(target.CanRestartService());
        }
        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void CanLoadAllTablesTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            Assert.IsTrue(target.CanLoadAllTables());
 
            target._backgroundmanager.IsRunning = true;
            Assert.IsFalse (target.CanLoadAllTables());
            target = new MainViewModel_Accessor(ReadonlyUser);
            Assert.IsFalse(target.CanLoadAllTables());
        }
 
        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void StartProgressBarTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            target.StartProgressBar();
            Assert.IsTrue(target.ProgressVisibility == Visibility.Visible);
            Assert.IsTrue(target.ProgressValue == 0);
        }
        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void BM_TimerFinishedTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            TimerFinishedEventArgs e = new TimerFinishedEventArgs();
            e.Message = "TickTock";
            target.BM_TimerFinished(new object(), e);
            Assert.IsTrue(target.ProgressVisibility == Visibility.Hidden);
            Assert.IsTrue(target.Dialog.Visibility == Visibility.Visible);
            Assert.IsTrue(target.Dialog.Title.Contains("Exceeded"));
        }
        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void HandleErrorTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            target.HandleError(new Exception("Test Exception"));
            Assert.IsTrue(target.Error.Visibility == Visibility.Visible);
            Assert.IsTrue(target.Error.Message.Contains("Test Exception"));
        }

I created all of my authorization features as properties on the VM, and then bound Enabled in the UI to them, so I can test whether the VM is enabled for admin users or readonly users, as I do in CanRestartServiceTest.  I also disable certain controls through this mechanism when running certain background jobs, and CanLoadAllTablesTest tests that.  I have a progress bar control, also with its own VM, and I hook into it and expose a property called ProgressVisibility in the main VM, so StartProgressBarTest can test that it is working.  This UI runs several background jobs and I wrote a custom BackgroundJobManager class to manage all of them; BM_TimerFinishedTest tests one of the manager's behaviors as it is implemented in the VM.  And HandleErrorTest tests against the DialogErrorVM I am using in the main VM.  So with MVVM it is possible to write unit tests of a sort for your UI components.

So, it’s great that I can in some way test the UI plumbing, but most of what this app does is interact with a Windows service, a DTS package through a COM wrapper, and a web service, and MVVM doesn’t help me test those integration-layer features.  In the past, since I had written each of these as interfaces, I would have had to write test classes for each one, just stubs returning dummy data.  That is fine, but with a mocking library you no longer have to write all those test classes.  You can usually write up your mocked class and dummy results in a line or two, and there is a load of generic flexibility built in.

Here I am setting up a mocked object based on my IDataService interface, with one stubbed method, StageAcorn(DateTime), which I set up to accept any DateTime argument.  My MVVM method takes an IDataService injected as an argument, so I can now test my method without writing any IDataService test stub.

        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void StageTablesTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
            var mock = new Moq.Mock<AUM.DataServiceReference.IDataService>();
            mock.Setup<string>(f => f.StageAcorn(Moq.It.IsAny<DateTime>())).Returns("DTSER is Success");
            target.StageTables(mock.Object);
 
            Assert.IsTrue(target.Dialog.Visibility == Visibility.Visible);
            Assert.IsTrue(target.Dialog.Message.Contains("Success"));
            Assert.IsFalse(target.IsInError);
        }

Here are a couple other similar, super simple tests also using mocked interfaces…

        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void GetSavePathTest()
        { 
            var mock = new Moq.Mock<IFileDialogService>();
            mock.Setup(f => f.GetSaveFolder()).Returns("testfolderpath");
            Assert.AreEqual("testfolderpath", MainViewModel_Accessor.GetSavePath(mock.Object));
        } 
        [TestMethod()]
        [DeploymentItem("AUM.exe")]
public void RestartServiceTest()
        {
            MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
 
            var mock = new Moq.Mock<AUM.RestarterServiceReference.IRestarterService>();
            mock.Setup(f => f.StopAcornAppService()).Returns(true);
            mock.Setup(f => f.StartAcornAppService()).Returns(true);
 
            target.RestartService(mock.Object);
            Assert.IsTrue(target.ProgressVisibility == Visibility.Visible);
 bool running = true; int testtimeout = 0;
 while (running)
            {
 if (target.ProgressVisibility == Visibility.Hidden) running = false;
                System.Threading.Thread.Sleep(100);
 if (testtimeout++ > 200)
                {
                    Assert.Inconclusive("Test Timeout");
                    running = false;
                }
            }
            Assert.IsFalse(target.IsInError);
        }
 

 

These are very simple examples, and not every method on every class is meant to be unit tested, but they show how easy it is to get started with MS testing projects.

Using Impersonation to query MSMQ and put the results in a Dundas Gauge

by MikeHogg, 29 April 2009 19:58

I had a lot of functions like this, all running on BackgroundWorker threads, on a status page that refreshed itself every three minutes.  Most of them queried different databases, and most were also hooked up to more complicated Dundas gauges and charts.  Here’s one that’s really simple, but it illustrates accessing a particular MSMQ, which required impersonating a service account.

private void loadMSMQGauge()
    {
int numInQueue = -1;
try
        {
            Utils.ImpersonateUser iu = new Utils.ImpersonateUser();
            iu.Impersonate("corp", "someserviceaccount", "password");
 //PerformanceCounter pc = new PerformanceCounter("MSMQ Queue",
 //    "Messages in Queue", "someservername\\private$\\ssome_error_queue", "someservername");
 // HP openview kept horking the msmq perf counter, after re-registering the counters several times 
 //   and getting horked again after reboots we will take the long route.
 using (MessageQueue mq = new MessageQueue("FormatName:DIRECT=OS:someservername\\private$\\some_error_queue", QueueAccessMode.Peek))
            {
                Message[] messages = mq.GetAllMessages();
                numInQueue = messages.Length;
            }
 this.GaugeContainer1.LinearGauges[4].Pointers[0].Value = numInQueue; //  pc.RawValue;
 this.GaugeContainer1.NumericIndicators[4].Value = this.GaugeContainer1.LinearGauges[4].Pointers[0].Value;
            iu.Undo();
        }
catch (System.InvalidOperationException)  // this was from PerfCounter object
        {
 //empty queue, do nothing
        }
catch (MessageQueueException ex) //this comes from mq object, just write it out also
        {
            Response.Write(ex.ToString());
        }
catch (Exception ex)
        {
            // handler elided in the original post
        }
    }
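The Utils.ImpersonateUser helper isn't shown here; a minimal sketch of the classic LogonUser plus WindowsIdentity.Impersonate pattern it presumably wraps (the exact class shape and logon type are assumptions):

using System;
using System.Runtime.InteropServices;
using System.Security.Principal;

public class ImpersonateUser
{
    [DllImport("advapi32.dll", SetLastError = true)]
    private static extern bool LogonUser(string user, string domain, string password,
        int logonType, int logonProvider, out IntPtr token);

    [DllImport("kernel32.dll")]
    private static extern bool CloseHandle(IntPtr handle);

    private const int LOGON32_LOGON_INTERACTIVE = 2;
    private const int LOGON32_PROVIDER_DEFAULT = 0;

    private WindowsImpersonationContext _context;
    private IntPtr _token = IntPtr.Zero;

    public void Impersonate(string domain, string user, string password)
    {
        // obtain a token for the service account, then impersonate it on this thread
        if (!LogonUser(user, domain, password,
                LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT, out _token))
        {
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
        _context = new WindowsIdentity(_token).Impersonate();
    }

    public void Undo()
    {
        // revert to the original identity and release the token
        if (_context != null) _context.Undo();
        if (_token != IntPtr.Zero) CloseHandle(_token);
    }
}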
 

A simple hookup, but it showed the current state of one of our systems at a glance, on a widescreen TV hanging on the wall of our department, with big digital numbers and red and yellow colors.

Tags:

C#

Add a background tag on all of your web pages showing the current Environment

by MikeHogg, 23 March 2009 20:41

This was a neat trick.  When working with UAT and STAGE and DEV and however many other environments, it can sometimes be confusing which database your particular web server is actually hooked up to.  Here I set up an HttpHandler to write out a string as an image memory stream, and then with some CSS trickery it shows up repeating with low opacity all over each page, faint enough that it doesn’t bother you, but enough so that you won’t ever mistake yourself for being in a different environment.

First, in the BasePage PreRender, I check a conditional in case, for instance, you don’t want to show this on Production:

protected override void OnPreRender(EventArgs e)
        {
 //todo: we could make this a webresource instead of static img
            Image img = new Image();
 try
            {
 string prod = System.Configuration.ConfigurationSettings.AppSettings["dontShowHeaderForThisDatabase"];
 if (!LIB.Gen_Util.getDBName().ToUpper().Contains(prod.ToUpper()))
                {
                    img.ImageUrl = "DBImage.ashx";
                    img.Style.Add("width", "100%");
                    img.Style.Add("height", "100%");
                    img.Style.Add("z-index", "-1");
                    img.Style.Add("position", "absolute");
                    img.Style.Add("top", "20px");
 // this is a pain- if we have <% %> tags in page then this will break
 //this.Form.Controls.Add(img);
 this.Page.Controls.Add(img);
                }
                base.OnPreRender(e);
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }

 

 

DBImage.ashx is created once then cached in the HttpHandler:

public class HttpHandler :IHttpHandler
    {
        #region IHttpHandler Members
public bool IsReusable
        {
 get { return false; }
        }
public void ProcessRequest(HttpContext context)
        {
 try
            {
 byte[] ba;
 if (HttpContext.Current.Cache["dbimage"] == null)
                {
                    ba = Gen_Util.CreateHeaderImage(Gen_Util.getDBName());
 if (ba != null)
                    {
                        HttpContext.Current.Cache["dbimage"] = ba;
                    }
                }
 else
                {
                    ba = (byte[])HttpContext.Current.Cache["dbimage"];
                }
 if (ba != null)
                {
                    context.Response.BinaryWrite(ba);
                }
                context.Response.End();
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }
        #endregion
    }

 

It will get called for each Request, with this line in the web.config:

<httpHandlers>
      ...
<add verb="GET" path="DBImage.ashx" type="CEG.CPS.Settlements.LIB.HttpHandler" />

 

And CreateHeaderImage is the tricky drawing part:

public static byte[] CreateHeaderImage(string text)
        { 
 try
            {
                Bitmap bm = new Bitmap(320, 240, PixelFormat.Format32bppArgb);
                Graphics g = Graphics.FromImage(bm);
                g.SmoothingMode = SmoothingMode.HighQuality;// ?
                g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.ClearTypeGridFit;// ?
                g.Clear(Color.White);
                GraphicsPath p = new GraphicsPath();
                Font f = new Font("Impact", 20F);
                Rectangle r = new Rectangle(0, 0, 320, 240);
                StringFormat sf = new StringFormat();
                String repeatedText = string.Empty;
                for (int x = 0; x < 48; x++)  // 8 rows of 6
                {
                    if (x % 6 == 0 && x != 0)
                    {
                        repeatedText += "\n";
                    }
                    repeatedText += text + "";
                }
                p.AddString(repeatedText, f.FontFamily, (int)f.Style, f.Size, r, sf);
 
 // transparency shade 75
                SolidBrush b = new SolidBrush(Color.FromArgb(75,Color.Gray));
 
                g.FillPath(b, p);
 
                f.Dispose();
                b.Dispose();
                g.Dispose();
                MemoryStream ms = new MemoryStream();
                bm.Save(ms, ImageFormat.Bmp);
                bm.Dispose();
 return ms.GetBuffer();
            }
            catch (Exception)
            {
                // handler elided in the original post
                return null;
            }
        }

 

And that’s it.

Tags:

C# | ASP.Net

Using an abstract Server Control as an updateable module

by MikeHogg, 19 March 2009 14:15

The main part of this feature was to show a grid of some data, different database uptimes and other performance metrics, on a web page.  The interesting part was that the databases that were tracked changed often.  Sometimes there were 12, and then one would get killed, and two more were created, and the next week another new one would be built. Not only would it be better to put the datasources in a config of some sort, but it would be even better if a manager could edit that config through an easy-to-use UI on a web page. 

Rather than build two separate pages, I saw that there were some reusable components in this use case, so I created a server control with an XML config.  Most of the XML I/O I put in an abstract class, and then from that I inherited an Admin control and a Display control.

First the Common Elements:

interface IGrid
    { 
        DataTable getDataSource();
void createColumns(DataTable aTable);
void styleGrid();
    }
public abstract class AGrid : GridView, IGrid
    {
public string xFile{get; set;}
public string headerBackColor{get; set;}
public string gridBackColor {get;set;} 
public string title { get; set; } 
protected DataTable myTable;
protected override void OnInit(EventArgs e)
        {
 try
            {
 base.OnInit(e);
 this.AutoGenerateColumns = false;
                myTable = getDataSource();
 if (Page.IsPostBack)
                {
 this.DataSource = myTable;
 if (this.EditIndex > -1 )
                    {
 this.DataBind();
                    }
                }
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }
protected override void OnLoad(EventArgs e)
        { 
 base.OnLoad(e);
 if (!Page.IsPostBack)
                {
 if (myTable != null)
                    {
                        createColumns(myTable);
 this.DataSource = myTable;
 this.DataBind();
                        styleGrid();
                    }
                } 
        } 
public DataTable getDataSource()
        { 
            <snip>
        }
public virtual void createColumns(DataTable myTable)
        {
            try
            {
                <snip>
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }
public void styleGrid()
        {
 try
            {
 if (this.gridBackColor != String.Empty)
                {
 this.BackColor = System.Drawing.Color.FromName(this.gridBackColor);
                }
 else this.BackColor = System.Drawing.Color.Silver;
 this.BorderStyle = BorderStyle.Ridge;
 this.Font.Size = 10;
 this.Font.Name = "Verdana";
 this.Font.Bold = true;
 
 this.GridLines = GridLines.Horizontal;
 this.Style.Add("font-variant", "small-caps");
 this.Style.Add("text-align", "right");
 this.CellPadding = 2;
 if (this.headerBackColor != String.Empty)
                {
 this.HeaderStyle.BackColor = System.Drawing.Color.FromName(this.headerBackColor);
                }
 else this.HeaderStyle.BackColor = System.Drawing.Color.MidnightBlue;
 this.HeaderStyle.ForeColor = System.Drawing.Color.White;
 this.HeaderStyle.Font.Size = 12;
 if (this.title != String.Empty)
                {
 this.Caption = this.title;
                }
 else this.Caption = "Grid Monitor 1.0";
 
 this.CaptionAlign = TableCaptionAlign.Top;
 this.BorderWidth = 1;
            }
 catch (NullReferenceException e)
            {
                Console.WriteLine("Probably xml issue: " + e.ToString());
            }
        }
    }
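The getDataSource body and the WriteXml helper that the grids below call are snipped here; as a rough idea, they might wrap DataTable's built-in XML support, something like this (the class name and schema handling are assumptions):

using System.Data;
using System.Web;

public static class XmlTableStore
{
    public static DataTable Read(string virtualPath)
    {
        // load the grid's backing table from the configured XML file
        var table = new DataTable();
        table.ReadXml(HttpContext.Current.Server.MapPath(virtualPath));
        return table;
    }

    public static void Write(DataTable table, string mappedPath)
    {
        // write the schema too so Read() round-trips without guessing column types
        table.WriteXml(mappedPath, XmlWriteMode.WriteSchema);
    }
}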

 

The Admin grid was just a simple GridView and used the built-in OnDataBound to add editing controls and commands, and the row editing events (OnRowCommand, OnRowUpdating) to add/edit/remove nodes in the XML file.

 

    [DefaultProperty("Text")]
    [ToolboxData("<{0}:AdminGrid runat=server></{0}:AdminGrid>")]
public class AdminGrid : AGrid
    {
        [Bindable(true)]
        [Category("Appearance")]
        [DefaultValue("")]
        [Localizable(true)]
 
protected override void OnInit(EventArgs e)
        {
 this.AutoGenerateEditButton = true;
 this.ShowFooter = true;
 
 base.OnInit(e);
        } 
protected override void OnPagePreLoad(object sender, EventArgs e)
        {
 base.OnPagePreLoad(sender, e);
 if (Page.IsPostBack && this.EditIndex == -1)
            {
 this.DataBind();
            }
        } 
protected override void OnDataBound(EventArgs e)
        {
 base.OnDataBound(e);
 foreach (DataControlFieldCell cell in this.FooterRow.Cells)
            {
 if (cell.ContainingField.GetType().Equals(typeof(System.Web.UI.WebControls.BoundField)))
                {
                    TextBox myC = new TextBox();
                    cell.Controls.Add(myC);
                }
 else if (cell.ContainingField.GetType().Equals(typeof(System.Web.UI.WebControls.CheckBoxField)))
                {
                    CheckBox myC = new CheckBox();
                    cell.Controls.Add(myC);
                }
 else if (cell.ContainingField.GetType().Equals(typeof(System.Web.UI.WebControls.CommandField)))
                {
                    LinkButton myC = new LinkButton();
                    myC.Text = "Add New";
                    myC.ID = "btnAddNew";
                    myC.CommandName = "New";
                    myC.CommandArgument = "New";
                    cell.Controls.Add(myC);
                }
            }
        }
protected override void OnRowCommand(GridViewCommandEventArgs e)
        {
 try
            {
 base.OnRowCommand(e);
 if (e.CommandName == "New")
                {
                    DataRow newRow = myTable.NewRow();
 //insert
 for (int x = 0; x < myTable.Columns.Count; x++)
                    {
                        Control myControl = this.FooterRow.Cells[x + 1].Controls[0];
 if (myControl.GetType().Equals(typeof(CheckBox)))
                        {
                            newRow[x] = ((CheckBox)myControl).Checked;
                        }
 else if (myControl.GetType().Equals(typeof(TextBox)))
                        {
                            newRow[x] = ((TextBox)myControl).Text;
                        }
                    }
                    myTable.Rows.Add(newRow);
                    WriteXml(myTable, HttpContext.Current.Server.MapPath(xFile));
 this.DataSource = myTable;
 this.DataBind();
                }
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }
protected override void OnRowUpdating(GridViewUpdateEventArgs e)
        { 
 //DataTable myTable = (DataTable)HttpContext.Current.Session["myTable"];
            DataTable oldTable = (DataTable)this.DataSource;
            GridViewRow myRow = this.Rows[e.RowIndex];
 for (int x = 0; x < myRow.Cells.Count; x++)
            {
                Control myControl = myRow.Cells[x].Controls[0];
 
 if (myControl.GetType().Equals(typeof(CheckBox)))
                {
                    oldTable.Rows[e.RowIndex][x - 1] = ((CheckBox)myControl).Checked;
 //myTable.Rows[e.RowIndex][]
                }
 else if (myControl.GetType().Equals(typeof(TextBox)))
                {
                    oldTable.Rows[e.RowIndex][x - 1] = ((TextBox)myControl).Text;
                }
                    WriteXml(myTable, HttpContext.Current.Server.MapPath(xFile));
 this.DataSource = myTable;
 this.DataBind();
                }
            }
        }

 

The DisplayGrid has some neat UI components

[assembly: WebResource("DBMonitor.Resources.button_red.png","image/png")]
[assembly: WebResource("DBMonitor.Resources.button_green.png", "image/png")]
[assembly: WebResource("DBMonitor.Resources.button_yellow.png", "image/png")]
namespace DBMonitor
{
    [DefaultProperty("Text")]
    [ToolboxData("<{0}:ServerControl1 runat=server></{0}:ServerControl1>")]
public class DisplayGrid : AGrid
    {
        [Bindable(true)]
        [Category("Appearance")]
        [DefaultValue("")]
        [Localizable(true)]
public string Text <snip>
protected override void OnInit(EventArgs e)
        { 
 this.title = "Database Environments";
 this.headerBackColor = "MidnightBlue";
 this.gridBackColor = "Silver";
 this.xFile = "Config/dbmon.xml";
 base.OnInit(e);
        }
protected override void OnLoad(EventArgs e)
        {
 base.OnLoad(e);
            DataTable myTable = this.DataSource as DataTable;
 if (myTable.Columns.IndexOf("Active") >= 0)
            {
                myTable.DefaultView.RowFilter = "Active = true";
 this.DataBind();
            }
        }
protected override void OnRowDataBound(GridViewRowEventArgs e)
        {
 base.OnRowDataBound(e);
 try
            {
 //find BE Last Refreshed value
 int x = ((DataTable)this.DataSource).Columns.IndexOf("BE_Last_Run");
 if (x >= 0 && e.Row.RowType != DataControlRowType.Header)
                {
                    System.Web.UI.WebControls.Image myLight = new System.Web.UI.WebControls.Image();
 
                    DateTime lastBE = new DateTime();
 if (e.Row.Cells[x].Text.Contains(':'))
                    {
 try
                        {
                            lastBE = DateTime.Parse(e.Row.Cells[x].Text);
                        }
 catch (FormatException ex)
                        {
                            lastBE = DateTime.MinValue;
 
                        }
                        e.Row.Cells[x].Text = lastBE.ToShortTimeString();
                        TimeSpan myAge = DateTime.Now - lastBE;
 if (ConfigurationSettings.AppSettings["debug"] == "true")
                        {
                            logThis("myAge: " + myAge.ToString() + " and now is " + DateTime.Now.ToString() +
                                " and lastBE is " + lastBE.ToString() +
                                " and timespan.fromhours(1) is " +
                            TimeSpan.FromHours(1).ToString() +
                            " and now minus lastBE is " + myAge, EventLogEntryType.Information);
                        }
 if (myAge > TimeSpan.FromHours(1))
                        {
                            myLight.ImageUrl = this.Page.ClientScript.GetWebResourceUrl(this.GetType(),
                                    "DBMonitor.Resources.button_red.png");
                        }
 else if (myAge > TimeSpan.FromMinutes(10))
                        {
                            myLight.ImageUrl = this.Page.ClientScript.GetWebResourceUrl(this.GetType(),
                                "DBMonitor.Resources.button_yellow.png");
                        }
 else
                        {
                            myLight.ImageUrl = this.Page.ClientScript.GetWebResourceUrl(this.GetType(),
                                "DBMonitor.Resources.button_green.png");
                        }
                    }
 else
                    {
                        myLight.ImageUrl = this.Page.ClientScript.GetWebResourceUrl(this.GetType(),
                                    "DBMonitor.Resources.button_red.png");
                    }
                    e.Row.Cells[0].Controls.Add(myLight);
                    e.Row.Cells[0].BackColor = Color.White;
                }
            }
            catch (Exception)
            {
                // handler elided in the original post
            }
        }
    }
}

Adding it to a page is just a couple of lines then: the first line registers the namespace, and the dbm:DisplayGrid line near the bottom places it in a div.  In this case I populated my attributes in the DisplayGrid class, but if I were to use this in several other places, I could remove those and populate the attributes here in the html element.

<%@ Register Assembly="DBMonitor" Namespace="DBMonitor" TagPrefix="dbm"  %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
<title>Status Page</title>
<link href="StyleSheet.css" rel="stylesheet" type="text/css" />
<link href="mobile.css" rel="stylesheet" type="text/css" media="handheld"/>
<script src="stock-ticker.js" type="text/javascript"></script>
<meta http-equiv=Refresh content="120" />
<script type="text/javascript">
        var ticker;
        function loadTicker(){
            ticker = new Ticker('ticker','myTicker',4,15);
            ticker.start();
 
        }
</script>
</head>
<body onload="loadTicker();">
<form id="form1" runat="server">
<table style="table-layout:fixed"><tr><td style="width:200px;overflow:hidden">
<div id="myTicker" class="ticker" style="width:5000px" nowrap="nowrap">
 <asp:Xml DocumentSource="~/Config/ticker.xml" runat="server" ID="ticker" TransformSource="~/ticker.xslt"></asp:Xml>
</div></td></tr></table>
<div id="newEnv">
<dbm:DisplayGrid ID="something" runat="server"></dbm:DisplayGrid>
</div>

 

And that’s it.
