Creating a new ARM template pipeline in 2020

by MikeHogg, 20. May 2020 16:57
## Background

Two different approaches to pipeline-automated ARM deployments:
- the AzureResourceManagerTemplateDeployment task, and
- an Azure PowerShell task, where the ps1 script runs what you would otherwise be typing into Azure PowerShell manually
(from https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/add-template-to-azure-pipelines)
(note: the Azure CLI is separate from Azure PowerShell and offers a non-template alternative, and AzDO has a task for it too:
    https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
(also there is the Azure Resource Manager REST API. So many options; a couple of az commands are sketched below.)
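For a sense of the non-template az CLI route, here is a minimal, hypothetical sketch (resource names and SKU are placeholders, not from this project):

```
# rough az CLI equivalent of "just create the resources" without a template - names are placeholders
az group create --name mike-test --location eastus2
az appservice plan create --name mike-test-plan --resource-group mike-test --sku F1
az webapp create --name mike-test-app --resource-group mike-test --plan mike-test-plan
```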
The AzureResourceManagerTemplateDeployment approach copies your templates to a staging directory and then runs
AzureResourceManagerTemplateDeployment tasks against those templates. It still needs PowerShell to use the template Outputs anyway,
so the PowerShell approach seems simpler.
The PowerShell way uses Deploy-AzureResourceGroup.ps1; this is the same script Visual Studio uses when you
right-click -> Deploy an Azure Resource Manager project.
(The Azure Resource Manager project is a newer type of Visual Studio project;
get the latest templates to see it and pick up some helpful tooling.)
For the pipeline, add this task (note the @3 version):
```
pool:
  name: Hosted Windows 2019 with VS2019
  demands: azureps
steps:
- task: AzurePowerShell@3
  inputs:
    azureSubscription: '<service connection name>'
    ScriptPath: './Deploy-AzureResourceGroup.ps1'
    ScriptArguments: -ResourceGroupName 'mike-test' -ResourceGroupLocation 'eastus2'
    azurePowerShellVersion: LatestVersion
```
Within the PowerShell approach there is now a newer way, which uses the AzurePowerShell@4 task.
The new script is Deploy-AzTemplate.ps1, and it must be run from PowerShell
(not Visual Studio). It requires PowerShell 5.1 on Windows, but there are some conflicts
using it on 5.1, and some templates use features from later versions. The latest today
is 7.0.1, so it is best to upgrade; a quick version check is sketched below.
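As a minimal sketch (assuming you have rights to install modules for the current user), checking which PowerShell you are on and pulling down the Az module looks like this:

```
# check which PowerShell you are actually running
$PSVersionTable.PSVersion

# install the Az module for the current user if it is missing (assumes PowerShellGet is available)
if (-not (Get-Module -ListAvailable -Name Az)) {
    Install-Module -Name Az -Scope CurrentUser -Force
}
```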
> .\Deploy-AzTemplate.ps1 -ArtifactStagingDirectory . -Location centralus -TemplateFile WebSiteSQLDatabase.json -TemplateParametersFile WebSiteSQLDatabase.parameters.json

## TLDR

or for the pipeline use AzurePowerShell@4:
```
pool:
  name: Hosted Windows 2019 with VS2019
  demands: azureps
steps:
- task: AzurePowerShell@4
  inputs:
    azureSubscription: '<service connection name>'
    ScriptPath: './Deploy-AzTemplate.ps1'
    ScriptArguments: -ResourceGroupName 'mike-test' -ResourceGroupLocation 'eastus2'
    azurePowerShellVersion: LatestVersion
```
Notes:
- Resource group name: the task creates it if it does not already exist.
- I decided not to use a single template with multiple parameter files (for example, for the two app services).
    That added complexity for no good reason, so everything is just repeated where necessary and kept straightforward.
- Linked templates (even where the new Az module supports them) also add complexity, when we could just add a step
    in our pipeline to call each additional template. There is no SAS token, so an artifact staging directory is needed.
    (Notice we set ArtifactStagingDirectory to the source code directory in the yaml, which is kind of a quick hack.)
- Search for New-AzResourceGroupDeployment: that is the new Az module cmdlet that the PowerShell script ultimately calls (a sketch of calling it directly follows this list).
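As a minimal sketch of what that call looks like if you make it yourself (resource group, location, and file names are just the placeholders used elsewhere in this post):

```
# a rough, hand-rolled equivalent of what Deploy-AzTemplate.ps1 ends up doing
New-AzResourceGroup -Name 'mike-test' -Location 'eastus2' -Force

New-AzResourceGroupDeployment -ResourceGroupName 'mike-test' `
    -TemplateFile 'WebSiteSQLDatabase.json' `
    -TemplateParameterFile 'WebSiteSQLDatabase.parameters.json' `
    -Verbose
```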
Phase 2:
- Key Vault. Use a parameter file reference:
```
"<second-parameter-name>": {
      "reference": {
        "keyVault": {
          "id": "<resource-id-key-vault>"
        },
        "secretName": "<secret-name>"
      }
    }
```
This would be for prod.parameters.json of course; the lower dev.parameters.json and qa.parameters.json just have plain values. A sketch of setting up the vault side follows.
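As a minimal sketch, assuming you create the vault yourself (vault and secret names here are placeholders): the vault has to allow template deployment to read it, and the secret has to exist.

```
# create a vault that ARM template deployments are allowed to reference (names are placeholders)
New-AzKeyVault -Name 'mike-test-kv' -ResourceGroupName 'mike-test' -Location 'eastus2' `
    -EnabledForTemplateDeployment

# add the secret the parameter file points at
$secret = ConvertTo-SecureString 'not-a-real-value' -AsPlainText -Force
Set-AzKeyVaultSecret -VaultName 'mike-test-kv' -Name '<secret-name>' -SecretValue $secret

# the "id" in the parameter file is the vault's resource id
(Get-AzKeyVault -VaultName 'mike-test-kv').ResourceId
```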
## Further Reference

The best summary to date I've found is Mark Heath: https://markheath.net/post/deploying-arm-templates-azure-cli
tied with https://clearmeasure.com/az-cli-vs-azure-powershell-vs-arm-templates/
He also links to the quickstart template repo, where you get the Deploy-AzTemplate.ps1 used above with AzurePowerShell@4:
https://github.com/Azure/azure-quickstart-templates
Best Practices: https://docs.microsoft.com/en-us/azure/azure-resource-manager/templates/template-best-practices
Hundreds of pages of docs for the new Az module: https://docs.microsoft.com/en-us/powershell/azure/new-azureps-module-az?view=azps-4.1.0

No reason not to use automated deployments with Hudson or Jenkins

by MikeHogg, 17. June 2012 19:31

 

 

Just discussion, not really having to do with Jenkins specifically, but related to building and deployment of team projects. Feel free to add.

Build and Deploy vs BuildDeploy

You can use batch file xcopy or add other build steps to deploy to an environment as part of your build. Or you can use the Promote plugin and put your deployment in the promote steps rather than the build steps. I use both. Every build automatically copies to a 'dev server' and then Stage deployment is an extra step. Jenkins is so configurable that your imagination is probably the only limit. I like having a dev server so I or devs can see the latest but leaving the Stage server alone for Account Services to review, until they want to see the latest version. You can implement Prod as an additional Promote step, using the Access Control for Approvers, or you can have a separate .ps1 file or .ftp hosts file that the admin manually drops into place before running a (not really) Stage promotion.

Prod Access

You can just use the Promote Approver Access control, and/or you can have the prod ftp hosts set up in System Config but not in any Project Configs; since the admin is the only person who can change the Project Configs, they can go in and make that change to the ftp host in the Project Config, promote/build, and then change it back (I've actually seen enterprise teams do something like this). Or you can just use your ftp client manually and favorite the Jenkins artifacts directory, but then you still have to take care not to ftp the wrong version, which is the mistake automated deployments are meant to prevent in the first place.

Library Projects

I used a simple one project web application to try out one particular build and deployment approach for library projects. I wanted to stay true to DRY and leverage a shared library, even without the benefit of automated unit testing on a build server. There are many ways to do similar things, this is just the approach that I'm used to using. My project had a dependency on EuroMVC library. I left this as a separate project in SVN/Jenkins that built its dlls into an SVN directory, and then copied those dlls into my project. Development of Euro MVC library may continue, and my web app could remain unaffected with the same dll version it was tested with. I left it up to the project leader of the web app, if they ever wanted, to go back and get an updated version of the dlls. Visual Studio lets you seamlessly debug into your library code from the dlls, if you also have the symbols and library source (which we have in this scheme).

Database Projects

New to VS 2010, I have found, is a DB project type that deploys incremental changes to different DB environments. This has made my DB deployments one click also, although I have not yet implemented them from the Jenkins command line. It has removed from me the management of so many .sql scripts and the maintenance of those scripts through the development process. I really like it.

Approach: ConfigurationManager, Batch scripts, and Different Web Apps for the same Sln

My approach: I set my MSBuilds to output to a publish directory, sibling to my project .sln file. These are separated into a directory for each environment/.sln configuration. Then I create a post build step to archive artifacts. This copies the whole publish directory off to ../build/archive/BuildNumber/ directories. This does two things. First, you can retrieve these versions even as the project version goes on and on: you can always go back to the version that was built/on prod last month and revert/ftp that. Second, Jenkins automatically keeps track of these for Promote steps, so you don't even have to revert manually; you can just re-promote whichever build you like in the list, anytime. Between the msbuild arguments and batch file build steps you should be able to nail down Artifact Publish tailored to each environment you use. They can be time consuming to script at first, but once you nail it down you don't worry about it ever again. I've already been using the Configuration Manager settings for .sln files to get distinct artifact directories. I wonder how hard it would be to set up different configurations for a .sln file to output a Windows service and two distinct web apps for Jenkins to promote. It's probably much the same. And if you can't remove directories from the publish through the file/folder Properties in VS, then a simple batch script step will remove them from the artifact directories. A sketch of this kind of build step follows.
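As a minimal sketch of that kind of build step (the solution name and configuration list are placeholders; the MSBuild properties are the ones quoted later in this post), assuming msbuild.exe is on the PATH:

```
# build each configuration into its own publish folder, sibling to the .sln
$sln = 'MyApp.sln'   # placeholder solution name
foreach ($config in 'Debug','Stage','Release') {
    & msbuild $sln /t:Rebuild "/p:OutDir=..\publish\$config\;Configuration=$config;UseWPP_CopyWebApplication=True;PipelineDependsOnBuild=False"
}
```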

 

Jenkins can be found here:

http://jenkins-ci.org/ Their wiki and documentation are some of the best I've seen, but I will try to match it here, for our purposes.

History/Jenkins vs Hudson

http://en.wikipedia.org/wiki/Jenkins_(software) The short version: Kawaguchi worked for Sun. He authored Hudson as an open source java program and won awards. Sun was bought by Oracle. Oracle had issues with direction of the project. Kawaguchi and the devs renamed the project Jenkins. Oracle continued work on the original trunk, so now we have two branches, the original devs on Jenkins, and Oracle on Hudson. Both projects are still active. Hudson just released a new major version fall 2012, and Jenkins has three minor releases in the first week of 2012 alone.

Installing

I installed using the regular stable windows installer from the first link above. I changed Path To Install to something from root like c:\Jenkins because we are going to refer to the Jenkins Home path a lot.

 

(Requires nothing if you use the installer. The setup.exe will handle installation of .net 2.0 if necessary (to run as a service) and the msi bundles a JVM (the only real requirement).) After installation, browse to the site. It installs by default on port 8080, although you can change that if you need to in %HOME%/jenkins.xml. First thing I did was check the Enable Security checkbox, and under Security Realm choose the Jenkins' Own User Database option. Make sure under Authorization you have Allow Users To Do Anything selected until the next step. Save that, then Sign Up. That's it. Proceed to the Manage Users step to lock down the access.

In the teams I've been on, devs were set up to do anything except promote to prod, and one dev was the assigned Admin if there wasn't a Change Control team. Here is my Authorization section in the Configure Hudson screen. Note that Hudson does not have roles, although there is a special token user 'authenticated' you can use. Note that the Promote plugin has its own Access Control, so this is for Build rights, not necessarily Deployment rights. See the note at the bottom of this screen. There is also an option to choose Matrix Access like you see here, but for each project individually. This could be all you need: an Admin, and allow all auth'd users to read and build. If so, then continue to Configure your First Project.

Creating a new project is two steps: Name It, and choose Free Style (or copy from an existing project), then click Create to go to the configure screen. The important parts of the configure screen: 1. Set your Source Control to Subversion, and enter your Repo Url. You should get a helpful validation error that prompts you to enter credentials, which will then be saved behind the scenes. 2. Click Add Build Step ( MSBuild ). If you have not yet added the MSBuild plugin, then add it in your system configuration. Find your path to the .sln file for this field: Jenkins will download the solution automatically from SVN into something like C:/Jenkins/jobs/JenkinsProjectName/workspace/

Command Line Arguments. This is the most complicated part so far. Depends on your strategy and requirements. For me they usually look like this. Note there are some Jenkins variables you can use here like ${BUILD_TAG} to put each build in its own separate directory. With the new Jenkins I found this unnecessary but the option remains for more complicated scenarios. Here I am also doing a Configuration= for each of my Web.Transforms, and putting each of those into a separate directory, so my workspace structure looks like this: 

 

Jenkins automatically creates all of these directories. All that you decide is the HOME directory C:\Jenkins. Jenkins creates workspace, builds, promotions directories. The Workspace directory is where the SVN source goes and where Jenkins will build the project. The build and promotions directories are mostly just logging (builds is where Jenkins also archives but you don't need to know that). I want to MSBuild to an output directory that I can archive. The publish directory location and artifacts placed in it come from my approach using the MSBuild Configuration parameters. In Hudson, I was doing this manually with batch scripts and my own directory structures, but Jenkins is more advanced and handles that automatically if we follow a couple of conventions. So I put my Publish directory here under Workspace because the Artifact Copy (a later step) root directory is the workspace directory. My MSBuild command line that works for webapps:
/t:Rebuild /p:OutDir=..\publish\Debug\;Configuration=Debug;UseWPP_CopyWebApplication=True;PipelineDependsOnBuild=False
Just below the Build Steps you see here, I add a Post Build Step to Archive the Artifacts. This approach is discussed here

3. Click Save. And That's It! "Where's my prod deployment", you ask? Note the two different builds you added above. That means for each build you run, you will get a directory of artifacts (the Publish) of your project, one transformed for each build step you specify. So when you want to move to prod, just copy from publish/Release for that build number. That means that you can continue committing and building and when an older version passes User Testing, you can copy that specific build version to prod. There is tons more you can do. Move on to the Promote and FTP Plugins for one click deployments.

Promote builds is a way to add a step, after build. This is the way I achieved post-build deployments. Install from the plugins page, and then see this one line checkbox for the Promote section sneak up in the Project Configure screen.

Here you see how I set up Approvers

As you see here, I use this in conjunction with Send Build Artifacts over FTP

Download and install the FTP Plugin from the System Manage Plugins page. Note: there are two; you want the one specifically called "Publish over FTP". Unfortunately, in Hudson, at the time, their FTP plugin was not great, and I settled on a combo xcopy and powershell FTP script, so I don't have experience setting up this ftp plugin; but looking at the documentation, it has all the features that I had to script in the old version. Actually, the new plugin works great. Everything I wished for six months ago. Set your hosts up in System Config:

Then set up your Promote Step in Project Config to use that host. I found these settings worked for my case:

 

 

This was the old way I set up FTP in Hudson, before there was a plugin. I leave it here as an example of the power of Powershell (plugin): I used two promote actions with my script. First an xcopy:

xcopy ..\publish\Stage\hudson-%PROMOTED_JOB_NAME%-%PROMOTED_NUMBER%\_PublishedWebsites c:\Web\DEPLOYED /I /C /E /R /Y
rem this is to set up the powershell script next, because the powershell plugin doesn't recognize %PROMOTED_JOB_NAME% etc

Then the powershell script is called with parameters for Stage. The script is attached.

& 'C:\Users\mhogg\.hudson\jobs\CE.Ohio\workspace\Ohio\promote.ps1' "C:\Web\DEPLOYED\Ohio" "switchtoconstellation.discoverydev.com" "switchtoconstellation-d
Param(
    [parameter(Mandatory=$true)]
    [alias("d")]
    $deploymentpath,
    [parameter(Mandatory=$true)]
    [alias("s")]
    $server,
    [parameter(Mandatory=$true)]
    [alias("u")]
    $username,
    [parameter(Mandatory=$true)]
    [alias("p")]
    $password,
    [parameter(Mandatory=$true)]
    [alias("r")]
    $remotepath)
#$deploymentpath = "C:\Web\Deployed\Ohio"
#$server = "switchtoconstellation.discoverydev.com"
#$username = "switchtoconstellation-dev"
#$password = 'w8b%duu#9r'
#$remotepath = "www"
$ftpfile = "temp.ftp"
$script:currftppwd = $remotepath

# walk the deployment folder and write a PUT (plus MKDIR/CD when the folder changes) for every file
function AddItem($path){
    foreach($f in Get-ChildItem($path))
    {
        #Write-Host "testing $f"
        if ($f.PSIsContainer -eq $True)
        {
            #Write-Host "recursing $f"
            AddItem($f.PSPath);
        }
        else
        {
            $filename = $f.fullname
            #Write-Host "writing $filename to $ftpfile"
            $parentpath = $f.Directory.fullname.Replace($deploymentpath, "")
            # script-scoped so the tracked ftp working directory survives across calls
            if ($script:currftppwd -ne "\$remotepath$parentpath"){
                AppendFtpCmd("MKDIR \$remotepath$parentpath")
                AppendFtpCmd("CD \$remotepath$parentpath")
                $script:currftppwd = "\$remotepath$parentpath"
            }
            AppendFtpCmd("PUT $filename")
        }
    }
}
# need encoding: .net prepends null char for some reason
function AppendFtpCmd($ftpcmd){
    #$ftpfile = "temp.ftp"
    $ftpcmd | out-file -filepath $ftpfile -encoding "ASCII" -append
}
"OPEN $server" | out-file -filepath $ftpfile -encoding "ASCII"
AppendFtpCmd("USER $username")
AppendFtpCmd("$password")
AppendFtpCmd("CD $remotepath")
AppendFtpCmd("LCD $deploymentpath")
AddItem("$deploymentpath")
AppendFtpCmd("DISCONNECT")
AppendFtpCmd("BYE")
ftp -n -i -s:$ftpfile
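For reference, a hedged example of calling it with the named parameter aliases instead of the positional call above (all values are placeholders):

```
& .\promote.ps1 -d 'C:\Web\DEPLOYED\Ohio' -s 'ftp.example.com' -u 'deployuser' -p 'secret' -r 'www'
```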

An example of one of my most favorite projects

by MikeHogg, 21. May 2012 18:58

One time I inherited a system of sorts that supported a single user, with her third party data warehouse application.  We didn’t support the warehouse, but we were supposed to get the data extracts that she imported into the warehouse at monthly intervals.  The existing IT process was very manual, and very time intensive.  As well as involving data from 4 different sources and the queries or processes to get them, it involved a dozen files per run, sometimes up to three people from different departments, with several runs per month, taking four to eight hours each run, and no history or state tracking except to keep the files in folders forever. 

 

The initial attempt to automate this also left behind a number of files and processes to maintain, and it had been running for over a year with 60 monthly man hours of IT dedicated to it and now several hundred files, folders, and processes in assorted locations.

 

This is one of my favorite jobs.  People put a mess in front of me and I turn it into something easy to use that saves time.  One of the things that bugged me about the existing process was that there was no history and it took too long.  I expanded our small database to include tables for each of our entities, and started automating the extracts in a nightly process.  This had the effect of making the user’s request time drop from several hours for the complicated queries to almost instant since we were now caching the data ourselves, as well as provided an easy way for IT to hook into historic runs. 

 

Another thing that I wanted to change was to streamline the steps.  The existing process exported from data sources, inserted into databases, extracted into files, joined with other datasources, imported into databases again.  So I built an SSIS package that did the data transformations on our Oracle database and inserted the data directly into the warehouse MSSQL server.  This removed the need for the files and a whole staging process, and made the whole process easier to maintain from an IT perspective.

 

Another thing that I wanted to change was to remove the IT resource component.  I don’t believe IT needs to be involved for day to day business operation requests, unless something breaks.  So I built a simple WPF click-once intranet application with a handful of features, enabling the user to run the whole extract/import instantly for any date they choose, and even view the data by Excel export if they want.  I like that it not only provided so much convenience for the user, but that it dropped the IT cost to maintain from an average of 60 monthly man hours to almost zero.

Tags:

Automation | Me

Loading log files into Oracle

by MikeHogg, 8. March 2012 17:51

One of my last Oracle projects was pretty neat, because I started working with the new 11g feature, external tables.  This allowed Oracle to mount a file as a table, and was incredibly fast compared to using SQL*Loader, which was what we had been doing for years. 

In this case I was loading unix log files on the order of millions of rows for each daily file, by loading the external table, and then processing that table into our permanent logging table.  The data sets involved here were pretty big, and so the usual manipulation like inserts for millions of rows would take hours and hours, so changing from SQL*Loader to external tables saved a lot of time, but I still had a lot of inserts to make, so I added some tweaks, like dropping indices and recreating them after, and then updated stats with the new indices for Oracle’s Query Optimizer. 

Once I had the files shared on a network location accessible to this Oracle unix server, I loaded them with this proc:

procedure LoadExtTable(filedate varchar2) is
begin
  -- note: no trailing semicolon inside the dynamic DDL string (execute immediate rejects it)
  execute immediate 'create table mdf_meta_activity_dump ( IP_ADDRESS VARCHAR2(255), PID NUMBER,' ||
                    'SYMBOL VARCHAR2(255), USER_ID VARCHAR2(50), APPLICATION VARCHAR2(60),' ||
                    'HOSTNAME VARCHAR2(60), SYMBOL_MESSAGE VARCHAR2(255), SYMBOL_FORMAT VARCHAR2(255),' ||
                    'SCRIPT_NAME VARCHAR2(255), PROCMON_PROCESS VARCHAR2(255), TIME_STAMP DATE ) ' ||
                    'organization external (type oracle_loader default directory exttabdir access parameters ' ||
                    '(RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED by ''|'' ' ||
                    ' ) LOCATION (''\someplace\somedb\udpserver\udp.txt''))';
end;

I would process the dump with this proc, which also updated two other tables and was written to be re-runnable, so that, in case of failure or just manual mistake, running the same file of millions of rows would not result in a mess of a million duplicates. 

You will also see here Oracle bulk statements, and logging, which allowed someone to monitor the process realtime, as it usually took some minutes or hours.

procedure ProcessActivityDump is
  cursor c_log(p_file_date date) is
    select s.id, d.user_id, d.symbol_message, d.time_stamp,
           p_file_date, trunc(d.time_stamp), to_char(d.time_stamp,'M')
      from mdf_meta_symbol s
      join mdf_meta_activity_dump d on s.name = d.symbol;

  type t_activity is table of c_log%rowtype;
  r_activity t_activity;
  v_count number;
  v_file_date date;
begin
  -- PROCS
  merge into mdf_meta_proc p
  using (select distinct procmon_process, script_name from mdf_meta_activity_dump) d
  on (p.procmonjob = d.procmon_process and p.script = d.script_name)
  when not matched then
    insert (id, procmonjob, script, active_fg, insert_date, audit_date, audit_user)
    values (seq_mdf_id.nextval, procmon_process, script_name, 1, sysdate, sysdate, 'PKG_META');
  Log_This('PKG_META.ProcessActivityDump','MDF_META_PROC new rows inserted: ' || sql%rowcount ,'INFO');

  -- SYMBOL, rerunnable
  merge into mdf_meta_symbol s
  using (select distinct symbol, p.id
           from mdf_meta_activity_dump
           join mdf_meta_proc p on procmon_process = procmonjob and script_name = script) d
  on (s.name = d.symbol)
  when not matched then
    insert (id, name, proc_id) values (seq_mdf_id.nextval, symbol, d.id);
  Log_This('PKG_META.ProcessActivityDump','MDF_META_SYMBOL new rows inserted: ' || sql%rowcount ,'INFO');

  -- ACTIVITY: the file date is the day with the most rows in the dump
  select file_date into v_file_date from (
    select trunc(time_stamp) file_date, count(*)
      from mdf_meta_activity_dump
     group by trunc(time_stamp)
     order by count(*) desc) where rownum = 1;

  -- delete existing activity for this day, to make rerunnable
  delete from mdf_meta_activity where file_date = v_file_date;
  Log_This('PKG_META.ProcessActivityDump','Dump_Date: ' || v_file_date || ' rows deleted in preparation for new load: ' || sql%rowcount ,'INFO');

  -- now add the activity, logging only every 200k or so
  -- maybe need to drop idx and recreate after
  -- create index IDX_MDF_META_ACT_SYMID on MDF_META_ACTIVITY (SYMBOL_ID)
  open c_log(v_file_date);
  v_count := 0;
  loop
    fetch c_log bulk collect into r_activity limit 1000;
    exit when r_activity.count = 0;
    forall idx in 1..r_activity.count
      insert into mdf_meta_activity values r_activity(idx);

    v_count := v_count + r_activity.count;
    if Mod(v_count, 200000) = 0 then
      Log_This('PKG_META.ProcessActivityDump','Cumulative insert now at ' || v_count || ' rows','INFO');
    end if;
  end loop;
  close c_log;

  RebuildIndices;
  GatherStats;
end ProcessActivityDump;

And that’s it.

Miscellaneous code to tweak IE access rights on random servers

by MikeHogg, 23. January 2012 21:33

I set up a framework where developers could write data parsers for a variety of sources that could be scheduled, normalized, logged, and archived for various purposes.  There were hundreds of data sources and jobs so a lot of them had enough similarity that a framework with standard libraries made sense.  A large portion of these were web scrapes.  (I used WatiN for those: a great mocking solution for our purposes.)  I would run into problems, because our scheduler would run on a server farm of hundreds, with new machines/instances being created at intervals, and so we basically had to write everything into our jobs.  Here are some of the hacks I put into a Web library to get the usual https sites working with the web scrapes, where I had a variety of popup problems (I needed to turn off IE’s built-in popup blocker), file download problems (add domain to trusted sites)…  They are all probably obsolete with Server 2012 now, but they might come in handy for the next few years still…

private static bool IsPopupEnabled(string domain)
{
    string keylocation = @"Software\Microsoft\Internet Explorer\New Windows\Allow";
    Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey parent = cu.OpenSubKey(keylocation);
    if (parent != null) return parent.GetValue(domain) != null;
    else return false;
}
private static bool EnablePopup(string domain)
{
    string keylocation = @"Software\Microsoft\Internet Explorer\New Windows\Allow";
    Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey parentkey = cu.CreateSubKey(keylocation);
    // register the domain with an empty binary value; the allow list only needs the entry to exist
    parentkey.SetValue(domain, new byte[0], Microsoft.Win32.RegistryValueKind.Binary);
    return IsPopupEnabled(domain);
}
private static bool TrustedSiteAddition(string domain)
{
    const string domainsKeyLocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains";
    Microsoft.Win32.RegistryKey currentUserKey = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey parentKey = currentUserKey.OpenSubKey(domainsKeyLocation, true);
    Microsoft.Win32.RegistryKey key = parentKey.CreateSubKey(domain);
    object objSubDomainValue = key.GetValue("http");
    if (objSubDomainValue == null || Convert.ToInt32(objSubDomainValue) != 0x02)
    {
        key.SetValue("http", 0x02, Microsoft.Win32.RegistryValueKind.DWord);
    }
    objSubDomainValue = key.GetValue("https");
    if (objSubDomainValue == null || Convert.ToInt32(objSubDomainValue) != 0x02)
    {
        key.SetValue("https", 0x02, Microsoft.Win32.RegistryValueKind.DWord);
    }
    return IsTrusted(domain);
}
private static bool IsTrusted(string domain)
{
    const string domainsKeyLocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains";
    string keyLocation = string.Format(@"{0}\{1}", domainsKeyLocation, domain);
    Microsoft.Win32.RegistryKey currentUserKey = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey foundRegistryKey = currentUserKey.OpenSubKey(keyLocation, false);
    if (foundRegistryKey != null)
        return true;
    else
        return false;
}
private static bool CheckFileDownloadEnabled()
{
    string keylocation = @"Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\2";
    Microsoft.Win32.RegistryKey cu = Microsoft.Win32.Registry.CurrentUser;
    Microsoft.Win32.RegistryKey parent = cu.OpenSubKey(keylocation, true);
    if (parent != null && (int)parent.GetValue(@"2200") != 0x00)
    {
        parent.SetValue(@"2200", 0x00);
        return true;
    }
    else if (parent != null && (int)parent.GetValue(@"2200") == 0x00)
    {
        return true;
    }
    else return false;
}
public static bool IEFullyEnabled(string domain)
{
    if (!IsPopupEnabled(domain) && !EnablePopup(domain)) return false;
    if (!IsTrusted(domain) && !TrustedSiteAddition(domain)) return false;
    if (!CheckFileDownloadEnabled()) return false;
    return true;
}

Tags:

Automation | C#

Using a Data Sandbox to allow users to push their own data changes to Prod

by MikeHogg, 14. November 2011 19:03

Most Pl/Sql code is fairly boring, selecting lots of columns from lots of joins and maybe doing calculations and decodes on the fly, or groupings and other aggregations, and sub queries.  But one project I had was pretty neat.  Users had two lists of Products, and although the names and attributes were not usually the same, the products themselves were the same, just called different things by different companies. 

When we started using a new Product, we had to find the two matching ones and map them together.  This involved a process that ultimately led to what we called a data fix.  No code was changed, but data that the business process relied on had to be created or changed in the production database.  To do this required an IT resource to team up with a business resource and first make the changes in a test environment, and then test, and make adjustments, and test again, and then do the same exact cumulative change they did in the test environment, in the production environment. 

This was time intensive, and highly manual.  I wanted to make this quick, and automatic, without any room for manual errors.  So one of the features of the new tool I built was for the users to make these changes themselves, in a test environment, with full auditing and editing capability, and be able to see for themselves the effects their changes had, and when they were ready, push a button to have the exact same results applied to their production environment. This worked wonderfully, and worked with full auditing and history tracking in case any recovery was necessary. Besides some finely tuned controls in the new UI, all it required was an Oracle database link to the test environment.

 

procedure PushToProd(p_audit_user varchar2,
                     p_id         my_product.id%type,
                     p_note       varchar2) is
begin
insert into my_product_history
      (audit_user,  audit_date,  type,
       id,  name, pf_code,  pf_type, 
       cc_code, exch_id,  gs_name,
       pf_commodity, lot_size,  uom_cd, 
       some_lot_sz, term, opt_style,
       product_type,  calendar,  location,
       description_id, trader_group,  settle_date_rule,
       eot_date_rule,  uses_dst, exchange)
      (select p_audit_user,   sysdate, 'PTP',
              nvl(p.id, pp.id),  nvl(p.name, pp.name),  nvl(p.pf_code, pp.pf_code),  nvl(p.pf_type, pp.pf_type),
              nvl(p.cc_code, pp.cc_code),  nvl(p.exch_id, pp.exch_id),  nvl(p.gs_name, pp.gs_name),
              nvl(p.pf_commodity, pp.pf_commodity),  nvl(p.lot_size, pp.lot_size),  nvl(p.uom_cd, pp.uom_cd), 
              nvl(p.some_lot_sz, pp.some_lot_sz),  nvl(p.term, pp.term),   nvl(p.opt_style, pp.opt_style),
              nvl(p.product_type, pp.product_type),  nvl(p.calendar, pp.calendar),  nvl(p.location, pp.location),
              nvl(p.description_id, pp.description_id),  nvl(p.trader_group, pp.trader_group),  nvl(p.settle_date_rule, pp.settle_date_rule),
              nvl(p.eot_date_rule, pp.eot_date_rule),  nvl(p.uses_dst, pp.uses_dst), nvl(p.exchange, pp.exchange)
       from my_product p
       full outer join my_product@SOMEW_LINK pp on p.id = pp.id -- handles new 
       where p.id = p_id
          or pp.id = p_id);
    pkg_SOME_log.WRITEMESSAGE('pkg_ProductMapper.PushToProd',
                              'DEBUG',
                              'Note:' || p_note || '; ' || SQL%ROWCOUNT ||
                              ' new record(s) inserted by User ' ||
                              p_audit_user || ' in my_product_history');
delete from my_product where id = p_id;
delete from some_fee where product_id = p_id;
insert into my_product
      (select * from my_product@SOMEW_LINK where id = p_id);
    pkg_SOME_log.WRITEMESSAGE('pkg_ProductMapper.PushToProd',
                              'DEBUG',
                              'Note:' || p_note || '; ' || SQL%ROWCOUNT ||
                              ' new record(s) inserted by User ' ||
                              p_audit_user || ' in my_product');
insert into some_fee
      (select *
from SOME.some_fee@SOMEW_LINK
where product_id = p_id);
    pkg_SOME_log.WRITEMESSAGE('pkg_ProductMapper.PushToProd',
                              'DEBUG',
                              'Note:' || p_note || '; ' || SQL%ROWCOUNT ||
                              ' new record(s) inserted by User ' ||
                              p_audit_user || ' in some_fee');
exception
when others then
rollback;
      raise;
end;
procedure PushMappingToProd(p_audit_user varchar2,
                            p_v_id       my_values.id%type,
                            p_note       varchar2) is
begin
insert into my_values_history
      (audit_user,  audit_date,  type,
       id, key,  value,
       val_type, schema,  rule,  exch_id)
select p_audit_user,  sysdate,  'PTP',
             nvl(v.id, vv.id),  nvl(v.key, vv.key),  nvl(v.value, vv.value),
             nvl(v.val_type, vv.val_type),  nvl(v.schema, vv.schema),
             nvl(v.rule, vv.rule),  nvl(v.exch_id, vv.exch_id)
      from my_values v
      full outer join my_values@SOMEW_LINK vv on v.id = vv.id
     where v.id = p_v_id
        or vv.id = p_v_id;
    pkg_SOME_log.WRITEMESSAGE('pkg_ProductMapper.PushMappingToProd',
                              'DEBUG',
                              'Note:' || p_note || '; ' || SQL%ROWCOUNT ||
                              ' record(s) inserted by User ' ||
                              p_audit_user || ' in my_values_history');
delete from my_values where id = p_v_id;
insert into my_values
      (select * from my_values@SOMEW_LINK where id = p_v_id);
    pkg_SOME_log.WRITEMESSAGE('pkg_ProductMapper.PushMappingToProd',
                              'DEBUG',
                              'Note:' || p_note || '; ' || SQL%ROWCOUNT ||
                              ' new record(s) inserted by User ' ||
                              p_audit_user || ' in my_value');
exception
when others then
rollback;
      raise;
end;

 

I also put some auditing features in the new UI.  The system relied on the premise that the two databases started with exactly the same primary key ids and kept them in sync between the two systems.  I built several auditing screens, showing even the detail for each user’s changes in each environment, so that more than one user could see what other users had done.  Here’s one of the auditing tools that showed the user the differences between some of the aggregations of the two systems.  This is the boring sql I was talking about before.  This also works with that oracle database link.

procedure ViewLotDiff(p_valuation_date in me_trade.valuation_date%type,
                        cur_out          out cur) is
begin
-- Added condition v.expiration_date >=  p_valuation_date
open cur_out for
select gg.anothercompany_name,
 max(pf_code),
 case
 when sum(some_w_lots) <>sum(gs_lots) then
 case when sum(some_p_lots) <>sum(gs_lots) then 'BOTH'
 else 'some_W' end
 else
                'some_P'
 end as issue, 
 sum(some_w_lots) total_some_w_lots,
 sum(some_p_lots) total_some_p_lots,
 sum(gs_lots) total_gs_lots
from (select some_w_lots,
                     some_p_lots,
                     gs_lots,
                     src,
                     pf_code,
                     anothercompany_name
 from (select round(sum(v.lots),1) some_w_lots,
                             0 some_p_lots,
                             0 gs_lots,
                             'some_W' src,
                             v.PF_CODE,
                             v.anothercompany_NAME
 from some_.MAP_A_VW v
 where 1 = 1
 and v.VALUATION_DATE = p_valuation_date
 and (v.term <> 'DAY' or
                             v.contract_month >= p_valuation_date)
 and ( ( v.pf_commodity in ('WEA','WTI') and nvl(v.expiration_date,to_date('12/31/9999','mm/dd/yyyy')) >= p_valuation_date )
 or
                             ( v.pf_commodity not in ('WEA','WTI') and nvl(v.settle_date,to_date('12/31/9999','mm/dd/yyyy')) >= trunc(to_date(p_valuation_date),'MM')))
 group by v.anothercompany_NAME, v.PF_CODE )
 union (select 0 some_w_lots,
                           round(sum(d.lots),1) some_p_lots,
                           0 gs_lots,
                           'some_P' src,
                           d.pf_code,
                           p.GS_NAME anothercompany_NAME
 from some_.me_trade_detail@some_P_LINK d
 join some_.me_trade_header@some_P_LINK h on d.eti = h.eti
 join some_.me_product@some_P_LINK p on d.pf_code = p.pf_code and d.pf_type = p.pf_type and h.exchange = p.exchange
 where 1 = 1
 and d.VALUATION_DATE = p_valuation_date
 and nvl(d.some__status,'nvl') <> 'ORI'
 and (h.term <> 'DAY' or
                           d.contract_month >= p_valuation_date)
 and ( ( p.pf_commodity in ('WEA','WTI') and nvl(d.expiration_date,to_date('12/31/9999','mm/dd/yyyy')) >= p_valuation_date  )
 or
                             ( p.pf_commodity not in ('WEA','WTI') and nvl(d.settle_date,to_date('12/31/9999','mm/dd/yyyy')) >= trunc(to_date(p_valuation_date),'MM')))
 group by p.GS_NAME, d.pf_code )
 union (select 0 some_w_lots,
                           0 some_p_lots,
 sum(g.net_qty) gs_lots,
                           'GS' src,
                           '' pf_code,
                           g.prod_description anothercompany_name
 from gsd_open_positions g
 where 1 = 1
 and g.stmt_date = p_valuation_date
 and g.expiry_date >= p_valuation_date
 group by g.prod_description ) 
              ) gg
group by anothercompany_name
having ( (sum(some_w_lots) - sum(some_p_lots) <> 0) or (sum(gs_lots) - sum(some_p_lots) <> 0) or (sum(gs_lots) - sum(some_w_lots) <> 0) )
order by anothercompany_NAME;
end;

Using Moq to write Unit tests in VS2008

by MikeHogg, 5. October 2011 18:26

I’ve read about Test Driven Development, and after trying it out myself I have noticed a difference in the code that I write, even if I am not using it for a particular project.  My classes have become simpler, my functions more atomic.  I write more for the Open/Closed principle, that my classes are closed for modification but open for extension.  I find my projects are more organized and development even seems to be quicker: the base classes get written quickly, and then additional features drop right into place as if they were expected, rather than the spaghetti code that I used to eventually end up with.

There is a difference between Unit tests and other kinds of tests.  I have heard other developers say that they can’t test because most of their code is dependent upon some specific data layer, or web UI, and I think they might not know that Integration Testing is different than Unit testing.  This was what I meant about changing my writing style.  TDD prompted me to write smaller more abstract classes, with unit testable atomic functions, and then to write application specific interfaces and more specific inherited classes. 

In some cases, though, your tests cross the line, and you need integration testing, but there are several Mocking libraries out there now that make it easy to do this.  The main ones all appear to be similar.  Here is a simple project where I used Moq to fake my Data Service.  The Project had a few components.  The end user UI was WPF, and was built with MVVM. 

Because these UI related elements are decoupled from the actual UI in MVVM, I can unit test them.  Here are some tests for the MainViewModel, which takes a user in Constructor Dependency Injection.

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void CanRestartServiceTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    Assert.IsTrue(target.CanRestartService());
    target = new MainViewModel_Accessor(ReadonlyUser);
    Assert.IsFalse(target.CanRestartService());
}

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void CanLoadAllTablesTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    Assert.IsTrue(target.CanLoadAllTables());

    target._backgroundmanager.IsRunning = true;
    Assert.IsFalse(target.CanLoadAllTables());
    target = new MainViewModel_Accessor(ReadonlyUser);
    Assert.IsFalse(target.CanLoadAllTables());
}

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void StartProgressBarTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    target.StartProgressBar();
    Assert.IsTrue(target.ProgressVisibility == Visibility.Visible);
    Assert.IsTrue(target.ProgressValue == 0);
}

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void BM_TimerFinishedTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    TimerFinishedEventArgs e = new TimerFinishedEventArgs();
    e.Message = "TickTock";
    target.BM_TimerFinished(new object(), e);
    Assert.IsTrue(target.ProgressVisibility == Visibility.Hidden);
    Assert.IsTrue(target.Dialog.Visibility == Visibility.Visible);
    Assert.IsTrue(target.Dialog.Title.Contains("Exceeded"));
}

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void HandleErrorTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    target.HandleError(new Exception("Test Exception"));
    Assert.IsTrue(target.Error.Visibility == Visibility.Visible);
    Assert.IsTrue(target.Error.Message.Contains("Test Exception"));
}

I created all of my authorization features as properties on the VM, and then exposed Enabled in the UI to them, so I can test if the VM is Enabled to Admin users or Readonly users, as I do in CanRestartServiceTest.  I also disable certain controls through this mechanism when running certain background jobs, and CanLoadAllTablesTest tests that.  I have a progressbar control, also with its own VM, and hook into it and expose a property called ProgressVisibility in the Main VM, so the StartProgressBarTest can test that it is working.  This UI runs several Background jobs and I wrote a custom BackgroundJobManager class to manage all of them.  BM_TimerFinishedTest tests against one of the behaviors of the manager as it is implemented in the VM.  And HandleErrorTest tests against the DialogErrorVM I am using in the Main VM.  So with MVVM it is possible to write unit tests of a sort for your UI components.

So, it’s great that I can in some way test the UI plumbing, but most of what this app does is interact with a windows service, a DTS package through a COM wrapper, and a Web service, and MVVM doesn’t help me test these integration layer features.  In the past, since I have written each of these as interfaces, I would have to write test classes for each of these.  These would be just stubs, returning dummy data.  And that is fine, but with a Mocking library, you no longer have to write all those test classes.  You can usually write up your mocked class and dummy results in a line or two.  And there is a load of generic flexibility built into it. 

Here I am setting up a mocked object based on my IDataService interface, with one stubbed method, StageAcorn(DateTime), which I set up to take any DateTime as an argument.  My MVVM method takes an IDataService injected as an argument, and so I can now test my method without writing any IDataService test stub.

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void StageTablesTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);
    var mock = new Moq.Mock<AUM.DataServiceReference.IDataService>();
    mock.Setup<string>(f => f.StageAcorn(Moq.It.IsAny<DateTime>())).Returns("DTSER is Success");
    target.StageTables(mock.Object);

    Assert.IsTrue(target.Dialog.Visibility == Visibility.Visible);
    Assert.IsTrue(target.Dialog.Message.Contains("Success"));
    Assert.IsFalse(target.IsInError);
}

Here are a couple other similar, super simple tests also using mocked interfaces…

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void GetSavePathTest()
{
    var mock = new Moq.Mock<IFileDialogService>();
    mock.Setup(f => f.GetSaveFolder()).Returns("testfolderpath");
    Assert.AreEqual("testfolderpath", MainViewModel_Accessor.GetSavePath(mock.Object));
}

[TestMethod()]
[DeploymentItem("AUM.exe")]
public void RestartServiceTest()
{
    MainViewModel_Accessor target = new MainViewModel_Accessor(AdminUser);

    var mock = new Moq.Mock<AUM.RestarterServiceReference.IRestarterService>();
    mock.Setup(f => f.StopAcornAppService()).Returns(true);
    mock.Setup(f => f.StartAcornAppService()).Returns(true);

    target.RestartService(mock.Object);
    Assert.IsTrue(target.ProgressVisibility == Visibility.Visible);
    bool running = true; int testtimeout = 0;
    while (running)
    {
        if (target.ProgressVisibility == Visibility.Hidden) running = false;
        System.Threading.Thread.Sleep(100);
        if (testtimeout++ > 200)
        {
            Assert.Inconclusive("Test Timeout");
            running = false;
        }
    }
    Assert.IsFalse(target.IsInError);
}

 

These are very simple examples, and not every method on every class is meant to be unit tested, but they show how easy it is to get started with MS testing projects.

Using .Net to let users create Scheduled Tasks to send themselves issue tracking reports from MySql

by MikeHogg, 11. March 2009 19:39

I had written a reporting page in asp.Net that queried the MySql database of our issue tracking system.  Users in different teams could use the page to create reports on the issues created, resolved, and worked on by their team as well as filtering by any number of other attributes. 

The page in itself was neat, and necessary, as the issue tracking system had grown at a pace of 50 new issues weekly for a couple of years.  But it was neater to build a way for different users to save their more complicated report filters, and even neater to allow them to send themselves an automatic email containing their report on any daily or weekly schedule.  Although it sounded like a lot of work at first, it turned out to be easy to set up.
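The scheduling side comes down to creating a Windows Scheduled Task for each saved report. As a minimal, hypothetical sketch (the task name, executable, and report id are placeholders, not the actual code from this project), creating such a task looks something like this:

```
# create a daily 7am task that runs a console job which emails saved report #42 (all names are placeholders)
schtasks /Create /TN "EventumReport_User1_42" /TR "C:\Reports\SendReport.exe 42" /SC DAILY /ST 07:00 /F
```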

The first part was to only show each user their own saved reports, and allow them to edit and save again, without affecting or viewing anybody else’s.  Because we were in an intranet environment, this was easy enough, using Windows as our authentication scheme meant I always already had the current user, and I could then save the reports in a table with their Id, and only show links to reports that they owned. 

The report page had over twenty filters and group by’s and sorts so saving all of the settings individually was not the way to go.  Instead I would take the output of that page, the actual query, generated dynamically, and save that.  Then all I had to do was get the query, saved in the database, and run it.  The code that created the query can be a great exercise for refactoring, and it is incredibly long, but you can imagine it was quick to write initially, and I only ever had to add one or two features to it …

using (MySqlDataAdapter myAdp = new MySqlDataAdapter())
        {
            myCmdQuery.Append("select distinct iss_id  Issue_ID ");
 if (project) { myCmdQuery.Append(",prj_title Project "); };
 if (category) { myCmdQuery.Append(",prc_title Category "); };
 if (summary) { myCmdQuery.Append(",iss_summary  Summary "); };
 if (assignee)
            {
                myCmdQuery.Append(", (select group_concat(usr_full_name order by usr_full_name separator ', ') " +
                    " from eventum_user join eventum_issue_user " +
                    " on isu_usr_id = usr_id where isu_iss_id = iss_id) Assignees ");
            };
 if (status) { myCmdQuery.Append(",sta_title Status "); };
 if (priority) { myCmdQuery.Append(",pri_title Priority "); };
 if (createDate) { myCmdQuery.Append(",iss_created_date Created_Date "); };
 if (createAge) { myCmdQuery.Append(",datediff(curdate(), iss_created_date) Days_Old "); };
 if (updateDate) { myCmdQuery.Append(",iss_updated_date Updated_Date "); };
 if (updateAge) { myCmdQuery.Append(", datediff(curdate(),iss_updated_date) Days_Since_Update "); };
 if (updateUser)
            {
                myCmdQuery.Append(",(select usr_full_name from eventum_user join eventum_issue_history " +
                      "on usr_id = his_usr_id where his_id = (select max(his_id) from " +
                      "eventum_issue_history where his_iss_id = iss_id)) Update_User ");};
 if (updateSummary) { myCmdQuery.Append(",(select his_summary from eventum_issue_history where " +
                "his_id = (select max(his_id) from eventum_issue_history where his_iss_id = iss_id)) Update_Summary "); };
 if (abbrev)
            {
 if (notes) { myCmdQuery.Append(", left(n.not_note,200) Last_Internal_Note "); };
 if (closeComments) { myCmdQuery.Append(", left(nc.not_note,200) Closed_Comment "); };
            }
 else
            {
 if (notes) { myCmdQuery.Append(", n.not_note Last_Internal_Note "); };
 if (closeComments) { myCmdQuery.Append(", nc.not_note Closed_Comment "); };
            }
 if (custom) { myCmdQuery.Append(", cfo_value Custom "); };
 if (expResDate) { myCmdQuery.Append(", iss_expected_resolution_date Exp_Res_Date "); };
 if (dupe) {myCmdQuery.Append(", iss_duplicated_iss_id dupe "); };
            myCmdQuery.Append("FROM eventum_issue e ");
            myCmdQuery.Append("join eventum_status s on e.iss_sta_id = s.sta_id ");
            myCmdQuery.Append("join eventum_project p on e.iss_prj_id = p.prj_id ");
            myCmdQuery.Append("join eventum_project_category pc on e.iss_prc_id = pc.prc_id ");
 if (priority) { myCmdQuery.Append("join eventum_project_priority pp on e.iss_pri_id = pp.pri_id "); };
 if (notes)
            {
                myCmdQuery.Append("left outer join eventum_note n on iss_id = not_iss_id and not_id = " +
                    "  (select max(not_id) from eventum_note nn " +
                    "   where n.not_iss_id = nn.not_iss_id and nn.not_title <> 'Issue closed comments') ");
            }
 if (closeComments)
            {
                myCmdQuery.Append("left outer join eventum_note nc on iss_id = nc.not_iss_id and nc.not_id = " +
                    "  (select max(ncc.not_id) from eventum_note ncc " +
                    "   where nc.not_iss_id = ncc.not_iss_id and ncc.not_title = 'Issue closed comments') ");
            }
 if (custom)
            {
                myCmdQuery.Append("left outer join eventum_issue_custom_field on icf_iss_id = iss_id " +
                    "left outer join eventum_custom_field_option on cfo_id = icf_value ");
            } 
            if (this.ddlUserAssignment.SelectedIndex > 0)
            {
                myCmdQuery.Append("join eventum_issue_user on iss_id = isu_iss_id ");
            }
            myCmdQuery.Append("WHERE prj_status = 'active' ");
            if (this.ddlProject.SelectedIndex > 0)
            {
                myCmdQuery.Append("and prj_id = " + this.ddlProject.SelectedValue.ToString() + " ");
            }
            if (this.ddlUserAssignment.SelectedIndex > 0)
            {
                myCmdQuery.Append(" and isu_usr_id = " + this.ddlUserAssignment.SelectedValue.ToString() + " ");
            }
            if (this.rblDate.SelectedIndex == 3)  // NOT touched
            {
                myCmdQuery.Append("and not exists(select 3 from eventum_issue_history where his_iss_id = iss_id ");

                if (this.ddlGroupAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id in (select usr_id from eventum_user where usr_grp_id = " +
                        ddlGroupAction.SelectedValue + ") ");
                }
                if (this.ddlUserAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id = " + ddlUserAction.SelectedValue + " ");
                }
                if (numberOfDays != null)
                {
                    myCmdQuery.Append(" and datediff(curdate(), his_created_date) <= " + numberOfDays + " ");
                }
                else
                {
                    if (start != null)
                    {
                        myCmdQuery.Append(" and his_created_date >= '" + start + "' ");
                    }
                    if (end != null)
                    {
                        myCmdQuery.Append(" and his_created_date <= '" + end + "' ");
                    }
                }
                myCmdQuery.Append(") ");
            }
            if (this.rblDate.SelectedIndex == 2)  // touched
            {
                myCmdQuery.Append("and exists(select 2 from eventum_issue_history where his_iss_id = iss_id ");

                if (this.ddlGroupAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id in (select usr_id from eventum_user where usr_grp_id = " +
                        ddlGroupAction.SelectedValue + ") ");
                }
                if (this.ddlUserAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id = " + ddlUserAction.SelectedValue + " ");
                }
                if (numberOfDays != null)
                {
                    myCmdQuery.Append(" and datediff(curdate(), his_created_date) <= " + numberOfDays + " ");
                }
                else
                {
                    if (start != null) { myCmdQuery.Append(" and his_created_date >= '" + start + "' "); }
                    if (end != null) { myCmdQuery.Append(" and his_created_date <= '" + end + "' "); }
                }
                myCmdQuery.Append(") ");
            }
            else if (this.rblDate.SelectedIndex == 1)  // closed
            {
                myCmdQuery.Append("and exists(select 1 from eventum_issue_history where his_iss_id = iss_id " +
                    "and his_htt_id = 23 ");

                if (this.ddlGroupAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id in (select usr_id from eventum_user where usr_grp_id = " +
                        ddlGroupAction.SelectedValue + ") ");
                }
                if (this.ddlUserAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and his_usr_id = " + ddlUserAction.SelectedValue + " ");
                }
                if (numberOfDays != null)
                {
                    myCmdQuery.Append(" and datediff(curdate(), iss_closed_date) <= " + numberOfDays + " ");
                }
                else
                {
                    if (start != null) { myCmdQuery.Append(" and iss_closed_date >= '" + start + "' "); }
                    if (end != null) { myCmdQuery.Append(" and iss_closed_date <= '" + end + "' "); }
                }
                myCmdQuery.Append(") ");
            }
            else if (this.rblDate.SelectedIndex == 0)  // created
            {
                if (this.ddlGroupAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and iss_usr_id in (select usr_id from eventum_user where usr_grp_id = " +
                        ddlGroupAction.SelectedValue + ") ");
                }
                if (this.ddlUserAction.SelectedIndex > 0)
                {
                    myCmdQuery.Append(" and iss_usr_id = " + ddlUserAction.SelectedValue + " ");
                }
                if (numberOfDays != null)
                {
                    myCmdQuery.Append(" and datediff(curdate(), iss_created_date) <= " + numberOfDays + " ");
                }
                else
                {
                    if (start != null) { myCmdQuery.Append(" and iss_created_date >= '" + start + "' "); }
                    if (end != null) { myCmdQuery.Append(" and iss_created_date <= '" + end + "' "); }
                }
            }
            if (closed && !open) { myCmdQuery.Append(" and iss_closed_date is not null "); }
            if (open && !closed) { myCmdQuery.Append(" and iss_closed_date is null "); }
            if (!open && !closed) { Response.Write("umm.  Not Closed and Not Open?  Kindly try again."); return; }
            if (this.ddlGroupAssignment.SelectedIndex > 0)
            {
                myCmdQuery.Append(" and iss_id in (select isu_iss_id from eventum_issue_user " +
                    "join eventum_user on isu_usr_id = usr_id " +
                    "where usr_grp_id = " + this.ddlGroupAssignment.SelectedValue + ") ");
            }
            if (this.txtKeyword.Text.Length > 0)
            {
                myCmdQuery.Append(" and iss_id in ( select * from ((SELECT DISTINCT(iss_id)  FROM evdb.eventum_issue " +
                    "WHERE  MATCH(iss_summary, iss_description) AGAINST ('" + this.txtKeyword.Text + "' IN BOOLEAN MODE) " +
                    ") UNION (  SELECT DISTINCT(not_iss_id)  FROM  evdb.eventum_note " +
                    "WHERE  MATCH(not_note) AGAINST ('" + this.txtKeyword.Text + "' IN BOOLEAN MODE) " +
                    ") UNION ( SELECT DISTINCT(sup_iss_id) FROM evdb.eventum_support_email, " +
                    "evdb.eventum_support_email_body WHERE sup_id = seb_sup_id AND " +
                    "MATCH(seb_body) AGAINST ('" + this.txtKeyword.Text + "' IN BOOLEAN MODE) ) ) a ) ");
            }
            if (this.ddlCategory.SelectedIndex > 0) { myCmdQuery.Append(" and iss_prc_id = " + this.ddlCategory.SelectedValue.ToString()); }
            if (this.ddlPriority.SelectedIndex > 0) { myCmdQuery.Append(" and iss_pri_id = " + this.ddlPriority.SelectedValue.ToString()); }
            myAdp.SelectCommand = new MySqlCommand(myCmdQuery.ToString());
            myAdp.SelectCommand.Connection = new MySqlConnection(ConfigurationManager.ConnectionStrings["evdbConn"].ToString());
            Session["_query"] = myCmdQuery.ToString();
            DataTable myTable = new DataTable();
            try
            {
                myAdp.Fill(myTable);
                DataView myDV = new DataView(myTable);
                // insert sorts in reverse order here: FIRST USER SORTS, THEN GROUPINGS
                if (this.ddlSort3.SelectedIndex > 0)
                {
                    addSort(ddlSort3.SelectedItem.Text.Replace(" ", "_"), ref myDV);
                }
                if (this.ddlSort2.SelectedIndex > 0)
                {
                    addSort(ddlSort2.SelectedItem.Text.Replace(" ", "_"), ref myDV);
                }
                if (this.ddlSort1.SelectedIndex > 0)
                {
                    addSort(ddlSort1.SelectedItem.Text.Replace(" ", "_"), ref myDV);
                }

                if (this.chkGroupAssignees.Checked)
                {
                    addSort("Assignees", ref myDV);
                }
                if (this.chkGroupCategory.Checked)
                {
                    addSort("Category", ref myDV);
                }
                if (this.chkGroupProject.Checked)
                {
                    addSort("Project", ref myDV);
                }

The result grid also offered grouping on any of three separate attributes, and it too was written in this long declarative style, which I am not proud of, but it produced an organized and detailed report grid.
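The snippet above leans on an addSort helper that isn't shown. A minimal sketch of what it might have looked like, reconstructed from how it is called (a hypothetical version, not the original): each call prepends its column to the DataView's sort expression, so applying the sorts in reverse order leaves the last one added as the primary sort key.

        // hypothetical reconstruction of the addSort helper used above
        private void addSort(string columnName, ref DataView dv)
        {
            if (dv.Sort.Length == 0)
            {
                dv.Sort = columnName;
            }
            else
            {
                // prepend, so the most recently added sort becomes the leftmost (primary) key
                dv.Sort = columnName + ", " + dv.Sort;
            }
        }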

 

The second part used the web server's Scheduled Tasks to run daily and check the database for any reports scheduled to go out at that time. The task ran a simple console script. I saved the task using impersonation, and .NET interfaced with Scheduled Tasks easily.

        ScheduledTasks st = new ScheduledTasks();
        Trigger newTrigger;
        Task newTask;
        string username = User.Identity.Name.Replace("SOMECORP\\", "").ToLower();
        string reportName = "QBR" + username + this.txtScheduledReportName.Text;
        if (this.txtScheduledReportName.Text.Length > 0 && username.Length > 0)
        {
            try
            {
                int adjustedHour = (this.ddlAM.SelectedIndex == 0) ? Convert.ToInt16(this.ddlHour.SelectedValue) :
                    (Convert.ToInt16(this.ddlHour.SelectedValue)) + 12;
                st.DeleteTask(reportName); // in case we are saving new version
                DeleteReport(reportName);
                newTask = st.CreateTask(reportName);
                if (newTask != null)
                {
                    newTask.ApplicationName = "c:\\bin\\QbertEmailer.exe";
                    newTask.Parameters = "\"" + reportName + "\"";
                    newTask.SetAccountInformation("someserviceaccount", "password");
                    if (this.ddlFrequency.SelectedIndex == 0)
                    {
                        newTrigger = new DailyTrigger(
                            (short)adjustedHour,
                            (short)Convert.ToInt16(this.ddlMinute.SelectedValue.Replace(":", "")));
                    }
                    else
                    {
                        newTrigger = new WeeklyTrigger(
                            (short)adjustedHour,
                            (short)Convert.ToInt16(this.ddlMinute.SelectedValue.Replace(":", "")),
                            (DaysOfTheWeek)Convert.ToInt16(this.ddlWeekday.SelectedValue));
                    }
                    newTask.Triggers.Add(newTrigger);
                    // Call LogonUser to get a token for the user
                    Utils.ImpersonateUser iu = new Utils.ImpersonateUser();
                    iu.Impersonate("corp", "someserviceaccount", "password");
                    newTask.Save();
                    newTask.Close();
                    iu.Undo();
                    this.bindGrid();
                    this.SaveReport(reportName, true);
                    resetScheduleForm();
                }
                else
                {
                    Response.Write("Problem creating report name.  Please try again");
                }
            }
            catch (Exception ex)
            {
                logThis(ex.ToString(), EventLogEntryType.Error);
                Response.Write("Could not save report.  Click back and try again or contact your favorite QBERT support representative");
                //continue
            }
            finally
            {
                st.Dispose();
            }
        }
        else
        {
            Response.Write("Bad User or Report Name, cannot save.");
        }
    }
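The Utils.ImpersonateUser helper called above isn't shown; it was a thin wrapper over the Win32 LogonUser call plus the classic .NET Framework WindowsIdentity.Impersonate pattern. A rough sketch of that kind of wrapper (my reconstruction, not the original class; requires System.Runtime.InteropServices, System.Security.Principal, and System.ComponentModel):

    // hypothetical sketch of an impersonation wrapper like Utils.ImpersonateUser
    public class ImpersonateUser
    {
        [DllImport("advapi32.dll", SetLastError = true)]
        private static extern bool LogonUser(string user, string domain, string password,
            int logonType, int logonProvider, out IntPtr token);

        [DllImport("kernel32.dll")]
        private static extern bool CloseHandle(IntPtr handle);

        private WindowsImpersonationContext _context;
        private IntPtr _token = IntPtr.Zero;

        public void Impersonate(string domain, string user, string password)
        {
            // 2 = LOGON32_LOGON_INTERACTIVE, 0 = LOGON32_PROVIDER_DEFAULT
            if (!LogonUser(user, domain, password, 2, 0, out _token))
                throw new Win32Exception(Marshal.GetLastWin32Error());
            _context = WindowsIdentity.Impersonate(_token);
        }

        public void Undo()
        {
            if (_context != null) _context.Undo();
            if (_token != IntPtr.Zero) CloseHandle(_token);
        }
    }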

 

AD lookups provided the email addresses. I didn't put email addresses in our database, because that would just add another place for users to manage them; employees already maintained their email addresses through the company's AD system.

                using (DirectorySearcher ds = new DirectorySearcher("(samaccountname=" + user + ")"))
                {
                    ds.PropertiesToLoad.Add("mail");
                    SearchResult sr = ds.FindOne();
                    if (sr == null)
                    {
                        return;
                    }
                    if (sr.Path.Length == 0)
                    {
                        return;
                    }
                    user = sr.Properties["mail"][0].ToString();
                }

The email was simple HTML, just the results of the saved report query, run at that moment and formatted into a table.

htmlReport.Append("<html><table cellspacing='0' border='1'style='border-collapse:collapse;font-size:9pt'>" +
                        "<tr style='background-color:Silver;font-size:X-Small;'>");
 foreach (DataColumn dc in dt.Columns)
                        {
                            htmlReport.Append("<td>" + dc.ColumnName + "</td>");
                        }
                        htmlReport.Append("</tr><tr>");
 int x = 1;
 foreach (DataRow dr in dt.Rows)
                        {
 if (x == 1)
                            {
                                htmlReport.Append("<tr>");
                                x = x * -1;
                            }
 else
                            {
                                htmlReport.Append("<tr style='background-color:AliceBlue;'>");
                                x = x * -1;
                            }
 foreach (object fieldVal in dr.ItemArray)
                            {
 if (dr.ItemArray[0].ToString().Equals(fieldVal.ToString()))
                                {
                                    htmlReport.Append("<td><a href='http://boitqueue/view.php?id=" +
                                        fieldVal.ToString() + "'>" + fieldVal.ToString() + "</td>");
                                }
 else
                                {
                                    htmlReport.Append("<td>" + fieldVal.ToString() + "</td>");
                                }
                            }
                            htmlReport.Append("</tr>");
                        }
                        htmlReport.Append("</table></html>");

Using a heartbeat to find time of death on constantly changing databases

by MikeHogg1. April 2008 20:31

Another unusual request came in one day, and I enjoyed writing the solution. The team I worked on employed up to a dozen copies of the same database: some in different stages of testing, like Prod, UAT, Stage, and Stage2, and some purposed for data analysis or short investigative projects.

Only some of these were active, in the sense that they were running our continuous batch processing procedures, which continually imported data from upstream systems and processed it. For the inactive systems, we sometimes wanted to know when they had been taken offline and made inactive, that is, when they stopped receiving upstream data.

So I set up a console application with two functions. The first updated a particular database with a timestamp, running every five minutes or so. I slipped the command for this into the same scheduler that ran the batch processing, so it would only run on active databases. The timestamp was also encrypted, because sometimes we absolutely needed to know that it had not been modified by outside sources. That may have been a little overboard, but it was an interesting project.

The second part was a way to read this back from any database. The same console app, dropped in a NAS script location, let anybody on the network get the actual last timestamp of a given database, decrypted, right on their command line.

Here are the two routines:

        static private int updateDB()
        {
            byte[] key = { };
            byte[] IV = { 0x49, 0xCD, 0x24, 0x37, 0x95, 0xDB, 0xFE, 0xBD };
            String systimestamp;
            String timecode;

            OracleCommand myCmd = new OracleCommand("", Conn);
            try
            {
                key = System.Text.Encoding.UTF8.GetBytes("pacmanlx");

                DESCryptoServiceProvider myDES = new DESCryptoServiceProvider();
                //Rijndael myR = Rijndael.Create();

                byte[] inputString = Encoding.UTF8.GetBytes(DateTime.Now.ToString("MMM-dd"));
                System.IO.MemoryStream ms = new System.IO.MemoryStream();
                CryptoStream cs = new CryptoStream(ms, myDES.CreateEncryptor(key, IV), CryptoStreamMode.Write);
                cs.Write(inputString, 0, inputString.Length);
                cs.FlushFinalBlock();
                //encrypt this baby
                timecode = Convert.ToBase64String(ms.ToArray());

                //not this one
                systimestamp = DateTime.Now.ToString();
                myCmd.CommandText = "update some.valid_general_cd v set v.decode = '" +
                    systimestamp + "', column_cat_cd = '" + timecode + "' where v.code_id = 4444";
                Conn.Open();
                int myReturn = myCmd.ExecuteNonQuery();
                myCmd.Dispose();
                Conn.Close();
                return myReturn;
            }
            catch (Exception ex)
            {
                if (Conn.State == System.Data.ConnectionState.Open)
                {
                    Conn.Close();
                }
                throw new Exception(ex.ToString());
            }
        }
        static private void decryptTime(string db)
        {
            byte[] key = { };
            byte[] IV = { 0x49, 0xCD, 0x24, 0x37, 0x95, 0xDB, 0xFE, 0xBD };
            byte[] inputByteArray = { };
            OracleConnection Conn = new OracleConnection("Data Source=" + db + ".somecorp.net;User Id=somedbuser;Password=password;");
            OracleCommand myCmd = new OracleCommand("", Conn);
            String outputString;
            try
            {
                key = System.Text.Encoding.UTF8.GetBytes("pacmanlx");
                DESCryptoServiceProvider myDES = new DESCryptoServiceProvider();
                //Rijndael myR = Rijndael.Create();
                myCmd.CommandText = "select v.column_cat_cd from some.valid_general_cd v where v.code_id = 4444";
                Conn.Open();

                OracleDataReader myReader = myCmd.ExecuteReader();
                myReader.Read();
                outputString = myReader.GetString(0);

                //decrypt object
                inputByteArray = Convert.FromBase64String(outputString);

                //now decrypt the regular string
                System.IO.MemoryStream ms = new System.IO.MemoryStream();
                CryptoStream cs = new CryptoStream(ms, myDES.CreateDecryptor(key, IV),
                    CryptoStreamMode.Write);
                cs.Write(inputByteArray, 0, inputByteArray.Length);
                cs.FlushFinalBlock();

                System.Text.Encoding encoding = System.Text.Encoding.UTF8;
                //write date to console
                Console.WriteLine(encoding.GetString(ms.ToArray()));
                myReader.Close();
                Conn.Close();
            }
            catch (Exception ex)
            {
                if (Conn.State == System.Data.ConnectionState.Open)
                {
                    Conn.Close();
                }
                Console.WriteLine(ex.ToString());
            }
        }
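The console entry point isn't shown here. Dispatching between the two routines might have looked something like this (a hypothetical sketch; the argument handling is my assumption, and updateDB relies on the shared Conn used in the routine above):

        static void Main(string[] args)
        {
            // no argument: write the encrypted heartbeat to the active database;
            // with a database name: read it back and print the decrypted timestamp
            if (args.Length == 0)
            {
                updateDB();
            }
            else
            {
                decryptTime(args[0]);
            }
        }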
