Baseline Security Analyzer

For those of you maintaining your solution on-premises, remember that the Microsoft Baseline Security Analyzer (MBSA) is an easy win for internal IT audits.  You can use it as long as you have administrative access to the server.  Launch it from your desktop and you'll see the interface shown below.

2017-11-05_8-30-15.png

If you click the "Scan a computer" link, you'll then be shown this...

2017-11-05_8-39-45.png

Once you click Start Scan, you'll see a progress bar...

2017-11-05_8-39-54.png

After it's done you'll see a list of all the known vulnerabilities and the current score for each...

2017-11-05_8-32-09.png

The VM I'm using for the JFK archives had one issue I needed to address.  It's nice that the report includes information about how to correct each issue.

Make sure you've run this on any server exposed to the public!
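
If you'd rather script the scan (for a recurring audit task, say), MBSA also ships with a command-line client, mbsacli.exe.  Here's a minimal sketch in PowerShell, assuming the default MBSA 2.x install path and a placeholder target name:

# Run an MBSA scan from the command line instead of the GUI.
# The install path is the MBSA 2.x default; the target is a placeholder.
$mbsacli = 'C:\Program Files\Microsoft Baseline Security Analyzer 2\mbsacli.exe'

# Scan one server, skipping the password checks to keep the run quick.
& $mbsacli /target MYDOMAIN\MYSERVER /n Password

# List the reports produced so far (they land in %USERPROFILE%\SecurityScans).
& $mbsacli /l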

Digging into SQL errors when copying a dataset

There was an interesting question over on the forum, so I figured I should follow up with some information about the process of copying a dataset using the migration feature when creating a new dataset.  This feature is relatively new for organizations that are upgrading, so let's dive right on in!

To get things started, I opened the Enterprise Studio and clicked Create Dataset.

2017-10-28_17-52-12.png

Then I entered the new dataset's name, ID, and RDBMS platform (dataset type)...

2017-10-28_15-54-10.png

After clicking Next, I clicked the KwikSelect icon on the dataset connection string property. 

2017-10-28_16-05-52.png

This displays the Data Link Properties dialog, where I can enter the server name, pick the authentication mechanism, and select the database name.  I want to configure this to point to the new database, which will end up being a duplicate of an existing dataset.  If you select SQL authentication (the "Use a specific user name and password" option), be sure to also check "Allow saving password".

Be sure to point to the new database!  The source content will be bulk loaded into this database.

After configuring the Data Link Properties dialog, I clicked OK and was returned to the new dataset connection dialog.  I changed the command timeout to 300 seconds.  The default equates to just 30 seconds, which is far too low and will lead to timeout errors for users.

2017-10-28_16-05-52.png

I clicked Next to move on to the Options page of the create new dataset wizard.  It prompted me for a working path for bulk loading, and I pointed it to a share on my workgroup server.  Then I clicked Next (skipping the IDOL index staging field).

2017-10-28_16-14-50.png

I provided a path for my new document store and then clicked Next.

2017-10-28_16-43-04.png

On the Storage dialog I simply clicked Next without changing any of the options.  You would only change these options if you had created additional files within the database; contact your database administrator for an explanation and review of these settings.

2017-10-28_16-19-59.png

On the initialization page of the wizard I selected the Migrate radio button, which enabled me to pick my source dataset and migration style.  It's best to select a basic setup data migration style when promoting from development (DEV) to user acceptance testing (UAT), or from UAT to production (PROD).  Otherwise I just pick Entire Dataset (you can always purge records or locations later).  Note that I have not selected Unicode character support, but you may need to (discuss it with your DBA).

2017-10-28_16-44-52.png

I clicked Next one last time and was presented with the run parameters page of the wizard.  I clicked Finish without changing any options.

2017-10-28_16-50-06.png

Then I had to click OK on this warning dialog.

2017-10-28_16-50-48.png

Finally, I can click Start to kick this copy off!

2017-10-28_16-51-47.png

It completed step one (of four) and then bombed with this error message...

2017-10-28_16-52-45.png

Well, the error says there's an access-denied issue on a file, so I opened Windows Explorer and looked at the file...

2017-10-28_16-53-54.png

The existence of a non-zero-length file tells me there were appropriate permissions to create the file.  The issue must therefore be with permissions to read the file, which makes sense given the error states "the file ... could not be opened".
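
If you want to see exactly who holds which rights, a quick look at the ACL settles it (the UNC path below is a placeholder for the file on my workgroup share):

# Dump the ACL on the file SQL Server claims it cannot open.
# The UNC path is a placeholder; substitute your own working path.
Get-Acl -Path '\\workgroup\bulkload\somefile.dat' | Format-List Owner, AccessToString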

Since I'm using Windows authentication, SQL Server must present my credentials to the file share for validation.  This fails because SQL Server cannot forward my credentials to the remote share (a classic double-hop situation).  If I change my bulk-load share to one created on the SQL Server itself, my credentials won't need to be forwarded and the process works correctly.  Alternatively, if I changed the identity of the SQL Server service from the virtual network service account (shown below) to a defined domain user, I could enable constrained delegation, which would allow the double hop of credentials to succeed.

SQL Server is set to use a virtual network service account
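
You can confirm that service identity without opening the configuration manager; a quick query does it (assuming the default instance, whose service name is MSSQLSERVER):

# Show which account the SQL Server service runs under.
# Assumes the default instance; a named instance shows up as MSSQL$InstanceName.
Get-CimInstance -ClassName Win32_Service -Filter "Name = 'MSSQLSERVER'" |
    Select-Object Name, StartName, State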

If I cannot create a share off the SQL Server, then I must switch from Windows authentication to SQL authentication.  With SQL authentication, the file access happens under the SQL Server service account itself, so I would still need to change that identity from the virtual network service (shown in the picture above) to a domain account and grant it rights on the remote share.  The SQL authentication route is significantly more complicated to configure, so the best option for me is a share local to the SQL Server (since Windows authentication fails without constrained delegation).
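
Before rebuilding anything, you can also reproduce the failure outside the wizard by asking SQL Server itself to open a file on the share.  A sketch using the SqlServer PowerShell module (server name and UNC path are placeholders); run under Windows authentication, it takes the same double hop the bulk load does:

# Ask SQL Server to open a file on the share, the same way the bulk
# load will. Server name and UNC path are placeholders; requires the
# SqlServer PowerShell module and Windows authentication.
Import-Module SqlServer
Invoke-Sqlcmd -ServerInstance 'SQLSERVER01' -Query @"
SELECT DATALENGTH(BulkColumn) AS bytes
FROM OPENROWSET(BULK '\\workgroup\bulkload\probe.txt', SINGLE_BLOB) AS doc;
"@

If this throws the same "could not be opened" error, the wizard isn't the problem; the hop is.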

My quick fix is to redo all the steps above, but using "\\apps\bulkload" as my working path for bulk load...
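
If you need to create that share on the SQL Server host first, a couple of lines of PowerShell will do it (run on the SQL Server itself; the local folder path, share name, and grantee are placeholders for my lab):

# Create the bulk-load folder and share it; run on the SQL Server host.
# The folder path, share name, and grantee are placeholders.
New-Item -ItemType Directory -Path 'C:\BulkLoad' -Force | Out-Null
New-SmbShare -Name 'bulkload' -Path 'C:\BulkLoad' -FullAccess 'CMRAMBLE\erik'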

2017-10-28_17-44-58.png

Once I did that, the process worked flawlessly...

2017-10-28_17-46-14.png

I've now got two datasets defined.  Next I should configure events, IDOL, and any other settings unique to the new dataset.

2017-10-28_17-47-44.png

Export Mania 2017 - Tag and Task

Warning: this post makes the most sense if you've read the previous post...

If I use the thick client, I can tag one or more records and then execute any task (I prefer to call these commands, since that's how the SDK refers to them).  In this post I'll show these tasks, their output, and how they may (or may not) achieve my goal: exporting a metadata file with the corresponding electronic documents, where each file name is the record number.

The commands we can execute include:

  1. Supercopy
  2. Check-out
  3. Save Reference
  4. Print Merge
  5. Export XML
  6. Web Publish

If I tag several records and then initiate any of these commands, I'm prompted to confirm whether I intend to use the highlighted item or the tagged items.  You can suppress this prompt by unchecking the checkbox, but I left it alone (so there's no confusion later).  Once I click OK, I see the dialog for the selected task.

2017-10-15_21-54-20.png

As you can see below, the Supercopy command lets me save just the electronic documents into a folder on my computer or into my offline records tray.  The resulting electronic documents are titled using the suggested file name property of the record(s).

2017-10-15_21-55-12.png

The resulting files carry just the suggested file name.  It does not include the record number, dataset ID, or any other information, and I cannot get a summary metadata file.  So this won't work for my needs.

2017-10-15_22-05-22.png

Check-out does the exact same thing as Supercopy, but it updates CM to show that the document(s) are checked out.  Since emails cannot be checked out, this task fails for 2 of my selected records, which means they won't even export.

2017-10-15_22-07-50.png

So Supercopy and Check-out don't meet my requirements.  Next I try the Save Reference feature, which gives me two options: a single reference file for all tagged records, or one reference file per record.

Here's what each option creates on my workstation...

Single Reference File

2017-10-15_22-11-43.png

Multiple Reference Files

2017-10-15_22-10-50.png

When I open the single reference file, Content Manager launches and shows me a search results window containing the records I tagged and tasked.  The multiple reference files each do the same thing, with each record in its own search results window.  In both cases, no electronic documents are exported.

Single Reference File

2017-10-15_22-13-24.png

Multiple Reference Files

2017-10-15_22-16-19.png

Now I could craft a PowerShell script and point it at the results of my reference file(s).  The reference file includes the dataset ID and the unique record IDs. As you can see below, the structure of the file is fairly easy to decipher and manage.

Single reference file format
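
If I did go down that path, the skeleton would be short.  A rough sketch; the pattern is a guess at the layout pictured above, and the path and extension are placeholders, so adjust both to your file's actual structure:

# Rough sketch only: pull the record IDs out of a saved reference file.
# The pattern is a guess at the layout pictured above; the path and
# extension are placeholders.
$refFile = 'C:\exports\records.tr5'
Select-String -Path $refFile -Pattern '\d+' -AllMatches |
    ForEach-Object { $_.Matches.Value }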

Still, I don't really see a point in writing a PowerShell script just to make references work.  Next on the list is Print Merge. As shown below, this is a nightmare of a user interface; in my personal experience it's one of the most complained-about screens in the product.  The items are not even in alphabetical order!

2017-10-15_22-28-00.png

It's funny, because this feature gives me the best opportunity to export metadata from within the client, yet it cannot export electronic documents.  So I need to move on to the next option: Web Publish.

The Web Publish feature would have been cool in 1998.  I think that's when it was built, and I don't think anyone has touched it since.  When I choose this option I'm presented with the dialog below.

2017-10-15_22-34-49.png

I provided a title and then clicked the KwikSelect icon for the basic detail layout.  I didn't have any, so I right-clicked and selected New.  I gave it a title, as shown below.

2017-10-15_22-32-26.png

Then I selected my fields and clicked OK to save it.  

When I selected the new basic layout and clicked OK, I got the following results.  Take note of the file naming convention; that makes 3 different conventions so far (DataPort, Supercopy, Web Publish).

2017-10-15_22-39-44.png

Opening the index file shows me this webpage...

2017-10-15_22-40-28.png

I'm pretty sure the idea here was to provide a set of files you could upload to your fancy website, like a city library might do. I tell people it's best for discovery requests, where you can burn someone a CD that lets them browse the contents.

Time to move on to the last option: XML.  When I first saw this feature, my mind immediately thought: whoa, cool!  I can export a standardized format and use Excel's XML Source task pane to apply an XML map.  Then, when I open future XML exports, I can easily apply the map and have a fancy report or something.  I was wrong.

Here's the dialog that appears when you choose Export XML... I pick an export file name, tell it to export my documents, and throw in some indents for readability.  Note the lack of metadata field options and of an export folder location.

2017-10-15_22-44-29.png

Then I let it rip and checked out the results...

2017-10-15_22-47-09.png

I would gladly place a wager that these 4 different naming conventions were created by 4 different programmers.  The contents of the XML aren't really surprising...

<?xml version="1.0" encoding="ISO-8859-1" standalone="no" ?>
<TRIM version="9.1.1.1002" siteID="sdfsdfsdfsdfd" databaseID="CM" dataset="CMRamble" date="Sunday, October 15, 2017 at 10:46:50 PM" user="erik">
  <RECORD uri="1543">
    <ACCESSCONTROL propId="29">View Document: &lt;Unrestricted&gt;; View Metadata: &lt;Unrestricted&gt;; Update Document: &lt;Unrestricted&gt;; Update Record Metadata: &lt;Unrestricted&gt;; Modify Record Access: &lt;Unrestricted&gt;; Destroy Record: &lt;Unrestricted&gt;; Contribute Contents: &lt;Unrestricted&gt;</ACCESSCONTROL>
    <ACCESSIONNUMBER propId="11">0</ACCESSIONNUMBER>
    <BARCODE propId="28">RCM000016V</BARCODE>
    <CLASSOFRECORD propId="24">1</CLASSOFRECORD>
    <CONSIGNMENT propId="22"></CONSIGNMENT>
    <CONTAINER uri="1543" type="Record" propId="50">8</CONTAINER>
    <DATECLOSED propId="7"></DATECLOSED>
    <DATECREATED propId="5">20170928093137</DATECREATED>
    <DATEDUE propId="9"></DATEDUE>
    <DATEFINALIZED propId="31"></DATEFINALIZED>
    <DATEIMPORTED propId="440"></DATEIMPORTED>
    <DATEINACTIVE propId="8"></DATEINACTIVE>
    <DATEPUBLISHED propId="111"></DATEPUBLISHED>
    <DATERECEIVED propId="1536">20170928093449</DATERECEIVED>
    <DATEREGISTERED propId="6">20170928093449</DATEREGISTERED>
    <DATESUPERSEDED propId="1535"></DATESUPERSEDED>
    <DISPOSITION propId="23">1</DISPOSITION>
    <EXTERNALREFERENCE propId="12"></EXTERNALREFERENCE>
    <FOREIGNBARCODE propId="27"></FOREIGNBARCODE>
    <FULLCLASSIFICATION propId="30"></FULLCLASSIFICATION>
    <GPSLOCATION propId="1539"></GPSLOCATION>
    <LASTACTIONDATE propId="21">20171015224650</LASTACTIONDATE>
    <LONGNUMBER propId="4">00008</LONGNUMBER>
    <MANUALDESTRUCTIONDATE propId="122"></MANUALDESTRUCTIONDATE>
    <MIMETYPE propId="82">image/png</MIMETYPE>
    <MOVEMENTHISTORY propId="33"></MOVEMENTHISTORY>
    <NBRPAGES propId="83">0</NBRPAGES>
    <NOTES propId="118"></NOTES>
    <NUMBER propId="2">8</NUMBER>
    <PRIORITY propId="13"></PRIORITY>
    <RECORDTYPE uri="2" type="Record Type" propId="1">Document</RECORDTYPE>
    <REVIEWDATE propId="32"></REVIEWDATE>
    <SECURITY propId="10"></SECURITY>
    <TITLE propId="3">2017-09-28_9-31-37</TITLE>
    <RECORDHOLDS size="0"></RECORDHOLDS>
    <ATTACHEDTHESAURUSTERMS size="0"></ATTACHEDTHESAURUSTERMS>
    <LINKEDDOCUMENTS size="0"></LINKEDDOCUMENTS>
    <CONTACTS size="4">
      <CONTACT uri="6185">
        <FROMDATETIME propId="157">20170928093449</FROMDATETIME>
        <ISPRIMARYCONTACT propId="161">No</ISPRIMARYCONTACT>
        <LATESTDATETIME propId="158">20170928093449</LATESTDATETIME>
        <LOCATION uri="5" type="Location" propId="155">erik</LOCATION>
        <NAME propId="150">erik</NAME>
        <RETURNDATETIME propId="159"></RETURNDATETIME>
        <TYPEOFCONTACT propId="152">0</TYPEOFCONTACT>
        <TYPEOFRECORDLOCATION propId="151">3</TYPEOFRECORDLOCATION>
      </CONTACT>
      <CONTACT uri="6186">
        <FROMDATETIME propId="157">20170928093449</FROMDATETIME>
        <ISPRIMARYCONTACT propId="161">No</ISPRIMARYCONTACT>
        <LATESTDATETIME propId="158">20170928093449</LATESTDATETIME>
        <NAME propId="150">FACILITY-HRSA-4647 (At home)</NAME>
        <RETURNDATETIME propId="159"></RETURNDATETIME>
        <TYPEOFCONTACT propId="152">1</TYPEOFCONTACT>
        <TYPEOFRECORDLOCATION propId="151">0</TYPEOFRECORDLOCATION>
      </CONTACT>
      <CONTACT uri="6187">
        <FROMDATETIME propId="157">20170928093455</FROMDATETIME>
        <ISPRIMARYCONTACT propId="161">No</ISPRIMARYCONTACT>
        <LATESTDATETIME propId="158">20170928093455</LATESTDATETIME>
        <NAME propId="150">FACILITY-HRSA-4647 (In container)</NAME>
        <RETURNDATETIME propId="159"></RETURNDATETIME>
        <TYPEOFCONTACT propId="152">2</TYPEOFCONTACT>
        <TYPEOFRECORDLOCATION propId="151">1</TYPEOFRECORDLOCATION>
      </CONTACT>
      <CONTACT uri="6188">
        <FROMDATETIME propId="157">20170928093449</FROMDATETIME>
        <ISPRIMARYCONTACT propId="161">No</ISPRIMARYCONTACT>
        <LATESTDATETIME propId="158">20170928093449</LATESTDATETIME>
        <LOCATION uri="5" type="Location" propId="155">erik</LOCATION>
        <NAME propId="150">erik</NAME>
        <RETURNDATETIME propId="159"></RETURNDATETIME>
        <TYPEOFCONTACT propId="152">0</TYPEOFCONTACT>
        <TYPEOFRECORDLOCATION propId="151">2</TYPEOFRECORDLOCATION>
      </CONTACT>
    </CONTACTS>
    <RELATEDRECORDS size="0"></RELATEDRECORDS>
    <RENDITIONS size="0"></RENDITIONS>
    <REVISIONS size="0"></REVISIONS>
    <CONTENTSOF>
    </CONTENTSOF>
    <FORRECORD>
    </FORRECORD>
    <ELECTRONICDOCUMENTLIST>
      <FILE>records_1543.PNG</FILE>
    </ELECTRONICDOCUMENTLIST>
  </RECORD>

On the positive side, I do get the file name, title, and record number.  However, the uniqueness of this XML structure is for the birds.  I could craft a PowerShell script to tackle renaming the files and such, but I refuse to do so.  I protest the rubbish in this file.  Fly away, little XML document... fly, fly away.
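
That said, if you're less stubborn than I am, renaming the exported files to their record numbers is a short script.  A sketch, assuming the export XML and the exported documents sit in the same folder (paths are placeholders):

# Rename the exported documents to their record numbers, driven by the
# export XML shown above. Folder and file names are placeholders.
$folder = 'C:\exports'
[xml]$export = Get-Content -Path (Join-Path $folder 'export.xml') -Raw

foreach ($record in $export.TRIM.RECORD) {
    $number = $record.NUMBER.'#text'              # "8"; use LONGNUMBER for "00008"
    foreach ($file in $record.ELECTRONICDOCUMENTLIST.FILE) {
        $extension = [System.IO.Path]::GetExtension($file)
        Rename-Item -Path (Join-Path $folder $file) -NewName "$number$extension"
    }
}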

All this tagging and tasking helps me know my options for the future, but it also demonstrates clearly that I'm looking in the wrong places.  Unique requirements like these (exporting documents with numbered file names) mean I need to build something custom.  In the next posts I'll show several options for custom exports.