Monday, January 14, 2013

Data Migration Framework Overview & Shortcomings

When I attended the AX Technical Conference in Seattle this past October, I was impressed with some of the new features coming into the AX circle.  One of the tools that caught my eye was the Data Migration Framework (DMF), which was, and still is, in its beta phase.  I've only recently been exposed to AX implementations, having started this past August as a technical consultant for a Microsoft Gold Partner, so my knowledge of what data migrations require is still quite limited.  In fact, the partner I work for has typically gone through another partner to handle data migrations.  In an attempt to further grow our firm, though, my boss asked me to undertake the task of researching data migration and the tools available to us.

I did a little research and, as with other AX-related Google searches, found very little.  Since I had been to the Technical Conference, however, I knew about DMF and some of the theory behind it.  From my understanding, if you are moving from an older version of AX to a newer one, the 'Data Import' is what should be used; if you are moving from a non-AX system to AX, DMF is the recommended solution (aside from the fact that it is still in Beta 2.0 and Microsoft doesn't provide support for it).  After playing with DMF, though, I've found it to be very buggy and wouldn't recommend it just yet.  Like anything else new that Microsoft has released, it's really great in theory; there are just too many things, which I imagine will be fixed in later iterations, that are not quite there yet.  For now, I recommend using AIF instead.  I will be providing a similar analysis of AIF in the near future; follow me on Twitter for that update and other AX-related tweets.  In the meantime, I will give you a general overview of the process I used to work with the tool and what I've found along the way.

Installation


The first step is the installation of the tool, and there is an article that helps with this; it's not an intuitive install.  In addition to executing DmfSetup.exe, which MUST be 'Run as Administrator', you also have to install an .axmodel file (which is extracted by the setup) using the AXUtil command-line utility (run from PowerShell or a command prompt).  The setup neither informs you of this step nor just does it for you; that would make too much sense.  The article I mentioned above walks you through it.  Once everything is installed, you have to wait 8+ hours for the AOT to do a full compile and also have to kick off a full CIL compile.  Having completed that, and after waiting the better part of two work days, the 'Data migration framework' shows up as a module in the navigation pane of AX.

Setup


The next step is setup; there isn't too much to it.  The first thing you must do is set up a shared directory where your source files will reside.  This is set up in 'Data migration framework parameters' (pictured below).  You must click the Validate button to continue using DMF.  At first I was getting an error: "Unable to load file. AOS user does not have access to shared folder."  This was resolved by adding permissions to the shared directory I was selecting.  To see which user should be granted access, look at the AOS service and figure out what account it runs under.  Once the proper permissions were added, the shared directory validated.  I didn't even need to restart the AOS, which is always a bonus.


Another part of the setup is 'Source data formats'.  This is where you specify what your input file format will be.  For instance, I am primarily working with tab-delimited files, so as pictured below, you can see where I specified that.  Other input options include fixed-width data files and a direct database connection through ODBC.  There is another option called AX that I am unsure about at this point, so I won't pretend to know what it does.  We can make a good guess as to how ODBC works, but I won't discuss that either; in my case, I don't have an environment with direct access to the source database, so that option won't work.  In the meantime, I'm sticking with delimited flat files.


Introduction


Now that we are set to begin, I will start with a high-level overview of the theory behind how data progresses through DMF.  You have the "Source", the "Stage", and the "Target".  The Source, in my case, is the flat file I will be using.  The source file's data migrates into one flat temporary staging table within AX.  From the staging table, the data migrates to a target.  In most, if not all, cases that target is multiple tables that are related to each other.

Because DMF comes shipped with many pre-defined entities (table mappings), 'Target entities' (pictured below) will be the easiest thing to look at right away; this is the relationship between Stage and Target.


This is a listing of all the mappings that come shipped with DMF.  You can also define your own custom entities here if you have custom data that must be imported into AX, but I won't go over that.  The interesting thing to note is the AOT objects referenced here: the Staging table is a table in AX, the Entity class is a class within AX, and the Target entity is a query object within AX.  So, if we click on 'Entity structure', we will see the same structure defined as the query object mentioned before.


This is the magic behind how data gets from one flat staging table to multiple target tables, but something is still missing: we need to map fields from the staging table to the target fields.  Luckily, the target entities that come shipped with DMF already have these defined.  Click on 'Modify target mapping'.


This looks chaotic, but it has to be; it's a complex table relation.  The lucky part is that it is already defined.  Truthfully, the hard part will be getting the data into the staging table.

One thing to note here, which I won't elaborate on too much, is the three boxes in the middle.  These are three methods located in the Entity class I referred to earlier; they are created when complex data transformations are needed before the data reaches the target tables.  For instance, you see the GenerateDefaultDimension method; this exists because AX 2012 handles financial dimensions using a very different table structure than before (a sketch of what such a method has to accomplish follows).
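To give a feel for what that kind of transformation involves, here is a minimal X++ sketch of building a DefaultDimension value with the standard AX 2012 dimension API.  This is not DMF's actual code; the attribute name 'Department' and the value '022' are placeholders.

static RefRecId buildDefaultDimensionSketch()
{
    DimensionAttributeValueSetStorage   storage = new DimensionAttributeValueSetStorage();
    DimensionAttribute                  attribute;
    DimensionAttributeValue             attributeValue;
    ;
    attribute = DimensionAttribute::findByName('Department');   // placeholder dimension name
    // find (or create) the backing value record for this attribute
    attributeValue = DimensionAttributeValue::findByDimensionAttributeAndValue(attribute, '022', false, true);
    storage.addItem(attributeValue);
    return storage.save();   // a RecId suitable for storing in a DefaultDimension field
}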

Looking at this, you might be confused as to why the target shows up as one table and not multiple, as I mentioned before.  The simple explanation is that this is built off a query object.  Imagine executing that query against the database: you get one result set back, and that is what this is.  AX manages this much like it handles table buffers in forms and form data sources; it might look like one grid of data, but it can come from multiple sources.  The same applies here, except the data flows in the opposite direction.  Notice how ItemId in the Staging table is mapped to multiple Target fields; this is because ItemId exists as a field in multiple tables of the target.

That pretty much wraps up the introduction and everything as it ships.  Everything beyond that we have to custom-define and execute manually, starting with how the data gets from the Source to the Staging table.

Source-to-Stage


Now we are ready to start migrating data from the Source to the Staging table.  First, we will need to create a 'Processing group'.  This is basically just a way to group all the types of data you will be processing together.  For instance, we will be trying to import data relating to items, so we will create a group for that.


Notice in the above example that the Entities button is the only option available to us (never mind the 'Export to' option).  If we click Entities, another form comes up.  This is basically where we set up everything related to the source file.


If we add a record here, we need to specify an Entity, a data format, and the path to a sample file.  I selected Items as the Entity, as we are importing items.  I also chose Tab (which we set up earlier) as my data format, since my flat file is tab-delimited.  This is the only setup needed as far as the source file is concerned.  The next part is to define the mappings between the Source and the Staging table.

The first thing that must be done is the 'Generate source mapping' option at the top.  This sets up your Source, giving it the ability to be mapped.  Be careful about pressing this later on; it will reset all your mappings to the defaults.  Most often, your column names in the source file won't match AX naming conventions, so nothing may map at all by default.  In fact, you may get a warning: "There are field(s) which are not mapped to Entity field(s)".  This is okay and expected.

Now we have the ability to change the default mappings by clicking 'Modify source mapping'.  As pictured below, we can see that no fields were mapped by default.


So, from here, I can click and drag from any black dot by a column name of the Source and drag it to a black dot of another column in the Staging table.  I added only a few just to keep this example simple.


One thing to note here, there is an option at the top to see a 'Mapping details' view.  Here you will see exactly what you can see in the visualization, only this is in grid format.


If you select one of the source fields, you can click the Conversion button.  Here, you can define conversions, or transformations, to apply to values coming from the source file.  In my example, I have a column called 'type' that is mapped to the ItemType field.  AX expects either 'Item' or 'Service' as valid values, but my flat file only has 'I' and 'S', which the ItemType within AX will not know how to handle.  So, we can add a conversion to say: when the value is 'S', change it to 'Service'; when it is 'I', change it to 'Item'.  This is a very neat feature of DMF (the equivalent logic is sketched below).
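Conceptually, the conversion is just a value map.  If you were doing the same thing in X++ instead of in the DMF grid, it would amount to something like this hypothetical helper (ItemType is the real AX base enum; 'I' and 'S' are the codes from my source file):

static ItemType convertItemTypeSketch(str _sourceValue)
{
    switch (_sourceValue)
    {
        case 'I':
            return ItemType::Item;
        case 'S':
            return ItemType::Service;
        default:
            throw error(strFmt("Unknown item type '%1' in source file", _sourceValue));
    }
}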

Also, don't forget to hit Save before you exit; a message will warn you if you didn't.  Now that the mappings have been defined, we can hit 'Validate', and it will make sure that all the required fields are mapped.  Hopefully by this point we're set, but before we move on we should preview what the data will look like when it goes into the Staging table.  Click 'Preview source file' and a grid should appear in the Preview pane, pictured below.


At this point, it looks great!  We are now ready to load data into the Staging table.  We can close out of the Entities form and a new option appears in our Processing group, 'Get staging data'.


Go ahead and click it.  We are asked to supply a Job ID; the default value will do.  The Job ID is used to track what data was processed successfully and what failed with errors.  We could run more jobs later and still be able to tell which errors came out of the initial run.


A data execution form appears and lets us know what is expected to run and what hasn't run yet.  We can click Run, and a batch dialog appears.  At this point we're not going to batch-process this, so we can leave the defaults and run it right away.  An infolog message shows: "'12' 'Items' record(s) inserted in staging", for however many records were inserted.  If there were any errors, it would let you know; for more specifics on the errors, you can open the Execution history form to see the log.

Now the data should all be loaded into the Staging table.  We can verify this easily by looking at the DMF staging table in the AOT, or with a quick job like the one below.
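If you'd rather not browse the table in the AOT, a small X++ job can count what landed in staging.  The table name DMFItemEntity is my guess for the Items entity (check the Staging table column in the Target entities form for the real name), and I'm assuming the staging table carries the Job ID in an ExecutionId field like the other DMF* tables appear to:

static void VerifyStagingCountSketch(Args _args)
{
    DMFItemEntity   staging;   // hypothetical staging table name - verify in Target entities
    ;
    // 'Items-1' stands in for whatever Job ID you supplied in the dialog
    select count(RecId) from staging
        where staging.ExecutionId == 'Items-1';
    info(strFmt("%1 record(s) in staging", staging.RecId));
}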

Stage-to-Target


Back at the 'Processing groups' form, we now see all options available to us.  The option 'Copy data to target' is the one we are interested in next.  When we click it, we are prompted for the Job ID we used earlier.  If we have run several jobs and loaded different pieces of data, we can easily segregate the migration by choosing only specific jobs.  There are also options to run only the records that previously had errors, or records by user selection.  Click OK to continue.

Another execution form appears that is similar to the Staging table execution.  If we click Run, it will start processing records.

Unfortunately, this is where my tutorial ends.  I get a stack trace error and cannot seem to continue.  If I someday figure out why this happens, or Microsoft fixes this bug, I will update this post.  But this should help get someone started if they're interested in looking to the future, setting up DMF, and playing around a bit.

Shortcomings


In reviewing this tool, I found a couple of things that I would call shortcomings of this software.  These are issues that Microsoft should definitely have a look at.

1) Stack Trace Error:
In the last step, I mentioned a stack trace error I've been getting.  I go to Run the Copy data to target and my debugger pops up with the following error:
Error executing code: The field with ID '0' does not exist in table 'Common'.
Stack trace
(C)\Classes\DMFEntityWriter\write - line 534
(C)\Classes\DMFEntityWriter\run - line 83
(C)\Classes\DMFEntityWriter\main - line 24
(C)\Classes\xMenuFunction\run
(C)\Classes\MenuFunction\run - line 87
(C)\Forms\DMFWriteData\Designs\DesignList\DMFEntityWriterBatch\Methods\Clicked - line 16
It sounds like a very generic error.  But considering the Stage-to-Target piece is out-of-the-box functionality, this should never happen in any case.

2) Cannot Map One-to-Many
When setting up the Source file mapping, it appears that you are able to map one Source field to multiple Stage fields.  However, when you attempt to run the process that copies the data into the Staging table, an error appears:
Error updating flat file source properties-Exception from HRESULT: 0xC0207015
After Googling that exception code, all signs pointed to multiple columns having the same name.  Of course, I was positive I didn't have two columns named the same, and when I double-checked there weren't; I then realized it was because I had a one-to-many mapping on the Source.  After I removed the mapping, I didn't get the error.  After fighting with DMF, I decided it would just be easier to create duplicate columns in the source flat file itself (with different column names, of course) and use the duplicated column for the mapping instead.

3) No Defaulted Values
Although we can specify Conversions for each field in the mappings, there is no option for a default value.  More often than not, I'd assume, the data you are trying to import will not match what AX is expecting.  There should be an option to fall back to a default value when none of the Conversions match.  This would allow for a much cleaner and smoother data migration.

4) Source Data Types Not Recognized
When we initially generate the source mapping of a flat file, all columns show up as String data types, regardless of whether all the values in a column are numeric or dates.  So, when you try to map these String fields to Date, Real, or Int fields in the Staging table, an error pops up letting you know that the types don't match.  This is a big issue!

The only workaround I can think of is to somehow load your flat file into a database and then use an ODBC connection to read your source data, but that seems like a lot of work just to get around something that should already work.

5) Mandatory Fields Not Complete
When DMF is shipped, it comes with all the DMF* Staging tables.  These tables have certain fields marked as Mandatory in the AOT properties.  However, some of the Entities, and the tables related to them, actually require more fields than what is marked in these DMF* Staging tables.  This is something to keep in mind: even if your Source validates and successfully gets to the Staging table, it may not successfully get to the Target tables, because it might be missing required fields that it did not inform you of earlier.

Wednesday, November 21, 2012

Accessing Enum Labels from Outside AX 2012

Oh, I've done it again.  I've cracked some binary sequence of metadata goodness.  All of you switching over to AX 2012 have probably noticed that Enum labels are not accessible the way they were in 4.0 and 2009.  In another blog post of mine, I described a solution for accessing Enums in 4.0 and 2009.  That method may still work in 2012, but I don't like the idea of always updating your model (not to be confused with the new kind of Models in 2012... oh, the ambiguity) every time you make a change to an Enum.

As you know, the AOD files, which stored all the metadata for all AOT objects, no longer exist in 2012.  Instead, the metadata is now stored in a model database.  The beauty of this is that we can now simply run queries to view the metadata from outside AX.  However, you will run into a brick wall when trying to view Enum data: you will only see records for the BaseEnum object, not the Enums beneath it.

Upon further research, I found a field called Properties in the ModelEntityData table.  It is a binary field, and I would later find that the Enums are stored within its binary structure.

If you want to hear about the technical structure of this Properties field, please read on.  Otherwise, skip to the solution below.

So, I began from the beginning.  I created a new BaseEnum in the AOT and made small tweaks to it to see how the Properties field changed along the way.  Below you will find a chart that shows both the binary data and the action that produced it.  This is a condensed version (so it can fit on this blog) that disregards changes to the ConfigurationKey and CountryRegionCode properties, as those took up a lot of space.  I also kept string sizes small, as this impacted the length of the data as well.  The real chart I used can be found here.

  1    2  Label    H CC FLAG   CK 3  4  5  EnumLabel(SAS)---| EnumName(SAS)--------| 6    ECK CC   ECC(SAS)-----| ChangeDetails
---------------------------------------------------------------------------------------------------------------------------------------------------
0x0600 01               100204       00 00 0000               0000                            0000                --new enum, no props
0x0600 01               100004       00 00 0000               0000                            0000                --useenumvalue=yes
0x0600 02 5A000000      100204       00 00 0000               0000                            0000                --useenumvalue=no label=Z
0x0600 02 5A000000      000204       05 00 0000               0000                            0000                --displaylength=5
0x0600 02 5A000000      000204       00 00 0000               0000                            0000                --displaylength=0
0x0600 02 5A000000      200204       00 00 0000               0000                            0000                --style=radio
0x0600 02 5A000000      300204       00 00 0000               0000                            0000                --displaylength=auto
0x0600 02 5A000000      300206    02 00 00 0000               0000                            0000                --analysisusage=attribute
0x0600 02 5A000000      200206    02 05 00 0000               0000                            0000                --displaylength=5
0x0600 02 5A000000      200006    02 05 00 0000               0000                            0000                --useenumvalue=yes
0x0600 02 5A000000      20000E    02 05 01 0200 41000000      0200 61000000          00       0000 0100 0000      --new enum Name=a Label=A Value=0
0x0600 02 5A000000      20000E    02 05 01 0200 41000000      0200 61000000          07       0000 0100 0000      --enum value=7
0x0600 02 5A000000      20000E    02 05 02 0300 41000000 0000 0400 61000000 62000000 0708     0000 0200 0000 0000 --new enum Name=b Label= Value=8
0x0600 02 5A000000      20000C       05 02 0300 41000000 0000 0400 61000000 62000000 0708     0000 0200 0000 0000 --analysisusage=none
0x0600 02 5A000000      20020C       05 02 0300 41000000 0000 0400 61000000 62000000          0000 0200 0000 0000 --useenumvalues=no



String columns (char until 0x0000)

  • H - Help
  • CC - CountryRegionCode (on BaseEnum)
  • CK - ConfigurationKey (on BaseEnum)
  • ECK - ConfigurationKey (on Enum)
  • ECC - CountryRegionCode (on Enum)

String Array Sections (SAS)

The first 2 bytes (an LE short) indicate the number of chars in the section; each char is 2 bytes

Value columns

  • 1 - Always 0x0600?
  • 2 - Seems to always be one more than the number of strings following this column (i.e. if 0x03, then there are 2 strings following)
    • First string is always the BaseEnum Label, second is Help, third is CountryRegionCode
  • 3 - 0x02 if AnalysisUsage flag is set in FLAG section, otherwise null
  • 4 - DisplayLength value, if set to Auto, this value is 0x00
  • 5 - Number of enums
  • 6 - Value (on Enum); null if the UseEnumValue flag bit is set in the FLAG section (i.e. UseEnumValue = No), in which case enum values start at 0 and increment per enum

FLAG Column

There are 6 hexadecimal digits, and we only care about the first, third, fourth, and sixth digits (call them A, B, C, and D, respectively), as the rest are always 0.  So 0xA0BC0D shows the variables in place of the values (sorry, I didn't intentionally pick variable names that also happen to be valid hexadecimal digits).  A decoding sketch follows the list below.

Each hexadecimal digit can be expressed as a 4-digit binary number (each binary digit is represented as W, X, Y, and Z, where W is the 8s place and Z is the 1s place):

  • For A:
    • W - Always 0
    • X - Always 0
    • Y - Style property (0-Combo box 1-Radio button)
    • Z - DisplayLength property (0-<some value> 1-Auto) if 0, the value specified in the property is stored in column #4 above
  • For B:
    • W - ConfigurationKey property on BaseEnum (0-<blank> 1-<some value>) - if 1, the string value of the config key is stored in the CK column, else the CK column is null
    • X - ConfigurationKey property on Enum (0-<no configs> 1-<at least one config>) if 1, there is at least one enum that has a config key specified, they show up in the ECK column, else the ECK column is null
    • Y - Always 0
    • Z - Always 0
  • For C:
    • W - Always 0
    • X - Always 0
    • Y - UseEnumValue property (0-Yes 1-No) if 0, the enum values will be stored in column #6, else column #6 is null
    • Z - Always 0
  • For D:
    • W - if 0, no enums exist, else they do
    • X - Always 1
    • Y - AnalysisUsage property (0-None 1-Attribute) if 1, a 0x02 shows up in column #3, else null
    • Z - Same as B.X in the FLAG section
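Putting the digits together, here is a small X++ sketch that decodes a FLAG value according to the layout above.  It takes the three FLAG bytes as one integer; for example, pass 0x20020C for the last row of the chart.  This is a reading aid for the layout, not shipped code.

static void decodeEnumFlagsSketch(int _flags)
{
    int a = (_flags >> 20) & 0xF;   // first hex digit
    int b = (_flags >> 12) & 0xF;   // third hex digit
    int c = (_flags >> 8) & 0xF;    // fourth hex digit
    int d = _flags & 0xF;           // sixth hex digit
    ;
    info(strFmt("Style: %1", (a & 0x2) != 0 ? "Radio button" : "Combo box"));
    info(strFmt("DisplayLength is Auto: %1", (a & 0x1) != 0 ? "Yes" : "No"));
    info(strFmt("ConfigurationKey on BaseEnum: %1", (b & 0x8) != 0 ? "Yes" : "No"));
    info(strFmt("ConfigurationKey on any Enum: %1", (b & 0x4) != 0 ? "Yes" : "No"));
    info(strFmt("UseEnumValue: %1", (c & 0x2) != 0 ? "No" : "Yes"));
    info(strFmt("Has enum elements: %1", (d & 0x8) != 0 ? "Yes" : "No"));
    info(strFmt("AnalysisUsage: %1", (d & 0x2) != 0 ? "Attribute" : "None"));
}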

SOLUTION

After reverse engineering the Properties field, I developed a SQL UDF that you may use to grab an Enum label for display on a report or whatever else you may need it for.  It requires you to provide the name of the BaseEnum as a string, along with the value you are looking up.  NOTE: You will be required to edit the query near the beginning and end of the function.  You must specify which server and database your model data resides in (2012 R2 splits this into a separate database; if you upgraded to R2 from a prior version, DO NOT use the model tables that exist in your regular AX database, as those tables hold information from your prior version of AX).  Also, you must change the LanguageId in the query at the end if you do not use 'en_us'.

Usage:

DECLARE @status AS int;
SET @status = 1;
SELECT dbo.ENUM2STR('SalesStatus', @status); --returns 'Open order'

Downloads:

  • Enum2Str.sql (Must read SOLUTION section above to implement)

Changelog:
  • 2013-02-08 - Bug - 'Module' column criteria was added to label query.
  • 2013-11-27 - Bug - Duplicate entries are eliminated by only pulling the outermost layer's version.
  • 2014-12-10 - Bug - When UseEnumValue is set to Yes, return the Value instead of <NOT FOUND>.
  • 2015-04-16 - Bug - Binary values larger than 8000 characters are no longer truncated.

***Also, if there is anything else in AX like this (or not like this) you would like me to look at to figure out, please let me know.  I know there is a lot out there that has yet to be tapped into.  I seem to have a natural talent for analyzing complex structures of data and would love to put it to good use.***

Monday, November 19, 2012

Financial Dimension Lookup for AX 2012

If you've just taken the dive into AX 2012, you may have noticed quite a change in how financial dimensions are stored.  I'm here to shed light on how to access the dimension values in the new structure so that you can use them as you did in the 4.0/2009 systems.

As a review: in AX 4.0 and 2009, a business was limited in the number of financial dimensions it could have.  I believe the default number of dimensions was 3, but additional dimensions could be purchased, regulated by the license file provided by Microsoft when AX is purchased.  The various tables that utilized these financial dimensions simply had a column for each dimension, namely Dimension[1], Dimension[2], Dimension[3], etc. (in SQL: Dimension, Dimension2_, Dimension3_, etc.), depending on how many dimensions your business purchased, of course.  Each one of these fields stored the actual value of the dimension right there in that table.  If you wanted to reference the name or description of that dimension, it was easy: AX had a Dimensions table where you could just do a Dimensions::find(), providing the dimension value stored in the table above and your DimensionCode enum type (to specify which dimension you are supplying).  Presto: on the returned Dimensions record, you reference the Description field and you have the dimension description/name.

AX 2012 handles this quite differently...

Instead of there being a limit on the number of financial dimensions, the team at Microsoft found a way to lift that restriction and now allows for unlimited dimensions.  Yahoo!  But it doesn't come for free (and I don't mean monetarily); the price you pay is complexity.  Instead of storing each of the dimensions on a given table, their solution is to store, in a column called DefaultDimension, a reference (a RecId) to a record in a centralized dimension table.  This is the value that will eventually get you to your dimensions.

If you don't care about a technical overview of the data structure and just want a simple solution, you can skip down to the solution below; otherwise, continue reading (this is starting to feel like a Goosebumps "Give Yourself" book).

The centralized dimension table I was referring to is DimensionAttributeValueSetItem, and its DimensionAttributeValueSet field is what DefaultDimension relates to.  For each financial dimension specified on a record, there will be one record in the DimensionAttributeValueSetItem table.  This table has a field called DisplayValue, and it holds the value of the dimension!  Yahoo!  Don't get too excited yet: there are no values in this table that tell us which financial dimension is which.  We need to join to DimensionAttributeValue, and again to DimensionAttribute, first.  From here it gets tricky if you also want the human-readable description; depending on what kind of dimension it is, it links to a different table.  I suggest only linking to the tables you need.  If you have custom dimensions, you will outer join to DimensionAttributeDirCategory and DimensionFinancialTag (as shown below).  Any other type of dimension links to one of various views with a "DimAttribute" prefix, which must also be linked to the currently selected company.  I've included an example of linking to the Customer and Project dimensions.  Below you will find a SQL query that defines the join criteria.
SELECT davsi.DisplayValue, da.Name, dft.Description, dact.Name, dapt.Name
FROM DimensionAttributeValueSetItem davsi
JOIN DimensionAttributeValue dav
    ON dav.RecId = davsi.DimensionAttributeValue
JOIN DimensionAttribute da
    ON da.RecId = dav.DimensionAttribute
--Custom dimensions
LEFT JOIN DimensionAttributeDirCategory dadc
    ON dadc.DimensionAttribute = da.RecId
LEFT JOIN DimensionFinancialTag dft
    ON dft.FinancialTagCategory = dadc.DirCategory
    AND dft.RecId = dav.EntityInstance
--Customer dimension
LEFT JOIN DimAttributeCustTable dact
    ON dact.Value = davsi.DisplayValue
    AND dact.RecId = dav.EntityInstance
    AND dact.DataAreaId = 'dat'
--Project dimension
LEFT JOIN DimAttributeProjTable dapt
    ON dapt.Value = davsi.DisplayValue
    AND dapt.RecId = dav.EntityInstance
    AND dapt.DataAreaId = 'dat'
WHERE davsi.DimensionAttributeValueSet = '5637144584'
--where '5637144584' is the DefaultDimension reference value
The interesting thing to note about the above query is the da.Name field.  This is the AX 2012 equivalent of the DimensionCode enum, except AX 2012 doesn't use Enums for dimensions (because Enums cannot (and shouldn't) be created dynamically by your end users, while financial dimensions now can be, and are meant to be, defined by your end users).  So, when coding, you will either need to hard-code these dimension names or create your own Enum or Macro that you manage separately (a macro example follows).  I don't like it either, but it is what it is.
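If you go the macro route, a tiny macro library keeps the magic strings in one place.  The library name DimensionNames and the two dimension names below are hypothetical:

// In the AOT under Macros, create a library (hypothetical name: DimensionNames):
#define.DimDepartment('Department')
#define.DimCostCenter('CostCenter')

// In code, pull the library in and use the symbols, e.g. with the helpers from the solution below:
// #DimensionNames
// value = dimValue(salesLine.DefaultDimension, #DimDepartment);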

SOLUTION

I have developed a couple of helper methods that simplify the retrieval of dimension data.  These can be included in a helper class and referenced the way static methods normally are, but I recommend including them in your Global class so you can just call the methods directly.  I have two methods, dimValue() and dimDesc(); they return the value and description, respectively.  Code and usage are below:
//Usage for dimValue() and dimDesc()
static void JobDimensionUsage(Args _args)
{
    SalesLine       sl;

    DimensionValue  value;
    Description     desc;
    ;
    select firstonly sl;

    value = dimValue(sl.DefaultDimension, 'Department');
    desc = dimDesc(sl.DefaultDimension, 'Department');

    info(strfmt("%1: %2", value, desc));
}
static DimensionValue dimValue(RefRecId _defaultDimension, Name _name)
{
    DimensionAttributeValueSetItemView  davsi;
    DimensionAttribute                  da;
    ;
    select DisplayValue from davsi
    where davsi.DimensionAttributeValueSet == _defaultDimension
    join RecId from da
    where da.RecId == davsi.DimensionAttribute
        && da.Name == _name;
 
    return davsi.DisplayValue;
}

static Description dimDesc(RefRecId _defaultDimension, Name _name)
{
    DimensionAttributeValueSetItem      davsi;
    DimensionAttributeValue             dav;
    DimensionAttribute                  da;
    DimensionFinancialTag               dft;
    DimensionAttributeDirCategory       dadc;
    DimAttributeCustTable               dact;
    DimAttributeProjTable               dapt;
    ;
    // join through DimensionAttributeValue to reach EntityInstance,
    // which the description tables/views key off of
    select DisplayValue from davsi
    where davsi.DimensionAttributeValueSet == _defaultDimension
    join EntityInstance from dav
    where dav.RecId == davsi.DimensionAttributeValue
    join RecId from da
    where da.RecId == dav.DimensionAttribute
        && da.Name == _name
    outer join DirCategory from dadc
    where dadc.DimensionAttribute == da.RecId
    outer join Description from dft
    where dft.RecId == dav.EntityInstance
        && dft.FinancialTagCategory == dadc.DirCategory
    outer join Name from dact
    where dact.RecId == dav.EntityInstance
        && dact.Value == davsi.DisplayValue
    outer join Name from dapt
    where dapt.RecId == dav.EntityInstance
        && dapt.Value == davsi.DisplayValue;
 
    return (dft.Description ? dft.Description : (dact.Name ? dact.Name : (dapt.Name ? dapt.Name : "")));
}

Friday, March 9, 2012

Increase Performance by Removing Unused Overridden Methods?

It's been in the back of my mind for quite a while now to figure out how the Dynamics AX kernel actually handles its delete_from and update_recordset logic. Sometimes the kernel decides to emit exactly one DELETE FROM query to the database, versus one 'DELETE FROM table WHERE RecId = @P1' per record in the table buffer.

I found a case on PriceDiscTable (this situation is not limited to just this object), a noticeably large table especially for our business, where I noticed multiple DELETE FROM statements being emitted to the database, taking a very long time to complete a daily task our business runs. I looked at the table and noticed that the delete() method on the table was indeed overridden, but upon investigating the details of the method, only super() was getting called inside (along with some commented-out code that we used in the past and plan to use in the future, but that's beside the point).

I removed the method, as there really is no functional difference versus just leaving it hidden (even though I know the reason we wanted the commented code there: we didn't want to lose it). Lo and behold, our daily task now emits one single DELETE FROM with the criteria we specified in the WHERE clause (see the sketch below).
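For reference, this is the kind of statement I'm talking about. With no delete() override (and no Delete Actions, as noted below), a set-based delete like this sketch reaches SQL as a single DELETE FROM; the where clause is made up for illustration:

static void SetBasedDeleteSketch(Args _args)
{
    PriceDiscTable  priceDiscTable;
    ;
    ttsBegin;
    // one DELETE FROM ... WHERE ... statement, not one round trip per row
    delete_from priceDiscTable
        where priceDiscTable.ItemRelation == 'OBSOLETE-ITEM';   // illustrative criteria
    ttsCommit;
}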

One would think the kernel would notice that the compiled version of the overridden delete() method, with just the super() call, is identical to the default "hidden" delete() method, and handle them the same. But we've been duped again. We'll have to store the commented-out code we had in there somewhere else, even though it would've been much more convenient to just keep it where it was.

As a side note, having at least one Delete Action on your table object will also force the kernel to emit multiple DELETE FROM queries. Removing them tells the kernel that it doesn't have to perform a row-by-row operation and can emit the DELETE FROM in one statement.

And as Joris de Gruyter of Dynamics AX Musings has graciously pointed out in a comment, having database logging or alerts enabled on your table, and/or having any MEMO fields on that table, will also force multiple DELETE FROM statements. Who knew?

Friday, February 17, 2012

Missing Enums in SRSAnalysisEnums

Forenote: If you are using 2012, read this post.  There is a more robust way to access enums in AX 2012.

If your business operates like any would and needs to report on data from custom functionality, then you might find this post useful.  In creating custom functionality, we often need to create new custom Enums and store the values in a custom table.  Doing so works just fine inside the Dynamics circle.  However, if you use a separate reporting engine outside of Dynamics, like SSRS, to render your reports, you will find that your users will not understand what a ShipStatus of 2 is; and let's be serious, hard-coding your enum labels in your report is a poor strategy.

Microsoft was nice enough to create tables that store enum values and labels for some of these enums, namely SRSAnalysisEnums and SRSModelEntityCache.  But, like I mentioned earlier, only SOME enums are in these tables.  For a long time it was a mystery to me why this was, but after some digging and a few "Ah ha!" moments, I figured it out.  It also introduced me to another little-known area of Dynamics.

Today, we're going to learn about Perspectives, an object in the AOT under Data Dictionary.  Perspectives in Dynamics are a poor attempt to mimic the functionality of a database cube: not as flexible as a real cube and not very customizable.  If you expand the Perspectives node, you'll see a Perspective object for each module.  Each of these perspectives has a list of tables, and any enums on the tables listed under a perspective are the enums that show up in the SRS tables.  To get your new enums to show up, you just need to have your table included in one of these perspectives, or create your own; try to put your table in the module most relevant to the data stored in it.

Note: If your table is already in these perspectives, read ahead.

Now, putting the table in the perspective alone won't put your enums in the SRS tables.  You also have to update the model so the SRS tables get populated correctly.  The way to do this is Tools->Reporting tools->Update models.  You can choose to update all languages or just one, as you feel is necessary for your business.  Running this will take a while, but when it is finished, you will see your new enums show up in the SRS tables.  Here's a script that will get your enum label from a value:
SELECT ae.EnumItemName
FROM SRSModelEntityCache ec
JOIN SRSAnalysisEnums ae
    ON ae.EnumId = ec.EntityId
WHERE ec.PhysicalName = 'BaseEnum_ShipStatus'
    AND ae.EnumItemValue = 2

Thursday, February 9, 2012

Kernel Function Madness: any2str()

I'm a developer, and I'd treat this function as any developer would.  Here we have a function called any2str that takes in an anytype object and returns a string representation.  That's great, because I have a dynamically changing anytype object that could be an integer one time and a date the next.  I also need a string representation of each type so I can put the values into the same column of a table to report on later.  So, like any developer would do, you code up your logic and write:
table.stringField = any2str(dynamicObject);
Sweet.  Everything compiles, let's continue writing code.
...
***Days pass***
...
Ok, time to run a test.  What?  A run-time error?  The method is expecting an object of type string?  Wait, what line is it talking about?

For a moment, I refused to believe that someone would have been so careless as to code this method to expect only strings to be passed in.  And even after I realized that was the truth, I still couldn't believe it.  Yes, the method does in fact take in an anytype; however, it only works if that anytype is of type string.  So, instead of using the intuition we developers have been taught to rely on, we now have to second-guess every kernel function we call, no matter how well we "know" (think we know) how a method should work.  We've been duped; we've been given a useless str2str() method.

The workaround is below.  Use it as a local function (or as a method in the Global class to make it a global method) inside the methods where you would otherwise have used any2str(), that is, before understanding what the method really does.
str theRealAny2str(anytype _val)
{
    // branch on the actual run-time type, since any2str() itself
    // only handles an anytype that already holds a string
    switch (typeof(_val))
    {
        case Types::Date:
            return date2str(any2date(_val), 213, 2, -1, 2, -1, 2);
        case Types::Enum:
            return enum2str(any2enum(_val));
        case Types::Guid:
            return guid2str(any2guid(_val));
        case Types::Int64:
            return int642str(any2int64(_val));
        case Types::Integer:
            return int2str(any2int(_val));
        case Types::Real:
            return num2str(any2real(_val), 1, 2, 1, 0);
    }
    // strings (the only type any2str() actually handles) fall through
    return any2str(_val);
}
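With that helper in place, the assignment from the top of this post works no matter what type the anytype object is holding at run time:

table.stringField = theRealAny2str(dynamicObject);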

Monday, November 14, 2011

Ignore IDs During Code Compare

If any of you have done a code compare, comparing an XPO to the matching objects within the AOT, you may have noticed that in most cases the ID values of various nodes/sub-nodes don't match.  This is only partially correct: the node itself may match identically except for the ID property, which doesn't cause a change in functionality anyway.  This annoyingly forces you to click through each item to verify that each node IS in fact identical.  I have developed a solution: an option to ignore these ID properties, as a checkbox in the Advanced tab of the Compare form.

Let's take the Address table for example.  If we have an XPO that includes the Address table and we attempt to compare it with the object in the AOT, we will see "differences" like this:


As we can see, it shows a difference in every single field and index on the table.  What happens is that when an object is exported to an XPO, it does not retain the ID values, as they really don't change how the object functions.  When importing from an XPO and comparing, the IDs from the XPO default to 0, thus showing the difference.

Normally this isn't a bad thing, but it gives you the feeling that there are real differences that change the functionality.  It becomes a pain to filter through each node and double-check it to find the real differences.  More often than one would think, a real difference slips by unnoticed because of the mass of false positives, thus promoting code to Production without truly meaning to.

I proposed a solution to add an option to the SysCompareForm to ignore ID properties (as shown below).


By default, I have the checkbox checked to ignore the IDs; you may choose to change that as needed per your business's requirements.  The screenshot below shows how tidy it looks after ignoring the ID properties.



DOWNLOAD XPO HERE and Enjoy!

Bug Reported (2/27/2012): Ineffective towards EDTs and Enums, still shows the comparison.