ArcGIS Add-In Custom Mouse Cursor

I was working on a project and wanted my own custom mouse cursor, but I did not easily find instructions from ESRI on how to make one.  Once you know how to do it, though, it is pretty easy.  In Visual Studio, add a New Item:

Add a Cursor File:

You can edit your cursor with the image editor built into Visual Studio.  Once you are satisfied with how it looks, make sure that the Build Action on the cursor file is set to “Embedded Resource”.

Then you can set your cursor with two lines of code. Note that my cursor is in my QDI.QdiAddIn namespace:

' Load the embedded cursor resource from the assembly and assign it to the tool
Dim pCursorStream As System.IO.Stream = Me.GetType.Assembly.GetManifestResourceStream("QDI.QdiAddIn.NewCursor.cur")
MyBase.Cursor = New System.Windows.Forms.Cursor(pCursorStream)

Renaming Raster Dataset and arcpy.Exists()

Discovered something today. I was working on an arcpy script that copies a raster dataset from a file geodatabase into a Postgres SDE geodatabase and then does some boring routine tasks: building statistics, creating a mosaic dataset, adding the raster to the mosaic dataset, and making a couple of referenced mosaic datasets.

The script sometimes has trouble with the initial step of uploading the raster because of its sheer size (1-meter elevation rasters for entire counties), and it failed today on one. So I used the ArcCatalog GUI to copy the raster and then renamed it.

I then proceeded to launch my script. Before each step, I use arcpy.Exists() extensively to check whether various items exist before I attempt to create them. It kept reporting that my raster dataset did not exist even though I could see it in ArcCatalog.
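
The guard itself is nothing fancy; here is a minimal sketch of the pattern, with made-up paths and dataset names:

import arcpy

# Hypothetical paths, for illustration only
source_raster = "C:/data/elevation.gdb/elev_1m_county"
sde_workspace = "C:/connections/elevation.sde"
target_raster = sde_workspace + "/elev_1m_county"

# Only attempt the copy if the raster is not already in the SDE geodatabase
if not arcpy.Exists(target_raster):
    arcpy.AddMessage("Copying " + source_raster + " to " + target_raster)
    arcpy.Copy_management(source_raster, target_raster)
else:
    arcpy.AddMessage(target_raster + " already exists, skipping the copy.")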

Finally, I realized that I needed to close ArcCatalog before arcpy would recognize that I had renamed something. Note that I was running arcpy from a separate PythonWin session, not from the ArcCatalog session in which I had renamed the raster dataset.

Once I closed ArcCatalog, arcpy recognized the renaming and life was good.

I’m also suspicious now about a problem I often have running statistics on my rasters.  The tool reports no errors when I calculate them, but for some reason the raster does not show that it has statistics afterwards.  I normally have multiple ArcGIS application sessions open and now suspect that this problem is due to sessions not letting go of their connections.  Stay tuned for further developments on this.
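
If the stale connections turn out to be the culprit, a scripted check should at least confirm whether the statistics really exist after the tool runs. A rough sketch with a hypothetical raster path; Get Raster Properties fails when no statistics are present, so trapping that failure is one way to verify:

import arcpy

raster = "C:/connections/elevation.sde/elev_1m_county"  # hypothetical path

# Build statistics, then verify them by asking for one of the statistic values
arcpy.CalculateStatistics_management(raster)
try:
    minimum = arcpy.GetRasterProperties_management(raster, "MINIMUM")
    arcpy.AddMessage("Statistics exist; minimum value is " + str(minimum))
except arcpy.ExecuteError:
    arcpy.AddWarning("Statistics still missing on " + raster)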

Sorting a Coded-Value Domain Add-In (ArcGIS 10)

I am working on a data-entry application to edit feature classes that contain several coded-value domains. The problem with some of the domains, however, is that entries have been added after the initial creation.  So the first 25 entries are in alphabetical order, and there are some stragglers at the end in the order they were appended.

This can be confusing for users: they go to select “Milli Vanilli” and look between “Madonna” and “Motley Crue” but cannot find their favorite band there; they have to go to the end of the list to find their selection.

In the past, I have gone through the tedious process of exporting the domain to a table, sorting the table, removing the domain from the necessary field(s), deleting the domain, re-importing the table as a new domain, and finally re-applying the domain to the necessary field(s). Let’s just say I didn’t do this until someone had asked a few times and I didn’t have anything more exciting, like a root canal, to busy myself with.
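
For what it is worth, those same steps can also be scripted with the standard geoprocessing domain tools. A rough sketch, with a made-up workspace, domain name, and field names, and assuming the Table To Domain tool’s REPLACE option is available in your version (which lets you skip removing and re-applying the domain):

import arcpy

gdb = "C:/data/editing.gdb"   # hypothetical workspace
domain = "BandNames"          # hypothetical coded-value domain
tmp_table = gdb + "/domain_tmp"
sorted_table = gdb + "/domain_sorted"

# Dump the domain to a table, sort it by description, and load it back
arcpy.DomainToTable_management(gdb, domain, tmp_table, "Code", "Description")
arcpy.Sort_management(tmp_table, sorted_table, [["Description", "ASCENDING"]])
arcpy.TableToDomain_management(sorted_table, "Code", "Description", gdb, domain, "", "REPLACE")

# Clean up the scratch tables
arcpy.Delete_management(tmp_table)
arcpy.Delete_management(sorted_table)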

But this new application contains more domains than any of my other datasets, so it was time to find a better solution. ESRI does have a Domain Sort Developer Sample; it, however, did not play nice with ArcGIS 10.

So I went ahead and updated it from VB 6 to VB.NET/ArcObjects 10.  I made an Add-In that can be installed by downloading the .esriaddin file and double-clicking on it.  The source code is also available.

This will add an ArcCatalog toolbar that can be turned on by going to Customize-Toolbars-Domain Sorter Toolbar.

The toolbar has a single button, which becomes enabled whenever you select a geodatabase containing at least one coded-value domain.

Clicking the button brings up a Windows form that lets you sort any domain by either the code or the description, ascending or descending.  Once you hit “OK”, it re-sorts your domain.

The only problem I have had is that only the owner of a domain is allowed to edit it on an SDE geodatabase.

But other than that, the button allows you to easily keep your domains sorted.

http://edndoc.esri.com/arcobjects/9.2/CPP_VB6_VBA_VCPP_Doc/COM_Samples_Docs/Geodatabase/Schema_Creation_and_Management/Sort_a_domain/e826c5a8-9740-4f0b-86b6-d3b834735574.htm

Quick & Dirty arcpy: Batch Splitting Polylines to a Specific Length.

For some odd reason, I wanted to split all the arcs in a polyline feature class to a specific length: if a feature was longer than the target length, it would become two or more separate polyline records.

Here is the bare-bones script that copies an existing feature class into a new feature class and then processes each record, splitting it into multiple records if the polyline is longer than the user-specified tolerance.  Some cautionary notes:

  • This is Quick & Dirty code–minimal error catching or documentation.
  • I basically tested this against one feature class (the one I wanted to split); once I got it to work, I quit.
  • There is some rounding error–features may be a tad bit off (a few ten-thousandths of a unit).
  • I did not test against multi-part features.
  • The tolerance is in the native units of the data; if your data is in meters but you want to split the polylines every mile, enter 1609.344.

I have included both a toolbox file (.tbx) and a Python script (.py).  After loading the toolbox, you’ll have to change the Source of the script by right-clicking on it, selecting the Source tab, and then navigating to the .py file.

Here is the code for the Googlebots, but you are better off just downloading it.

import arcpy
import sys, math

def printit(inMessage):
    # Echo a message both to standard output and to the geoprocessor
    print str(inMessage)
    arcpy.AddMessage(str(inMessage))

if len(sys.argv) > 1:
    inFC = sys.argv[1]
    outFC = sys.argv[2]
    alongDistin = sys.argv[3]
    alongDist = float(alongDistin)
else:
    inFC = "C:/temp/asdfasdf.mdb/jkl"
    OutDir = "C:/temp/asdfasdf.mdb"
    outFCName = "jkl2d"
    outFC = OutDir+"/"+outFCName
    alongDist = 1000

if (arcpy.Exists(inFC)):
    print(inFC+" does exist")
else:
    print("Cancelling, "+inFC+" does not exist")
    sys.exit(0)

def distPoint(p1, p2):
    calc1 = p1.X - p2.X
    calc2 = p1.Y - p2.Y

    return math.sqrt((calc1**2)+(calc2**2))

def midpoint(prevpoint,nextpoint,targetDist,totalDist):
    newX = prevpoint.X + ((nextpoint.X - prevpoint.X) * (targetDist/totalDist))
    newY = prevpoint.Y + ((nextpoint.Y - prevpoint.Y) * (targetDist/totalDist))
    return arcpy.Point(newX, newY)

def splitShape(feat,splitDist):
    # Count the number of points in the current multipart feature
    #
    partcount = feat.partCount
    partnum = 0
    # Enter while loop for each part in the feature (if a singlepart feature
    # this will occur only once)
    #
    lineArray = arcpy.Array()

    while partnum < partcount:
        # Print the part number
        #
        #print "Part " + str(partnum) + ":"
        part = feat.getPart(partnum)
        #print part.count

        totalDist = 0

        pnt = part.next()
        pntcount = 0

        prevpoint = None
        shapelist = []

        # Enter while loop for each vertex
        #
        while pnt:

            if not (prevpoint is None):
                thisDist = distPoint(prevpoint,pnt)
                maxAdditionalDist = splitDist - totalDist

                print thisDist, totalDist, maxAdditionalDist

                if (totalDist+thisDist)> splitDist:
                    while(totalDist+thisDist) > splitDist:
                        maxAdditionalDist = splitDist - totalDist
                        #print thisDist, totalDist, maxAdditionalDist
                        newpoint = midpoint(prevpoint,pnt,maxAdditionalDist,thisDist)
                        lineArray.add(newpoint)
                        shapelist.append(lineArray)

                        lineArray = arcpy.Array()
                        lineArray.add(newpoint)
                        prevpoint = newpoint
                        thisDist = distPoint(prevpoint,pnt)
                        totalDist = 0

                    lineArray.add(pnt)
                    totalDist+=thisDist
                else:
                    totalDist+=thisDist
                    lineArray.add(pnt)
                    #shapelist.append(lineArray)
            else:
                lineArray.add(pnt)
                totalDist = 0

            prevpoint = pnt                
            pntcount += 1

            pnt = part.next()

            # If pnt is null, either the part is finished or there is an
            #   interior ring
            #
            if not pnt:
                pnt = part.next()
                if pnt:
                    print "Interior Ring:"
        partnum += 1

    if (lineArray.count > 1):
        shapelist.append(lineArray)

    return shapelist

if arcpy.Exists(outFC):
    arcpy.Delete_management(outFC)

arcpy.Copy_management(inFC,outFC)

#origDesc = arcpy.Describe(inFC)
#sR = origDesc.spatialReference

#revDesc = arcpy.Describe(outFC)
#revDesc.ShapeFieldName

deleterows = arcpy.UpdateCursor(outFC)
for iDRow in deleterows:       
     deleterows.deleteRow(iDRow)

del iDRow
del deleterows

inputRows = arcpy.SearchCursor(inFC)
outputRows = arcpy.InsertCursor(outFC)
fields = arcpy.ListFields(inFC)

numRecords = int(arcpy.GetCount_management(inFC).getOutput(0))
OnePercentThreshold = numRecords // 100

printit(numRecords)

iCounter = 0
iCounter2 = 0

for iInRow in inputRows:
    inGeom = iInRow.shape
    iCounter+=1
    iCounter2+=1    
    if (iCounter2 > (OnePercentThreshold+0)):
        printit("Processing Record "+str(iCounter) + " of "+ str(numRecords))
        iCounter2=0

    if (inGeom.length > alongDist):
        shapeList = splitShape(iInRow.shape,alongDist)

        for itmp in shapeList:
            newRow = outputRows.newRow()
            for ifield in fields:
                if (ifield.editable):
                    newRow.setValue(ifield.name,iInRow.getValue(ifield.name))
            newRow.shape = itmp
            outputRows.insertRow(newRow)
    else:
        outputRows.insertRow(iInRow)

del inputRows
del outputRows

printit("Done!")

Extract Values to Points (Spatial Analyst) Bug

One of the Spatial Analyst tools we often use in ArcGIS is the “Extract Values to Points” tool.  This allows us to take a point file (well locations in our case) and attach a value (elevations) from a raster image (a DEM) to each point.

Today I was running it for the first time against an Image Service we recently published and received a warning message, “WARNING 000957: Skipping feature(s) because of NULL or EMPTY geometry”.  But the tool seemed to run, and the final result said “Succeeded”, so I thought it was probably fine.  When I double-checked, though, I realized the results were wonky.

Turns out that I had two records with Null geometry in my point file of 397 records.  These two records triggered the above warning but actually ended up with a value in the [Rastervalu] field; in fact, all 397 records had values.  The two records were consecutive, let’s say the 100th and 101st records in my shapefile.  What happened is that record 100 got the value for record 102, record 101 got the value for record 103, and record 102 (which had valid geometry) got the value for record 104.  This pattern, each record having the value for the record two places after it, continued until record 396, which had the value for record 397.  Record 397 also had the value for record 397.  So the final three records all had the value for the final record.

What I would have expected is for the two records with Null geometry to have null values in the [Rastervalu] field and for the rest of the records to have the correct values.  Despite the warning, it is very misleading for all the records to end up with a value.
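
Until that changes, screening the point file for Null geometries before running the tool is cheap insurance. A minimal sketch of that pre-check, assuming the old-style cursor hands back None (or an empty geometry) for Null shapes; the path is made up:

import arcpy

wells = "C:/data/wells.shp"  # hypothetical point file

# Report any records with Null/empty geometry before running
# Extract Values to Points, so the output values can be trusted.
desc = arcpy.Describe(wells)
rows = arcpy.SearchCursor(wells)
bad_oids = []
for row in rows:
    geom = row.getValue(desc.shapeFieldName)
    if geom is None or geom.pointCount == 0:
        bad_oids.append(row.getValue(desc.OIDFieldName))
del rows

if bad_oids:
    arcpy.AddWarning("Records with Null geometry: " + str(bad_oids))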

I have a simplified example below.  I made a point shapefile with four records.  The first, third, and fourth records have valid geometries; the second has Null geometry.  The second record ends up with the value for the third record.  The third record has the value for the fourth.  The fourth record, being the last record, ends up with the last valid value, which was its own.

The result that I would have hoped for would be for the second record, the one with Null geometry, to have a Null value.

The way I envision what is occurring behind the scenes is this:  the process makes a list (more of a stack in programming terms) of result values as it processes the points, but it assumes that every record will return a value, so it does not track which value goes with which shape.

When it reached the two Null geometries, it threw the warning but continued on. It did not add a value for these records to the stack of values; when it comes across a record that has valid geometry but does not intersect the raster, it adds a pseudo-null value (-9999) to the stack. After it processed all the records, it had 395 values in the stack. It then went one by one through the stack and populated the records in the output shapefile: the first record got the first value in the stack, the second record got the second value, the 100th record got the 100th value (which came from the location of the 102nd record), and so on. At the end, the final two records received the last valid value.
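
Purely to illustrate that hypothesis (this is a toy model of the bookkeeping, not ESRI’s actual code), the shift falls out naturally when the values are collected in a list without the matching record IDs:

# Toy model of the hypothesized behavior; not real ESRI code
points = [1, 2, None, None, 5, 6, 7]   # None = Null geometry (records 3 and 4)

# Pass 1: collect values, silently skipping the Null geometries
values = [p * 10 for p in points if p is not None]   # 5 values for 7 records

# Pass 2: write the values back positionally, padding with the last valid value
output = []
for i in range(len(points)):
    if i < len(values):
        output.append(values[i])
    else:
        output.append(values[-1])

print(output)  # [10, 20, 50, 60, 70, 70, 70]: everything after the skip is offset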

This final behavior, using the last valid value, corresponds a bit to a behavior we’ve seen with ArcObjects in general: when iterating through a table, if a field is Null for a specific row, the last non-Null value for that field is often returned instead.

I’m in the process of submitting a bug to ESRI.  I’m not sure if this existed prior to ArcGIS 10.0 (I’m guessing it did) or if it occurs in other processes (I’m guessing it does).  I did find out that the “Extract Multi Values to Points” tool works as expected.  I’m guessing that is because, unlike “Extract Values to Points”, which creates a new shapefile, it appends fields to the existing shapefile and presumably processes records one by one without putting the results in a virtual stack.  The “Extract Multi Values to Points” tool also does not throw any warnings.


Instantiating Add-In Objects at Load Time

In migrating a toolbar consisting of a button and a couple of tools for use in ArcMap 10, I decided to take advantage of the ease of deployment enabled by add-ins, which were introduced in 10.0.  So far, I’m loving the functionality.

One thing, however, that I had to figure out is that the controls are not instantiated until they are clicked.  One result is that the controls appear enabled by default, which is not the behavior I wanted.

I found the solution in ESRI’s topic, Advanced add-in concepts (ArcObjects .NET 10 SDK) in the Delayed loading section:

“By default, the assemblies associated with add-in buttons and tools are not loaded until the corresponding item on a toolbar or menu is clicked by the user. This behavior helps conserve application memory and other resources. Since the enabled state of an add-in button or tool is controlled by the OnUpdate method within code, the button or tool will initially appear enabled. If you need tighter control over the initial enabled state of a button or tool, you need to override the default behavior and force the item to load at startup by setting the onDemand Extensible Markup Language (XML) attribute to false.”

The verbiage, “setting the onDemand Extensible Markup Language (XML) attribute to false”, was a bit non-specific to me.  I guessed right, however, that they were referring to the control definition in the Command section of Config.esriaddinx.  In the example below, I set the onDemand attribute to false by adding onDemand=”false” to the control declaration, and the control did, in fact, get instantiated at load time, giving me the ability to disable it.

<Tool id="MGS_QDIAddin_QDIAddTool" class="QDIAddTool" message="QDI Add New Location." caption="QDI Add Tool" tip="QDI Add Tool." category="QDI Add-In Controls" image="ImagesQDIAddTool.png" onDemand="false"/>

Walkthrough: Building custom UI elements using add-ins (ArcObjects .NET 10 SDK)

I was working my way through this ESRI Walkthrough: Building custom UI elements using add-ins (ArcObjects .NET 10 SDK) and came across a couple of minor errors that I had to correct during the process.

First, while implementing the OnClick() code for ZoomToLayer.vb, Visual Studio gave me a “Name ‘ArcMap’ is not declared.” error.

In the walk-through, they mention that ArcMap is available as a member of your class.  For me, however, it appeared under the My namespace.  I am not sure if this is something specific to my set-up or, as I’m guessing, something that changed after the initial documentation was created and the final libraries were published.

The fix is just adding  “My.” to the namespace in this line:

ZoomToActiveLayerInTOC(TryCast(ArcMap.Application.Document, IMxDocument))

To get this:

ZoomToActiveLayerInTOC(TryCast(My.ArcMap.Application.Document, IMxDocument))

When I added the code for AddGraphics.vb, I got 8 errors.  There were essentially two errors, repeated four times.  I took a screenshot after fixing the first error pair:

The fix in this case was also to use the complete namespace path.  Examples:

Change this:

If (geometry.GeometryType) = esriGeometryType.esriGeometryPoint Then

To this:

If (geometry.GeometryType) = ESRI.ArcGIS.Geometry.esriGeometryType.esriGeometryPoint Then

And change this:

simpleMarkerSymbol.Style = esriSimpleMarkerStyle.esriSMSCircle

To this:

simpleMarkerSymbol.Style = ESRI.ArcGIS.Display.esriSimpleMarkerStyle.esriSMSCircle

Overall, the walk-through is very well done; it just needed a couple of minor tweaks.  I am now working my way through modifying an existing solution, one that includes seven projects, to see if I can create an ArcGIS 10 Add-In.

Example of how to add controls in data grid VB.NET

I have been working on some data entry forms that utilize a DataGrid.  Using a Postgres geodatabase that has domains set on several fields, I could not directly bind to the controls on my dialog.  So I am going the round-about way of populating my own combo boxes with the valid names and displaying them within the DataGrid.

Having not done this previously, I found this example, Example of how to add controls in data grid VB.NET, very useful and just wanted to point it out.  George Shephard’s Windows Forms FAQ also had several useful tips.

Finally, I had a problem with the masked text boxes I added to the DataGrid: they required users to click once to get focus and a second time to start editing.  After much googling, I found some information on MSDN that allowed me to work around it.  In my MouseDown event, I included this snippet (QdiControl is the control for the specific cell that the HitTestInfo says the mouse-down hit):

' If the cell holds a masked text box, put the grid into edit mode and give
' the control focus so a single click starts editing.
Dim WindowsControl As System.Windows.Forms.Control = CType(QdiControl, System.Windows.Forms.Control)
If (TypeOf WindowsControl Is MaskedTextBox) Then
    Dim pTextBox As MaskedTextBox = CType(WindowsControl, MaskedTextBox)
    Dim dgdtblStyle As DataGridColumnStyle = RelatedForm.DataGrid.TableStyles(0).GridColumnStyles(CurrentColumn)

    RelatedForm.DataGrid.BeginEdit(dgdtblStyle, CurrentRow)
    pTextBox.Select(pTextBox.Text.Length, 0)
    WindowsControl.Focus()
End If

Peace.

Workstation Arc/Info Interchange Import Error

I have been working on updating some data that was published using Arc/Info Workstation Export (.e00) files, converting them to shapefiles or, in the case of annotation, to a geodatabase feature class.

While converting some INFO tables, I have received this error:

Unable to encode the 38th item in the 65th record for the INFO file

It turns out that Workstation can create export files that contain values that do not fit within their fields.  I’m guessing that somehow you are actually able to enter invalid values into those fields, but I did not confirm that.

The error, “Unable to encode the 38th item in the 65th record…”, was only partially helpful.  The error was in the 65th record, but I was not able to figure out which field contained the problem.  The natural assumption that it was the 38th column was incorrect, at least if you are counting from left to right.  Whatever logic is used, it is consistent, at least within a specific table: I had numerous records with problems, and when the error messages indicated the same item, it did turn out to be the same field.

What I ended up having to do was open the export file in my favorite text editor, locate the data for the problematic record, and go through the columns one by one, comparing the value in the text file with the value in the table.  In this case, 14.1 was supposed to be the value in the WP_TIO2 field.


One thing I noticed, and this is consistent with behavior I have observed in ArcObjects in other cases, is that the record with the invalid value in a field ends up with the same value in that field as the previous record.  So in this case, the bad record has a value of 1.12 in the WP_TIO2 field because that is what the previous record has.   That leads me to believe that ArcObjects is re-using a variable as it does the import, just updating each value field by field.

Just to make sure that I had diagnosed the problem right, I attempted to change the value to 14.1 in an ArcMap edit session and got this message:

A look at the field properties confirmed that the field would not accept 14.1 as a value. The precision (3) means the field can hold 3 digits, with 2 of them (the scale) to the right of the decimal point, leaving only one digit available to the left of the decimal point.
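
If you would rather not count columns in a text editor, the field definitions narrow down the suspects. A quick sketch (the table path is hypothetical) that lists numeric fields whose precision and scale cannot hold a given value, such as 14.1:

import arcpy

table = "C:/data/geochem.gdb/wp_samples"  # hypothetical table
value = 14.1                              # the value that would not fit

for field in arcpy.ListFields(table):
    if field.type in ("Single", "Double") and field.precision > 0:
        # Digits allowed to the left of the decimal point = precision - scale
        whole_digits = field.precision - field.scale
        if len(str(int(abs(value)))) > whole_digits:
            print("{0} cannot hold {1} (precision {2}, scale {3})".format(
                field.name, value, field.precision, field.scale))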

Once I identified the problem, I used two methods to correct it. The majority of my cases were ones where a pseudo-null value was being used to indicate that a field was really Null. In some cases, either -999.9 or 100 was being used. In these cases, I just substituted a different pseudo-null value (-99.9) that conformed to the field limitations. The data I was working with was not tightly controlled in the past, and there were other existing records that used these values for the same fields, so I was able to do this without much concern.

Some values, however, were not nulls, and I felt they needed to be retained. For these, I had to add a new field that could contain the values, copy the values over, and then edit the ones that had caused problems during the import. Once complete, I exported the data to a new table with the new structure I had created.