Taming the Windows taskbar for LabVIEW

This article was posted on June 26, 2012 on lavag.org and has since disappeared from the net, after lavag.org upgraded and lost the blog functionality.

Note: The current VI library can be found here.

Prompted by a post here on Lavag.org several days ago, I started to dig into the possibilities of controlling the Windows 7 taskbar interface from a LabVIEW application. The taskbar is in fact a feature that has evolved over time from various concepts such as the Windows start menu, the Quick Launch bar and the Shell Notification area into the current Windows 7 taskbar, which combines some of those features with newer ones added in the Vista and Windows 7 releases.

A quick search turned up a little LabVIEW library on the NI site that used the progress bar functionality in the taskbar buttons. This utility was based on a .Net component to access the Windows taskbar API. That is not really ideal, since the Windows taskbar API is in fact unmanaged, and incorporating a .Net intermediate library makes the whole solution somewhat heavyweight. More importantly, the taskbar API is based on Windows COM, a technology that builds on top of OLE and is the basis of ActiveX. But COM is only a building block of ActiveX and not equal to ActiveX, so there is no way to use the ActiveX functionality in LabVIEW to access this API. The involvement of COM adds extra obstacles, because COM by default uses an apartment threading model. This is not exactly single threaded, but for the not so technically versed reader it can mostly be seen as a single threaded component model. It does support access from multiple threads, but at a horrible cost that is mostly invisible to a casual user.

This complication becomes especially important if one wants to use some of the other features of the taskbar that require more complex data interfaces than just sending scalar integers to the taskbar manager.

One example would be thumb-buttons, which allow control of some application operations through small buttons underneath the thumbnail preview. Probably the best known examples of this are media players, such as the Windows Media Player shown below.

But for this to work, the component managing the taskbar has to be able to send messages to the application whose taskbar button is shown. So we need to let the taskbar manager message into LabVIEW, but in a way that lets us react to those messages in our own LabVIEW application. A .Net interface could use .Net events that get mapped to callback VIs. This would be a relatively elegant solution, but the above mentioned COM limitations make it a little difficult to handle.

Since I have a lot more experience with interfacing to Windows APIs through DLLs than with .Net, I decided to take a look at what would be needed for this. The messaging from the taskbar manager to a LabVIEW program can be solved relatively easily by using user events and calling the documented PostLVUserEvent() C function from the external code. The taskbar manager sends Windows messages to the application in question (here LabVIEW), so we also need to install a message filter hook that intercepts those messages and translates them into the desired user event.
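To sketch the idea: one possible filter subclasses the LabVIEW window and translates the WM_COMMAND message that the taskbar sends for thumb-button clicks into a user event. This is only an outline, assuming windows.h and the extcode.h header from the cintools directory; the function name InstallTaskbarFilter and the global variables are my own, not an existing API.

```c
static LVUserEventRef gEventRef;  /* user event refnum created on the diagram */
static WNDPROC gOldWndProc;

/* Intercept messages destined for the subclassed LabVIEW window */
static LRESULT CALLBACK FilterProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_COMMAND && HIWORD(wParam) == THBN_CLICKED)
    {
        uInt16 buttonId = LOWORD(wParam);       /* which thumb-button was clicked */
        PostLVUserEvent(gEventRef, &buttonId);  /* fire the LabVIEW user event */
        return 0;
    }
    return CallWindowProc(gOldWndProc, hwnd, msg, wParam, lParam);
}

/* Called through a Call Library Node to install the filter on the VI window */
void InstallTaskbarFilter(HWND hwnd, LVUserEventRef eventRef)
{
    gEventRef = eventRef;
    gOldWndProc = (WNDPROC)SetWindowLongPtr(hwnd, GWLP_WNDPROC, (LONG_PTR)FilterProc);
}
```

The event data (here a simple uInt16) must match the data type used to create the user event on the diagram, as PostLVUserEvent() copies whatever the pointer refers to according to that type.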

Everything seemed fairly straightforward at first after reading the documentation for ITaskbarList3 on MSDN. And my first attempts at controlling the progress bar functionality seemed very promising. The code below is all that is needed to access the method that sets the progress state of a taskbar button.

HRESULT SetProgressState(HWND hwnd, uInt32 state)
{
  ITaskbarList3 *ptbl;
  /* Create the taskbar list object; this requires that CoInitialize()
     has already been called in the current thread */
  HRESULT hr = CoCreateInstance(&CLSID_TaskbarList, NULL, CLSCTX_INPROC_SERVER, &IID_ITaskbarList3, (void**)&ptbl);
  if (SUCCEEDED(hr))
  {
    hr = ITaskbarList3_HrInit(ptbl);
    if (SUCCEEDED(hr))
    {
      /* state is one of TBPF_NOPROGRESS, TBPF_INDETERMINATE, TBPF_NORMAL,
         TBPF_ERROR or TBPF_PAUSED */
      hr = ITaskbarList3_SetProgressState(ptbl, hwnd, state);
    }
    ITaskbarList3_Release(ptbl);
  }
  return hr;
}

For anyone wondering here: yes, the above code is standard C. While COM objects do use OOP techniques, their ABI is defined in such a way that they stay independent of the C++ compiler used, in order to allow calling COM interfaces from code created with a different compiler than the one used to create the COM object. The Windows headers also define a standard C interface mechanism for most COM interfaces. Compiling this code into a DLL and calling it correctly with the Call Library Node works like a charm.
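To illustrate what that compiler-independent ABI boils down to, here is a hedged sketch in plain, portable C (the Counter type is my own invention, not a Windows interface): a COM object is a pointer to a struct whose first member points to a table of function pointers, and every method receives the object pointer as its first argument. The `ITaskbarList3_SetProgressState(p, ...)` macro used above expands to exactly this kind of `p->lpVtbl->SetProgressState(p, ...)` call.

```c
/* A minimal COM-style object in plain C */
typedef struct Counter Counter;

typedef struct
{
    int (*Increment)(Counter *self);   /* "methods" are plain function pointers */
    int (*Value)(Counter *self);
} CounterVtbl;

struct Counter
{
    const CounterVtbl *lpVtbl;   /* vtable pointer, first member as in COM */
    int count;
};

static int Counter_Increment(Counter *self) { return ++self->count; }
static int Counter_Value(Counter *self)     { return self->count; }

static const CounterVtbl counterVtbl = { Counter_Increment, Counter_Value };

/* In real COM this initialization happens inside CoCreateInstance() */
void Counter_Init(Counter *c)
{
    c->lpVtbl = &counterVtbl;
    c->count = 0;
}
```

Because this layout is fixed by the ABI and not by any particular compiler's C++ object model, any language that can call through a function pointer can call a COM method.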

So this initial and quick success of course asked for more, and I started to look into adding thumbbar buttons. This first requires sending a list of images to the taskbar manager, to be used for the visuals of the buttons. It is quite a bit more involved, since it basically requires some form of bitmap, and creating the code to do this translation is quite cumbersome. I took a few shortcuts here in the beginning but immediately seemed to run into a roadblock. Even with the most trivial code of just loading an existing bitmap into an imagelist and passing this to the taskbar API function ITaskbarList3::ThumbBarSetImageList(), the call always returned a generic failure. However, a quick C test program doing exactly the same worked flawlessly. Some googling showed that the imagelist is in fact implemented in the Windows Common Controls library, of which two very different versions exist. The old 5.x version is a simpler version not supporting any theming, while the newer 6.x version implements theming support. The Windows taskbar of course uses the newer version, and checking the LabVIEW executable showed that it also uses that version. Yet it seemed my DLL was using the older version, and the Windows taskbar manager was failing to recognize my imagelist because it used a different internal implementation. But no magic seemed to help to make the imagelist created in my DLL be recognized by the taskbar manager. Several hours later, after having stepped through lots of assembly code in debug mode, I suddenly realized that the imagelist that was passed to the taskbar manager was not implemented in the memory area of the Common Controls library at all, but rather in the Remote Procedure Call library. How could that be?

Suddenly the fog started to clear and I remembered various bits about COM apartment threading. Basically, any Windows application wanting to use COM functionality has to call CoInitialize() before trying to access any COM object. I knew that LabVIEW was doing that early on during initialization, as otherwise the whole ActiveX interface inside LabVIEW would simply be impossible. However, each COM component can specify in the registry whether it uses apartment or free threading. Apartment threading means that the component always needs to be called from the same thread, while a free threading component can deal with being called from any thread. Most COM components only support apartment threading, and this includes most Windows COM components and in fact just about any ActiveX component out there. So LabVIEW calls CoInitialize() early on during initialization, but of course in the UI thread. And if our DLL then calls CoCreateInstance() from a different thread, Windows correctly determines that this would violate the apartment threading contract for the ITaskbarList object and instantiates an intermediate marshaling layer that uses the RPC library. This marshaling layer basically translates the entire object and all method parameters into a stream of binary data that can be transmitted through memory streams or even network sockets in the case of remote invocation through DCOM (Distributed COM). The serialized stream is then sent to an RPC daemon that hooks into the application message queue and sends the data to the server (here the taskbar manager) whenever the application retrieves messages from the Windows message queue. The same happens in reverse order for any return parameters the server sends to the client. This message queue hooking is also the reason that you can end up with deadlocked applications when using ActiveX without being very careful.
If there is any marshaling involved in the execution of the COM/ActiveX object, it will only work if the application in question is still servicing the Windows message queue by regularly calling GetMessage() to retrieve new messages from the OS. But if you happen to lock out that loop in your application, because you run the marshaled code execution in the same thread, a classic deadlock occurs.
This marshaling seems to use the old Common Controls 5.x format for the imagelists, and the taskbar manager, expecting 6.x imagelists, simply fails when it tries to verify the imagelist object before accessing its content. I’m not sure there is any way to make the RPC serialization in COM use Common Controls 6.x imagelists, but that was not really necessary once I realized what the problem was. Setting the Call Library Node that called my DLL function to run in the UI thread was all that was needed. Since CoCreateInstance() was now executed in the same thread in which LabVIEW had called CoInitialize() earlier on, the whole marshaling was left out, the imagelist that I had created got passed directly to the taskbar manager, and the functions simply started to work.

There was a bit more work to be done in converting the LabVIEW Pixmap data structure into a Windows bitmap that could be used as imagelist source. Bitmaps are tricky to handle and even trickier to translate between different formats, but a bit of trial and error eventually resolved that too. I chose to use Pixmaps instead of a file path to a bitmap file because the Windows API for imagelists only supports Windows BMP bitmaps, and initial tests had shown that it was a bit difficult to get the necessary 32-bit bitmaps for the required transparency to show without any artifacts. Using LabVIEW pixmaps instead, one can easily load 32-bit PNG files with alpha channel, but it’s possible to use JPG or BMP image sources as well with the according LabVIEW VIs.

And here is the current result of this work:


Tree of VIs


Thumbbar functionality

External Code in LabVIEW, Part3: How to use the Call Library Node properly to ease multi platform support

This is an old post that has appeared on expressionflow.com on May 19, 2007

Since about LabVIEW 6, shared libraries are the preferred way to use external code in LabVIEW. While the Call Library Node can directly interface with many existing shared libraries on the platforms LabVIEW supports, it can also be used to interface to shared libraries that were specifically developed for use with LabVIEW. One of the very neat features of LabVIEW itself is its multi-platform support, and this multi-platform support can even be extended to such shared libraries. If one observes a few rules and makes smart use of some not very well documented facts and features of the Call Library Node, this results in a very easily maintainable solution, both in terms of the C code and of the LabVIEW VI libraries.

Calling convention

One aspect of shared libraries is that on some platforms each function can have a different so-called calling convention. A calling convention is a specification of how parameters are passed to the function. This usually happens over the stack, but some less common calling conventions can also use registers for all or the first x parameters. For instance, under Windows 3.1 the two most common calling conventions were Pascal and C. One difference between them is that Pascal passes the parameters on the stack from left to right while C passes them in the opposite order. A second difference is that with Pascal the called function is responsible for restoring the stack just before returning to the caller, while in C the caller restores the stack itself after the function returns. Under Windows 32-bit (9x, ME, NT, 2000, XP) the two most common calling conventions are ’stdcall’ and C. The difference here is only in who cleans up the stack, as the order of the parameters is the same in both. In any case, having a caller assume a different calling convention than what the called function is implemented in will always cause a stack corruption and end in General Protection nirvana, either inside the function or, when the mismatch only concerns the stack cleanup, immediately after the function returns to the caller.

While different platforms support different calling conventions, the only one common to all LabVIEW platforms is the C calling convention. This is an important thing to consider when one wants to create multi-platform shared libraries. By making sure all exported functions in the shared library use the C calling convention, one only needs to maintain a single VI library for all possible LabVIEW platforms.
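In source code this advice amounts to something like the following sketch (the LIBEXPORT macro and the MyAdd function are mine, for illustration only): export the function for the dynamic linker, but do not attach any __stdcall or pascal modifier, so the default C (cdecl) convention applies on every platform.

```c
/* Export macro: only Windows needs an explicit export declaration */
#ifdef _WIN32
 #define LIBEXPORT __declspec(dllexport)
#else
 #define LIBEXPORT
#endif

/* No __stdcall or pascal modifier: the default C calling convention is
   the only one available on all LabVIEW platforms */
LIBEXPORT int MyAdd(int a, int b)
{
    return a + b;
}
```

The matching Call Library Node is then configured with the C calling convention on every platform, so one VI library serves them all.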

Library name

Each platform has its own type of naming convention for shared libraries. Under Windows all standard shared libraries use a .dll extension, while under modern Unix systems using ELF shared libraries this is usually the .so extension. Power Macintosh shared libraries normally have a .framework extension.

One handy and not well documented feature in the LabVIEW Call Library Node is the fact that one can simply specify the shared library name as “<name>.*” and LabVIEW will automatically search the directory containing the VI as well as its search paths for a shared library file consisting of the <name> part together with the appropriate extension for the current platform.

So by defining the name of the shared library in all Call Library Nodes as “mylib.*” for instance, you can truly have one single VI library that interfaces on all LabVIEW platforms to the correct shared library. You can even put all those shared libraries into the same directory and LabVIEW will pick the right one. Unfortunately this does not work for Macintosh systems, since a shared library there is really a collection of files in a subdirectory, and some of these files make use of the resource fork, which gets lost if they are copied to a non-Macintosh file system.

LabVIEW manager function use

A common misconception is that only CINs can make use of the LabVIEW manager functions documented in the External Code Reference Manual. This is absolutely not true. A shared library can just as easily call any exported LabVIEW manager function, such as NumericArrayResize(), to create LabVIEW compatible data types, which can even be passed back to LabVIEW to be used on the diagram unaltered. To be able to do that, your shared library only needs to include the extcode.h and possibly the hosttype.h file from the cintools directory and link with the labview.lib file in that same directory. The only limitation is that the functionality of those LabVIEW manager functions is really provided by the LabVIEW development environment or runtime system. Therefore a shared library linking to any of those functions will only be executable (in fact loadable) in the LabVIEW development environment or in a LabVIEW executable. But this can hardly be considered a disadvantage, as CINs by nature can only run in those environments.
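As a sketch of what this looks like in practice, the following function resizes and fills a LabVIEW U8 array handle using the documented manager function NumericArrayResize(). It only compiles against extcode.h and labview.lib from the cintools directory; the FillArray name and the array typedef are my own.

```c
#include <string.h>
#include "extcode.h"

/* LabVIEW 1D U8 array: dimension size followed by the data */
typedef struct
{
    int32 dimSize;
    uInt8 elt[1];
} LVU8Array, **LVU8ArrayHdl;

MgErr FillArray(LVU8ArrayHdl *arr, int32 size)
{
    /* uB is the LabVIEW type code for unsigned 8-bit integers, 1 dimension */
    MgErr err = NumericArrayResize(uB, 1, (UHandle*)arr, size);
    if (!err)
    {
        (**arr)->dimSize = size;           /* the caller must set dimSize itself */
        memset((**arr)->elt, 0, size);
    }
    return err;
}
```

The handle wired to the Call Library Node parameter (configured as Adapt to Type, pass by pointer) can be an empty array; LabVIEW-managed memory is resized in place and the result appears on the diagram unaltered.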

Use of these LabVIEW manager functions can make your C code a lot more uniform across platforms. These functions are provided by LabVIEW on all platforms in the same way, and are guaranteed to have the same semantics and behaviour. This can greatly reduce the number of places in your source code where you would otherwise have to provide specific code parts for the different platforms or compiler tool chains.

Conclusion

Knowing the details from the previous three sections, one can create a single VI library interfacing to the appropriate shared library for the platform LabVIEW is currently running on. We will see this in one of the upcoming articles in this series.

External Code in LabVIEW, Part2: Comparison between shared libraries and CINs

This is an old post that has appeared on expressionflow.com on May 19, 2007

Leaving aside ActiveX and .Net, since they are both Windows-only technologies and not really suited to integrate generic external code into LabVIEW, there remain two similar possibilities nowadays to integrate external code into LabVIEW: CINs and shared libraries through use of the Call Library Node. While CINs used to have certain advantages over shared libraries in the past, this has basically not been true since about LabVIEW 6, and with LabVIEW 8.20 the Call Library Node got additional features that remove the last (esoteric) advantages CINs had over shared libraries. Shared libraries, on the other hand, have quite a few advantages.

In this article I will compare the most important differences between these two technologies and try to make the point why anyone should nowadays go for the Call Library Node with shared libraries instead of using CINs.

Let’s start with two features where CINs still had a slight advantage over shared libraries until the advent of LabVIEW 8.20. The previously mentioned “callback” functions introduced in LabVIEW 8.20 actually put the Call Library Node on par with CINs in these aspects. I will look into the specifics of those callback functions in a later article in this series.

Advantages of CINs before LabVIEW 8.20

Instance specific data storage

One advantage of CINs used to be the possibility of instance specific global data storage. If you don’t know what this could be and why you would use it, you will most probably never need it and you can skip this section.

Using the functions GetDSStorage() and SetDSStorage() one can use a single 4-byte location which LabVIEW manages on a per-instance basis. This means that, unlike a global variable declared outside any function body in the CIN code, each instance of a CIN in a diagram using that particular CIN code resource gets its own 4-byte value, managed by LabVIEW. You could see this as similar to an uninitialized shift register in a LabVIEW VI that has been set to be reentrant. While this feature could be powerful in some esoteric situations, it is almost never used, and its understanding is made even more complex by the fact that such a CIN located in a reentrant VI will actually maintain one instance specific data storage per instance of that reentrant VI multiplied by the number of those CIN code resources located inside that VI. Once you continue this acrobatic brain exercise to include multiple levels of reentrant VIs, everything very soon gets very complicated for anybody not used to thinking in parallel realities in n-dimensional systems.

More detailed control over initializing and uninitializing of the code resource

While shared libraries only support initializing and uninitializing of code on loading and unloading of the library, through OS provided mechanisms such as DllMain() under Windows or init() and fini() (see footnote 1) under ELF shared libraries, LabVIEW CINs separate these actions into load, init, uninit, and unload, together with a save operation. The load and unload routines are called once for each CIN code resource and could be used to initialize and deallocate global data, while init and uninit are called once for each code resource instance and can be used to initialize and deallocate the above mentioned instance specific data storage. The save routine doesn’t really make much sense for someone outside of NI, as one needs to know about some undocumented parts of LabVIEW to properly make use of it.

Both these features are very rarely used and can be achieved through other means, such as maintaining a pointer in the calling VI inside a shift register and passing this pointer to the function on every call, or by using the platform specific shared library initialization and deinitialization methods in a smart way.
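The shift-register pattern can be sketched in portable C as follows. The library hands the VI an opaque pointer, the VI keeps it in an uninitialized shift register and passes it back on every call; all names here are mine, for illustration only.

```c
#include <stdlib.h>

/* Per-instance data, opaque to the caller */
typedef struct
{
    int callCount;   /* example of instance specific state */
} Instance;

/* Called once; the returned pointer goes into the shift register */
void *InstanceCreate(void)
{
    return calloc(1, sizeof(Instance));
}

/* Called on every iteration with the pointer from the shift register */
int InstanceProcess(void *ref)
{
    Instance *inst = (Instance*)ref;
    return ++inst->callCount;
}

/* Called once when the VI is done with this instance */
void InstanceDispose(void *ref)
{
    free(ref);
}
```

On the diagram the pointer is simply carried as a (pointer-sized) integer; each parallel loop or reentrant VI clone that calls InstanceCreate() gets its own independent state, which is exactly what the CIN instance data storage provided.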

But shared libraries have a number of advantages, which should make it easy for anybody to choose them instead of trying to dig into the proprietary CIN technology.

Advantages of the use of the Call Library Node over CINs

Parameter type support

With the Adapt to Type parameter configuration available since LabVIEW 5.1, a Call Library Node can pass any LabVIEW data type directly to a shared library without any translation or fiddling around. In this aspect the parameter support is now superior to CINs, since the Call Library Node supports some additional configuration options as well, such as passing LabVIEW handles by reference, whereas a CIN will always pass them by value. The Call Library Node also supports other data types, such as C string pointers or C arrays among others, and LabVIEW will take care of passing the correct part of its own data type to the shared library, so that the library can work on them with the standard C runtime routines. While it is not necessary to use these types for your own libraries, it is an additional bonus which can sometimes be very handy.
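For illustration, here is a hedged sketch of what Adapt to Type hands to a C function for a 1D DBL array: a LabVIEW handle, i.e. a pointer to a pointer to a block holding the dimension size followed by the data. The type names are mine; the layout follows the description in the External Code Reference Manual.

```c
#include <stdint.h>
#include <stdlib.h>

/* LabVIEW 1D DBL array: int32 dimension size followed by the elements */
typedef struct
{
    int32_t dimSize;   /* number of elements */
    double  elt[1];    /* actually dimSize elements, allocated contiguously */
} LVDblArray, *LVDblArrayPtr, **LVDblArrayHdl;

/* Sum all elements of an array passed by handle */
double SumArray(LVDblArrayHdl arr)
{
    double sum = 0.0;
    int32_t i;
    if (arr && *arr)                       /* an empty array may come as a NULL handle */
        for (i = 0; i < (*arr)->dimSize; i++)
            sum += (*arr)->elt[i];
    return sum;
}
```

On a real LabVIEW diagram the handle memory is owned by LabVIEW; the function only reads through it, which is why no manager functions are needed here.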

Accessing most shared libraries directly

With CINs you always have to write some intermediate C code to access already existing shared libraries. The Call Library Node however can interface with many shared libraries directly, making the creation of an intermediate wrapper DLL in many cases an optional step. This does not always work, since LabVIEW arrays and strings are really dynamically resizable while most standard C data types do not allow for easily resizable pointers. In a later article I will go into more depth about when you would need to create a wrapper for use with the Call Library Node, but in general many shared libraries can be interfaced directly without an intermediate wrapper shared library.

Multiplatform support

With the current LabVIEW platforms there is no reason why maintaining a shared library source code for multiple platforms would be more difficult than doing the same for a CIN. On the other hand, if you take care to follow some recommendations with respect to calling and naming conventions, mentioned in the next article, there is no reason why you would need one set of VIs for each platform you want to support. For CINs this is mandatory and can only be sort of circumvented with a technique invented by Christophe Salzmann, which he called FatCIN. With this technique you create a VI for each platform you want to support and then another wrapper VI, which uses dynamic calls through VI server to load and execute the appropriate VI for the current platform. But this is a rather cumbersome technique, as it still requires one VI per platform for every CIN plus an additional wrapper VI. The maintenance of the CIN VI itself will be cumbersome too, since whenever you make a change to the CIN code, no matter how small, you have to manually touch each platform VI to reload the new code resource.

Multifunction support

If you have a number of functions to support, you can implement them all in a single shared library and create one VI for each function in that shared library. If you are using CINs instead, you either have to incorporate all the functions into a single CIN which takes a selector parameter for the function to execute. Such a VI also requires a more or less involved parameter list, to support the most extended list of parameters of all the functions. Or you can create a CIN for each of those functions and wonder why a single change to some common parameter type has you compile each of the source codes separately and then requires you to go into each CIN (and in case of multi-platform support into each of those platform VIs too) and reload the code resource into the CIN. In the case of separate CINs for each function you also don’t have any means to store some global information inside the CIN to allow sharing that information among multiple functions.

For shared libraries you also have to create a VI for each shared library function, but even when using the single-CIN approach you will usually want to create one additional wrapper VI for every function selector value anyhow. This is because direct use of the single selector based CIN VI is not very user friendly: imagine the many VI parameters on such a selector based CIN that have no effect for most of the selected functions.

Compiler tools support

Shared libraries can be created in virtually any development environment which is able to create them. There is no need to use a specific C compiler unless you want to link with the labview.lib library to make use of the LabVIEW manager functions documented in the External Code Reference Manual. This limitation exists because object libraries can come in a number of file formats, and each compiler has its own preferred object file format and often no support for object file formats from other compilers. LabVIEW for Windows 32-bit comes with a labview.lib file in COFF format, which is the format used by Microsoft Visual C, and a labview.sym.lib file in the OMF format used by the now discontinued Symantec C compiler. Borland C also uses some form of OMF format, but it is not clear to me if the Symantec C library is compatible with any version of Borland C, because object file formats can and certainly will come in various flavors.

As long as you do not want to link to labview.lib however, there is really no limitation in what development environment you use to create a shared library, provided this development environment can create shared libraries usable by other development environments such as Visual Basic, Delphi or similar.

CINs on the other hand always will have to link to at least cin(.sym).lib and lvsb(.sym).lib and in most cases labview(.sym).lib too and therefore will always be limited to the LabVIEW supported compiler tool chains.

In fact, open source tool chains such as the GNU C compiler can be used on all current platforms to create shared libraries callable by the Call Library Node, although for Windows I would recommend the use of a customized GNU tool chain such as the MinGW compiler tool chain http://www.mingw.org, possibly in connection with a development environment such as Dev C++ http://www.bloodshed.net or Code::Blocks http://www.codeblocks.org.

Conclusions

CINs should be considered legacy technology and should absolutely not be used for new developments. CINs have basically not a single advantage anymore over the use of the Call Library Node, but quite a few disadvantages, especially in terms of code maintenance, both for the C code and for the LabVIEW part! This has already been true since about LabVIEW 6.

To see where the future of CINs will most probably go, you only have to investigate a current LabVIEW installation and try to find stock VIs which still make use of CINs. They are virtually non-existent. National Instruments has ported all their LabVIEW interfaces such as NI-DAQ, NI-IMAQ, NI-CAN, etc. and add-on tools like the Advanced Analysis library or IMAQ Vision to use shared libraries instead of CINs.

Note from March 2, 2017: Recent versions of LabVIEW no longer support creating CINs. While some of them can still execute CINs created for earlier versions, it’s cumbersome at best to create new CINs, as one would need to get the tools from an old LabVIEW installation to do so. And all new LabVIEW platforms, such as the Windows and Mac OS X 64-bit versions or the NI Linux Realtime versions, never received support for creating CINs at all. So CINs are definitely a legacy technology, which is only supported on a fraction of the currently available LabVIEW platforms.

  1. init() and fini() are considered obsolete in modern shared libraries. Instead, the GCC attributes __attribute__((constructor)) and __attribute__((destructor)) should be used.
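A minimal sketch of the attributes mentioned in the footnote, as a replacement for init()/fini(); the function and variable names are mine. This works with GCC and Clang, for shared libraries as well as ordinary executables.

```c
static int moduleReady = 0;

/* Runs automatically when the library (or program) is loaded,
   before any exported function can be called */
__attribute__((constructor))
static void module_init(void)
{
    moduleReady = 1;
}

/* Runs automatically when the library is unloaded */
__attribute__((destructor))
static void module_fini(void)
{
    moduleReady = 0;
}

int ModuleIsReady(void)
{
    return moduleReady;
}
```

Unlike DllMain() under Windows, several functions can carry these attributes in one library, and the linker collects them all into the init/fini sections of the ELF file.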

External Code in LabVIEW, Part1: Historical Overview

This is an old post that has appeared on expressionflow.com on May 9, 2007

From the beginning, the LabVIEW developers recognized the need for some form of node or interface to allow LabVIEW users to link in their own code created outside of LabVIEW. The reasons for wanting this can be varied. Someone might be interested in interfacing to an existing application or code library, already developed and tested, in order to leverage its functionality. Other reasons could be interfacing to third party drivers and libraries, or some optimized code which needs to be executed for performance reasons.
This first installment about external code in LabVIEW will give a historical overview of the possibilities to interface to external code in the different LabVIEW versions.

LabVIEW 2, MacOS only, external code resources

This version of LabVIEW only ran on Macintosh computers. In the Macintosh 68k OS, executable code was really just another resource inside the resource fork of a file, among other resources such as icons, images, menus, text strings etc. LabVIEW therefore used this technology to implement an external code interface. The compiled object code was added to the LabVIEW VI simply as a code resource, in addition to the code resource created by LabVIEW from its diagram code. Whenever the VI got loaded into memory, the compiled VI code resource and the additional external code resources were linked into the existing code base.

LabVIEW 3, CINs, complications and reasons

LabVIEW 3 (we leave out 2.5, as that was really only a 3.0 prerelease) was the first version to support operating systems other than the Macintosh OS. It is not fully clear to me if shared library support was already standard in the SunOS (later to be called Solaris 1) operating system at that time. The first versions of shared library support under Unix were in general quite buggy, and each Unix vendor had its own ideas about how it should work. The Macintosh OS also did not have a standard interface for shared libraries. One had to download an extension for this, and the first version of shared library support was at some point discontinued and replaced by a different, incompatible version. Here too, a number of nasty bugs were present that were only slowly ironed out with new releases.
But the most difficult platform was the DLL interface of the Windows version. It had some serious troubles due to the fact that LabVIEW was a flat 32-bit memory application built around a so-called DOS memory extender. This DOS memory extender was delivered with the Watcom C development system and provided applications with a true 32-bit environment while running on top of the old 16-bit environment used in Windows 3.x. Any application wanting to use that environment had to be compiled with Watcom C. This posed problems when wanting to call normal Windows DLLs, since they were really 16-bit, and the stack as well as pointers inside the 32-bit environment had absolutely no meaning to a DLL running in the 16-bit environment. So when passing information between the 32-bit LabVIEW application and the 16-bit OS environment, the entire parameter stack and any pointers had to be properly translated.
Interfacing to Windows DLLs directly would have meant automagically translating every single parameter between the 32-bit LabVIEW environment and the 16-bit Windows memory model. For every single call, a complete new stack frame needs to be allocated in the 16-bit environment, parameters need to be copied from the 32-bit memory into the 16-bit memory, and for pointers additional translations need to be done to make them valid in the 16-bit environment. On return of the function these operations have to be reversed too. Watcom did provide some routines to deal with this translation, which basically was only possible by executing some involved assembly code internally, but the setup and configuration of those functions was quite an involved task. It was therefore decided that this would be too large a development effort for an initial multi-platform LabVIEW version. Instead, the already existing idea of external code resources from the Macintosh OS was ported to Windows and SunOS. A Windows CIN back then was in fact a native Watcom REX object code file, and a Macintosh OS CIN was a 68k object code file. The lvsbutil tool provided in the cintools directory wrapped this object code file into a CIN header and added the resulting file as another resource to the VI resources. This allowed LabVIEW to directly call the code resource in the same environment LabVIEW itself was running in, without involved memory translations that are difficult to handle automatically and also cause performance degradation. The disadvantage was that Windows CINs could only be generated by the Watcom C compiler, since they needed to be in the 32-bit REX object format, which no other compiler could generate.
Personally, I feel the developers missed an opportunity here by not allowing multiple code resources, one per platform, to be added to the same CIN. As it was, the correct platform-specific code resource had to be loaded into the VI manually whenever a VI was moved between platforms. This deficiency was never fixed in later versions of LabVIEW and made CINs significantly less convenient for multi-platform libraries.

LabVIEW 4, Shared library support

LabVIEW 4 added the Call Library Node to interface directly to external shared libraries: Macintosh Code Fragment Manager components (which were not standard on non-PowerMac computers), Unix shared libraries, and Windows Dynamic Link Libraries.
Due to the limitations of Macintosh Code Fragments and Windows 3.1 16-bit DLLs, the supported data types were limited. For instance, function return values other than void and numeric were not possible because of differences in how Borland and Microsoft DLLs returned pointer types as function return values. Function parameters more complex than strings and arrays were also not possible, because the parameters needed to be prepared accordingly for Windows 3.1 DLLs. Creating an automatic thunking interface supporting more complex data types would have been a real nightmare to implement and was therefore left for a possible later version of LabVIEW. Because of the 32-bit <-> 16-bit translation involved in accessing external DLLs under Windows 3.1, this solution also had lower performance than CINs. Developing code for more than one platform was also not very straightforward with DLLs. All this made CINs still the preferred external code solution in those days, especially when support for multiple LabVIEW platforms was desired or performance was an important issue.

LabVIEW 5, multi-threading

With the introduction of multi-threading support, the external code interfaces also got somewhat more complex in certain situations. To take advantage of multi-threading, an external code interface needs to be configured to tell LabVIEW whether it is safe to call the external code from different threads. For CINs, this was done by exporting an additional function from the CIN that tells LabVIEW whether the CIN is safe for reentrant execution; LabVIEW automatically assumes unsafe behavior if this export is missing, which forces the CIN to always execute in the only exclusively single-threaded execution system, the UI system. For DLLs, there was no automatic way of telling LabVIEW whether a DLL function was safe for reentrant execution. A DLL can also contain multiple functions, some of them safe and others unsafe, and Microsoft never anticipated that a programming environment might be interested in this information from shared libraries. Therefore a manual configuration option was added to the Call Library Node configuration dialog. Only the developer of a shared library can really know whether a function is safe. The LabVIEW VI developer has to decide whether to allow the Call Library Node to call the function in any thread or force it to run inside the UI system, either because he developed the DLL himself and understands the issues, or because he got this information from the DLL's documentation. In many cases the documentation gives no specific information, and then the best option is to leave the Call Library Node configured to execute in the UI system. The alternative is trial and error, which can be cumbersome: race conditions and other errors resulting from unsafe reentrant execution occur randomly and at different moments, depending on the previous execution order of library functions.
Many external factors such as system load or memory usage also add randomness to how race conditions manifest. Often you can execute an unsafe function countless times from a multi-threaded environment, only to find that the application starts to crash or produces unexplained calculation results after it has been shipped to the other side of the world.

LabVIEW 6, Extended shared library support

In LabVIEW 5.1, support for the 68k Macintosh and Windows 3.1 was dropped, which allowed for enhanced data type support in the Call Library Node. A function could now also return a string, and function parameters could be configured to adapt to the LabVIEW data type, as no complicated automatic thunking had to be performed anymore.
LabVIEW 6.0 extended this further with ActiveX datatypes on Windows, additional selections for the Adapt to Type parameter, and, as a gadget, a drop-down box listing the available function names inside the shared library, though unfortunately only on Windows. Another nice addition was the Create .c file selection in the context menu of the Call Library Node. With this you can let LabVIEW create a C file with the correct prototype for the currently configured Call Library Node. This feature is especially handy when you use the Adapt to Type parameter type, as LabVIEW obviously knows best how its own data types are to be declared in C syntax.

One indication that the LabVIEW developers started to consider CINs legacy technology is the removal of support for creating external subroutines. These were external code fragments not loaded into the VI itself but left as independent files on the file system, so they could be called from different CIN code resources. One application was common subroutines; another was providing a place to store global data shared among multiple CIN code resources. It was a fairly seldom-used feature, although the National Instruments NI-DAQ library and the LabVIEW Advanced Analysis library did make use of it before they were ported to shared libraries accessed through the Call Library Node interface.

LabVIEW 7, not much news on this front

LabVIEW 7 made no significant changes to the possibilities of incorporating external code compared to previous versions. The Call Library Node is quite mature and works well for almost any scenario, and CIN support was further marginalized by removing almost all use of it from the various LabVIEW function libraries provided by a fresh installation. National Instruments also finished porting almost all of its hardware interface libraries and add-on toolkits from CINs to the Call Library Node.

LabVIEW 8, A few more improvements

In LabVIEW 8.0 the Call Library Node was left mostly alone and no new features were added. LabVIEW 8.2, however, improved it further by adding error terminals, allowing the path of the shared library to be passed in at runtime, and adding so-called callback functions, although this naming is in my opinion quite misplaced. Callback functions are usually function pointers that a caller provides to be called back by a library or other external component, which that library can then invoke at any time it wants to inform the caller about something. What LabVIEW 8.2 really supports is the configuration of initialization and deinitialization functions that LabVIEW calls before and after calling the actual function itself, plus an abort function called when the user aborts the VI hierarchy while LabVIEW is in the process of calling that Call Library Node. Obviously the latter is only possible for functions that are declared reentrant in the configuration.

LabVIEW 2009, 64-bit support

LabVIEW 2009 is the first version that officially also shipped as a 64-bit version, albeit only for Windows at first (64-bit versions for Linux and Mac OS X were introduced with LabVIEW 2014, together with support for NI Linux Realtime on the x86 and ARM realtime targets from National Instruments). Accordingly, when running in the 64-bit version of LabVIEW, all shared libraries to be called have to be compiled as 64-bit libraries too. For this, the Call Library Node got a new pointer-sized numeric datatype. On the LabVIEW diagram this is always a 64-bit integer, but when it is passed to the shared library function, LabVIEW performs the correct coercion to a 32-bit or 64-bit pointer value.

A new experiment

So I have finally, after many years of thinking about it, decided to start my own presence on the big wide WWW.

Many may know me from my posts on LabVIEW related fora. It all started back in the early '90s of the last century on a mailing list called Info-LabVIEW, while I was working as an Application Engineer in technical support at the Swiss branch of National Instruments, the inventors and makers of LabVIEW. By current standards such a mailing list was pretty arcane: one would send a mail to an email server, which distributed it to everybody who had signed up, and people could then answer through that same email server with their thoughts, questions, ideas and suggestions. But it worked amazingly well, and the nature of this exchange made it clear that one should not expect instant replies. Compare this to modern fora, where people sometimes start polling after just one hour, asking why they haven't received an exhaustive answer to their not very clearly described problem.

But enough of musings from an aging guy who may sound like everything was much better back in the old days :-). It sure had its charms but progress is unavoidable even if it sometimes doesn’t feel like an improvement.

I plan to revive a few posts I made in the past about some of my pet topics, which many will know include interfacing external code components to LabVIEW. Most of these were at some point placed in one form or another on a blog or presented during a user group meeting, but have since dropped off the net. Some of the initial ones are older and to some extent have more historical value, but I feel they still serve as an introduction to the later articles, which do still have practical value today.