This book is a work in progress, comments are welcome to: johno(at)johno(dot)se


IMGUI

Motivation

Programming user interfaces has a reputation for being difficult. This is perhaps largely due to the fact that user interface toolkits tend to be large and complex software systems. They often have a steep learning curve and are cumbersome to use, typically involving quite a bit of application specific implementation in order to integrate. Designing such a toolkit is harder still, and it is my experience (when it comes to games) that even a toolkit that is explicitly designed for re-use doesn't get re-used very much in practice.

Due in large part to the advances in dedicated graphics hardware (GPUs) during the past 10+ years, it is now entirely feasible to approach user interfaces in a novel way. Immediate Mode Graphical User Interface (IMGUI) represents a paradigm where user interfaces are simpler to create (for the client application) and simpler to implement (for the toolkit designer).

The broken paradigm

A single paradigm has dominated programming since (forever?), and it is simply this:

The user interface and / or visualisation of any program is inherently stateful.

I maintain that this is a broken paradigm. Not that such things CANNOT be stateful; the current state of various software technologies is indeed based upon this paradigm. I will however argue that avoiding such statefulness significantly simplifies software.

The woes of caching state

I maintain that much of the complexity associated with the design and use of traditional user interface systems is a direct result of the tendency of such systems to retain state. The programmer is typically required to actively copy state back and forth between the application and the user interface in order for the user interface to reflect the state of the application, and conversely, for changes that happen in the user interface to affect the state of the application.

This is the basic problem; this state (inherent to the user interface system) is a COPY / CACHE of the REAL state, which is owned by and resides within the specific application itself.

The user interface, from the point of view of the client application, most often looks like a collection of objects, typically one per "widget", which encapsulate state that needs to be frequently synchronized with that of the application. Such synchronization goes both ways; state moves from the application to the user interface in order for that state to become visible to the user, and state moves from the user interface back to the application when the user interacts with the interface in order to change the state of the application.

When the user interacts with the user interface, the client application must explicitly move state from the widgets back into application data structures. Sometimes, depending on the toolkit used, a level of automation is provided by the user interface toolkit for such "data exchange", but the synchronization itself (not to mention the duplicated state) is still a fact of life.
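The "data exchange" being described can be sketched as follows, using a hypothetical retained-mode edit box widget (the class and function names are illustrative, not from any particular toolkit); note how the application must shuttle the one REAL piece of state back and forth with the widget's cached copy:

```cpp
#include <cassert>
#include <string>

//a hypothetical retained-mode widget; it owns a COPY of application state
class RetainedEditBox
{
public:
    void setText(const std::string& aText) { myText = aText; }
    const std::string& getText() const { return myText; }

private:
    std::string myText; //the CACHE of the application's real state
};

struct Application
{
    std::string name;   //the REAL state
};

//the synchronization the text describes, in both directions
void syncToWidget(const Application& anApp, RetainedEditBox& aWidget)
{
    aWidget.setText(anApp.name);
}

void syncFromWidget(const RetainedEditBox& aWidget, Application& anApp)
{
    anApp.name = aWidget.getText();
}
```

Every place this synchronization is forgotten (or done in the wrong order) is a potential bug; that is the cost of the duplicated state.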

Additionally, the manner in which the application is notified of user interactions with the interface (which in turn signals a need for re-syncing of state) often takes the form of callbacks. This requires the application to implement "event handlers" for any low-level interaction that is of interest, often by subclassing some toolkit baseclass either manually or via various code generation tricks; in either case further complicating the life of the client application.

Immediate Mode applied

IMGUI does away with this type of state synchronization by requiring the application to explicitly pass all state required for visualization and interaction with any given "widget" in real-time. The user interface only retains the minimal amount of state required to facilitate the functionality required by each type of widget supported by the system.

With IMGUI, a conceptual shift occurs. Widgets are no longer objects at all, and can't really be said to "exist". They instead take the form of procedural method calls, and the user interface itself goes from being a stateful collection of objects to being a real-time sequence of method calls.

Fundamental to this approach is the concept of a real-time application loop, where the application processes logic and draws its display at real-time rates (30 frames per second or more). In the context of games, this is already common practice.

At first glance it might seem extremely cumbersome to constantly be passing the required state to the user interface, but this is in practice not at all true. Also, it might seem wasteful (from a computing resources standpoint) to be constantly resubmitting state and redrawing the user interface at real-time rates. However with modern CPU speeds and GPU fillrates this is not a problem at all.

The gains are in both simplicity and flexibility. The removal of the implicit state cache in the user interface system results in less potential for cache-related bugs, and also completely removes the need for the toolkit to expose widgets to the client application as objects at all. Widgets, logically, change from being objects to being method invocations. As we shall see, this fundamentally changes how a client application approaches the implementation of user interfaces.

Issues of acceptance

Before diving into implementation details, I want to discuss the big WHY of this.

I realize that one of the compelling reasons to use existing user-interface toolkits is the fact that there exist toolkits to be used; you don't have to code it yourself, and can concentrate on the details of your own application. Here I am about to explain how to implement a user-interface toolkit from scratch; why is this interesting at all?

It may not be immediately obvious here, but the main gain and reason to use IMGUI is that the actual application specific client code becomes MUCH smaller (fewer LOC) and MUCH simpler. For small applications this may not be a gain, but any significantly complex user interface is usually non-trivial to implement and maintain, even given a robust user-interface toolkit.

An example of simplification

In one of my games, UfoPilot II : The Phadt Menace, the entire "front-end" user interface was initially implemented in classic retained mode style. This was more or less equivalent to how MFC dialog boxes worked, in that I had a class for each specific "screen", and instantiated an object of each of these classes as the user navigated throughout the interface.

Each "screen class" had multiple widget members, and layout was part of construction and much a manual issue where I would run the program, look at the placement of things, shut it down, edit the code, and repeat. A dedicated editor (like MFC has) might perhaps have helped me here.

Upon porting this user interface to IMGUI, with toolkit-methods being implemented as needed during the porting process (I built my Gui class as I went along, moving code from Widget classes to the Gui class), I gained several things:

Firstly, in each case where there was a class for a "screen", this collapsed from a class to a single method in a Menu class (which represented the entire collection of front-end screens and code). So where I had previously had about 10-15 classes I now had a single class.

All of the widget classes collapsed into methods of the Gui class, so again, where I previously had several classes I now had one.

Further changes and iterations of the front-end changed from being a painful experience involving widget instantiation, layout, callbacks, etc, to being about adding or removing a few lines of code in the form of "if(doButton()) do something...".

Layout was still coded straight into the application, but since I already had this information it was simply a matter of moving the code around.

How to encourage acceptance

Someone is going to have to implement a reference toolkit and one or several applications that use it. In addition, these applications must underline the fact that this is about a paradigm shift, not just a nifty trick. Indeed, looking at WPF, the whole mindset is still extremely retained.

Up until now it hasn't really been feasible on win32 (unless one turns to using DirectX) due to the overhead of drawing with GDI or GDI+. However on Windows 7 the new DirectX APIs Direct2D and DirectWrite might prove to be a good solution for high-resolution / high-performance IMGUI applications.

This reference toolkit needs to "look good". This means widgets that look "modern"; it would probably be a good idea, from a political standpoint, to involve an artist in this... :)

The "layout issue" needs to be addressed, because for many applications I suspect that requiring the programmer to deal with layout per "widget" will simply not be feasible.

TODO: more information (as promised) in the Advanced Features section on how to implement "built-in layout".

Basic implementation (Object Oriented)

The client sees a single Gui instance (the IMGUI "context"). This single instance encapsulates the entire gui "system" / "framework". Gui will typically expose one or more methods per widget type that you would need in your application.

Here is the interface of an example Gui class:

class Gui
{
public:

    void label(const int aX, const int aY, const char* aText);
    const bool button(const int aX, const int aY,
                      const int aWidth, const int aHeight,
                      const char* aText);
    const bool radio(const bool anActive,
                     const int aX, const int aY,
                     const int aWidth, const int aHeight,
                     const char* aText);
    const bool check(const bool anActive,
                        const int aX, const int aY,
                        const int aWidth, const int aHeight,
                        const char* aText);
    const bool tab(const bool anActive,
                   const int aX, const int aY,
                   const int aWidth, const int aHeight,
                   const char* aText);
    void edit(const int aX, const int aY, String& aString);
};

From the point of view of the client application, using Gui is very straightforward. In order to put a given type of widget on the screen, the client simply calls the corresponding method.

In the above example, all methods that return const bool will return true if the left mouse button was clicked inside the bounds of that widget. Also, the screen position and size of each widget is explicitly passed in each call (aX, aY, aWidth, aHeight). Depending on the application, this might be a pro or a con, as we shall see in later sections.

In any case, observe the basic premise; the client application passes all the state required for a given widget to operate at any given time, and on a frame-by-frame basis.

Buttons

Based on this premise, reacting to a button-click is as simple as:

Gui myGui;

void doSomeUserInterface()
{
    if(myGui.button(64, 64, 32, 16))
    {
        //do something as a result of the button being clicked
    }
}

Observe the absence of any type of event handling callback; both creating and reacting to interaction with a button is as simple as an if statement.

Radio buttons, check boxes, and tabs

An interesting aspect of IMGUI is that the classic widget types radio button, check box, and tab (i.e. like in a property sheet) are functionally equivalent from a client perspective. The various methods are here only for aesthetic reasons, i.e. depending on your application one or the other may be more applicable.

Using these widget types is a matter of explicitly passing the application state that represents the "activeness" of each widget. Here is an example with radio buttons:

Gui myGui;
int myChoice(0);

void doSomeUserInterface()
{
    int i;

    for(i = 0; i < 5; i++)
    {
        if(myGui.radio(myChoice == i,
                       64, 64 + i * 20,
                       32, 16,
                       String::format("choice %d", i + 1)))
        {
            myChoice = i;
        }
    }
}

As you can see, the user interface that results from the above code is based on the actual state of the application, not that of any "widget objects". Again, the central theme of IMGUI; there is no need to explicitly synchronize application state to gui state, as there is only a single copy of state in existence, namely that of the application itself.

The call to String::format() simply returns a formatted string (sprintf() style) to use as the label for each radio button. Note also the above use of "dynamic layout"; the resulting radio buttons are evenly spaced in the y-dimension by 20 units.

Edit boxes

Using an edit box is similarly simple: you pass a String reference, which is the string to be edited. Again, the idea is to pass a String instance that is part of your application state, to be edited directly by the gui.

Gui myGui;
String myString("hello");

void doSomeUserInterface()
{
    myGui.edit(64, 64, myString);
}

Hey, where's the list box?

Most user interface toolkits support the concept of a list box / list control. Interestingly, this widget type is largely obsolete with IMGUI (unless you explicitly require scrolling support; see the section on advanced features). Since a list is often simply a bunch of text labels, you can support that by simply doing the following:

Gui myGui;
String myStrings[5] = {"hello", "how", "are", "you", "doing"};

void doSomeUserInterface()
{
    int i;

    for(i = 0; i < 5; i++)
        myGui.label(64, 64 + i * 16, myStrings[i]);
}

If you need selection support (as many list boxes support), you can do something similar to the following, which again is the typical approach to radio buttons or property sheets / tabs (see above):

Gui myGui;
String myStrings[5] = {"hello", "how", "are", "you", "doing"};
int mySelection(0);

void doSomeUserInterface()
{
    int i;

    for(i = 0; i < 5; i++)
    {
        if(myGui.radio(mySelection == i,
                       64, 64 + i * 16,
                       32, 16,
                       myStrings[i]))
        {
            mySelection = i;
        }
    }
}

At this point it should be clear that the list box / list control concept doesn't exist per-se in IMGUI, as you can simply iterate application state and "do a widget" per item in your collection. While this might be viewed as cumbersome, remember that traditional user interface toolkits require you to sync your application state with that of the list box itself (and vice-versa).

Additionally, with IMGUI it is trivial to create a gui that includes what looks just like a "list widget" that supports different kinds of "widgets" per line (i.e. text, buttons, images, etc), something which is typically very difficult to do with traditional toolkits.

How it works

Widgets as methods instead of objects

Each "widget method" in the Gui class encapsulates the existence, interaction, and display of each logical "widget". Once again, note that from the perspective of the client, widgets can only be said to "exist" in the form of a method invocation; widgets change from being objects to being method calls.

One of the main gains here is the complete centralisation of control in the calling code. Both the "widgets" and the code that reacts to user interaction with these widgets are all in the same place.

Additionally, consider the following example:

Gui myGui;
bool myEnableChoices(false);
int myChoice(0);

void doSomeUserInterface()
{
    if(myGui.button(64, 64, 32, 16))
    {
        myEnableChoices = !myEnableChoices;
    }

    if(myEnableChoices)
    {
        int i;

        for(i = 0; i < 5; i++)
        {
            if(myGui.radio(myChoice == i,
                           64, 64 + i * 20,
                           32, 16,
                           String::format("choice %d", i + 1)))
            {
                myChoice = i;
            }
        }
    }
}

As you can see, it is very simple to "enable or disable" certain aspects of the user interface, due to the fact that the user interface doesn't really exist at all in the form of objects to be enabled or disabled, and can thus be easily changed on a per-frame basis without any overhead.

Of course one could use any arbitrarily complex expression in place of the simple boolean variable myEnableChoices; this is a big part of the power and flexibility of IMGUI. In traditional user interface systems, this kind of functionality would typically require mass enabling / disabling of widget objects.

Implementing basic interactions

In the above examples, a central interaction is the concept of clicking on a widget with the left mouse button. In order to do this, you need to have direct pollable access to the position of the mouse cursor as well as the state of the buttons (how to do that is of course system specific and outside the scope of this text).
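Such a pollable interface might look like the sketch below, matching the mouse:: functions used in the examples that follow. The simulate() function is purely an assumption for testing purposes; in a real win32 application these would wrap calls like GetCursorPos() and GetAsyncKeyState().

```cpp
//a minimal sketch of pollable mouse state, matching the mouse:: calls
//used in the button() implementation below
namespace mouse
{
    //simulated state; in a real application this would be read from the
    //platform, and simulate() would not exist (it is here for testing)
    static int theX = 0;
    static int theY = 0;
    static bool theLeftPressed = false;

    void simulate(const int aX, const int aY, const bool aPressed)
    {
        theX = aX;
        theY = aY;
        theLeftPressed = aPressed;
    }

    int cursorX() { return theX; }
    int cursorY() { return theY; }
    bool leftButtonPressed() { return theLeftPressed; }
}
```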

Consider this implementation of button():

const bool Gui::button(const int aX, const int aY,
                       const int aWidth, const int aHeight,
                       const char* aText)
{
    drawRect(aX, aY, aWidth, aHeight);
    drawText(aX, aY, aText);

    return mouse::leftButtonPressed() &&
           mouse::cursorX() >= aX &&
           mouse::cursorY() >= aY &&
           mouse::cursorX() < (aX + aWidth) &&
           mouse::cursorY() < (aY + aHeight);
}

As you can see, this implementation is quite trivial.
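The rectangle hit test at the heart of button() can be extracted and exercised in isolation (the function name regionHit is mine, not part of the Gui interface above); note the asymmetric bounds: the left / top edges are inclusive, the right / bottom edges exclusive.

```cpp
//the hit test from button(), with the mouse position passed in
//explicitly instead of polled
bool regionHit(const int aMouseX, const int aMouseY,
               const int aX, const int aY,
               const int aWidth, const int aHeight)
{
    return aMouseX >= aX &&
           aMouseY >= aY &&
           aMouseX < (aX + aWidth) &&
           aMouseY < (aY + aHeight);
}
```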

Implementing edit boxes

Edit boxes are slightly more complicated than buttons due mainly to the issue of input focus, which is required to support several edit boxes on the screen at the same time.

void Gui::edit(const int aX, const int aY, String& aString)
{
    if(&aString == myEditInstance)
        activeEdit(aX, aY, aString);
    else
        passiveEdit(aX, aY, aString);
}

Gui maintains a pointer to the string that is currently being edited in order to retain focus across frames. This is essentially a simple "blind data handle" (const void*) and only used by Gui as an identifier of external (application) context.

Based on this focus information, each edit box is either active (only one at any given time) or passive. The implementation of activeEdit() is omitted here in the interest of brevity, but can be seen in full in the Direct3D example.

void Gui::passiveEdit(const int aX, const int aY, const String& aString)
{
    if(radio(false, aX, aY, aString))
        myEditInstance = &aString;
}

A passive edit box is here implemented as an inactive radio button, where clicking on it will set it to be in focus, making it active.
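While the full activeEdit() is omitted above, its core character handling can be sketched independently of any real input API (the function below is an assumption of mine, not the actual Direct3D example code): printable characters are appended to the application's string, and backspace removes the last character. The real implementation would also poll the keyboard, draw the box and caret, and release focus on enter.

```cpp
#include <string>

//a minimal sketch of the character handling inside an active edit box;
//aString is the application's own string, edited in place
void activeEditChar(std::string& aString, const char aChar)
{
    if(aChar == '\b')
    {
        //backspace removes the last character, if any
        if(!aString.empty())
            aString.erase(aString.size() - 1);
    }
    else if(aChar >= ' ' && aChar <= '~')
    {
        //printable ASCII is appended
        aString += aChar;
    }
}
```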

The implementation of edit boxes highlights an important detail of IMGUIs; they are not completely stateless. However, they need only retain enough state to handle a single interaction at a time. This is a key detail of how IMGUI can get away without widget objects; they logically have some internal "widget state", but only a single copy of what is needed for each supported widget type, since the user can only interact with a single widget at any given time.

Implementing display

When it comes to the actual display of the widget, there are some different approaches to consider, depending on a few aspects of your application needs.

In the above interaction examples, the internal methods drawRect() and drawText() are called before the actual interaction is calculated and returned. What these methods actually do depends on a number of factors, as we shall see below. In any case, care must be taken by the calling code not to create situations with overlap (these basic examples cannot handle overlapping widgets / windows in any way, but see the more advanced examples for how to implement this).

Direct display

If the calling code intends to do all user interface interaction and display in a single pass, and given that the platform itself supports it, you can basically have drawRect() and drawText() render directly to whatever underlying "canvas" you have.

Most software based 2d drawing libraries (for example GDI on Win32) will support this type of implementation, as they most often have the concept of a "drawing canvas" and are typically intrinsically insensitive to drawing order or the number of drawing calls made.

Deferred display

In situations where display needs to be more controlled, deferred display can be used. This basically boils down to having drawRect() and drawText() log "drawing events" to some list, and later have the application traverse this list and draw it appropriately.

Situations where this is appropriate include most hardware accelerated applications, where the underlying APIs are optimized for batching of similar primitives. For example, my Direct3D based implementations typically have a vertex buffer in the Gui class which each drawRect() call writes to, and then when it is time to draw the user interface (usually the last thing to be drawn), the client code calls Gui::draw() to flush the internal cache of "draw events". Note that this cache only persists for the duration of a single frame, and facilitates effective batching of primitives.
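A minimal sketch of this deferral, assuming a simple event list rather than an actual vertex buffer (the DrawEvent / DeferredGui names are illustrative): widget methods only log events, and draw() flushes them once per frame.

```cpp
#include <string>
#include <vector>

//one logged "draw event"; width / height are unused for TEXT,
//text is unused for RECT
struct DrawEvent
{
    enum Type { RECT, TEXT } type;
    int x, y, width, height;
    std::string text;
};

class DeferredGui
{
public:
    void drawRect(const int aX, const int aY,
                  const int aWidth, const int aHeight)
    {
        myEvents.push_back({DrawEvent::RECT, aX, aY, aWidth, aHeight, ""});
    }

    void drawText(const int aX, const int aY, const std::string& aText)
    {
        myEvents.push_back({DrawEvent::TEXT, aX, aY, 0, 0, aText});
    }

    //flush the per-frame cache; here each event would be submitted to the
    //renderer (e.g. written to a vertex buffer). returns the number of
    //events flushed, which is convenient for testing
    size_t draw()
    {
        const size_t count = myEvents.size();
        myEvents.clear();
        return count;
    }

private:
    std::vector<DrawEvent> myEvents;
};
```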

Another reason to use deferred display is if your application separates input from output, as many games do. In the typical loop of input() / update() / output(), the "widget calls" and the corresponding reactions to the resulting interactions would go in input(), while the actual display of the buffered widgets would go in output(), often displayed last (on top of any 3d stuff you are displaying).

Implications and tradeoffs

The Style and/or Layout issue

In the examples covered, the issue of layout (where the widgets are positioned on the screen and their sizes) is entirely the responsibility of the calling code. In some cases this is entirely appropriate, for example when the user interface needs to be very dynamic and / or animated.

In other cases, finer aesthetic control is required, for example when artists or other non-programmers need to be in control of widget placement and appearance. Consider the following:

class Gui
{
public:

    Gui(const LookAndFeel& aLookAndFeel);
    const bool button(const Layout& aLayout, const char* aText);
};

By parameterizing Gui itself with a "look and feel" which encapsulates details of widget appearance, any imaginable data-driven visualisation approach can be supported.

Also, by objectifying layout information for each widget call, data-driven layout objects can be used, perhaps controlled by an external editor and loaded at application startup.

Gui myGui;
int myChoice(0);
Layout myLayouts[5];

void doSomeUserInterface()
{
    int i;

    for(i = 0; i < 5; i++)
    {
        if(myGui.radio(myChoice == i, myLayouts[i]))
            myChoice = i;
    }
}

As you can see, this cleans up the code significantly while still allowing the code to control the "existence" of widgets at any given time. It is important to note that this isn't the same concept as fully objectifying widgets; we are only objectifying their layout information.

This approach can be extended, in true Immediate Mode style, to build layout editing right into the Gui class. By enabling a way for the user to toggle "layout edit mode", perhaps by simply holding down the SHIFT key, click-and-drag style editing of layouts could be supported (see the section on advanced features for details on implementing this).

TODO: actually write this information down!

Frame shearing

One aspect of IMGUI to be aware of in the context of real-time applications (constantly rendering new frames many times per second) is that user interactions will always be in response to something that was drawn on a previous frame. This is because the user interface must be drawn at least once for the user to be aware that there are widgets there to be interacted with. Most of the time this doesn't cause any problems if the frame rate is high enough, but it is something to be aware of.

There is a chance that the result of any given widget interaction changes some application state that controls the appearance of the user interface itself, and such discrepancies can result in parts of the user interface reflecting the "old" state while some reflect the "new" state. I call this "frame shearing", in that the displayed image represents parts of two different logical images at once.

Again, in real time cases (30 fps or higher) this is most often simply not apparent to the user, but if you have a case where you don't want to / aren't able to display the gui at interactive rates (maybe you have some intricate caching scheme going on) you need to take certain precautions (one example of this is using IMGUI techniques in web applications, see that section for more details).

The main technique to utilize is to have any code that changes the appearance of the user interface generate a "shearing exception" which breaks out of the method that generates the gui for the current frame and restarts the entire process for that frame. Theoretically a "shearing exception" must be thrown for each interaction that could change the appearance of the user interface, but in practice this usually happens at most once per frame (i.e. the gui is generated in full at most twice for any given frame). As the application owns all state, it must explicitly throw shearing exceptions when changes to such state are made.
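The technique can be sketched as follows, using a hypothetical generateGui() whose appearance depends on a single boolean of application state (all names here are illustrative): when that state changes mid-generation, a ShearingException aborts the pass and the gui is regenerated from the top for the same frame.

```cpp
//thrown when gui generation changes state that affects gui appearance
struct ShearingException {};

static bool theShowExtras = false;  //application state controlling appearance
static int theGenerations = 0;      //how many generation passes have run

void generateGui(const bool aClickedThisFrame)
{
    ++theGenerations;

    if(aClickedThisFrame && !theShowExtras)
    {
        //this interaction changes the appearance of the gui itself, so
        //abort and regenerate rather than display a sheared frame
        theShowExtras = true;
        throw ShearingException();
    }

    if(theShowExtras)
    {
        //...do the extra widgets...
    }
}

void doFrame(const bool aClickedThisFrame)
{
    for(;;)
    {
        try
        {
            generateGui(aClickedThisFrame);
            return; //a full, consistent gui was generated
        }
        catch(const ShearingException&)
        {
            //appearance changed mid-generation; rebuild from the top
        }
    }
}
```

In the worst case a single frame runs generateGui() twice, which matches the "at most twice per frame" observation above.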

Advanced features

Some applications will get by with the basic widgets outlined so far, while others will require more advanced widgets and features. Widget types like combo-boxes, tree controls, sliders, and edit boxes, which are intrinsically more complex than (for example) buttons, are sometimes useful and are indeed possible to implement in IMGUI without too much trouble.

NOTE: due to their inherent complexity, these advanced features are intended to be accompanied by real working code examples.

How advanced do we want to be?

Before getting into the details of such widgets, it is at this point worth mentioning that many of these more complex widget types are related to saving screen real-estate (combo-boxes, tree-controls, sliders and scrolling functionality in general). I speculate that the original reason for the existence of such functionality is directly related to the difficulty / expense of changing and / or creating and destroying the user interface dynamically.

With IMGUI, widget existence is controlled by method invocations, and layout can be completely dynamic and procedural. This means that the contents of the entire screen can change on a per frame basis, effectively making the amount of screen real-estate infinite. This means that there is no real need for overlapping windows, scrolling views, etc. Alternatives to scrolling include "pages" of items, closely related to using tabs / property sheets in place of multiple windows.
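The "pages" alternative mentioned above reduces to simple index arithmetic over the application's own collection; a sketch (function names are mine, purely illustrative), where each frame only the current page's items would get widget calls:

```cpp
#include <string>
#include <vector>

//number of pages needed to show aNumItems, aPageSize items per page
size_t pageCount(const size_t aNumItems, const size_t aPageSize)
{
    return (aNumItems + aPageSize - 1) / aPageSize;
}

//the slice of items visible on page aPage; the calling code would
//"do a widget" per returned item, plus next / previous page buttons
std::vector<std::string> itemsOnPage(const std::vector<std::string>& someItems,
                                     const size_t aPage,
                                     const size_t aPageSize)
{
    std::vector<std::string> result;
    const size_t begin = aPage * aPageSize;

    for(size_t i = begin; i < begin + aPageSize && i < someItems.size(); i++)
        result.push_back(someItems[i]);

    return result;
}
```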

From a usability standpoint, I personally feel that user interfaces which don't heavily rely on scrolling or multiple overlapping windows are more productive. However, user expectations and established user interface standards are still important for a large number of applications. Therefore, we now look at some more advanced widget types.

Tree controls

Tree controls can basically be reduced to a "node" widget that can be expanded and collapsed. In order to make this friendly for the client application, an application handle / identifier needs to be passed to the call.

for(c = 0; c < NUM_CATEGORIES; c++)
{
    //returns true if node is expanded
    //application passes a const void* to identify the node
    //across frames (first param, can be anything which is
    //unique and constant across frame boundaries)
    //actual expand/collapse state is stored inside the gui
    //in a map (const void* <-> bool)
    if(myGui.node(CATEGORY_NAMES[c], x, y, CATEGORY_NAMES[c]))
    {
        //do gui for expanded node
    }
}

The Gui class supports this by maintaining an internal collection of mappings between const void* and bool, which encapsulates the expand / collapse state of each logical node. This relieves the client application of the need to keep gui-related state associated with application data items.

bool Gui::node(const void* aHandle, const int aX, const int aY, const char* aLabel)
{
    bool& h(handleState(aHandle));
    String s;

    s.format(h ? "-%s" : "+%s", aLabel);
    if(doRadio(h, aX, aY, s))
        h = !h;

    return h;
}

Combo boxes

Combo boxes / drop-down lists are very similar to tree controls; they can be implemented using the same handle concept. The Gui manages the expand / collapse functionality, and can additionally support the disabling of all other widget interactions when a combo box is expanded, in order to support overlap (of the expanded list over other widgets).

//calling code
static const char* COMBO_CHOICES[] =
{
    "orange",
    "pink",
    "red",
    "blue",
    NULL,
};
unsigned int myChoice(0);

myGui.combo(myChoice, 64, 64, COMBO_CHOICES);


//implementation
void Gui::combo(unsigned int& aChoice, const int aX, const int aY, const char** someChoices)
{
    bool& h(handleState(&aChoice));

    //expanded
    if(h)
    {
        //current choice
        if(doButton(aX, aY, someChoices[aChoice]))
            h = false; //same choice

        //list
        unsigned int c(0);
        int y(aY);
        while(someChoices[c]) //terminate on NULL
        {
            if(doRadio(c == aChoice, aX, y += buttonHeight(), someChoices[c]))
            {
                aChoice = c;
                h = false;
            }
            c++;
        }
    }
    //collapsed
    else
    {
        if(doRadio(h, aX, aY, someChoices[aChoice]))
            h = true;
    }
}

Sliders / scrollbars

Sliders / scrollbars, when broken down into pieces, are really just a background rectangle, an optional text to display the current value, as well as a rectangle that acts as the "drag handle", positioned in the appropriate place and with the appropriate size to denote the current value. The Gui retains a float* to the current value (set by clicking inside the drag handle and updated by moving the mouse with the button down) in order to keep track of which value to update.

void Gui::horizontalSlider(const float aMax, const float aRange, float& aValue,
                           const int aX, const int aY,
                           const int aWidth, const int aHeight)
{
    const int SIZE((int)(aWidth * aRange / aMax));

    //if this is the current one, update the value
    if(&aValue == myScrollValue)	//myScrollValue is a float*
    {
        aValue += sliderX() / (float)(aWidth - SIZE) * aMax;
        if(aValue < 0.f)
            aValue = 0.f;
        else if(aValue > aMax)
            aValue = aMax;
    }

    //display the value as text
    text(aX + aWidth / 2, aY, String::format("%.1f", aValue));

    //draw a background
    rect(aX, aY, aWidth, aHeight, true);

    //do the dragabble thing (the slider)
    if(rect(aX + (int)((aWidth - SIZE) * aValue / aMax), aY, SIZE, aHeight, true) &&
        !myScrollValue &&
        isButtonDown(VK_LBUTTON))
    {
        myScrollValue = &aValue;
    }
}
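The value update at the top of horizontalSlider() can be isolated and checked by hand. Here the mouse delta is passed in explicitly instead of via sliderX() (which is assumed to return the horizontal mouse movement in pixels since the last frame):

```cpp
//the slider's value update in isolation: convert a horizontal mouse
//delta (in pixels) into a change of the value, clamped to [0, aMax];
//aHandleSize corresponds to SIZE in horizontalSlider() above
float updateSliderValue(float aValue, const int aDeltaX,
                        const int aWidth, const int aHandleSize,
                        const float aMax)
{
    aValue += (float)aDeltaX / (float)(aWidth - aHandleSize) * aMax;

    if(aValue < 0.f)
        aValue = 0.f;
    else if(aValue > aMax)
        aValue = aMax;

    return aValue;
}
```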

Drag and drop

Drag and drop is similar to edit boxes in that there is a passive and an active mode of operation for each "draggable widget". For display reasons, it is often useful to keep a separate text and rect for the thing being dragged, in order to be able to display it on top (drawn last) of all the other widgets, as support for overlapped rendering is required in this case (see the implementation of doDrag()).

const bool Gui::drag(const void* aHandle,
                     const int aX, const int aY,
                     const int aWidth, const int aHeight,
                     const char* aText)
{
    if(aHandle == myDragHandle)
    {
        //do drag
        doDrag(aWidth, aHeight, aText);

        //being dragged
        return true;
    }
    else
    {
        if(aText)
            text(aX, aY, aText);

        if(rect(aX, aY, aWidth, aHeight, true) &&
            !myDragHandle &&
            isButtonDown(VK_LBUTTON))
        {
            //setup drag
            myDragHandle = aHandle;
            beginDrag(aX, aY, aWidth, aHeight, aText);

            //being dragged
            return true;
        }
    }

    //not being dragged
    return false;
}

void Gui::beginDrag(const int aScreenX, const int aScreenY,
                    const int aWidth, const int aHeight,
                    const char* aText)
{
    const POINT CP(cursorPos());

    myDragPos.x = CP.x - aScreenX;
    myDragPos.y = CP.y - aScreenY;

    doDrag(aWidth, aHeight, aText);
}

void Gui::doDrag(const int aWidth, const int aHeight, const char* aText)
{
    const POINT CP(cursorPos());
    const int X(CP.x - myDragPos.x), Y(CP.y - myDragPos.y);

    if(aText)
        myDragText.set(X, Y, aText);

    myDragRect.set(X, Y, aWidth, aHeight);
}

Keyboard support

Hotkeys / accelerators

Supporting hotkeys / accelerators is quite easy; it is simply a matter of adding a keycode parameter to your various widget methods.

class Gui
{
public:

    const bool button(const int aKeyCode,
                      const int aX, const int aY,
                      const int aWidth, const int aHeight,
                      const char* aText);
};

Next, in the code that checks for a click inside the widget, also include a check for the key being pressed. Note that the mouse cursor needn't be inside the widget in order for the keypress to be registered.

const bool Gui::button(const int aKeyCode,
                       const int aX, const int aY,
                       const int aWidth, const int aHeight,
                       const char* aText)
{
    drawRect(aX, aY, aWidth, aHeight);
    drawText(aX, aY, aText);

    return (
            mouse::leftButtonPressed() &&
            mouse::cursorX() >= aX &&
            mouse::cursorY() >= aY &&
            mouse::cursorX() < (aX + aWidth) &&
            mouse::cursorY() < (aY + aHeight)
            ) ||
            keyboard::keyPressed(aKeyCode);
}

Tab support

For some guis it might be useful to support tabbing (changing the widget that has the input focus by pressing the TAB key) between various input fields in a form.

A trivial way to support this is to have the edit box method check whether there is currently no active instance (of edit box), and if so automatically grab the "focus" by making the current instance the active one.

Next, we have a press of the TAB key cause the current focus to be lost. Since each edit box method invocation checks for NULL, the next edit box that is called will automatically grab the focus.

void Gui::edit(const int aX, const int aY, String& aString)
{
    if(!myEditInstance)
    {
        myEditInstance = &aString;
    }
    else
    {
        if(keyboard::isPressed(VK_TAB))
            myEditInstance = NULL;
    }

    if(&aString == myEditInstance)
        activeEdit(aX, aY, aString);
    else
        passiveEdit(aX, aY, aString);
}

If you want to support tabbing between all types of widgets, and for example allow buttons to be "pressed" using the spacebar as some guis do, you can do this in a similar manner. Note however that you will need to pass some kind of application id to the button methods as well (not just the edit box methods), as the concept of focus must be extended to cover all widget types.

Conclusion

As we have seen, IMGUI techniques are a powerful way to increase flexibility in your applications while reducing complexity, code size, and nasty cache-related bugs.

How this ties into the rest of the book...

Together with an Immediate Mode MVC approach as well as relational-style persistence schemes, IMGUI techniques empower you to create a whole new level of productivity in your applications, especially in the fields of games and game editors. In general, "there is no excuse anymore" for not building custom editors right into your game applications. With the capability to instantly switch between playtesting your game and editing some aspect of it, your productivity soars, and your product is better as a result.

Examples

GDIplus test, including tree controls, combo boxes, sliders, and drag-and-drop

DXUT gui, including color picker
