Better CSS sprites

First of all, I should admit that the current way of doing CSS sprites looks like a hack.

OK, what exactly is a CSS sprite? (Actually, “sprite” is the wrong name for this entity, but it seems to be widespread already, sigh.)

A sprite (here) is just a fragment (slice, portion) of some base image. The image is loaded once, but its fragments are used in various places as separate images (or pretend to be such).

Here is how sprites are done at the moment with standard CSS. As an example, take a typical use case: a toolbar with a list of buttons, each having its own icon.

First we need to define a common style for all our buttons:

  .toolbar li a { width: 25px; height: 25px; display: block; background:no-repeat url(tb-icons.png); }

And now for each button we need to define a negative offset that scrolls the needed portion of the image into the “view” of the current element, like this:

.toolbar li a.btn-formatting { background-position: -25px 0; }
.toolbar li a.btn-bold       { background-position: -50px 0; }
.toolbar li a.btn-italic     { background-position: -75px 0; }
.toolbar li a.btn-font-size  { background-position: -125px 0; }

Because of the negative image position, the “font-size” button, for example, is rendered like this:

So far so good – it works, somehow.

But now imagine that one of your lucky users has a device with a 225 DPI screen. You will need to give them an image with larger icons for a better experience – images designed for 96 DPI may look horrible when scaled up to 225 DPI by the browser.

So you will end up creating a new set:

@media ... {
  .toolbar li a { width: 25px; height: 25px; display: block; background:no-repeat url(tb-icons-x2.png); }
  .toolbar li a.btn-formatting { background-position: -41px 0; }
  .toolbar li a.btn-bold       { background-position: -96px 0; }
  .toolbar li a.btn-italic     { background-position: -143px 0; }
  .toolbar li a.btn-font-size  { background-position: -246px 0; }
}

All this is not pretty and is badly manageable: you need to recalculate positions each time. And there are many other problems with such a “poor man’s” sprite approach. For example, you cannot position the image in the middle of your button – everything is nailed down to the sprite size initially chosen.

Thus I’ve decided to make this image-catalog mechanism better in the Sciter2 engine by introducing the @image-map at-rule and the image-map() function. Here is how that toolbar declaration looks with those two.

First of all here is our image map declaration:

      @image-map tb-icons 
      {
        /* we define three images under a single logical entity */
        src:   url(tb-icons.png) 120dpi,    /* <= 120dpi */
               url(tb-icons-x2.png) 240dpi, /* <= 240dpi */
               url(tb-icons-jumbo.png);     /* the rest  */

        cells: 15 2;                        /* 15 columns, 2 rows in the image */

        /* logical names of the parts, see tb-icons.png */ 
        items: bold, italic, underline, strike,
               font-family, font-size, text-color, text-back-color;
      }

And here is how they are used. First, the common style of the buttons:

.toolbar > button {
  background:no-repeat 50% 50%; padding:3px; /* note - the image is middle aligned */
}

And particular items are used just as ordinary images, referenced by their logical names:

.toolbar > button.bold      { background-image:image-map(tb-icons,bold); } 
.toolbar > button.italic    { background-image:image-map(tb-icons,italic); } 
.toolbar > button.underline { background-image:image-map(tb-icons,underline); } 
.toolbar > button.strike    { background-image:image-map(tb-icons,strike); } 

Note that when you need to support another resolution, you don’t need to redesign your CSS – just change the @image-map declaration.

And here is a semi-formal specification of the image-map feature.

Sciter UI, application architecture

The architecture of applications that use a Sciter-based UI can be visualized like this:

Typically such applications contain two distinct layers:

  • UI layer that uses a Sciter window with loaded HTML/CSS and scripts (the code behind the UI);
  • Application logic layer – most of the time this is native code implementing the logic of the application.

Ideally these two layers shall be split apart – isolated from each other – as they use conceptually different code models and probably code styles.

The UI layer uses an event-driven model: “on click here, expand a section there and send a request to the logic layer for some data”.

The application logic layer (ALL) is usually more linear. It is a collection of functions that accept some parameters and return some data. Even if the ALL uses threads, the code inside such threads is still linear.

UI and app-logic interaction principles:

Most of the time, code execution in UI applications is initiated by the UI itself, but sometimes application code may generate its own events. For the UI such events are no different from pure UI events like mouse/keyboard clicks and the like. The informational flow between the UI and the ALL conceptually falls into three groups:

  1. "get-request" – synchronous calls from the UI to the logic layer to get some data;
  2. "post-request" – asynchronous calls with callback "when-ready" functions;
  3. "application events" – the application detects some change and needs to notify the UI to reflect it somehow.

To support all these scenarios an application needs only two "entry points":

  • UI-to-logic calls: event_handler::on_script_call(name, args, retval);
  • logic-to-UI calls: sciter::host::call_function(name, args) – calls a scripting function from C/C++ code. The name here can be a path: "namespace.func".


To handle UI-to-logic calls the application defines a sciter::event_handler and attaches its instance to the Sciter window (view). Its on_script_call method will be invoked each time script executes code like this:

view.getSomeData(param1, param2);

that will end up in this C/C++ call:

  event_handler::on_script_call( 
         "getSomeData" /* name */, 
         2 /*argc*/ , 
         argv /* argument values: param1, param2 */, 
         retval /* return value */ );

The Sciter SDK contains a convenient macro wrapper/dispatcher for such on_script_call functions:

  class window
    : public sciter::host<window>
    , public sciter::event_handler
  {
    HWND   _hwnd;

    json::value  debug(unsigned argc, const json::value* arg);
    json::value  getSomeData(json::value param1, json::value param2);

    BEGIN_FUNCTION_MAP
      FUNCTION_V("debug", debug);
      FUNCTION_2("getSomeData", getSomeData);
    END_FUNCTION_MAP
  };

The declaration FUNCTION_2("getSomeData", getSomeData); binds view.getSomeData() in script to the native window::getSomeData call.

Therefore the functionality exposed by the logic layer to the UI layer can be defined as the content of a single BEGIN_FUNCTION_MAP/END_FUNCTION_MAP block.

If your application contains many modules that are connected dynamically, you can define a single view.exec("path", params...) function that will do name/call dispatch using some other principle:

var newAccount = view.exec("accounts/new", initialBalance);
view.exec("accounts/delete", accountId);
view.exec("accounts/update", {customerName:"new name"} );

application events

The application can generate events by itself: when some condition or state inside the application changes, it may want to notify the UI about it. To do that, the application code can simply call a function in a script namespace with the needed parameters.

Let’s assume that the script has the following declaration:

namespace Accounts 
{
  function created( accountId, accountProps ) {
     // appends a new item to the #accountList
  }
  function deleted( accountId, accountProps ) {
     $(#accountList li[accid={accountId}]).remove();
  }
}

Then the application code can fire such events by simply calling:

window* pw = ...
pw->call_function("Accounts.created", accId, accFields );
pw->call_function("Accounts.deleted", accId );


post requests

The need for a post-request arises when some work needs to be done inside worker threads: some tasks either take too long to complete, or the data for them needs to be loaded from the Net or other remote sources. The UI cannot be blocked for a long time – it shall stay responsive. The same situation happens in Web applications when JavaScript needs to send an AJAX request. In such cases callback functions are used: the call to the native code includes a reference to a script function that will be executed when the requested data is available.

Consider this UI script function that asks the app-logic to create some account on a remote server:

function createAccount( accountProps ) 
{
    function whenCreated( accountId ) // inner callback function
    {
      // reflect the new account in the UI
    }
    view.exec("accounts/create", accountProps, whenCreated );
}

It passes the accountProps data and a callback function reference to the "accounts/create" thread. The thread creates the account (which presumably takes some time) and invokes whenCreated at the end:

class createAccount: worker_thread 
{
    handle<window> ui;
    SCITER_VALUE props;
    SCITER_VALUE callback;

    void run()
    {  // the thread body
       // ... do some time consuming stuff ...

       SCITER_VALUE accountId = createAccount(props);

       // done, execute the callback in the UI thread:
       ui->ui_exec([=]() { 
          callback.call(accountId);
       });
    }
};

A note about the ui_exec function above: the UI is single-threaded by its nature – a single display device, a single keyboard and mouse, etc. Worker threads shall not access the UI directly – the UI shall be updated from the UI thread only. The ui_exec function does just that – it executes a block of code in the UI thread. See the C++0x: Running code in GUI thread from worker threads article about it.


Having just two "ports" (outbound UI-to-logic and inbound logic-to-UI) is a good thing in principle. It effectively isolates two different worlds – the asynchronous UI and the deterministic application logic. Easily "debuggable" and manageable.

HTML, CSS and script (the code behind the UI) run in the most natural mode, and the application core is comfortable too – it is not tied to the UI and its event and threading model.

Caret positions in HTML

Working on behavior:richtext again. This time for Sciter2.
The behavior:richtext is the thing behind Sciter’s <richtext> element or <div contenteditable> in Web browsers.

behavior:richtext in Sciter1 uses a “flat” DOM model – div:element, paragraph:element – similar to the RichTextBox in Windows.
But in Sciter2 the new behavior:richtext uses the standard DOM model of HTML content: element:node, text:node, comment:node.
That change requires me to rethink the concept of caret positions again.

Consider this markup:

<p>12<b>34</b><i>56</i></p>
That is rendered as a single line of six characters: 12 in plain text, 34 in bold, 56 in italics.


The question is: how many caret positions are there in this paragraph?

All known contenteditable implementations will give you 7 caret positions here: before 1, between 1 and 2, …, after 6. So they follow the approach used by WYSIWYG editors like Word and others.

Now consider this task: insert the text "AB" into that paragraph between ‘4’ and ‘5’ so that it goes into one of these locations:

  1. Inside <b> : 1234AB56 – 12<b>34AB</b><i>56</i>;
  2. Inside <i> : 1234AB56 – 12<b>34</b><i>AB56</i>;
  3. Between <b> and <i> : 1234AB56 – 12<b>34</b>AB<i>56</i>;

The problem, as we can see: a single visual caret position actually represents at least three physical DOM insertion points.

That kind of problem is typical for WYSIWYG editing implementations. In flat models, where text is just a sequence of "styled characters", this is probably not a major issue, but in HTML WYSIWYG something needs to be done, I think.

In Sciter1 I am using a "directional" caret approach – physical caret locations depend on the direction from which the caret arrives at the position:

  • so when you move the caret rightwards, it stops at these locations: <b>3|4|</b><i>5|6|</i>
  • and when it comes from the left: <b>|3|4</b><i>|5|6</i>

This covers cases #1 and #2 above, but #3 is still not covered. Something needs to be done here; still thinking.

Actually this kind of problem is not only about characters inside text and spans. It also shows up in cases like this:

<ul>
  <li>first</li>
  <li>second</li>
</ul>
<pre>some code</pre>

What would you do if you need to insert a paragraph with text between the list and the <pre> block:

<ul>
  <li>first</li>
  <li>second</li>
</ul>
<p>some text</p>
<pre>some code</pre>

You can try it here in your browser:

  • first
  • second
some code