Planet KDE

Rainbow Folders

Mon, 2016/08/08 - 9:37pm

Breeze icons have followed the color scheme for a while; that's not new. What is new is that the folder icons now follow the color scheme too.

This is how it looks with the Breeze color scheme:


As you can see, there is no difference between the old and new folder icons, but if you play with different color schemes in the color settings, …


you get openSUSE, Manjaro, Krita, Ubuntu, Elementary, Arch Dark, or any other style you design yourself. So don't stop playing with your Plasma desktop; it's designed for your flavor.

My experiences with SOCIS 2016

Mon, 2016/08/08 - 4:00pm

Hello dear readers!

This post is a small synopsis of my experiences so far as a student in this year's Summer of Code in Space, in which I shall recount the whole adventure of integrating Sentinel-2 data into Marble Virtual Globe.

Sentinel-2 data.
So what exactly is this data, and why is it important to us? Well, Copernicus is the world's largest single Earth observation programme, directed by the European Commission in partnership with the European Space Agency (ESA). ESA is currently developing seven missions under the Sentinel programme. Among these is Sentinel-2, which provides high-resolution optical images that are of interest both to the users of Marble and to the scientific community as a whole.

Our goal with this year's SOCIS was to bring this data into Marble. Since Marble has quite a track record of being used in third-party applications with great success, this would essentially be a gateway for many developers to get easy access to high-quality images through the Marble library.
So first order of business? Adapt the world. The summer has begun to get exciting. 
First Acquaintance with Marble   
Of course, nothing happens that quickly, and the first task was naturally on a smaller scale. In order to familiarize myself with the inner workings of Marble, my mentor gave me the task of adapting an already available dataset into a map theme for Marble. This is how I came to know TopOSM.

The TopOSM maptheme in Marble.

This task came with its fair share of challenges, from getting Marble to display the legend icons correctly to creating a suitable level 0 tile, but in the end it gave an insight into exactly how a map theme is created, from the first steps to uploading. At this point the challenge was underway, and so began the real part of our ambitious project: to tackle the whole world through Sentinel-2's lens and integrate it into Marble.
Sentinel-2 - From drawing board to tilerendering
After many discussions with my mentor regarding ideas on how to make the data suitable for use in Marble, we finally came up with a plan: use the currently available Data Hub as the source for our images (since, unlike with TopOSM, we don't have a simple server we can just fetch the data from). From there, we just have to edit these images into a suitable format, and everything will be fine. A three-step process:
Step 1. Download some data.
Step 2. Edit it.
Step 3. Use it in Marble.
As you may have guessed, step 2 was to be the troublesome one. Around this time my mentor came up with the first iteration of the guide for this “three-step” process. We also found applications that would suit our editing needs: QGIS, complemented by GDAL.
The now mostly finalized guide can be found here; these were the original steps:
Step 1. Find some suitable data (few clouds, not very dark, etc.) on the Data Hub and download it.

The Data Hub.

This step promised to be the least troublesome, since what do you need? A good internet connection (check?) and hard drive space, since each dataset we download is about 4-7 gigabytes (also check). The only problem was that the downloads seemed to fail without warning, rhyme, or reason. One could move the mouse constantly and it might fail; one could leave the computer unattended and the same thing would happen, even though the previous 5 datasets had downloaded without a hitch.
It was quite a mystery, but thankfully the browser could restart the download (after refreshing the page and logging in again, to be sure). Another helpful site was an archive where some of the more recent datasets were uploaded. These could be downloaded with wget without any issues, so the troublesome downloading (the Data Hub only allows two concurrent downloads at a time) was more or less solved.
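The refresh-and-restart routine can be captured in a generic retry wrapper. This is just a sketch of the approach, not the actual script used in the project; the helper names are made up:

```python
import time

def download_with_retries(fetch, attempts=5, delay=1.0):
    """Call fetch() until it succeeds or we run out of attempts.

    fetch is any zero-argument callable that raises on failure
    (e.g. a wrapper around wget or urllib for one Data Hub product).
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as err:  # a failed or aborted download
            last_error = err
            time.sleep(delay * (2 ** attempt))  # back off before retrying
    raise RuntimeError("download failed after %d attempts" % attempts) from last_error
```

In practice the `fetch` callable would log in again and resume the transfer, mirroring the manual routine described above.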

Step 2.  Edit the data.

Here's how a few tilesets look when loaded, after the styles have been applied.

So you've finally downloaded your first dataset. The first thing you have to do is run a small script that generates VRT files from the images. These are then loaded into QGIS, where you need to apply a style to them (that's right-click on each layer, load style, and navigate to the style file; at first, the numbers stored in the style file were even entered by hand). Thankfully you only need to do that for the first layer and can then copy the style to the rest. Even with hotkeys, that's about 10 to 16 styles to apply. But now you can save your images in TIFF format: just right-click, save as, apply the correct settings, and… wait 1 to 4 minutes while it generates. Then do it all again for the other 15 layers. And for all the other datasets.
Any reader may well feel profound despair at this point; so did I, and most likely my mentor as well. We both, very rightly, felt that there had to be a better, faster, more efficient way.
Who knew that the whole solution for all this would be a script?
So from then on, I was knee-deep in the documentation to find exactly which classes store file-saving settings in QGIS (hint: it's this one). Applying the style was something I had seen done in plugins, so that was a good stepping stone.
A second great discovery was the fact that QGIS provides an easy way to generate the query window (so I didn’t have to meddle with the appearance, just finding the relevant settings in the documentation) through the processing toolbox.
So much easier.
Soon the first version of the script was ready: load the VRTs in QGIS, open the script window, select where you want to save and which styles you want to use, and presto. An hour later you might be done (with that dataset; on to the next!). My mentor was quite happy that you no longer had to sit there applying settings every 1 to 4 minutes, just once every half an hour or so (to load up the next batch). The sky, or in this case the processing power of your computer, was the limit.

The last step in editing involves the creation of the actual slippy map tiles. Thankfully, that was already available as a QGIS plugin (QTiles), so we didn't have to find another way. Tile creation, however, is a very slow process (it takes more than a day to process about 10 datasets), so this step is still a bit problematic. Even splitting the project into smaller sections doesn't help much speed-wise, but it's understandable: there are 15 levels to create, with level 0 being a single image of the entire Earth, level 1 being four images, level 2 being those four split into four again, and so on, so at level 14 a huge number of tiles is being generated. Such as it is.
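Just how huge is easy to verify with back-of-the-envelope arithmetic, since each zoom level quadruples the tile count:

```python
def tiles_at_level(level):
    # A slippy map quadruples the tile count with each zoom level:
    # level 0 is one tile covering the whole Earth, level 1 has 4, etc.
    return 4 ** level

total = sum(tiles_at_level(l) for l in range(15))  # levels 0..14
print(tiles_at_level(14))  # 268435456 tiles at the deepest level alone
print(total)               # 357913941 tiles over all 15 levels
```

With numbers like that, a day per 10 datasets is no surprise.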

Step 3. Upload and use the tiles in Marble.
This step is fairly obvious: you upload your freshly generated tiles to the Marble servers, and soon you can see the fruits of your labour with your own eyes. As of this post, more than 70 tilesets have been generated and uploaded, but there's still a long way to go.
For now I'm just glad to say that I've had a wonderful experience this summer here at Marble, with helpful mentors around who welcomed me into the community, heard all my issues and tried to support me whenever I got stuck. Overall I'm really happy that I took part, because I learned a lot about communication, project management and problem solving, both on my own and with help. Of course, this is just the beginning, and I hope to become much more productive and helpful in the future. I wish you all a great summer and many great experiences.

Qt Creator 4.1 RC1 released

Mon, 2016/08/08 - 10:06am

We are pleased to announce the release of Qt Creator 4.1.0 RC1.

Read the beta blog post for an overview of the new features coming in the 4.1 release. Since then we have been fixing bugs and polishing the new features. We believe that what we have now is very close to what we can release as the final 4.1 release soon.

A few visual improvements since the Beta are the addition of the dark and light Solarized editor color schemes, a new “Qt Creator Dark” editor color scheme as a companion for the new Flat Dark theme, and polish for the new Flat Dark and Flat Light themes.

See some items of the above list in action:
Flat Dark Theme - Qt Creator 4.1
Flat Light Theme - Qt Creator 4.1

Now is the best time to head over to our download page or the Qt Account Portal, install the RC, and give us your feedback, preferably through our bug tracker. You can also find us on IRC in #qt-creator, and on the Qt Creator mailing list.

Known issue #1
Unfortunately, an incompatibility between MSVC 2015 Update 3 and Clang 3.8 causes an issue with the Clang Static Analyzer plugin. We will try to iron this out before the final release.

Known issue #2
The changes-4.1.0 file in the source package does not cover what happened (and who additionally contributed) between 4.1.0-beta1 and this 4.1.0-rc1. Here is the updated changes file:

The post Qt Creator 4.1 RC1 released appeared first on Qt Blog.

GCompris release 0.61 and reorganization

Sun, 2016/08/07 - 9:43pm

Some of you are aware that I (Bruno) have a new “day” job and no longer have time to be active on GCompris. I created this project in 2000 and have maintained it since then. So this release note is important to me, because it will also be my last one. From now on, releases will be handled by Johnny Jazeix.

As I cannot be active anymore, it would not make sense for me to continue selling GCompris. After investigating several options, and in accordance with the most active members of our community, I decided to transfer the business operation to Timothée Giet. Timothée is a graphic artist with a long list of contributions to GCompris and to the KDE community. From now on, all sales go to Timothée and he is in charge of the commercial support. There are no changes in the licensing nor in the development process, which remains under the responsibility of the KDE community (Johnny Jazeix being the most active contributor).

Concerning the release notes, the new activities are:

  • A simple word processor (baby_word_processor, by Bruno Coudoin). This activity is not available on mobile platforms due to issues with the virtual keyboard: if we only use the real keyboard, styles don't apply, and if we only use the virtual keyboard, we can't navigate in the text, select it…
  • A tangram (by Bruno Coudoin). It starts with child-friendly levels to introduce each concept little by little and ends with the real tangram.
  • Explore the monuments of the world (explore_monuments by Ayush Agrawal during SoK).
  • An activity in which the children must color a graph so that no two adjacent nodes have the same color (graph_coloring by Akshat Tandon during SoK).
  • Pilot the spaceship towards the green landing area (land_safe by Holger Kaelberer).
  • Find the differences between the two pictures (photo_hunter by Stefan Toncu).



Timothée Giet updated the images for the chess, hangman and horizontal/vertical reading activities, along with the GCompris logo.


Lots of little fixes and improvements have been made: storing and restoring the window's width/height at start-up, a docbook update, level/image bonuses, and an internal dataset for the word games so that GCompris no longer needs the network to run (except for the voices, but those can also be bundled in the binary).

On the translation side, we have 17 fully supported languages: Belarusian, British English, Brazilian Portuguese, Catalan, Catalan (Valencian), Chinese Simplified, Dutch, French, Galician, Italian, Norwegian Nynorsk, Polish, Portuguese, Romanian, Spanish, Swedish, Ukrainian, and some partially supported: Breton (88%), Chinese Traditional (97%), Estonian (82%), Finnish (74%), German (93%), Russian (83%), Slovak (76%), Slovenian (82%), Turkish (82%).

If you want to help, please make some posts in your community about GCompris.

The source code tarball is available here (GNU/Linux packagers are welcome)

Some details about Qt properties system

Sun, 2016/08/07 - 6:18pm

As promised, in this post I shall tell you about some details of the Qt property system. Most probably you'll find it useful if you are familiar with Qt properties but haven't used them much in your code yet. Using them is a good way to start integrating your application with QtQuick, where, as far as I know, Qt properties are heavily used (correct me if I'm not accurate). I shan't explain what Qt properties are, so if you don't know about them, you may want to read an introduction first. This post will actually be quite short.

Investigating archive operations

Sun, 2016/08/07 - 5:23pm

In this post I'm going to cover some advanced uses of the terminal utilities for the zip, 7z and rar archive types. It covers the following operations: adding files to a specific folder in an archive, moving or renaming files, and copying files within an archive. I describe both solutions that use only the classic command-line programs for the mentioned archive formats, and programmed workarounds for when those tools fall short.
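As a taste of the "programmed workaround" category: classic zip tooling has no in-place rename, so one option is to rewrite the archive. A minimal sketch using only the Python standard library (the helper name is made up, and a real tool would stream large members instead of reading them whole):

```python
import zipfile

def rename_in_zip(path, old_name, new_name):
    """Rewrite the archive so the entry old_name appears as new_name."""
    entries = {}
    with zipfile.ZipFile(path) as zf:
        for info in zf.infolist():
            entries[info.filename] = zf.read(info.filename)
    with zipfile.ZipFile(path, "w") as zf:
        for name, data in entries.items():
            zf.writestr(new_name if name == old_name else name, data)
```

The same read-everything-then-rewrite trick works for moving an entry to a different folder inside the archive, since in zip a "folder" is just a prefix on the entry name.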

Time flies for FBSD updates, too

Sun, 2016/08/07 - 10:16am

So it's been some time since I last wrote about the KDE-FreeBSD ports. Time flies when you're having fun (in my case, that means bicycling around Zeeland and camping). In the meantime, Ralf has been hammering on the various Qt port updates that need to be done (official ports are still Qt 5.5.1, but we have 5.6 done and 5.7 on the way). Tobias has been watching over the plasma5 branch of the unofficial ports, so that it's up to date with KDE Frameworks, Plasma and Applications.

The older KDE stuff, that is, KDE4, which is still the current official release for the desktop on FreeBSD, is also maintained, although (obviously) not much changes there. We did run into a neat C++-exceptions bug recently, which was kind of hard to trigger: running k3b (or ksoundconverter and some other similar applications) with a CD that is unknown to MusicBrainz would crash k3b. There aren't that many commercially available CDs unknown to that database, so I initially brushed off the bug report as Works For Me.

Good thing Methyl Ethel is still relatively obscure (but a nice band to listen to). They trigger the crash, too.

At issue is the visibility of C++ exception symbols: when -fvisibility=hidden is used and C++ exceptions are thrown out of a shared object, users of that shared object may not catch the exceptions. Making the exception classes explicitly visible fixes the problem. In our case, libkcddb catches exceptions from MusicBrainz, except when -fvisibility=hidden is used while linking an executable that uses libkcddb and libmusicbrainz5, in which case the exception isn't caught and ends up crashing the application. Reading up on symbol visibility was very useful in sorting this bug out. After Tobias identified the exceptions as the root cause, I played with code across the shared-object dependency chain until I came up with a patch, which I've proposed upstream (e.g. to libmusicbrainz5) and in the packaging for FreeBSD.

Even older software sometimes shows interesting bugs. And time flies when you’re having fun, but also when tracking down unhandled exceptions.

QRPC: A Qt remoting library

Sun, 2016/08/07 - 9:42am

This project of mine has been around for quite a while already. I’ve never much publicised it, however, and for the past year the code hasn’t seen any changes (until a few days ago, anyway). But related to my current job, I’ve found a new need for remote procedure calls, so I came back to the code after all.

QRPC is a remoting library tightly integrated with Qt: in simplified terms, it’s an easy way to get signal-slot connections over the network. At least that was its original intention, and hence comes the name (QRPC as in “Qt Remote Procedure Call”).

But actually, it’s a little more than that. QRPC pipes the whole of Qt’s meta-object functionality over a given transport. That can be a network socket, unix domain socket or something more high-level like ZeroMQ. But it could, for example, also be a serial port (not sure why anybody would want that, though).
The actual remoting system works as follows: A server marks some objects for “export”, meaning they will be accessible by clients. Clients can then request any such object by its identifier. The server then serialises the whole QMetaObject hierarchy of the exported object and sends it to the client. There, it is reconstructed and used as the dynamic QMetaObject of a specialised QObject. Thus, the client not only has access to signals and slots, but also to properties and Q_CLASSINFOs (actually anything that a QMetaObject has to offer). The property support also includes dynamic properties, not only the statically defined QMetaProperties.
Method invocations, signal emissions and property changes are serialised and sent over the transport – after all, a QMetaObject without the dynamics isn’t that useful ;).
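The idea of shipping a meta-object description over the wire can be illustrated outside of Qt. A language-neutral sketch in Python (this is an analogy, not QRPC's actual wire format; the class and helper names are invented):

```python
import json

def describe(obj):
    """Server side: turn an object's public surface into a serialisable
    description, loosely analogous to serialising a QMetaObject."""
    return json.dumps({
        "properties": dict(vars(obj)),
        "methods": [m for m in dir(obj)
                    if callable(getattr(obj, m)) and not m.startswith("_")],
    })

class SpinBox:
    def __init__(self):
        self.value, self.minimum, self.maximum = 5, 0, 99
    def setValue(self, v):
        self.value = v

meta = json.loads(describe(SpinBox()))
# The client reconstructs a proxy object from this description and
# forwards invocations of the listed methods back over the transport.
```

In QRPC the description is richer (signals, Q_CLASSINFOs, dynamic properties) and the transport is pluggable, but the reconstruct-a-proxy-from-metadata shape is the same.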

To give an impression of how QRPC is used, here’s an excerpt from the example included in the repository:


Widget::Widget(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Widget),
    localServer(new QLocalServer(this)),
    exporter(new QRPCObjectExporter(this))
{
    ui->setupUi(this);

    // Export the QSpinBox with the identifier "spinbox".
    exporter->exportObject("spinbox", ui->spinBox);

    // Handle new connections, serve under the name "qrpcsimpleserver".
    connect(localServer, &QLocalServer::newConnection, this, &Widget::handleNewConnection);
    localServer->listen("qrpcsimpleserver");
}

void Widget::handleNewConnection()
{
    // Get the connection socket
    QLocalSocket *socket = localServer->nextPendingConnection();
    // Create a transport...
    QRPCIODeviceTransport *transport = new QRPCIODeviceTransport(socket, socket);
    // ... and the server communicating over the transport, serving objects exported from the
    // exporter. Both the transport and the server are children of the socket so they get
    // properly cleaned up.
    new QRPCServer(exporter, transport, socket);
}


Widget::Widget(QWidget *parent) :
    QWidget(parent),
    ui(new Ui::Widget),
    socket(new QLocalSocket(this)),
    rpcClient(0)
{
    // ...

    // Connect to the server on button click
    connect(ui->connect, &QPushButton::clicked, [this]() {
        socket->connectToServer("qrpcsimpleserver");
    });

    // Handle connection events
    connect(socket, &QLocalSocket::connected, this, &Widget::connectionEstablished);
    connect(socket, &QLocalSocket::disconnected, this, &Widget::connectionLost);
}

void Widget::connectionEstablished()
{
    // Create a transport...
    QRPCIODeviceTransport *transport = new QRPCIODeviceTransport(socket, this);
    // ... and a client communicating over the transport.
    QRPCClient *client = new QRPCClient(transport, this);

    // When we receive a remote object (in this example, it can only be the spinbox),
    // synchronise the state and connect the slider to it.
    connect(client, &QRPCClient::remoteObjectReady, [this](QRPCObject *object) {
        ui->horizontalSlider->setValue(object->property("value").toInt());
        ui->horizontalSlider->setMinimum(object->property("minimum").toInt());
        ui->horizontalSlider->setMaximum(object->property("maximum").toInt());
        connect(ui->horizontalSlider, SIGNAL(valueChanged(int)), object, SLOT(setValue(int)));
    });

    // Request the spinbox.
    client->requestRemoteObject("spinbox");

    // Clean up when we lose the connection.
    connect(socket, &QLocalSocket::disconnected, client, &QObject::deleteLater);
    connect(socket, &QLocalSocket::disconnected, transport, &QObject::deleteLater);
    // ...
}

Argument or property types have to be registered metatypes (same as when using queued connections) and serialisable with QDataStream. The QDataStream operators also have to be registered with Qt’s meta type system (cf. QMetaType docs).
At the moment, one shortcoming is that references to QObjects (i.e. “QObject*”) can't be transferred over the network, even if the relevant QObjects are “exported” by the server. Part of the problem is that Qt doesn't allow us to register custom stream operators for “QObject*”. This could be solved by manually traversing the serialised values, but that would negatively impact performance. Usually one can work around that limitation, though.

Furthermore, method return values are intentionally not supported. Supporting them would first require a more complex system to keep track of state and secondly impose blocking method invocations, both of which run contrary to my design goals.

In some final remarks, I’d like to point out two similar projects: For one, there’s QtRemoteObjects in Qt’s playground which works quite similarly to QRPC. I began working on QRPC at around the same time as QtRemoteObjects was started and I didn’t know it existed until quite some time later. Having glanced over its source code, I find it to be a little too complex and doing too much for my needs without offering the same flexibility (like custom transports). I must admit that I haven’t checked it out in more detail, though.
Then there’s also QtWebChannel which again offers much of the metaobject functionality over the web, but this time specifically geared towards JavaScript/HTML5 clients and Web apps. I thought about reusing part of its code, but ultimately its focus on JS was too strong to make this a viable path.

Tip of the iceberg !

Sat, 2016/08/06 - 2:50pm

So we have previously seen how Davide, Alessandro and I designed the Rating Engine for our WikiRating Google Summer of Code project. Now it is time for our last step: connecting the engine to the website, to display the computed results and provide voting functionality to WikiToLearn users.

In MediaWiki, additional functionality like this is added via extensions. You can think of extensions in the literal sense too, as something that provides extension points on top of the current code base. This makes developers' lives easier, since extensions let us add new code in a modular fashion without much fiddling with the wiki code base.

So I needed to write an extension that can do the following:

  • Fetch the information about the page being viewed by the user.
  • Allow the user to vote for the page.
  • Display additional information about the page if the user demands it.

So with these things in mind, I began to analyse the basic components of a MediaWiki extension.


So besides the boilerplate components that required minor tweaking, extension.json, modules and specials are of interest to us.



This JSON file stores the setup instructions: for instance, the name of the extension, the author, which classes to load, and so on.
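A minimal extension.json along these lines might look as follows. The field names follow MediaWiki's extension registration schema, but the concrete values here are illustrative, not WikiRating's actual manifest:

```json
{
    "name": "WikiRating",
    "author": ["..."],
    "descriptionmsg": "wikirating-desc",
    "type": "other",
    "AutoloadClasses": {
        "SpecialWikiRating": "specials/SpecialWikiRating.php"
    },
    "ResourceModules": {
        "ext.wikiRating": {
            "scripts": ["modules/wikiRating.js"]
        }
    },
    "manifest_version": 1
}
```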



The module folder of our WikiRating Extension contains these 2 components:

  • resources: where all the images and other resources are stored.
  • wikiRating.js: a JavaScript file to fetch, send and display data between the engine and the website instance.

It is the wikiRating.js script where we wrote most of our code.



This folder contains a PHP script whose function is to display additional information about the page when asked for. The information is passed to the script via URL parameters by our master script (wikiRating.js).

So the final step (or first step!) is to enable our extension by adding this to the LocalSettings.php file in the WikiToLearn local instance.

wfLoadExtension( 'WikiRating' );

So now it is the time to see the fruits of our labour:

Basic information about the page
Additional information about the page

So this is how the output of our engine looks: subtle, like the tip of an iceberg 😛


[GSoC] KDev-Embedded, OpenOCD and avrdude

Fri, 2016/08/05 - 6:15pm

KDev-Embedded now has OpenOCD integration and a new interface to use avrdude in the launcher.

With Arduino-Makefile, it's possible to use a makefile to compile Arduino projects. In the video, one of the examples is used to show how the new avrdude launcher can execute the upload process.

In the new avrdude … Read the rest

Kontact build dependencies

Fri, 2016/08/05 - 12:45pm

kde pim

While adding KDE PIM to KDE neon, I wondered if it really was as complex to build as it felt. So I mapped the dependencies in Graphviz, and yep, it really is complex to build. I'm quite amazed at the coders who work on this stuff; it's an impressive beast.
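Dependency mapping like this is exactly what Graphviz's dot language is for. A toy fragment (the real KDE PIM graph has dozens of nodes, and the edges shown here are illustrative, not the actual dependency data):

```dot
digraph kdepim {
    // each edge reads "depends on"
    kontact -> kmail;
    kontact -> korganizer;
    kmail -> messagelib;
    messagelib -> akonadi;
    korganizer -> akonadi;
}
```

Rendering is a one-liner, e.g. `dot -Tpng deps.dot -o deps.png`.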


boot 25 % faster

Thu, 2016/08/04 - 7:46pm

My machine isn't that fast, but I can improve the boot time to the Plasma desktop by up to 25%.

How? It's simple: turn off KSplash.

From Plymouth to KSplash to the Plasma desktop (movie):
  8 sec   Plymouth (Linux boot time)
  11 sec  KSplash
  4 sec   desktop is finished
  =====================
  23 sec  (1/3 for Linux, 2/3 for Plasma)

From Plymouth to the Plasma desktop (movie):
  8 sec   Plymouth (Linux boot time)
  11 sec  desktop is finished
  =====================
  19 sec  (42% for Linux, 58% for Plasma)

So with KSplash the computer needs 15 seconds to start Plasma; without it, 11 seconds (-27%). But the good thing is that it looks even faster, because I see Plasma instantly after login, and as you can see in the linked movies, the desktop flickering is also visible with KSplash. To be honest, I like the desktop flickering, because I get to see how Plasma works.
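The percentages are quick to sanity-check from the figures above (the "25%" in the title rounds the Plasma-only saving):

```python
with_ksplash = 23      # seconds: 8 Plymouth + 15 until the desktop is ready
without_ksplash = 19   # seconds: 8 Plymouth + 11 until the desktop is ready

plasma_with = with_ksplash - 8        # 15 s for the Plasma part
plasma_without = without_ksplash - 8  # 11 s

total_saving = (with_ksplash - without_ksplash) / with_ksplash
plasma_saving = (plasma_with - plasma_without) / plasma_with

print(round(total_saving * 100))   # 17 (% of the whole boot)
print(round(plasma_saving * 100))  # 27 (% of the Plasma start-up)
```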

WIP: Plasma World Map Wallpaper & World Clock Applet, powered by Marble

Thu, 2016/08/04 - 5:18pm

The core of Marble, the virtual globe and world atlas, is a library, intended for reuse. Next to exposing its native C++ interface (see API dox of development version of the Marble library), there is also the option to use it in a QtQuick variant.

The Marble code repository itself holds a few applications and tools based on the library. Additionally, it has extensions & plugins for other applications & workspaces, like the KIO thumbnailer plugins for previews of KML, GPX & Co. in KIO-using file managers or file dialogs, a Plasma Runner plugin for looking up geo coordinates, and a world clock Plasma applet.

Moving to Plasma5, Marble KRunner plugin arrives first

Having extensions for external applications & workspaces in your repo requires you to keep up to date with those externals. When Plasma 5 appeared with the big switch to a QML-only plugin architecture, that left the Marble-based plugins a little behind for a while.

The good news is that the KRunner/Plasma Runner plugin was not affected by that, and with Applications 16.08 and the new Marble release it will now also be out in a version working in all locales.

Reviving the Marble world clock plasmoid from 2009

The Marble world clock plasmoid, introduced back in 2009 (see the bottom of the page), needs a full rewrite from C++ to QML, which is a bigger undertaking. It also needs a port from KTimeZone & KTimeZoneWidget to QTimeZone, and along the way a cleanup of the mapping between time zones and geocoordinates.

The good news here is that some initial work has now been done and is up for review. A lot of features are still missing, and there are some glitches (e.g. vertical offset, license info displayed). But it is a start, and the current state might even be fancy enough for some users already. :)

It is a short mutation from a Plasma Applet to a Plasma Wallpaper

With the applet initially working, the idea came up to show the map fullscreen as a wallpaper as well. While I could not find documentation on how to write a wallpaper plugin, the existing code and examples gave enough hints, and one hour later the first incarnation of a Marble-powered Plasma wallpaper plugin existed.

See here a screenshot with both the World Map Wallpaper and the World Clock Applet in their current state:
Plasma World Map wallpaper & world clock applet, powered by Marble

Tell what kind of world maps you want to see as wallpaper

Because the World Map wallpaper can make full use of all Marble plugins, a lot of different kinds of setups are possible:

  • Plain political map view
  • Globe view with stars in the background
  • Map view of just a continent/country
  • Satellite-photos-based view with live cloud rendering

There could be also plugin variants for the Moon, Venus or Mars.

So start Marble and play around with the settings to learn what is possible. Then tell us in the comments what other kinds of maps (or globe views) you would like to use as a wallpaper, so we can consider them as predefined settings at least.

Create your own Plasma Wallpaper quickly by a new template

BTW, if you have other fancy ideas for Plasma “wallpapers”: as a by-product I wrote a template for Plasma wallpaper plugins. It is currently up for review, so it will hopefully be part of future KF5 Plasma Frameworks releases. With that template, your first own wallpaper plugin is just a few clicks away, e.g. in KAppTemplate or KDevelop.
When playing around with the settings, be aware of at least one known issue: the wallpaper config page does not read the current/default settings when switching the wallpaper type. As a workaround, after switching the wallpaper type, first close and reopen the dialog to be able to adjust the settings of the new wallpaper type.
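For orientation, the on-disk shape of a Plasma 5 wallpaper plugin is small: essentially a metadata.desktop next to a contents/ui/main.qml. A sketch of the metadata file (the plugin name and Name here are hypothetical placeholders, not the actual plugin from this post):

```ini
[Desktop Entry]
Name=World Map
Type=Service
X-KDE-ServiceTypes=Plasma/Wallpaper
X-KDE-PluginInfo-Name=org.kde.example.worldmap
X-Plasma-API=declarativeappletscript
X-Plasma-MainScript=ui/main.qml
```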

Choqok 1.6 Beta 2 released

Thu, 2016/08/04 - 9:13am

We are happy to announce a new upcoming release of Choqok after more than a year and a half. We want to make this release more stable than ever, so we are releasing a beta today and the final version next month. The big news about this release is that Choqok is now based…

The How part (1) - Converting Natural Earth SHP to OSM format

Wed, 2016/08/03 - 9:00pm

(This post is related to my GSoC project with Marble. For more information you can read the introductory post.)

So my first GSoC task was finding efficient mechanisms for converting Natural Earth data in SHP format to OSM format. My mentors gave me two options: either create a new tool from scratch using the Marble libraries, along the lines of the existing shp2pn2 converter, or modify the existing polyshp2osm tool. On IRC it was discussed that if the script does not offer any advantage in extracting the metadata/semantics from SHP to OSM format, resulting in suitable rendering of lakes and roads, then a new tool must be built.

Prep Work

After experimenting with the polyshp2osm tool, I figured out the following things about it.

polyshp2osm is built upon the GDAL/OGR Python bindings. GDAL/OGR is a library for reading and writing raster and vector geospatial formats; it presents a single abstract data model, for all formats, to the calling application. polyshp2osm reads the SHP file into this abstract data model and then loops over all the geometries to figure out the nodes, ways and relations of the corresponding OSM file. In this regard it is quite similar to the shp2pn2 tool, which reads the SHP file into a GeoDataDocument. However, polyshp2osm is limited in that it only supports polygons, not linestrings and other geometries. In fact, it throws an error if we try to convert SHP files containing linestrings, like ne_10m_roads and ne_10m_rivers_lake_centerlines. Thus polyshp2osm lags in geometry conversion, and effort must be put in to perform this conversion properly.

The other notable area of polyshp2osm is its metadata extraction and tag mapping. The GDAL/OGR library, on which the converter is built, provides direct methods for extracting the metadata of a particular geometry in an SHP file. This metadata is available as key-value pairs, which can be mapped directly to corresponding OSM key-value pairs. However, all such mappings from Natural Earth metadata to OSM key-value pairs have to be put into polyshp2osm manually, since it was built with the MassGIS OpenSpace dataset in mind and only has mappings for that dataset. Though polyshp2osm provides a framework for tag mapping, effort must still be put into analyzing the metadata provided by the different Natural Earth datasets and finding the corresponding OSM key-value mappings.

polyshp2osm is basically template code which needs to be extended so that all types of geometries can be converted to OSM. Also, in spite of having a mechanism for tag-mapping from SHP metadata to OSM key-value pairs, it has no built-in mapping from Natural Earth SHP data to OSM key-value pairs, and hence this mapping has to be added.

Eventually, however, I settled on tweaking polyshp2osm instead of building a new converter from scratch along the lines of shp2pn2. Building a new converter would have involved adding support for creating nodes, ways and relations on top of the existing shp2pn2 procedures, which would have taken more time than adding support for one or two more geometries in polyshp2osm. I would also have had to build the metadata extraction and tag-mapping features for the new converter, which are already present in the polyshp2osm tool.

Tweaking and fixing polyshp2osm to make it compatible with Natural Earth data and Marble stylings

So there were basically three types of additions/modifications I had to make:

  • Adding support for all the geometries

As mentioned before, polyshp2osm supported conversion of only those geographic features which have a polygon geometry. However, Natural Earth data consists of all forms of geometries, like linestrings, multilinestrings, multipolygons and points (all of these are defined by the SHP format). Apart from this, the entire code was written in a script-like manner and was not very modular. So before adding support for the remaining geometries, I refactored the code to make it a little more modular (although as of now it is still not pythonic and not OOP-based), so that the code components for extracting metadata, mapping to OSM key-value pairs, and writing OSM node, way and relation elements could be reused for different geometries. After this refactoring, the task of adding the remaining SHP geometries became straightforward.

  • Mapping metadata to OSM key-value pairs

This task involved mapping Natural Earth metadata to the corresponding OSM key-value pairs so that Marble is able to render all the features in the resulting OSM file appropriately. I had to write a dictionary of around 120 entries containing all the metadata-to-key-value mappings. Using this dictionary, the program inserts the appropriate key-value tags into the resulting node, way or relation.
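As an illustration, such a mapping can be a plain dictionary from (attribute, value) pairs read out of the shapefile to the OSM tags to emit. The entries below are hypothetical examples in the spirit of Natural Earth's metadata, not the actual 120-entry table:

```python
# Hypothetical excerpt of a Natural Earth -> OSM tag-mapping table.
# Keys are (SHP attribute, value) pairs as extracted via GDAL/OGR;
# values are lists of OSM (key, value) tags to write into the output.
TAG_MAP = {
    ("featurecla", "Lake"):    [("natural", "water"), ("water", "lake")],
    ("featurecla", "River"):   [("waterway", "river")],
    ("type", "Major Highway"): [("highway", "motorway")],
}

def osm_tags(metadata):
    """Collect the OSM tags for one feature's metadata dict."""
    tags = []
    for item in metadata.items():
        tags.extend(TAG_MAP.get(item, []))
    return tags
```

Attributes with no entry in the table are simply dropped, so unmapped metadata never produces stray OSM tags.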

  • Fixes

Problems started to emerge when the converted files were used to make vector tiles for Marble. In the initial version of the tiles, a few of the geographical features, such as land masses, were not visible. It was eventually found that the converter was not outputting elements in the correct order: in an OSM file there should be a block of nodes, then a block of ways, finally followed by a block of relations, but the tool was outputting nodes, ways and relations in a random order.

After fixing this, everything was rendered suitably; however osmconvert, which is used to cut the OSM file into tiles, continued to show warnings about sequence errors. This was because osmconvert expects the elements in an OSM file to have non-decreasing IDs.
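Both fixes amount to an ordering rule, which can be sketched in a few lines of Python (a simplified illustration, not the converter's actual code): group elements into node/way/relation blocks, and sort each block by ascending ID.

```python
def order_osm_elements(elements):
    """Order OSM elements the way consumers like osmconvert expect:
    all nodes first, then ways, then relations, each block sorted
    by non-decreasing ID."""
    rank = {"node": 0, "way": 1, "relation": 2}
    return sorted(elements, key=lambda e: (rank[e["type"]], e["id"]))

# Elements emitted in a random order...
mixed = [
    {"type": "way", "id": 7},
    {"type": "node", "id": 3},
    {"type": "relation", "id": 1},
    {"type": "node", "id": 2},
]
# ...come out as nodes (2, 3), then way 7, then relation 1.
ordered = order_osm_elements(mixed)
```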

With these two fixes in place, things have worked fine so far.

So this was all about SHP to OSM conversion. In the next post I will try to explain the various style-related changes and the additional geographical features which I have added to Marble.

The Why part - medium and low tile levels in Marble

Wed, 2016/08/03 - 5:00pm

It has been quite some time since I have written anything about my GSoC project. Anyway, here it is.

So what exactly is my project, and what am I supposed to do? Let’s start with my project title:

“Support for medium and low tile levels in the OSM Vector Map of Marble”


To the uninitiated it sounds quite cryptic, so let me break it down and explain it a little better. Marble, which is our awesome virtual globe, can display both raster and vector maps. Raster maps are made up of several images (JPG or some other raster format) arranged in a grid-like manner. Vector maps, on the other hand, are made up of chunks of vector data (XML, JSON, etc.), each chunk having geographic information for a particular bounded area.

Depending on the zoom level, Marble loads in geographic data (raster or vector). As we zoom in further and further, the resolution of the geographic information being displayed/rendered increases. Now, Marble just can’t load the entire Earth’s data for a particular zoom level, since beyond a certain resolution this data set goes into the range of tens and then hundreds of gigabytes. Marble resolves this problem via a geo-data storage/indexing strategy known as QuadTiles. In this method, depending on the zoom level, we divide the entire world into tiles. As the zoom level increases, the number of tiles as well as the resolution of these tiles increases. The advantage is that, when viewing a specific region of the Earth at a particular zoom level, only a few tiles are required, as opposed to loading the entire dataset into memory. OSM Vector Map is a new Marble map theme which is still being worked on, and the cool thing about this theme is that it utilizes OpenStreetMap vector data to actively render and display the maps, resulting in more crisp, dynamically styled maps.
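The tile scheme follows the common slippy-map convention: at zoom level z the world is a 2^z × 2^z grid, so each level quadruples the tile count. A minimal sketch of the standard longitude/latitude-to-tile-index math (the usual OSM formula, not Marble's actual code):

```python
import math

def deg2tile(lat, lon, zoom):
    """Map (lat, lon) in degrees to (x, y) tile indices at a zoom
    level, using the spherical-Mercator slippy-map convention."""
    n = 2 ** zoom  # tiles per axis at this level
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y
```

At level 0 the whole world is the single tile (0, 0); a point near Berlin (52.5°N, 13.4°E) falls into tile (550, 335) at level 10, which is why a viewer only ever needs the handful of tiles covering its viewport.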

My part

Coming back to my project: I, along with my mentors Dennis Nienhüser and Torsten Rahn and fellow summer of code student David, am supposed to add support for lower and medium zoom levels to this OSM Vector map theme. Before the project started, the map supported high zoom levels, and the expectation is that by the end of the project the Vector map theme will fully support lower and medium zoom levels.

Support is a pretty vague term, so let me be a little more specific. Until now, Marble was able to render/display only the higher zoom levels of the OSM Vector map theme. To properly render the lower and medium zoom levels, quite a few things need to be done that are not needed for the higher zoom levels. A question which might be bothering you, and which even bothered me, is: if Marble was able to render the higher zoom levels, why don’t we render the medium and lower levels on similar principles?

It is because of the level of detail with which everything is described in the OpenStreetMap data. For example, OSM data captures all kinds of roads, highways and bylanes, their nooks and corners, and very detailed shapes for buildings. Right now, the higher levels, i.e. the street levels of Vector OSM, directly use OSM data, which is filtered to exclude many elements. We cannot directly use OSM data for the lower and medium levels, for the following two reasons.

  • Visual

For the lower and medium zoom levels, which you can compare to a bird’s-eye view, we don’t need such detailed and precise data. At that level things are quite different: one does not observe smaller streets and bylanes; multiple-lane highways appear as a single strip; nearby railway lines appear merged; blocks of buildings appear joined together; and many low-level details like poles and trees are just not visible. Also, since OSM data is street-level data, loading it directly would lead to severe screen clutter. Hence we need to pre-process the data so as to concatenate, cluster and merge the data elements to make them visually apt for a particular level.


OSM data loaded directly and observed at level 11. The gaps appear because a single highway can consist of several tiny OSM way elements, and Marble omits OSM elements which are less than 2px on the screen for performance reasons.


Buildings can be clustered and then merged so as to form continuous blocks

  • Performance

As mentioned earlier, the Vector OSM map theme directly renders the data and does not depend on pre-rendered content. Now, if we directly feed in the OSM data there will be several performance issues, since depending on the region being viewed it may involve rendering thousands or even millions of OSM elements. Hence we need to reduce the redundancy in OSM data by eliminating nodes as well as concatenating ways wherever possible.



Each OSM node is a small square; the overlapping squares are redundancies which need to be removed.

To do this we are constructing an OSM data pre-processor which concatenates ways, eliminates redundant nodes, and merges and clusters elements so as to make OSM data suitable for live rendering at a particular level of Marble.
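One standard technique for eliminating redundant nodes along a way is Ramer-Douglas-Peucker line simplification: drop every node that lies closer than some tolerance to the line between the way's endpoints, recursing around the farthest node. A minimal sketch (an illustration of the idea, not Marble's actual pre-processor):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    if a == b:
        return math.hypot(p[0] - a[0], p[1] - a[1])
    # |cross product| / base length gives the height of the triangle.
    num = abs((b[0] - a[0]) * (a[1] - p[1]) - (a[0] - p[0]) * (b[1] - a[1]))
    return num / math.hypot(b[0] - a[0], b[1] - a[1])

def simplify(points, epsilon):
    """Ramer-Douglas-Peucker: drop interior nodes closer than epsilon
    to the endpoint line, splitting at the farthest node otherwise."""
    if len(points) < 3:
        return points
    dists = [point_line_distance(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] <= epsilon:
        return [points[0], points[-1]]  # everything in between is redundant
    left = simplify(points[:i + 1], epsilon)
    right = simplify(points[i:], epsilon)
    return left[:-1] + right  # drop the duplicated split point
```

A nearly straight five-node way collapses to its two endpoints, while a genuine corner survives: `simplify([(0, 0), (2, 2), (4, 0)], 0.5)` keeps all three points because (2, 2) is 2 units off the endpoint line.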

Also, it was decided that for the lower levels we will use Natural Earth data instead of the regular OpenStreetMap data. This is because Natural Earth contains data which is already filtered and categorized, and is available in three resolutions (110m, 50m and 10m) which we can easily adapt and style to the needs of Marble. However, doing this requires two major things.

  • Conversion from Shapefile to OSM format

Natural Earth vector data comes in SHP (Shapefile) format. However, our Vector OSM theme requires data in OSM format, hence we need to convert the data from SHP to OSM. There are existing tools which do this conversion, but they lack geometry support as well as an exact mapping of Natural Earth metadata to OSM data. These tools need to be extended so that they are able to properly interpret Natural Earth data as well as map it to the OSM format.

  • Styling

Stylings for various Natural Earth geographical features need to be added to Marble: some of them have an OSM equivalent, like administrative boundaries, and some, like bathymetries, don’t. For this, some Marble-specific OSM tags have to be introduced. Also, the stylings must be in sync with the existing OSM stylings.


Administrative boundary, bathymetry, IDL and land polygon stylings were added

In this post, I have mostly described the Why part of my project. Over the next few posts I will try to write about the How aspect, i.e. how I achieved a particular goal and why I did it using that particular method.

ownCloud is hiring!

Wed, 2016/08/03 - 3:18pm

Come join us!

Come join us!

After the recent news, we are now back on stage, and with this blog we want to point you to our open positions. Yes, we are hiring people to work on ownCloud. ownCloud is an open source project, yes, but ownCloud GmbH, the company behind the project, provides significant manpower to expand the project to serve the needs of both the community and ownCloud GmbH’s customers. So if you ever dreamed of getting paid to work on open source, read on.

What we do – what you will work on

The call is for people who understand the vision of bringing ownCloud to an enterprise-ready level: ownCloud not only runs on individual open source enthusiasts’ hardware, but also on sites with huge amounts of data, like CERN or the Sciebo project, and at large companies who want to work with their data in a secure way.

To provide the best solution for all of them we are looking for:

A System Administrator

In this role, you make sure that the infrastructure we use in ownCloud is up and running. That involves troubleshooting and streamlining existing infrastructure, but also designing new services. If you love virtualization of all kinds and have an eye for security, this position is for you. Of course, none of this happens behind closed doors: you will be in contact with the open source community around ownCloud.

An Application Security Engineer

For security professionals who would like to take on a high-profile open source project. As security is one of the core values of ownCloud, we are looking for somebody who constantly monitors incoming code for security problems, is able to find glitches in existing code, and can handle the bug bounty program. That and more is the task of this high-profile position.

A Software Engineer PHP

For engineers with a passion for good software design and a love for writing code without being code monkeys: in this role you iron out the server part of our platform, build new features, fix bugs together with the support colleagues, and bother the architect with new ideas on how to make the thing even better. For this you need the urge to get down and dirty with code, and to feel comfortable in a team of high-profile developers who can teach you things and learn from you.

PHP or what?

Yes, ownCloud is written in PHP, and PHP is the most important, but by far not the only, language that we use for the ownCloud platform.

Before you turn your back because of PHP, please think twice. There are a lot of good reasons why we are going with PHP, some of which are named in this blog, but there is more. For example, PHP 7: with PHP 7 (which can be used with ownCloud) the language has caught up with much of the criticism it faced before and has made a big leap.

And anyway, the language of a system is not the only thing that is important in a developer’s life. It is rather how many people use, love and recommend the project, and the development processes the team lives by. And on all those points, ownCloud is already awesome, and will become even more so with your help.

Send your resume in so we can get talking!

First of Various Konquis Celebrating the Anniversary of KDE

Wed, 2016/08/03 - 10:46am

I approached around 50 Brazilian artists for this. I’m not sure how many of them will answer, but this was the first answer that I got. This Konqui was made by André Noel, the Real Programmer, owner of the web comic strip “A Programmer’s Life”, which you can read here in Portuguese.


He also created an open SVG, but I’m having issues with my WordPress installation: it’s not accepting SVG files. I’ll fix that and soon edit this post with the SVG file too.

Good Way to Share Configurations Between C++ / QML?

Mon, 2016/08/01 - 4:40pm

In Qt there’s QSettings; while it has a lot of issues, it works well for C++ code. But what if you want to also use the config for a QML-based application that shares its C++ core with a Qt Widgets based application? I’m in this dilemma right now with Subsurface, and I want to hear a word of advice. For the few that don’t know yet, Subsurface is a dive logging application started by Linus Torvalds a few years ago; I stepped in to take care of the interface, which was rewritten mainly in Qt, but the core is still in C.

So we have a C-based struct that holds all the preferences, for which I cannot use QSettings because the code is in C. At the same time, the Qt Widgets based interface uses QSettings to store/restore the preferences and fill the C-based prefs struct. And now we also have the QML Android application, which cannot use QSettings because it’s not easily exported to the QML engine. I know there is already a “Settings” QML element in Qt.labs, but the current implementation is based on QSettings. This imposes certain limitations, such as missing change notifications: writing a setting value using one instance of Settings does not update the value in another Settings instance, even if they refer to the same setting in the same category. So this is a no-go for me.

I actually started to implement something very boilerplate-heavy to handle all the settings in the C-based prefs structure in a way that emits the proper signals, but it’s very, very boilerplate-heavy.

In QML code I wanted to just do something like:

Item {
    width: prefs.sizes.width
    height: prefs.sizes.height

    property var username: prefs.user_id.username
}

So I started to create a QObject-based item for *every* subgroup of the preferences, and then for the preferences itself, creating *tons* of Q_PROPERTY declarations with a signal and slot for each. I couldn’t use the MEMBER option of Q_PROPERTY because in C++ code that doesn’t trigger a changed signal when a property is modified directly.

If anyone has a better idea, please let me know.

Ark improvements in 16.08

Mon, 2016/08/01 - 4:28pm
Ark, the file archiver and compressor developed by KDE, has seen a lot of development for the upcoming 16.08 release. This blog post provides a summary of the most important changes.
Redesign of several dialogs
The dialog for creating new archives was completely redesigned. The old dialog contained a large file-picker widget which didn’t provide much room for other widgets to set additional options. The file-picker was scrapped and we now have a line edit for entering the archive name. Additional options are presented below in collapsible groups. For this purpose we employ the new KCollapsibleGroupBox widget developed by the awesome David Edmundson (blog post here).
Previously, two separate actions were available for adding entries to an archive: one for folders (only one folder could be added at a time using this dialog) and one for files. Now files and folders can be added simultaneously, and the new dialog also allows setting the compression level for the newly added files.
Setting compression options
Ark now allows setting more options when creating new archives. Currently, these include password, compression level and multi-volume archives. The compression level can be set for most archive types, while only rar, 7z and zip support creating multi-volume archives.
Support for new compression formats
Ark can now open and extract XAR archives and AppImages. Additionally, tar archives can be compressed with LZ4 compression (requires libarchive 3.2).
Support for editing comments
Ark has been able to show archive comments since 15.12. We now implemented support for adding or editing comments in rar archives. The action can be found in the Archive menu. When editing the comment a KMessageWidget appears at the bottom which allows you to save the comment.
Testing archive integrity
Ark gained the ability to test archives for integrity. This functionality is currently available for zip, rar and 7z archives. The test action can be found in the Archive menu.
Dynamic context menus in file managers
Ark provides service menus for easily compressing or extracting files from file managers such as Dolphin. The service menus are now dynamic, so that only actions whose required executables are installed are shown. E.g. if neither the 7z nor the zip executable is found in the PATH, the entry for compressing as zip will not be shown. This is achieved through the use of KFileItemAction plugins. Note that you also need Dolphin 16.08 for this to work properly.
Bugfixes and under-the-hood changes
A great amount of under-the-hood changes allowed us to fix many annoying crashes with the CLI-based backends. Also, several new classes were added to the infrastructure of Ark which greatly simplify adding further features in the future.
Testing and feedback
The 16.08 beta is now out, while the release candidate should be out on August 4th and the final release on August 18th. Please test the new features and provide feedback, either as comments on this blog post or as bugs on KDE’s bugzilla.
What’s next?
For Ark 16.12 we hope to add a graphical interface for configuring the plugins Ark uses to handle different archive formats. Also, we want to add support for setting additional compression options such as encryption method (ZipCrypto, AES256, etc.).
We also hope to merge the excellent GSoC work by Vladyslav Batyrenko for advanced editing of archives.
If there are features you are missing in Ark, please let us know.
Thanks to Elvis Angelaccio and Vladyslav Batyrenko (mvlabat) for their development work on Ark.