Planet KDE

Why I love Plasma

Sat, 2015/09/19 - 12:02am

I’ll say up front that this blog post is very one-sided towards KDE, but I do like and love the work that every developer does for desktop environments like GNOME Shell, Unity, elementary’s Pantheon Shell and even Linux Mint’s Cinnamon. I might get some names and descriptions wrong, so please do correct me.

Some Linux terminology:

  • KDE – the overall community and project
  • Plasma – the desktop environment
  • KWin – the window manager

Some people just use their computers the way the developers designed them and make no changes whatsoever (looking at you, Windows and Mac OS X users), but we Linux and BSD users do not always feel this works for us. While some users of the above-mentioned OSes can and do move the panel around a bit, Plasma takes it to a whole new universe.

The Plasma Universe  

First let’s go through what my Plasma desktop looks like – hey, I love pugs, don’t judge!


With Plasma I get to have a desktop that is truly mine, from the widgets to panel placement and design.


With the other shells you launch your applications with, well… a launcher, but if you’re on GNOME Shell you have just a fullscreen launcher (which I do like a bit; I’m excluding GNOME extensions from this for a stock-like experience). Unity has a similar setup, but you can resize it to a degree; both go with the “type to search” layout, while Linux Mint’s Cinnamon goes with the old and true menu layout.


But with Plasma you have a choice of which one you want (for the fullscreen one you need Plasma 5.4):


Keyboard shortcuts

I love the shortcuts that Unity uses:

  • Meta + W for all the windows in all the workspaces
  • Meta + S for switching workspaces
  • Meta + A for the windows in the current workspace.

But I can make those shortcuts in Plasma as well!




Fiber, X, Ad Blocking and Tracking

Fri, 2015/09/18 - 5:12pm

It’s been about a month since my last update? Maybe I should post something.

Fiber has been a bit slow moving, partly because I’ve just been slow and busy with other projects, but mostly because of toying around with CEF. One prominent issue is that the current crop of CEF examples and samples uses a mix of GTK and X. Many outdated questions and forum threads on the web bring up Qt and CEF, but with the relatively limited time I have to work on the project, the lack of answers has made things feel more arduous than they need to be. Part of this is the fact that Qt has WebEngine/WebView, so CEF integration is a niche topic. It’s nothing insurmountable, but I might need to accept a temporary dependency on X. This annoys me, but the project needs to move forward and I can’t be stuck spinning my wheels. At the lowest level of the application I will need platform-specific code anyway; I simply wanted to see how long I could avoid it.

“Did you ever solve for X?”

Even without the renderer in place, work has started on the basic extension framework, which aims to get a “hello world” message from a dummy extension to a simple debugger extension, with threaded and out-of-process modes toggled on each. The core of this work is the message-passing pipeline; since extensions may be threaded or broken out into separate processes, they need a reliable way of communicating. The diagram below shows the current rollout I’m aiming for:

[Diagram: Fiber process structure]

Every node in the pipeline is based on a generic class abstraction which can daisy-chain together. It is through these paths that the vast majority of Fiber will operate, and even core services will build on the same message system.

The most interesting aspect of this structure is the “Queues”. Queues keep track of the endpoint status and may “hold” incoming signals for a variety of reasons. This should let us do things like modify extensions on the fly even while other extensions actively use them, letting us potentially update or outright replace them without interrupting service depending on the type of extension. For example, you may replace your entire history system without restarting the browser.
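To make the idea more concrete, here is a minimal sketch of what a daisy-chained node with such a holding queue could look like. This is purely illustrative – the class names and behaviour are assumptions based on the description above, not Fiber’s actual code:

#include <QObject>
#include <QQueue>
#include <QVariant>

// A generic pipeline node: it receives a message and hands it to the next node.
class PipelineNode : public QObject
{
public:
    explicit PipelineNode(QObject* parent = nullptr) : QObject(parent) {}

    void chainTo(PipelineNode* next) { m_next = next; }

    virtual void deliver(const QVariant& message)
    {
        if (m_next)
            m_next->deliver(message);
    }

protected:
    PipelineNode* m_next = nullptr;
};

// A queue node: it parks messages while its endpoint is unavailable, e.g.
// while the extension behind it is being updated or replaced on the fly,
// and flushes the backlog once the endpoint is back.
class QueueNode : public PipelineNode
{
public:
    using PipelineNode::PipelineNode;

    void hold() { m_holding = true; }

    void release()
    {
        m_holding = false;
        while (!m_backlog.isEmpty())
            PipelineNode::deliver(m_backlog.dequeue());
    }

    void deliver(const QVariant& message) override
    {
        if (m_holding)
            m_backlog.enqueue(message);   // park the message until release()
        else
            PipelineNode::deliver(message);
    }

private:
    bool m_holding = false;
    QQueue<QVariant> m_backlog;
};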

[Image: switchboard]

“Fiber router – where can I place your call?”

If it looks like the router is going to be a bottleneck, you can fault my diagram; even though my current iteration of the message system involves direct connections to the router, the router will eventually serve as a connection manager, broadcasting the various changes only when queues have out-of-date connections. This will let extensions talk semi-directly to each other without involving the router.

Finally, the pipeline is a “daisy chain” style of operation; nodes can be added anywhere along the line. This is how I plan to add development facilities, where an extension developer can hook into pipelines and convert them to live debugging toolchains, able to do things such as simulate extension crashes, toggle services, flood messages, monitor performance, etc. All of these tools will also work for native components of the browser, as they too will use the message system.

Ad Blocking

Over the past weeks I’ve read about Epiphany and its recent decision to enable ad blocking by default. For those not in the know on Gnome stuff, Epiphany is the Gnome browser (the equivalent of Rekonq and Konqueror), and for a while now the application has featured an integrated ad blocker. Ad blocking was disabled by default, but the setting has been enabled by default for their 3.8 version. This decision was made because the developers of the application feel the experience is “not as good” when websites can display ads, and they did not see users activating the ad blocker “enough”.

Since I’m doing the browser thing I’ve been asked by a couple people what my thoughts are, and what I intend to do with Fiber as it eventually matures.

Ultimately I’m fine with ads; they’re a well-established way for site operators to earn the sweat of their brow. Many websites don’t have the staff, time, or prominence to use alternative means of income such as sponsored articles or direct user funding.

My opinion on advertising sours greatly when it comes to tracking and targeting, which I believe oversteps the line from advertising into stalking. I don’t like going onto Amazon and finding whatever I looked at spilled over to other sites I visit. I’m disturbed when I use a Google service only to realise later that I’ll be inundated and pressured into purchasing something until my next pushable product becomes apparent. It’s like browsing a physical store only to find several random people have followed you back out, taking notes on everything you do and observing where else you go – in the real world those people would be arrested for stalking, so how is it acceptable online?

It’s a tough situation because I think it’s the right of a website to display the content they want, but it’s the right of the user to decide who they give their information to. This is compounded by the fact that it’s near impossible to keep track of what ad providers you are dealing with, as some sites can use several advertisers in tandem. On a very personal level this has affected what extensions I use day-to-day, and I personally use NoScript now.

[Screenshot: Epiphany ad blocking]

World of Gnome asks users to turn off ad-blocking software as a developer posts about enabling ad blocking by default. I didn’t even specifically go out to get this shot; it just happened when I looked up Epiphany.

So, where does this leave Fiber?

I want Fiber to one day become a ‘legitimate’ browser, the kind which respects the general spirit of the web as a content medium. Part of this is making Fiber a good citizen, and for that reason I have no intention of bundling an ad-blocker, as advertising is part of the web and essential for many sites to function. This does not mean there will not be an ad-blocking solution, but it does mean users will need to consciously install one and there will not be an ‘official’ Fiber ad-blocker.

But tracking users is not legitimate, and the fact that it is technically possible and happens invisibly to the user does not make it acceptable. I do intend to eventually ship Fiber with a simplified NoScript- or uMatrix-like extension which can be used to block “trackerware”. In my humble opinion the “Do Not Track” header is a bit of a joke as a method of protecting privacy, and it does nothing to stop bad actors from slurping up your data parfait. Some trackers will respect the DNT header, but ironically those are also the parties most likely to respect your data anyway.

I doubt whatever tool I eventually come up with will be as powerful as contemporary solutions, since I’ll be aiming to make it approachable for casual users. It would also initially be tuned to its most conservative settings so the web does not appear ‘broken’ to Fiber users, which would give Fiber a bad reputation among unfamiliar users. Ultimately power users will want a more robust solution, but I think every browser needs at least simple ways of putting users in real control of their data.

Long story short I will not seek to have Fiber specifically block advertisers on a vanilla installation, and I think bundling those tools with a browser is offensive to legitimate advertisers and online businesses. If those advertisers choose to stalk and track users then I will work to ensure privacy is maintained, and at that point it’s the choice of websites to decide who they work with and whether they respect user privacy.

Logo Update

On a final teeny but fun note, the Fiber logo has also been updated a bit.

[Image: Fiber logo icons]

For those who are wondering “what is it supposed to be?”: the icon is the cross-section of a fiber-optic cable. You can find several varieties of them, and things like undersea cables can be made of bundles within bundles, which makes for a jaw-droppingly pretty motif for such functional designs – one that I think lines up nicely with the mindset of Fiber.

Half-Left is back

Fri, 2015/09/18 - 9:43am

I don’t usually post news, but I think that this needs some more exposure.

Once upon a time, Sean (aka half-left) made quite a few high-quality Plasma themes.

A lot of time has passed since then, and more than a few Gnome Shell themes came out from his workshop during that time.

I’d say that it was unfortunate for us. The themes he created were designed extremely well, with attention to detail. Most third-party Plasma themes used to be either incomplete or a mixture of visually mismatched elements collected from different themes (at least that was my impression).

Since then, we got a great Visual Design Team, and we got an awesome default theme (and its dark version).

But, two themes are not enough. People have different tastes.

Now, Sean is back in the Plasma world.

You can see his work at


Release candidate for LabPlot 2.1.0

Thu, 2015/09/17 - 12:30pm

With the upcoming release we decided to change the versioning scheme. From now on, every time we implement new features, we increase the second digit in the version string. The third digit will be reserved for patch (hot fix) releases without any new features. The major number will be increased on world-shaking events like a new architecture for Qt/KDE, etc. This time, besides many bug fixes and performance improvements, we implemented many new features – so, it’s going to be 2.1.0.

Today we want to announce the release candidate for v2.1.0. We’ll spend the next couple of days testing and polishing. No additional features will be implemented during this time. If no bigger issues are found, we’ll release v2.1.0 soon. The source code of the release candidate is available here. Everybody is welcome to test it and to provide any kind of feedback on the new functionality, which we describe shortly in the following.

For handling matrix-like data we introduced a new data container. This matrix data container is presented as a table or, alternatively, as a two-dimensional greyscale image. The elements of such a table/matrix can be thought of as the Z-values, Z=Z(X,Y), with the X and Y values being the row and column numbers, respectively. The transition from row and column numbers to logical coordinates is done via an explicit, user-defined mapping between the two worlds.
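As a simple illustration of such a mapping (assuming a linear mapping – in LabPlot the mapping is whatever the user defines), a matrix M with R rows and C columns covering the logical ranges [x_min, x_max] and [y_min, y_max] could assign the element in row i and column j the coordinates

\[ x_i = x_{\min} + \frac{i}{R-1}\,(x_{\max}-x_{\min}), \qquad y_j = y_{\min} + \frac{j}{C-1}\,(y_{\max}-y_{\min}), \qquad Z(x_i, y_j) = M_{ij}. \]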


The most prominent application of such a data container one can think of is plotting of 3D-data. So, this new functionality can be regarded as one of the preparation steps towards 3D-plotting in LabPlot. More on this in Minh’s recent blog.

The matrix data can either be entered manually or imported from an external file. Similar to the data generation for a column in a spreadsheet, the matrix can also be filled with constant values or via a formula. The screenshot below shows the image view of a matrix together with the formula that was used to generate the matrix elements:

Generation of matrix elements via a function

The handling and generation of data in the spreadsheet also got some attention. New operations on columns in the spreadsheet were implemented – reverse, drop and mask values. When specifying which values to drop or to mask, several operators (“equal to”, “greater than”, “less than”, etc.) are available. These operations can help to remove (or to hide) outliers in a data set prior to, e.g., performing a fit on it.

Furthermore, the generation of new data via a mathematical expression in the spreadsheet was extended to support multiple variables. It is now possible to define a multivariate function and to provide a data set (a column in a spreadsheet) for each of the variables. The corresponding dialog supports the creation of an arbitrary number of variables.

Multivariate function

The formula used to generate the values in the column, the names of the variables and the columns provided for each of the parameters are saved and can be changed again in the formula dialog afterwards, if required.

To help the user better organize and group the different data containers (Spreadsheet and Matrix), we introduced a new object, Workbook. It serves as the parent container for multiple Spreadsheet and/or Matrix objects and puts them together in a view with multiple tabs. With folders it is already possible to bring some structure into the project explorer and, e.g., to group together several interconnected spreadsheets with data stemming from text files of similar origin, like the red, green and blue values of an image imported into three different matrices. With Workbook the user now gets another, additional way of grouping.


Data Import:
The import of external data into LabPlot was greatly extended. New data formats – binary, image, NetCDF and HDF5 – are supported now. The preview of all supported file types in the import dialog was improved. For data formats with complex internal structures like NetCDF and HDF5, the content of the file is presented in a tree view that allows comfortable navigation through the file. The screenshots below show two examples, for a NetCDF file (136.9MB) and for an HDF5 file (1.6MB):

NetCDF import

HDF5 import

The next screenshot shows the data set ROSE from the example NetCDF file mentioned above, imported into a matrix in LabPlot; the image view was used.

ROSE data set imported into a matrix

The next new feature related to data import improves the handling of compressed files. Data compressed with gzip, bzip2 or xz can now be imported directly – the decompression happens transparently for the user.

We’re still in the process of completing the 2D-plotting part of the application. Though this part is already very feature-rich in LabPlot, there are still a couple of gaps, and we close some of them in this release.
We implemented curve filling – the area below or above the curve, or to its left or right, can be filled with a solid color, a color gradient or a user-specified image.

curve filling

Sometimes there are gaps in the data set – no values are available, or some values don’t have a valid numerical format (“not a number”). When plotting such a data set it is often desirable not to interpolate between the data points in the region with missing data. This is now the default behavior in LabPlot and can be changed via the option “Skip Gaps”. The screenshot below shows two data sets with some masked data, plotted with the option “Skip Gaps” switched off and on, respectively.

skip gaps

For the format of axis ticks, multiples of Pi can now be used – useful when plotting periodic functions. Minh Ngo, one of the three GSoC students who worked on LabPlot this summer, contributed this small feature when he started to become familiar with LabPlot’s code.

Lock screen security of phones

Thu, 2015/09/17 - 11:54am

These days we can read a lot about a lock screen vulnerability in the Android system. Given that I have put quite some thought into how we can use Plasma’s lock screen on our phone system, I take the incident as an opportunity to share some thoughts about the topic. The tl;dr is “much ado about nothing”.

In Plasma we have what is, in my opinion, a rather secure lock screen infrastructure. Of course it suffers from the general problems of X11, but once it’s ported to Wayland it will be truly secure (till the first exploit is found). Given that, I would like to use our lock screen architecture on the phone as well. It’s secure by not letting anyone in even if the lock screen crashes (one of the problems hit in the Android exploit), by ensuring nothing else is rendered, and by ensuring no input is passed to any other application. So awesome! It will be secure!

But on second look we notice that the requirements on phone and desktop are different. On a phone we need to allow a few exceptions:

  • Accept phone calls even if screen is locked
  • Interact with notifications (e.g. alarm clock)
  • Allow emergency phone calls

The last item is also an important part of the puzzle for the Android exploit. These exceptions directly conflict with the requirements for our lock screen on the desktop. To quote:

Blocking input devices, so that an attacker cannot interact with the running session

It allows interacting with the running session (even more with the hardware) and it doesn’t block input devices any more.

Over the last months I have spent quite some time thinking about how we can combine these requirements without compromising security, and so far I haven’t come to a sufficient solution. All I see is that if we allow applications (e.g. the phone app) to bypass the lock screen, we in truth add a hole to the architecture – and if there is a hole, you can get through it. There will then be ways to bypass the security. No point in fooling ourselves. A phone app is not designed for the security requirements of a lock screen.

Now phone calls are not all we need to care about on a lock screen – those could be handled by e.g. integrating the functionality into the greeter app. Users might also want to take photos without having to unlock the screen (another piece of the Android exploit). From a security perspective it’s a questionable feature, but I can understand why it got added. And this feature directly adds a huge hole: it writes to the file system. I can easily imagine ways to bypass the lock screen from a camera app, get to the file system, etc.

At this point we need to take a step back and think about what we want to achieve with a lock screen. On the desktop it’s clear: if there is a keyboard somewhere, you should not be able to penetrate the session even if you have hours to try. But on a phone? Does this requirement hold? If I have the chance to attack the lock screen of a phone unattended, it means I physically have it. For desktop hardware we can say that the lock screen doesn’t protect against screwdrivers. This also holds for phones. If one has enough time, it’s unlikely that one can keep the attacker out, and the lock screen is most likely not the weakest link in the chain. Phones have things like fingerprint readers (easy to break), various easy-to-reconstruct passphrase systems, simple passwords, etc. So the lock screen itself is relatively easy to bypass, and then we have not even looked at all the things one can do when attaching a USB cable…

Given that, might the requirements for phone security be different? Maybe it’s not about blocking input devices and preventing anyone from getting in? Maybe the aim is only to hold off people who have access to the device when it is unattended for a few moments?

If that’s the aim, our desktop lock screen architecture might even be overdone, adding holes to it would be wrong, and we shouldn’t share the code. It also means that the Android vulnerability doesn’t matter much. The exploit is a complicated process needing quite some time. The lock screen prevents access for uneducated people and for those having just a few moments of unattended access. It only breaks down in situations where it might not matter: when the attacker already physically has the device.

Randa Meetings 2015 are History – But …

Wed, 2015/09/16 - 7:30pm

I’m exhausted and tired, but it was great and a lot was achieved. As people are just starting to report about it and publish blog posts, it was decided to prolong the fundraiser for another two weeks. Thus it will officially end on the 30th of September 2015. The reason for this prolongation is the shaky internet connection we had in Randa during the last week. Most people will report about what they did and achieved in the next days.

And if you are interested, you can still check out what was planned for the meetings in the middle of the Swiss Alps. There are some notes about the achievements too. So don’t stop supporting this great way of bringing you the software and freedom you love.


Bugs fixed in Ark 15.08.1

Wed, 2015/09/16 - 5:38pm

Ark 15.08.1 was released as part of KDE Applications yesterday. This release contains a handful of bugfixes, including a fix for a long-standing, highly-voted bug. The bug was first reported in 2009 and had 738 votes in bugzilla.

The bug caused drag-and-drop extraction of multiple selected archive entries to not function properly. When selecting multiple entries in an archive and dragging them to e.g. Dolphin for quick extraction, the selection would previously be undone and only the single entry under the mouse cursor would be extracted. This is now fixed so that all selected entries are extracted. Dragged files are simply extracted without their path, while for dragged folders any subfolders/files beneath them are also extracted. This is comparable to how most file archiving software works.

[Screenshot: multi-selection drag-and-drop extraction]

NB: Currently, for zip, rar and 7z archives, the selected entries are extracted with full path, but this will be fixed in a subsequent release.

Among the other changes in this release, there is a fix for extracting rar archives with unrar version 5 (bug 349131). If one or more of the destination files already existed, Ark would stall and the extraction process would never complete. This was caused by Ark only supporting the overwrite prompt of unrar versions 3 and 4.

Also, the MIME type detection when opening archives was made more robust, so archives with a wrong filename extension should now be opened correctly by Ark (bugs 101170 and 265971).

Enjoy the release and look out for a great release in December with several new features :)

Thanks to Elvis Angelaccio and Raphael Kubo da Costa for reviewing the code changes.

Help ownCloud rock SCALE and FOSDEM in 2016!

Wed, 2015/09/16 - 3:04pm
FOSDEM and SCALE are Europe's and North America's biggest FOSS events, respectively, and, of course, we'd love to run a booth there again. We had a good time last year – just check out my overview blog and the detailed blogs about FOSDEM and SCALE. It is time to start preparing again to have as much fun and impact as last year!
FOSDEM

For FOSDEM we will request a booth again, and like last year I am sure it will be very well visited, so we need help talking to the visitors!

My experience from last year was that many use ownCloud (and love it). These users often are interested in hearing and seeing what is coming, so I usually have a demo machine with me.

People new to ownCloud are almost invariably very interested and you can help them get started.

Note that you don't need any particularly deep insight into ownCloud to be able to help. For people new to ownCloud, a general overview of how it works for you as a user is already a huge help, and we always have people around who can help with the harder questions.
Besides the booth, it'd be great if we get some talks in at various ownCloud-related devrooms, like the decentralization room and such. See some info on this page and stay tuned for the announcements of the devrooms.

SCALE

The 14th SCALE moves to a new location, promising to be bigger and better than ever. I sure want to be at that epic first in a new venue and so should you!

For SCALE, too, we'll try to get a booth again and we expect it to be very well visited, just like last year. SCALE is a very cool event with many friendly folks. It surprised me how many people already knew about ownCloud, but there were still hundreds we could delight with the knowledge that a real free solution exists for their cloudy needs. We can really use some hands with this!

And as with FOSDEM – you don't need to be an ownCloud expert to be able to help out. Being able to explain the concept from the users' point of view is the most important thing!

I'll submit a talk or two, but if you have anything you'd like to talk about, SCALE, too, has a call for papers open.

For both events we have the ability to help you with travel and hotel costs if need be. Just contact me directly about that and we can figure things out!

ownCloud needs you!

Any Object Left Without a Parent Will Be Destroyed

Tue, 2015/09/15 - 5:02pm

Recently we had to fix a tricky crash at work: our Qt Quick app was crashing randomly when switching between two pages of the app.

The backtrace looked as if a QObject::deleteLater was trying to delete an already deleted object. Running the app with Valgrind confirmed we were trying to delete an already deleted pointer.

After much debugging, we found out the reason for the crash. Our app contains a C++ Qt model, which wraps a list of QObject. Some properties of the QObject are exposed as roles in the model, but we also have a getter which returns the QObject itself.

Here is a rough summary of the code.

The object used for the rows of the model:

class MyObject : public QObject
{
    Q_OBJECT
public:
    MyObject(int value, QObject* parent = 0);
    Q_INVOKABLE void doSomething();
};

The model looks like this:

class MyModel : public QAbstractListModel
{
    Q_OBJECT
public:
    MyModel(QObject* parent = 0)
        : QAbstractListModel(parent)
    {
        mObjects << new MyObject(12) << new MyObject(34);
    }

    ~MyModel()
    {
        qDeleteAll(mObjects);
    }

    Q_INVOKABLE MyObject* objectAt(int row) const
    {
        return mObjects[row];
    }

private:
    QList<MyObject*> mObjects;
};

The QML code uses the model like this:

ListView {
    model: MyModel {
        id: myModel
    }
    delegate: Row {
        Text {
            text: model.value
        }
        Button {
            text: "Do something"
            onClicked: {
                var myObject = myModel.objectAt(model.index);
                myObject.doSomething();
            }
        }
    }
}

Can you spot the bug?

Here is what's wrong: when QML gets a MyObject instance through the objectAt() method, it receives a QObject, but this QObject has no parent (since the MyModel constructor creates the MyObject instances without setting a parent). What does QML do if it gets a parent-less QObject? The answer is in the QQmlEngine::ObjectOwnership documentation: it takes ownership of it... and will garbage collect it when it feels like doing some clean up. If the QML garbage collector runs before the MyModel destructor, the destructor will crash in the qDeleteAll() call...

We fixed it by making the MyObject instances children of the model. Since the objects now have a parent, QML does not take ownership of them anymore. We had assumed a parent was not necessary because we were taking care of deleting all instances with the qDeleteAll() in the destructor, but in this case that turned out to be an error. Another possible approach (which we did not try) would have been to explicitly set the ownership using QQmlEngine::setObjectOwnership().
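For illustration, here is a minimal sketch of the fixed model, reusing the rough summary above (rowCount(), data() and the rest are still omitted, as in the original snippets):

class MyModel : public QAbstractListModel
{
    Q_OBJECT
public:
    MyModel(QObject* parent = 0)
        : QAbstractListModel(parent)
    {
        // Passing 'this' as parent: the objects are now deleted together with
        // the model by Qt's parent-child mechanism, and QML no longer assumes
        // JavaScript ownership for them when they are handed to QML code.
        mObjects << new MyObject(12, this) << new MyObject(34, this);
        // The qDeleteAll() in the destructor is no longer strictly needed.
    }

    Q_INVOKABLE MyObject* objectAt(int row) const
    {
        // Alternative approach (not tried in the post): keep the objects
        // parent-less but state the ownership explicitly before handing
        // them to QML. Requires #include <QQmlEngine>.
        // QQmlEngine::setObjectOwnership(mObjects[row], QQmlEngine::CppOwnership);
        return mObjects[row];
    }

private:
    QList<MyObject*> mObjects;
};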

So remember, do not leave any QObject unattended without a parent, otherwise the QML engine will destroy it!

Breeze is finished

Tue, 2015/09/15 - 12:48pm

Talking about Kubuntu, Arch Linux, openSUSE, … Questions about where to find the Plasma widgets, UI sessions about KMail, Plasma, Kdenlive … Where Plasma Mobile should go, how the user should navigate through Plasma and the phone applications. Starting improvements for Plasma. Talking about how the VDG can improve the workflow between designers and developers. Writing bug reports, fixing bugs. Making code changes, discussing them on ReviewBoard. Talking to the devs to fix some UI stuff. Going hiking and not stopping talking about Plasma and KDE. That was Randa for me. It was amazing.

The VDG was represented by Jens, Heiko, Uri and me. I met Uri for the first time since I started making Breeze icons. It was so cool that we could work together to finalize Breeze. And we made it happen: all Oxygen icons from the oxygen repository are now available in Breeze. To get there, we made 1,600 new icons during the sprint, so we now have 4,780 Breeze icons. Uri and I have been working on the Breeze icons since spring 2014, and we made 55% of the missing icons in one week. It is impressive what you can do at KDE sprints. I hope we can do a design sprint in 2016.


In the diagram you can see that we now support 24px GTK icons, added the missing device icons, made sure the KDE applications get Breeze icons, and updated our action icons for the 16px and 22px sizes. We now not only support the Oxygen icons, we also include the Krusader, Amarok and digiKam icons. We now have 1,034 action icons.

As Qt 5 applications take their icons first from the icon set selected in the system settings, digiKam 5.0 (still in beta) can switch between Breeze, Breeze Dark and Oxygen with one click on the look-and-feel package. In KDE4 applications you have to rename the folder in /usr/share/kde4/apps/xxx/icons/ (Kubuntu) and you get the Breeze icons for digiKam and Amarok (at your own risk). Our changes are in the Plasma/5.4 branch, so users will get our Randa work asap.

For the Plasma 5.5 release we will go through the KDE applications and add the last missing icons. In addition, we will remove wrongly used icons, either by sending fixes to the applications or by filing bug reports asking the applications to use the right icons.

As I wrote in the first paragraph, we did a lot of user interface work together with the developers (Heiko made a blog post about some of it). We showed how the VDG works and let the developers know that we would like to work together with them, not only by making the first design draft. We are excited to see where our journey will go.

I have now been working for KDE in my spare time for 10 months and I enjoy it very much. I’m quite happy that the VDG exists, because it is much easier to start the KDE journey within a small group. If you would like to join us, go to the forum and make your first comment. The VDG is not an elite group where you have to have studied design, … If you like to focus on usability and visual stuff, … Welcome!

digiKam Recipes 4.9.1 Released

Tue, 2015/09/15 - 9:04am

A new release of digiKam Recipes is ready for your reading pleasure. This version features completely revised material on one of the most complicated subjects in digital photography: color management. The completely rewritten Color Management in digiKam recipe now provides an easy-to-understand introduction to key color management concepts and explains how to set up a color-managed workflow in digiKam. The recipe also offers practical advice on calibrating and profiling digital cameras and displays.



184 Qt Libraries

Mon, 2015/09/14 - 2:00pm

We have collected 184 third party Qt libraries on Inqlude now. This is a pretty complete map of the Qt ecosystem, quite an impressive number, and lots of useful libraries extending Qt for many purposes.

Inqlude is based on a collection of manifests. If you would like to add or update a library, simply submit a pull request there. The inqlude tool is used to manage the manifests; it generates the web site, but you can also use it to validate manifests or download libraries. There is also inqlude-client, a C++ client for retrieving the sources of libraries via the data on the Inqlude web site. It's pretty handy if you want to integrate some library into your project.

If you want a brief introduction to Inqlude, you might want to watch my award-winning lightning talk from Qt Dev Days 2013: "News from Inqlude, the Qt library archive". It still provides a pretty accurate explanation of what Inqlude is about and how it works.

A big part of the libraries collected on Inqlude comes from KDE, as part of KDE Frameworks. We just released KDE Frameworks 5.14: 60 Qt addon libraries which represent the state of the art of Linux desktop development and more.

Inqlude as well as KDE Frameworks are a community effort. Incidentally they both started at a developer sprint at Randa. Getting community people together for intense hacking and discussions is a tremendously powerful catalyst in the free software world. Randa exemplifies how this is done. The initial ideas for Inqlude were created there and last year it enabled me to release the first alpha version of Inqlude. These events are important for the free software world. You can help to make them happen by donating. Do this now. It's very much appreciated.

One more recent change was the addition of a manifest covering all libraries that are part of the Inqlude archive. This is a JSON file aggregating the latest individual manifests. It makes it very easy for tools that don't need to deal with the history of releases to get everything in one go. The inqlude client uses it, and it's a straightforward choice for integration with other tools which would like to benefit from the data available through Inqlude.

At the last Qt contributors summit we had some very good discussions about more integration. Integration with the Qt installer would allow getting third-party libraries the same way you get Qt itself, and integration with Qt Creator would allow finding and using third-party libraries for specific purposes natively in the environment you use to develop your application. One topic which came up was a classification of libraries to provide some information about stability, active development, and support. We will need to look into whether there are automatic indications we can offer for activity, or what else we can do to help people find suitable libraries for their projects.

It's quite intriguing to follow what is going on in the Qt world. As an application developer you have a lot of good stuff to choose from, and Inqlude intends to help with that. The web site is there and will continue to be updated, and there are also a number of ideas and plans for how to improve Inqlude to serve this purpose. Stay tuned. Or get involved. You are very welcome.

Interview with Lucas Ribeiro

Mon, 2015/09/14 - 5:40am


Could you tell us something about yourself?

Hi, I am a 24-year-old Brazilian artist who lives in Sao Paulo. Married, and the eldest of three brothers. Watching my mother make a lot of pencil portraits when I was a child inspired me to do the same years later, since I saw it was not impossible to learn how to draw. I started to draw with pencils when I was 13, but nothing serious until I reached the age of 20. Then I began to learn digital painting and watercolor and to improve my drawing skills (self-taught). Now I have worked on book covers, character design, a mascot for the government of Sao Paulo and recently even on graphic design. I use mainly Krita, but previously used GIMP, MyPaint, ArtRage, Sketchbook Pro, SAI… but Krita fits everything that I need better.

Do you paint professionally, as a hobby artist, or both?

I’m starting to do more freelance jobs. So I’m combining my hobby with my profession, which is a blessing. So, it is both.

What genre(s) do you work in?

I’m very eclectic, but I have to say that fantasy art and the cartoon style with a more realistic approach, like the concept art of Pixar and Dreamworks, are my favourites, and I plan to dedicate myself more to these styles.

Whose work inspires you most — who are your role models as an artist?

Well, this list is very, very long. I need to say that movies and books inspire me a lot: Lord of the Rings, Star Wars and the Disney animated movies. Inspiration can come from anywhere at any time. A song, a trip. But speaking about artists, I can’t fail to mention David Revoy and Ramon Miranda for doing excellent work with open source tools.

How and when did you get to try digital painting for the first time?

Well, I think that was with MS Paintbrush in the 90s. Even though I was using a mouse, I was a happy child doing some ugly stuff. But when I started to draw seriously, I heard of GIMP Paint Studio and gave it a try. After that I started to try different tools.

What makes you choose digital over traditional painting?

Actually I draw a lot with pencils, pen, ink and watercolor. But digital painting gives you endless possibilities for combinations and experiments without any cost (both in money and in time).

How did you find out about Krita?

I was looking for tips and resources for painting with GIMP, until I found out that David Revoy was using Krita to make the free “Pepper & Carrot” webcomic, which is awesome. When I looked at the pictures, I was impressed.

What was your first impression?

The brushes feel very natural, almost like the real world. The way the colour blends is very unique; there was no comparison with Photoshop in that, for example. The experience of painting with Krita was really natural and smooth, even though on my old laptop it lagged a little bit in previous versions of Krita.

What do you love about Krita?

In the first place: the brush engines and transform tools. I think they are the best on the market at the moment. The brush editor is very intuitive and powerful too.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Maybe some speed improvements. When I’m using more layers at high resolution I feel it.

What sets Krita apart from the other tools that you use?

The way the brushes feel. There is no comparison with other painting tools. It’s very natural; that way I feel I am really painting and not just using a digital tool.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Every day I make studies or a new illustration. But I think I would choose the “Gangster Pug”. I used a lot of wet brushes, which is very similar to painting with watercolor in the real world. It’s basically the same workflow.

What techniques and brushes did you use in it?

Wet brushes, and airbrush with blending modes like Multiply and Overlay. The Muses and David Revoy’s V6 brush pack are what I use most.

Where can people see more of your work?

Soon I’ll have a new website and portfolio. But right now people can see my work on Behance and Facebook. I invite everyone to visit me at these links, especially because 90% of my work is done in Krita now. For stuff like graphic design I use Inkscape or Blender.

My page:

Anything else you’d like to share?

You can add me on Facebook or send me an email and share your thoughts. If you have not used Krita yet, try it. I think it’s the best tool on the market at the moment, and it’s really a production machine, whether you’re interested in VFX painting, illustration, comics and concept art, or just in painting and sketching.

Vector Tiling in Marble Maps @Randa

Sun, 2015/09/13 - 8:46pm

Earlier today I returned from the KDE sprint in Randa, where Torsten, Sanjiban, I and 50 other KDE developers met in the Swiss Alps for a week of hacking. Our Marble subgroup concentrated on vector tiling in Marble Maps. After some very productive days we have a first prototype of OpenStreetMap vector tile support ready, both on the desktop and on Android. It will become the new map rendering engine for Marble Maps on Android in future releases.

Our main goal for the Randa Sprint was getting the vector tiling tool chain running. This includes splitting OpenStreetMap data into smaller chunks and providing them on a KDE server (thanks, Ben). These chunks are then downloaded by Marble clients on the Desktop or Android and provide the data for the map you see. Fortunately we got the server infrastructure and a basic vector tile generation tool up and running within the first two days, and had it generate vector tiles for a couple of test regions for the rest of the week.
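To give a feel for what “splitting into smaller chunks” means: map tiles are commonly addressed by a zoom level and x/y indices. The sketch below computes those indices for a given coordinate using the well-known OpenStreetMap slippy-map scheme – a generic illustration only, not Marble’s actual tiling code, whose layout may differ:

#include <cmath>
#include <cstdio>

// Compute the x/y indices of the tile containing a given lat/lon at a zoom
// level, following the standard OpenStreetMap slippy-map tile numbering.
static void latLonToTile(double latDeg, double lonDeg, int zoom, int& x, int& y)
{
    const double pi = 3.14159265358979323846;
    const double latRad = latDeg * pi / 180.0;
    const double n = std::pow(2.0, zoom); // number of tiles per axis at this zoom
    x = static_cast<int>(std::floor((lonDeg + 180.0) / 360.0 * n));
    y = static_cast<int>(std::floor(
        (1.0 - std::log(std::tan(latRad) + 1.0 / std::cos(latRad)) / pi) / 2.0 * n));
}

int main()
{
    int x = 0, y = 0;
    latLonToTile(46.10, 7.78, 14, x, y);            // roughly Randa, Switzerland
    std::printf("zoom 14 tile: x=%d y=%d\n", x, y); // the chunk a client would fetch
    return 0;
}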

For texture tiles, the server is responsible for rendering the map and client devices just display images. This approach is easy to implement for clients, but no changes to the look of the map are possible. Vector tiles require a client that is capable of rendering the data by itself. Even though that pushes more work onto the client, it has a lot of advantages: the map always looks crisp and all elements can be adjusted dynamically. Some of that can already be seen in direct comparison, as shown in this screenshot (best viewed in original size):


Marble has been able to render vector data since the very start, but support for OSM vector data only started to emerge recently. With a working tile server in place, we could now concentrate on the fun part: extending and improving the OSM vector rendering itself. Beaches, buildings with real height, glaciers, butchers, car sharing and narrow-gauge railways are just a few examples of elements we added to the rendering. There are still a lot of further elements and details to consider, but we have covered all major map features already.


The Randa sprint brought us much closer to a releasable (end-user ready) version of vector tiling. Chances are good that this will still happen this year. Our public beta version of Marble Maps in the Android Play Store will get the update automatically. You can become a beta tester if you’re interested in seeing it emerge. We now also have the weekly Marble Café, where everybody is invited to get involved with Marble and learn about recent developments.

Last but not least I’d like to thank everyone who helped make the Randa sprint possible, especially the awesome organization team around Mario and his family and friends, as well as everyone who donated and supported it.

Progress of KDE on Android Infrastructure

Sat, 2015/09/12 - 10:23pm

It is 2015, and Android is a very important platform for (mobile) applications and developers. — This could just as well have been written a year ago, and indeed it was stated then by several people. Those people also started porting some first applications from the KDE/Linux world to the Android platform. Yet, looking at what happened over the last year, as of now we only have KAlgebra, GCompris and (since recently) Marble Maps available on Android.

The interesting question is: what holds back the many KDE applications that would also fit on an Android platform? During this year’s Randa sprint we took the opportunity to sit together and exchange what we learned during the last year. Looking at the different approaches to porting applications to Android, we learned that just setting up a build system is a far from trivial job and probably one of the main points that holds people back from playing around with Android. Also, the availability of the KDE Frameworks libraries has not really been tested in detail yet, and without availability guarantees there is uncertainty about how easy a port to Android might be.

To overcome these problems, we start with some simple approaches:

  1. Provide a simple and easy-to-use build environment.
    From the several existing toolchains for building Android applications, we started to reduce the number of different ones within the KDE projects to one. The new general toolchain (provided for some time via extra-cmake-modules) gained a new tutorial on how to use it. Further, by providing a build script for the frameworks libraries, we make it easy to set up a whole build environment that can directly be used for porting KDE applications that use KF5.
  2. Make development easy for new people.
    Initial work was started to create a Docker image as a simple-to-use SDK. The goal: run one command and get a fully set up build environment. With this approach we follow the path that was started for Plasma Mobile.
  3. Availability of KDE Frameworks 5.
    We started to look into which frameworks can currently be built for Android. The list is already notable: kconfig, kcompletion, kitemmodels, kitemviews, kcodecs, karchive, kguiaddons, kwidgetsaddons, attica, kdnssd, kapidox, kimageformats, and kplotting. To get more frameworks building, the current two major blockers are ki18n and kcoreaddons, which both need actual changes to the code to support the Android platform with its stripped-down libc.

Looking at what was already achieved, the sprint itself was essential for getting all the people together to really work on this topic. As always, you can support this work by donating to KDE’s sprints.
Though the work is not yet done, the foundation is laid to post some interesting news in the next weeks.

FreeBSD Plasma 5.4.1 and Frameworks 5.14.

Sat, 2015/09/12 - 8:08pm

Just a quick note that the KDE-FreeBSD “bleeding edge” ports repository, area51, has been updated to the most recently released KDE Frameworks 5.14.0 and Plasma Desktop 5.4.1. These packages have been poudriere-built on 9.3 x86, 10.2 amd64 and 11 amd64 (no, I’m not building Plasma Desktop for the BeagleBone just yet). Information on area51 is on the K-F site, although you’ll want the plasma5 branch, which is at

The Wildest Ubuntu Wily release party in Florida!

Sat, 2015/09/12 - 5:39pm

With food, drinks and prizes, who really doesn’t want to come to this Ubuntu release party in Florida? Come party hard for the latest Ubuntu release! We’ll have Ubuntu Touch devices ranging from the Nexus 4 to the Nexus 10, all running Ubuntu! We’ll have hardware running the latest Ubuntu release of course, as well as Kubuntu and Ubuntu GNOME. More details are coming later, so RSVP as soon as possible to get in on the fun! We have a[1] page as well as a[2] for you to use.





KDE Telepathy ThinkLight Plugin

Sat, 2015/09/12 - 11:49am

Do you own a ThinkPad? Good!
Does it have the ThinkLight? Good! Then this post might interest you!

I just wrote a KDE Telepathy plugin that blinks the ThinkLight when you get an incoming message.
Sounds almost useless, doesn't it? Maybe not.

I found a good use case for it: sometimes you might be away from the keyboard but near your ThinkPad (e.g. studying); the screen goes black, sounds are off, but you see the ThinkLight blinking - you got a message!
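For the curious, toggling the light boils down to writing "on" or "off" to /proc/acpi/ibm/light (the thinkpad_acpi file discussed below). Here is a minimal, purely illustrative blink routine - not the plugin's actual code, and a real plugin would use a QTimer rather than blocking sleeps:

#include <QFile>
#include <QThread>

// Switch the ThinkLight by writing to the thinkpad_acpi interface.
// Needs write permission on /proc/acpi/ibm/light (see below).
static void setThinkLight(bool on)
{
    QFile light(QStringLiteral("/proc/acpi/ibm/light"));
    if (light.open(QIODevice::WriteOnly))
        light.write(on ? "on\n" : "off\n");
}

// Blink a few times, e.g. when an incoming message arrives.
static void blinkThinkLight(int times = 3, int intervalMs = 200)
{
    for (int i = 0; i < times; ++i) {
        setThinkLight(true);
        QThread::msleep(intervalMs);
        setThinkLight(false);
        QThread::msleep(intervalMs);
    }
}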

To enable it, you just have to fetch the source code, then build and install as usual with CMake.

There's just one annoyance at the moment: you need write permission on /proc/acpi/ibm/light. I'm looking for a solution for this, but have found nothing short of changing that file's permissions manually. Any ideas?

There's also a tool, thinkalert (mirror), which allows turning the ThinkLight on/off without being root by using suid. If you prefer this way, you can fetch the code from the thinkalert branch instead.

Have fun!

Sometimes the day begins…

Fri, 2015/09/11 - 9:04pm

…with nothing to look forward to.
This is the beginning of one of the most amazing picture books I have seen so far.
The Red Tree
It takes only five minutes to “read” through it, but the story will be formed in your imagination by memories of your own days like these. And there are pretty pictures. :)

Why am I telling you this? Because my current days feel a bit like it. Why?
Not going to the Desktop Summit.

After years of letting Akademy after Akademy pass by because it was just too far away, I actually have to skip the one in Germany as well. Reading Planet KDE and Planet Debian these days just adds to the grief. But you all know it, you all fear it, you have all met it: Real Life!

There are just too many things going on right now and thus I just do not feel like it.

So, I wish you all a nice Desktop Summit and look forward to the next first meeting I will attend. :)

Kdenlive at Randa meetings - first report

Fri, 2015/09/11 - 3:48pm

We arrived here on Wednesday and started discussing our plans on the train. Here is a short résumé of what happened during these first days:

We decided to work on bug tracking during the first day, since the Kdenlive 15.08.1 release was imminent. We fixed quite a few bugs that will improve stability of the 15.08 branch.

The second important goal was to take advantage of having so many great people around us to improve Kdenlive. So we had contacts in these domains:

Mission statement:

We started the work on our "Mission Statement" to better define which users / jobs Kdenlive is designed for. This will help us to focus on important tasks for our target audience. We are in contact with people from the VDG (KDE's Visual Design Group) to get feedback on that process.

UI design and workflow:

We had a session with people from the VDG (KDE's Visual Design Group) to review Kdenlive. We also had the chance of having a user of professional video software who had never seen Kdenlive, so he was the perfect test subject for our UI review session. We wrote a wiki page on the ideas that came out of this. We encourage you to visit that page and give us feedback (a few issues were already reported on the bug tracker). As time permits, we will try to implement these ideas to improve the workflow.


There are a few issues/crashes in Kdenlive related to Qt5/KF5 bugs, and we made some contacts to get some help on this.

Next features:

We discussed some of the features we want to integrate for the 15.12 release, and other long term goals.

The features we would like to integrate for the December release are: merging the animation feature from GSoC, same-track transitions (e.g. overlapping two clips to crossfade), and a way to copy/paste between projects.