Planet KDE


Tue, 2014/02/18 - 8:31am

It's been brought to my attention that the Icecream documentation more or less suggests it is necessary to manually set up $ICECC_VERSION (which also involves creating the environment tarball with the compiler and so on). That is incorrect. I've already updated the documentation to say that, like with pretty much everything, Icecream simply figures it out itself by default.

So if you happen to use $ICECC_VERSION, unless you know why you do that (e.g. cross-compilation), don't. It's not only simpler but also better to leave it up to Icecream to package the system compiler as necessary, as it simply works, and avoids possible problems (such as updating the system compiler and forgetting to update the tarball).
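In practice this just means making sure the variable is not exported anywhere in your shell setup. A minimal sketch, assuming a POSIX shell:

```shell
# Not cross-compiling? Then don't set ICECC_VERSION at all and let
# Icecream package the native compiler itself.
unset ICECC_VERSION
if [ -z "${ICECC_VERSION:-}" ]; then
    echo "ICECC_VERSION not set - Icecream will build the tarball itself"
fi
```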

[Howto] First Steps With Ansible

Mon, 2014/02/17 - 11:25am

Ansible is a tool to manage systems and their configuration. Without the need for an agent installed on the client, and with the ability to launch commands from the command line, it seems to fit between classic configuration management tools like Puppet on the one hand and ssh/dsh on the other.


System/configuration management is a hot topic right now. At Fosdem 2014 there was an entire track dedicated to the topic – and the rooms were constantly overcrowded. There are more and more large server installations out there these days. With virtualization, it again becomes sensible and possible to have one server for each service. All these often rather similar machines need to be managed, and thus central configuration management tools like Puppet or Chef became very popular. They keep all configuration stored in recipes on a central server; the clients connect to it and pull the recipes regularly to ensure that everything is in order.

But sometimes there are smaller tasks: tasks which only need to be done once or once in a while, but for which a configuration management recipe might be too much. It might also happen that you have machines where you cannot easily install a Puppet client, or machines which cannot contact your configuration management server via pull due to security concerns. In those situations ssh is often the tool of the sysadmin's choice. There are also cluster or distributed versions available, like dsh.

Ansible now fits right in between these two classes of tools: it does provide the possibility to serve recipes from a central server, but does not require the clients to run any agent besides ssh.

Basic configuration, simple commands

First of all, Ansible needs to know the hosts it is going to serve. They can be managed on the central server in /etc/ansible/hosts or in a file configured in the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and can carry additional information like user names, ssh port and so on:

[web-servers]
ansible_ssh_port=222 ansible_ssh_user=liquidat

[db-servers]
blue ansible_ssh_host=
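For illustration, a fuller inventory could look like the following sketch – the host names and the IP address here are made up, not taken from a real setup:

```shell
# Write a small example Ansible inventory; the host names and the IP
# are hypothetical illustrations
cat > hosts.example <<'EOF'
[web-servers]
www01.example.com ansible_ssh_port=222 ansible_ssh_user=liquidat
www02.example.com ansible_ssh_port=222 ansible_ssh_user=liquidat

[db-servers]
blue ansible_ssh_host=192.168.1.50
EOF

grep -c '^\[' hosts.example   # 2 groups defined
```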

As soon as the hosts are defined, an Ansible "ping" can be used to see if they can all be reached. This is done from the central server – Ansible is by default a pushing service, not a pulling one.

$ ansible all -m ping
 | success >> {
    "changed": false,
    "ping": "pong"
}
...

As seen above, Ansible was called with the flag "-m", which specifies a module – the module "ping" just contacts the servers and checks if everything is ok. In this case the servers answered successfully. Also, as you can see, the output is formatted as JSON, which is helpful in case the results need to be parsed anywhere.
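Since everything after the per-host status prefix is plain JSON, it can be fed to standard tools. A small sketch – the host name "web01" and the way the prefix is stripped are assumptions for illustration, not part of Ansible itself:

```shell
# Simulated `ansible all -m ping` output for one host ("web01" is made up);
# everything after ">>" is plain JSON
output='web01 | success >> {
    "changed": false,
    "ping": "pong"
}'

# Strip the "host | status >>" prefix and extract the "ping" field
ping_result=$(printf '%s\n' "$output" \
  | sed '1s/^.*>> //' \
  | python3 -c 'import json, sys; print(json.load(sys.stdin)["ping"])')
echo "$ping_result"   # pong
```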

In case you want to call arbitrary commands, the flag "-a" is needed:

$ ansible all -a "whoami" --sudo -K
sudo password:
 | success | rc=0 >>
root
...

The "-a" flag provides arguments to the invoked module. In case no module is given, the argument of the flag is executed on the machine directly. The flag "--sudo" calls the argument with sudo rights, and "-K" asks for the sudo password. Note that this requires all servers to use the same sudo password, so to run Ansible you should think about configuring sudo with NOPASSWD.
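What that NOPASSWD configuration amounts to is a sudoers rule for the ssh user. The following is only a sketch (the user name is the one from the hosts file earlier; always edit sudoers via visudo, not directly):

```shell
# Sketch of a sudoers rule granting passwordless sudo to the ansible
# ssh user (add via `visudo`, do not edit /etc/sudoers by hand):
#
#   liquidat ALL=(ALL) NOPASSWD: ALL
#
# With this in place, `ansible ... --sudo` no longer needs the -K prompt.
```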

More modules

There are dozens of modules provided with Ansible. For example, the file module can change permissions and ownership of a file, or delete files and directories. The service module can check and change the state of services:

$ ansible -m service -a "name=sshd state=restarted" --sudo -K
sudo password:
 | success >> {
    "changed": true,
    "name": "sshd",
    "state": "started"
}

There are modules to send e-mails, copy files, install software via various package managers, for the management of cloud resources, to manage different databases, and so on. For example, the copy module can be used to copy files – and shows that files are only transferred if they are not already there:

$ ansible -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
 | success >> {
    "changed": true,
    "dest": "/home/liquidat/text.yaml",
    "gid": 500,
    "group": "liquidat",
    "md5sum": "504e549603f616826707d60be0d9cd40",
    ...

$ ansible -m copy -a "src=/home/liquidat/tmp/test.yml dest=/home/liquidat/text.yaml"
 | success >> {
    "changed": false,
    ...
}

In the second attempt the "changed" status is "false", indicating that the file was not actually changed since it was already there.


Playbooks

However, Ansible can be used for more than a distributed shell on steroids: configuration management and system orchestration. Both are realized in Ansible via so-called Playbooks. In these YAML files all the necessary tasks are stored which either ensure a given configuration or set up a specific system. In the end the Playbooks just list the Ansible commands and modules which could also be called via the command line. However, Playbooks also offer a dependency/notification system where given tasks are only executed if other tasks changed anything. Playbooks are called with a specific command line: ansible-playbook $PLAYBOOK.yml

For example, imagine a setup where you copy a file, and if that file was copied (so not there before or changed in the meantime) you need to restart sshd:

---
- hosts:
  remote_user: liquidat
  tasks:
    - name: copy file
      copy: src=~/tmp/test.txt dest=~/test.txt
      notify:
        - restart sshd
  handlers:
    - name: restart sshd
      service: name=sshd state=restarted
      sudo: yes

As you can see, the host and user are configured in the beginning. There could also be host groups if needed. This is followed by the actual task – copying the file. All tasks of a Playbook are usually executed. This given task definition does have a notifier: if the task is executed with a "changed" state of "true", then a "handler" is notified. A handler is a task which is only executed if it is called for. In this case, sshd is restarted after we copied over a file.

And the output is clear as well:

$ ansible-playbook tmp/test.yml -K
sudo password:

PLAY [] *********************************************************

GATHERING FACTS ***************************************************************
ok: []

TASK: [copy file] *************************************************************
changed: []

NOTIFIED: [restart sshd] ******************************************************
changed: []

PLAY RECAP ********************************************************************
: ok=3 changed=2 unreachable=0 failed=0

The above example is a simple Playbook – but Playbooks offer many more functions: templates, variables based on various sources like the machine facts, conditions, and even looping the same set of tasks over different sets of variables. For example, here the copy task loops over a set of file names, each of which should get a different name on the target system:

- name: copy files
  copy: src=~/tmp/{{ item.src_name }} dest=~/{{ item.dest_name }}
  with_items:
    - { src_name: file1.txt, dest_name: dest-file1.txt }
    - { src_name: file2.txt, dest_name: dest-file2.txt }

Also, Playbooks can include other Playbooks, so you can have a set of ready-made Playbooks at hand and combine them as you like. As you see, Ansible is incredibly powerful and provides the ability to write Playbooks for very complex management tasks and system setups.
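A minimal sketch of such a combining playbook, using the include syntax of the Ansible versions of the time – the included file names are hypothetical:

```shell
# A top-level playbook that only pulls in other ready-made playbooks
# (file names are made up for illustration)
cat > site.yml <<'EOF'
---
- include: webservers.yml
- include: dbservers.yml
EOF

grep -c 'include' site.yml   # 2 included playbooks
```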


Ansible is a tempting solution for configuration management since it combines direct access with configuration management. If you have your large server data center already configured in an ansible-hosts file, you can use it both for system configuration and for performing direct tasks. This is a big advantage compared to, for example, Puppet setups. Also, you can write Playbooks which you only need once in a while, store them in some place – and use them for orchestration purposes. Something which is not easily available with Puppet, but very simple with Ansible. Additionally, Ansible can be used either pushing or pulling – there are tools for both – which makes it much more flexible compared to other solutions out there.

And since you can use Ansible right from the start, even without writing complex recipes beforehand, the learning curve is not that steep – and the adoption of Ansible is much quicker. There are already customers who use Ansible together with Puppet since Ansible is so much easier and quicker to learn.

So in the end I can only recommend Ansible to anyone who is dealing with configuration management. It is certainly a helpful tool, and even if you don't start using it, it might be interesting to know how other approaches to system and configuration management look.

Filed under: Business, Debian, Fedora, HowTo, Linux, Monitoring, Shell, SUSE, Technology, Thoughts, Ubuntu

Monday Report #3

Mon, 2014/02/17 - 9:12am

In which we talk about alpha and beta, about release methods, moodboards, colors, mockups and icon ideas. We will also talk about gold-star community members and a few other things.

See that background, behind the "K"? You will see more of it in the future :)

Another week has flown by, and what a week it has been! We are getting closer and closer to cracking the future themes for Plasma as well as the Applications visuals. Ideas for layouts are being hammered together and there are plenty of things milling around in the VDG. Consider this blog the Beta to our Alpha work. We are one step ahead of it but present things as they get nailed down definitively. So what you see here is the work done the week before last.


First off, let's talk about releases. We all know how these things are supposed to go: some teasing and then at a given date "The Big Reveal". Not so with the next version of Plasma. One of the things that I am so completely over-the-moon about with the way the Plasma devs are handling the release for Plasma is that it will be done incrementally. What that means is that beyond the basics - the cool backend things, some slight visual changes - the first release of Plasma won't be anything drastic. The next release of Plasma will contain a little bit more in terms of change. The one after that a little bit more.
Why this slow pace? Because it's one thing knowing that your ideas and changes are sound and correct - another to see how we all react to them in real life. By changing bit by bit there is more room for community participation - for everyone to try the changes, see what they think, give feedback and suggestions, and change or add to them if we have to - when there aren't ten thousand different things changed at once, hinging on each other's existence to work.
This may not be great for marketing - everyone likes to write about a big splash - but it will be an insane improvement for the community and when the choice is between those two... well you know which side of the toast our bread is buttered on. :-)


Then let's talk about community participation! Now, you all know that there are plenty of things to take part in on the forums, I hope? Plenty of ideas waiting to be commented on, plenty of devs looking for design help. This is one of the easiest and best ways to help KDE in a very practical way. Don't be shy. Jump in and get the ball rolling if you have a great idea - or better yet, look at the ones who need help and we will try to get you in contact with the devs :)
I also want to give a shout-out to one of the community participants (who sadly wants to remain anonymous) for being an inspiration to me personally - you rock!


Now let's get down to the moodboard! Two weeks ago this moodboard was posted in our group as a way of describing the theme we chose before that (presented in last week's Monday Report). A moodboard is a way to define and describe the theme and the emotions so that all members of a team can grasp it, find it accessible, and use it as a center around which to communicate further about other ideas.

The tricky bit with moodboards for something as big and complex as the Plasma Next desktop - unlike, say, a printed publication or an ad campaign - is that you often find yourself striking too far out or being too specific. I think this one was a nice mix of it all and shows the look we're going for: something clean, something futuristic but still bright, "hopeful and human without being too clingy" as it was described. (From top left to bottom right: Eve from Wall-E, cityscape from Mirror's Edge, "Futuristic City", "Cleaner", flat Android icons, still from 2001: A Space Odyssey, a lightbulb and a Pantone card, still from Mirror's Edge and a redone Star Trek Federation insignia.)

Colors. What about colors? The eagle-eyed among you may notice the Pantone card in the moodboard but before you link it with the end result I should post the correct color guide.

Now, picking colors is a huge and tricky subject; it's dangerous and involved a massive amount of debate. But when one of us (Cuan, I think it was) posted the main colors for this, we had a winner. The combination of a greyscale hinting towards the blue side of the color scale and a stronger and lighter version of blue as the main spot color played well together.
Combine that with a broad swath of strong colors from the other sides of the color wheel to counteract the blue-grey scale, used sparingly to create striking visual change at times - a powerful exclamation mark when needed.

You may notice the transparency levels on the right. We will stick with those three levels of transparency in all situations to avoid clashes and tricky details. Transparencies are complex - especially in large flat work-communities with a lot of different designers and devs at play.
Now, remember that colors are tightly connected to the individual themes - this is for the two base themes that are being planned - but the colors will be used for all visuals for Plasma Next: marketing, logos and imagery. My hope is that they will find their way into further design work within the KDE application ecosystem.

There is a final tiny thing about colors - Sunburst Yellow - some have claimed that it feels "dirty" or "smudged" - but we will stick to it for now and see how everything pans out.

What about icons? Well, that's an interesting subject - we have some rather interesting news, not only about Fabian's system tray icons and Acidtray's icon theme (yes, a new icon theme) but also about Andrew Lake's icon work. There will be more info on this in the next Monday Report.
Another thing coming next week is details on visuals and shapes - there is some nice news coming there too :) (soo many fun things that are still secrets .... iiiih)

So next time - there will be more and more community information coming along, and of course next Monday more about Plasma.

A Yakuake update: Frameworks 5, Wayland, More

Mon, 2014/02/17 - 1:34am
KDE Project:

Things have been rather quiet in Yakuake land for a while. 2014 is going to shake things up, though, so it's time for a brief look at what's been going on and where things are headed next.

Frameworks 5

Not long ago I pushed an initial port of Yakuake to Frameworks 5, the next generation of KDE's platform libraries, to Yakuake's git repository. The new code lives on a branch called frameworks for now.

Yakuake on Frameworks 5 is functional and free of the use of any deprecated APIs. The port is mostly complete, too, missing just a few more touches to the first run dialog and skin handling. Here's a screenshot:

Yakuake on Frameworks 5
Ah yup: Looks no different.


Wayland

One of the broader initiatives the community is engaged in this year is enabling our workspaces and applications to use Wayland. Yakuake may just have a small role to play in that.

Historically, Yakuake's relationship with window managers has been plenty interesting. It's managed to crash several of the popular ones at one point or another; it's an unusual application with unusual behavior that exercises window manager code differently from more typical apps. More recently, it was perhaps the first non-workspace process to communicate and collaborate tightly with the compositor, asking KWin to perform the slide-in/out animations on its behalf if available.

The latter is a model for things to come. In Wayland, application windows know intentionally little about the environment they exist in, and instead have to petition and rely on the window manager for things like positioning with no recourse. Yakuake on X11 does this legwork by itself; on Wayland, the comm protocol to the window manager will have to be rich enough to allow for equivalent results.

Having Yakuake ported and ready will allow for it to serve as a useful testcase there.

General feature plans

Yakuake's theming system has been showing its age for a while, so I'm looking to implement a replacement for it. The new system will be based on QML, taking some inspiration from KWin's Aurorae decoration engine. The result should allow themes to support anti-aliased edges and shadows, along with much more flexible control over the placement of Yakuake's UI elements. Existing themes will continue to be supported, however, by way of a new-format theme that knows how to load and display the old ones - the config UI will list both types transparently in the same list.

The other major feature that's been a long time coming is proper support for session restore. This has mostly been held back by missing functionality in the API of the Konsole component that provides Yakuake with terminal emulation. Unfortunately that situation remains unchanged for now, but I'm still hoping to eventually get around to some Konsole hacking to satisfy Yakuake's needs there.

Schedule thoughts

Frameworks 5 uptake in distributions has been very promising so far, with several distros (I'm aware of Fedora and Kubuntu, but there are no doubt others) packaging the preview releases soon after they were announced. It's still pre-release software, though, and APIs might still change until the stable release this summer. Until it's out, the repo's master branch will therefore continue to contain the KDE 4 codebase, and there will be another maintenance release cut from it sometime soon.

Development of the new theming system will be targeted at Qt 5 and Frameworks 5, however, due to significant API changes in the new Qt Quick and QML generation. As with the next KDE 4-based release there's currently no firm date for this - Aaron makes a good case for not rushing things - except to say it will be some time this year.

GroupedLineEdit Reused in Subsurface

Sun, 2014/02/16 - 4:03pm


It has been a long time since I last blogged about the Nepomuk user-query parser that I developed during the summer. The reason is that the university required all my time. Here is a quick blog post that summarizes what happened these last months.

Merging the Query Parser

For those who missed the news, Nepomuk has been "abandoned" and renamed to Baloo. The reason is that the Nepomuk developers thought that using RDF (a complete and complex system of ontologies, which requires specialized databases like Virtuoso) wasn't the right way to provide what users want, that is to say a powerful yet lightweight search engine. RDF allows many things, but most users just want to look for documents matching a specific string.

I cannot better explain the reasons, but you can read this mailing-list thread if you want more accurate details.

What is important is that Nepomuk has been rewritten, and that breaks my user-query parser. Fortunately, the parser is pretty much self-contained and the algorithms used don't need to change, but the way the parser provides its results to Nepomuk has changed. In Baloo, queries seem to be simpler (from what I've seen) and are not built out of trees anymore.

In Nepomuk, a query was a big AND, containing ORs, other ANDs and actual comparisons. Getting every document containing "test" and tagged as "important" was done using a query like AND(tagged_as(important), containing(test)) (this syntax is not used anywhere in Nepomuk but represents in-memory Nepomuk2::Term subclasses). In Baloo, from what I've seen in the source code, a query is more "fixed". Every query consists of search terms ("test" in my example) and filters (date-time filters, maybe also tags, etc.). There does not seem to be any recursive data structure. Because of that, the user-query parser cannot simply convert its abstract syntax tree to a tree of terms (a very simple operation), but needs to carefully analyze the user query in order to produce the Baloo query (or queries) that will provide the expected results.

I’m still thinking about all that and I will contact the Nepomuk developers for advice, because porting the user-query parser to Baloo seems not to be that complex, and seems very interesting.

GroupedLineEdit gets used by Subsurface

Free Software and its communities are full of great surprises. One of them was when Aaron Seigo quoted me on Google+, or when the Nepomuk file indexer for MS Office 2003 files, that I developed in October, was mentioned in the KDE Commit Digest.

Several weeks ago, just before FOSDEM, the Subsurface team announced that the project had been ported to Qt. Subsurface is a divelog program started by Linus Torvalds and now developed by a fair number of contributors, some of them well-known kernel hackers. When Subsurface 4 was announced, I read the news (even though I don't dive, I wanted to see how a GTK+ program could be ported to Qt and what that brings in terms of new/improved features), and I discovered a very intriguing screenshot in the user manual:

Tags in Subsurface

Have you noticed that there is a "Tags" field? And that it contains tags highlighted in colors? When I noticed that, I thought that Linus (or someone else) had developed exactly the widget that I needed during the summer, the widget that is used to highlight query terms in user-queries, as shown in this image:

Modern GroupedLineEdit

I downloaded the source code of Subsurface and started to look for this widget that does exactly what I want. Finally, I found it, and I was very surprised to see that it was actually my GroupedLineEdit widget that Subsurface used! I'm very pleased to see that something that took me several weeks to develop (the widget is less than 200 lines long, but I needed a great deal of time to figure out the nicest way to highlight text, and how to implement it properly) is used - and used in a program started by nobody else than Linus Torvalds.

So, if anyone wants to develop something: don't hesitate to do it! Even if it is a small widget or a toy application, what you develop may be of use to someone else, and it's always a great pleasure to see code we have developed get used by other developers.

ModemManagerQt and NetworkManagerQt: new releases

Sat, 2014/02/15 - 5:17pm

New versions of ModemManagerQt and NetworkManagerQt are out, respectively 1.0.1 and . The changes are short: just some bug fixes and small new features.

In ModemManagerQt:

- fix a crash when ModemManager is restarted.

In NetworkManagerQt:

- add workaround for wrongly updated ActiveConnection property.
- update doxygen documentation.
- 329260: avoid conditional jumps based on uninitialized values.
- add device property to WirelessNetwork.
- backport PrimaryConnection, ActivatingConnection and Connectivity properties.

Pebble Protocol Implementation in QML and SkippingStones UI for #SailfishOS

Fri, 2014/02/14 - 10:32pm

This is somewhat an announcement as well as a call for participation. Some time ago, I treated myself to a Pebble smart watch. As my primary phone is a Jolla smart phone, the next "logical" step apparently was to get both, Pebble and Jolla, to work together.

Thereby, I had two aims. Firstly, the implementation of the Pebble protocol should be in “as pure as possible” plain QML. Secondly, of course, I wanted to have at least basic interaction with my smart phone like getting notifications about calls and SMS or controlling the music player.

I put the emphasis on plain QML for the protocol implementation for two reasons. On the one hand, plain QML, due to its interpreted nature, enables pretty quick hack and test cycles. On the other hand, plain QML also promises to be quite platform independent, as in, “Runs on all platforms supported by Qt.”.

I used the very helpful libpebble Python implementation of the protocol as a reference. Thanks to libpebble, the protocol implementation was actually very straightforward.

The current result of my experiments is “SkippingStones”. SkippingStones consists of a “backend”, i.e., the Pebble protocol implementation and a “frontend”, that is currently only available for SailfishOS and provides the connection to the platform dependent interfaces such as DBUS as well as a user interface.

SkippingStones is still in a very strong work-in-progress state. However, basic functionality like controlling the music player and getting notifications already works. I even succeeded in uploading watchfaces to my Pebble via SkippingStones. SkippingStones is released under the same license terms as libpebble and is hosted on github: I uploaded an .rpm of the current state as well: In order to use the media player control, you need to also apply another patch to the installed media player QML files (in "/usr/lib/qt5/qml/com/jolla/mediaplayer/AudioPlayer.qml"): (See also:

Please be aware that this is not an “end-user-ready” product right now. The UI as well as possibly many other parts are still pretty much unfinished and in a rough state. So please don’t be disappointed if there are many things that still need to be improved or if things fail. Generally, you use SkippingStones on your own responsibility.

All participation/help for further improving SkippingStones is highly appreciated. There are many places that need some love, in the backend as well as in the frontend. Furthermore, so far there is only a SailfishOS based UI while it might be worthwhile to support more platforms as well.

No Licence Needed for Kubuntu Derivative Distributions

Fri, 2014/02/14 - 4:55pm
KDE Project:

Early last year the Linux Mint developer told me he had been contacted by Canonical's community manager to tell him he needed to licence his use of the packages he used from Ubuntu. Now Ubuntu is free software and as an archive admin, I spend a lot of time reviewing everything that goes into Ubuntu to ensure it has freedom in its copyright. So I advised him to ignore the issue as being FUD.

Later last year rumours of this nonsense started appearing in the tech press so instead of writing a grumpy blog post I e-mailed the community council and said they needed to nip it in the bud and state that no licence is needed to make a derivative distribution. Time passed, at some point Canonical changed their licence policy to be called an Intellectual property rights policy and be much more vague about any licences needed for binary packages. Now the community council have put out a Statement on Canonical Package Licensing which is also extremely vague and generally apologetic for Canonical doing this.

So let me say clearly, no licence is needed to make a derivative distribution of Kubuntu. All you need to do is remove obvious uses of the Kubuntu trademark. Any suggestion that somehow compiling the packages causes Canonical to own extra copyrights is nonsense. Any suggestion that there are unspecified trademarks that need a licence is untrue. Any suggestion that the version number needs a trademark licence is just clutching at straws.

From every school in Brazil to every computer in Munich City Council to projects like Netrunner and Linux Mint KDE we are very pleased to have derivative distributions of Kubuntu and encourage them to be made if you can't be part of the Ubuntu community for whatever reason.

In more positive news, Ubuntu plans to move to systemd. This makes me happy: although systemd slightly scares me with its complexity, and it's a shame Upstart didn't get the uptake it deserved given its lead in the replace-sysv-init competition, it's not as scary as being the only major distro that didn't use it.

systemd analysis: a personal perspective

Fri, 2014/02/14 - 3:50pm
I have followed the systemd case with passion since a little more than two years ago. As some of you know, my number one passion is innovation - sometimes I think even more than freedom. From that perspective systemd is a great case for analysis.

To me, systemd is the confirmation of the existence of an establishment in the Free Software space. In early stages, yes, but it is already there. People that changed the world once and, after being so long part of the solution, are little by little becoming part of the problem. Also people that joined this movement in the late nineties or the first years of the XXI century and know no other reality but the one they are living in. Many of them, from the innovation perspective, are nothing but qualified followers of the first group.

Do not get me wrong, I am not trying to be disrespectful or play "the smart observer" role here. I might be one of them. It is not up to me to judge this. Please take into consideration that the existence of these groups of people is nothing but the normal consequence of... success and getting older. It is hard to escape from nature, right?

As usual in these cases, not just Lennart, but many of those who supported him, and also those who sponsored these efforts, have suffered all kinds of attacks. Sadly not just technical ones, I mean ATTACKS. Even journalists have been involved. Yes, Free Software is also mature enough to have "yellow (technical) press" associated, political and business interests, and people in different communities willing to use them against anybody who threatens the current status quo.

But this is something you have to be prepared to assume if you want to succeed in bringing key changes to a mature environment. And Free Software is becoming a mature environment.

You cannot expect to change the current status quo if you are not able to take heavy criticism. You cannot succeed any longer just by talking, trying to convince people that you are right on a technical level, being nice, transparent and open to feedback. Playing as a good citizen is a must, of course, but it is not enough any longer.

You need to sustain your effort for a while and have enough support (yes, financial too) to fight back while you keep walking in the direction you believe in. If you are not strong enough, if you are not willing to make sacrifices, if you cannot or are not able to ignore the noise, the attacks from the establishment, no matter how popular they are or were, don't try it. Try instead to innovate in unexplored areas. It is easier and more pleasant.

But that is worse for all of us in the long run, I think.

To me there is a very interesting aspect to remark upon. Even if you want to change a pillar and you are ready to fight the dinosaurs (which is not a condition directly related to age, by the way), you need to have financial support, especially at key moments, to be able to execute your plans despite heavy criticism. systemd would never have been successful without it, I think.

I cannot judge from a technical perspective if systemd is a step forward, one of those architectural changes that we all will regret or a very expensive road before getting a good solution. This post is not about technical evaluations or predictions. This post is about me believing that Free Software is still (also) about innovation, not just in new areas, but in those aspects that brought us here too.

No, we are not at the finish line. And no, not many of those who brought us here or are relevant today are prepared to take us to the next level. As in many other industries, the main forces against evolution are internal ones.

"If it works, don't touch it" or "Disruptive changes come through iterations" were popular statements among those who are not relevant any longer.... or soon will not be. Should not be.

Thanks, Lennart, and your sponsors and supporters, for succeeding.... or dying trying. My respects. I hope the future of Free Software will be in the hands of people like you. We need it, or something else will replace us. Maybe that is not bad either.

Two final remarks....
  • Please Lennart and colleagues, make sure systemd works very well. I do not want to eat my words in three years. There are some people out there willing to see me swallowing them ;-)
  • I regret writing about this today instead of some months ago. Now it has been too easy.

Thanks for your help and please go on!

Fri, 2014/02/14 - 3:48pm

First and foremost I’d like to thank all the people who already took some time and participated in the questionnaire for my diploma thesis and KDE.

But it’s not over yet (the last chance is on the 25th of February) and we still need more data, and as a member of KDE I know we can do more and better. I got some feedback about the length of the questionnaire and that some questions (mostly the ones at the beginning of the questionnaire about the 12 different tasks) are quite abstract and difficult. But please try it, try your best and take the time and brain power. The remaining part of the questionnaire (after these two pages with the task questions) is quite easy and quickly done.

So if you already started the questionnaire and then gave up or needed to run off to something else: no problem, you can reopen the questionnaire and continue where you left off. And don’t forget the chance to win something nice – but only if you complete and submit the questionnaire.

And if there are any questions or feedback, or you need help, don’t hesitate a moment to write me an email or ping me on IRC as unormal.

Thanks a lot for your help, and tell your fellow KDE mates about this questionnaire!
Mario Fux

Videos from Qt DevDays 2013 – Berlin

Fri, 2014/02/14 - 2:41pm

Finally we have all the videos from the multiple top class sessions at Qt Developer Days 2013 uploaded here.

You can also access the program here, with links to the awesome set of slides used during the presentations at DevDays 2013.


The post Videos from Qt DevDays 2013 – Berlin appeared first on KDAB.

Qt on Android London and Berlin Dates fixed – Sign up now!

Fri, 2014/02/14 - 2:32pm
Free Qt on Android Coffee and Code Sessions

KDAB’s BogDan Vatra, the man who ported Qt for use with Android, is coming to London and Berlin in March as part of his 3-city tour. He will conduct three-hour workshops, free to Qt enthusiasts.

London
Times: 9am – 1pm
Venue: British Computer Society, Central Offices, London, WC2E 7HA

Berlin
Times: 08:30 – 12:30
Venue: The Maritim Hotel, Stauffenbergstraße 26, 10785 Berlin (near Potsdamer Platz)

Not just a simple how-to, this session will include insightful discussions on why things are the way they are, how the internals of Qt on Android work and how Qt itself is deployed on Android devices.

Participants will learn:

  • how to set up the Android development environment
  • how to run and deploy an application
  • Android specific concerns like its security model and signing packages
  • how to use JNI to access and use Android Java APIs from Qt and vice-versa.

BogDan will also visit Paris in April (date to be announced).

Find out more about Qt on Android here…

The post Qt on Android London and Berlin Dates fixed – Sign up now! appeared first on KDAB.

Valentines Special - Talk is cheap

Fri, 2014/02/14 - 12:00pm

In which we talk briefly on Valentine's Day, refer people to the forum and comment on the comments on blogs. Also a short update about what's going on and the issues facing us.

Juliana Coutinho / Creative Commons

I have a philosophy; I have several, actually, but one sticks out right now. Sometimes it's better to just get going and do something than to talk and plan for it forever. It's in action that we show where we are going, and it's also only with some kind of result that we can talk more precisely about what needs to be changed. Since it's Valentine's, I like to think about it the way I met my husband. Our first date was met with skepticism on my part. I didn't know him that well; he seemed charming, but I wasn't sold completely. But the only way to find out was to actually do something, to go out with him, and when he walked over Iron Square in Gothenburg towards me, smiling nervously, I think that was when I knew. But it was only through action, no matter how ill-prepared it may have been, that we found out. It's only through risk that we gain the greatest things.

So I tend to applaud action first and then talk about problems with those actions later.


... and I tend to take negative comments lightly. This may be a bit of an "uh-oh" moment for many, as there is a false equivalence between "commenting" and "criticism" in Open Source which, combined with the idea of the "customer", tends to wreak havoc with projects, but hear me out here:

Talk is cheap, and some talk is cheaper than others. Anyone can say "This sucks". It takes no skill, it offers no input and it's often more damaging than the original problem.
"Criticism", valuable criticism, is being able not only to say WHY something is bad, or better yet suggest a fix, but also to do it in a respectful and pedagogical way. If you can't even say what's wrong without resorting to rudeness, and fail to properly describe it - chances are you have no idea what you're talking about anyway, and nine times out of ten your comment is a bigger problem for a project than the problem you are trying to address.


Now for those who write blogs and get the first kind of comments on looks - refer them to the Visual Design Group forum and say "Put up or shut up". A mockup isn't magic, a written suggestion isn't Olympic-level mental acrobatics, and being able to publicly and in an orderly way DEFINE what it is that should be done is not a high bar to clear. If you can say "this sucks", chances are you are capable of being a mensch about it and thinking about WHAT sucks, HOW it sucks and how it can be FIXED so it doesn't suck.


But aside from that, what's going on with the VDG? Well, for starters, a majority of our designers are on Valentine's Day leave (this one will be too as soon as this post is done), we have a new member, Andrew Lake, a contributor to Bangarang and a designer of note from Seattle, and things are slowly moving in the right direction!

Shapes, shadows and visuals are coming into place for Plasma 2, which swallows most of our time; attempts to balance it with design work for Plasma NM have been obviously tricky, but hopefully they will get some help. And there is PLENTY of work posted in the forums - so write a suggestion, post a mockup - don't be shy - better to do and doubt it than don't and regret it!

Next post will be the Monday Report and contain color choices, some motifs and our early moodboard. So stay tuned!

Final shape of plasma-nm

Fri, 2014/02/14 - 9:57am

We are probably at the end of our journey to improve the design and usability of plasma-nm, and a new release is almost at the door. There are still some small design issues in the current design, but it’s almost impossible to find a designer with spare time, especially now, when everything is about Plasma 2. This is also probably the last major change for KDE 4/Plasma 1; we should move on and focus on KDE 5/Plasma 2. Currently we are working on a new model, which is going to be the same for the applet and the editor, so the editor will be more powerful. I would also like to have a new KCM, but that is for the more distant future.

If you want to try it, you can compile plasma-nm from git (master branch), but you also need a new version of libnm-qt (NM/0.9.8 branch). Or if you are a Fedora user, you can install it from the COPR repository. Otherwise you will have to wait for your distro packagers or for new tarballs.

Docker and openSUSE getting closer

Thu, 2014/02/13 - 9:46am

I have some good news about Docker and openSUSE.

First of all the Docker package has been moved from my personal OBS project to the more official Virtualization one. The next step is to get the Docker package into Factory :)

I’m going to drop the docker package from home:flavio_castelli:docker, so make sure to subscribe to the Virtualization repository to get the latest versions of Docker.

I have also submitted some openSUSE related documentation to the official Docker project. If you visit the “Getting started” page you will notice the familiar geeko logo. Click it to be redirected to the openSUSE’s installation instructions.

Kate: Python plugins for C++ developers

Thu, 2014/02/13 - 8:00am

Nowadays Kate has a few things implemented as Python plugins that can be useful to C++ programmers. Some of them (like working w/ comments) can be used w/ Python, CMake, Bash or JS code as well. To start playing w/ them one has to enable the following plugins:

Then take a look at the Pâté menu:

Few words about working with comments
  • Inline comment means that it is placed on a line w/ code. Just like on the screenshot above. To add one to the current line use Alt-D. The cursor will move to the 60th (default) position and // will be added. To transform it into a doxygen comment (///<) just press / once (this part works w/ my C++ not-quite-indenter™ ;-). To configure the default position use the Commentar plugin config page (take a look at the pages available on the first screenshot). Pressing Alt-D on a selection will add (if possible) inline comments to every selected line.

  • To move an inline comment to the line above use Meta+Left, and Meta+Right to move it back

  • The next example shows how to Comment Block with #if 0, Toggle #if0/#if1, and Remove #if 0 part

  • Transform Doxygen Comments (between /** ... */ and /// forms) and Shrink/Expand Comment Paragraph

Boost MPL-like parameters fold/unfold

To format template or function parameters in the boost (MPL) style, move cursor inside the parenthesis or angle brackets and use Boost Like Format Params (Meta+F) action from the Pâté menu. Every time invoked this action will expand the nesting level. To unfold parameters use the reverse Unformat (Meta+Shift+F) action.

Using expand plugin

A long time ago in a galaxy far, far away, Pâté got the expand plugin. The (initial) idea behind it was quite simple: a user writes a function which returns a string to be inserted into the document (by means of an Expand action) in place of the corresponding function name under the cursor. Expand functions may have parameters. Running in the context of embedded Python, these functions may access Kate’s API. There is, however, another benefit - this is still fully functional Python with a lot of third-party libraries.

Trivial example

The “Hello World” demo function is as simple as

def hi(): return 'Hello from expand plugin'

To make this expand available, put this function into a file named after a MIME type, with / replaced by _. For example, use ~/.kde4/share/apps/kate/pate/expand/text_html.expand to make it available in HTML documents, or text_x-python.expand for Python source code.
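The MIME-type-to-filename rule is easy to get wrong by hand, so here is a tiny sketch of it (the helper name is mine, not part of the plugin):

```python
def expand_filename(mime_type):
    # Replace '/' with '_' and append the .expand suffix,
    # which is the naming scheme the expand plugin looks for.
    return mime_type.replace('/', '_') + '.expand'

print(expand_filename('text/html'))      # text_html.expand
print(expand_filename('text/x-python'))  # text_x-python.expand
```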

Then open a document with a corresponding MIME type, add the text hi and press the Ctrl+E (default) shortcut – hi will be replaced w/ the text Hello from expand plugin.

Let’s rewrite the code a little and introduce an optional parameter:

def hi(name=None): return 'Hello {}'.format(name if name is not None else 'anonymous')

To pass a parameter, type hi(John) + Ctrl+E. Note that an expand function may have positional and named parameters, just like any ordinary Python function.

Level 2: Templating templates

Kate has a builtin template engine. Snippets and file templates use it. Now expand can use it as well :) Yeah, the string 'Hello {}' can be considered a level 1 template – a name parameter will be substituted into it. Level 2 is to allow editing it after expansion, just like with the snippets plugin (yeah, it is actually a plugin ;-). All you need is to return a string with an editable field, i.e. something like 'Hello ${name:John}' – here name is an editable field with a “predefined” value John, which is the actual parameter to the expand function hi ;-) Also we ought to tell the expand plugin that the result string must be inserted into the document via KTextEditor::TemplateInterface, so we use the Python decorator postprocess from the module expand:

import expand

@expand.postprocess
def hi(name=None):
    return 'Hello ${{name:{}}}'.format(name if name is not None else 'anonymous')

This way a partly editable piece of code can be inserted! Yeah, right, just like those snippets from a plugin… But wait! “Why the heck have we just reinvented the wheel?” – one may ask ;-) The benefit is that we can return any template we want! – i.e. not just a static text with a few editable holes (which is what a “classic” snippet is)! What to return can be (trans)formed according to the expansion function’s parameters… and this is a real advantage over snippets! :-) Still don’t believe it? Here is a real world example: let’s write an expansion function to insert structure definitions into C++ code (it doesn’t use editable fields for simplicity):

_BRIEF_DOC_TPL = '''\
/**
 * \\brief Struct \\c {0}
 */'''

_TEMPLATE_PARAMS_TPL = '''
template <{1}>'''

_STRUCT_BODY_TPL = '''
struct {0}
{{
}};
'''

def st(name, *templateParams):
    params = [name]
    if len(templateParams):
        template = _BRIEF_DOC_TPL + _TEMPLATE_PARAMS_TPL + _STRUCT_BODY_TPL
        params.append('typename ' + ', typename '.join(templateParams))
    else:
        template = _BRIEF_DOC_TPL + _STRUCT_BODY_TPL
    return template.format(*params)

This pretty simple (and quite short) function can do something, that snippets just can’t!

// st(test) expands to
/**
 * \brief Struct \c test
 */
struct test
{
};

// but st(templated_test, T, U) expands to
/**
 * \brief Struct \c templated_test
 */
template <typename T, typename U>
struct templated_test
{
};

The only mandatory parameter (the first one) is the name of the structure to insert. The rest are optional; if present, they are turned into template parameters… That is how the st function looked before KDE SC 4.13.

But soon, after intensively trying to use Python’s built-in format function with “template” strings, I realized that it was time to use a real template engine ;-)

Level 3: Metatemplates

Template engines are widely used in web programming to separate business logic from presentation. One of the most famous for Python is jinja. Yep, it is not an all-inclusive framework… just a template engine ;-) and now we can use it with the expand plugin.

Let’s rewrite the structure expansion function using jinja templates:

import expand

@expand.jinja('struct.cpp.tpl')
@expand.postprocess
def st(name=None, *templateParams):
    return {'name': name, 'template_params': templateParams}

Yeah, that’s it! :-) Just return a Python dictionary with arbitrary keys. These keys will be used in a template file later, so how to name them is up to you. The representation (template) file (struct.cpp.tpl), mentioned in the @expand.jinja decorator call, must be placed in the ${KDEDIR}/apps/kate/pate/expand/templates/ dir, “near” the corresponding .expand plugin file.

/**
 * \brief Struct /*< name | editable >*/
 */
/*% if template_params %*/
template <
/*% for tp in template_params %*/
/*% if not loop.first %*/, /*% endif %*/typename /*< tp | editable >*/
/*% endfor %*/
>
/*% endif %*/
struct /*< name | editable(active=True) >*/
{
    ${cursor}
};
//# kate: hl C++; remove-trailing-spaces false;


  • /*< and >*/ are the open/close sequences to substitute a variable – yeah, one of the keys in the dictionary returned by the expand function
  • /*% and %*/ are the open/close sequences for jinja control tags (like if, endif, for, endfor)
  • //# is a short or single-line form of a comment inside a template file
  • next, one custom filter is used: editable – to transform the value of a variable (key) into an editable KTE::TemplateInterface field after the snippet gets inserted. Also note that the structure name is present both in the comment and in the struct declaration line, but only the second one will be “editable”; the other “copies” are just mirrors in terms of KTE::TemplateInterface
  • everything else, i.e. any text outside the open/close sequences, is just text to be rendered and inserted into a document
  • both open and close sequences were chosen for a reason: since it is a template for C++ code, it would be nice to have some highlighting, and the chosen sequences are subsets of plain C comments ;-) Surely, it can be configured for other languages as well…

For demonstration purposes I've removed the whitespace control characters from the jinja tags to reduce syntactic noise.

There are a couple more decorators defined in the expand module for use by expand functions: @expand.description and @expand.details. Both accept a string to be displayed in a completion popup. The first one is a short description. The second one is the details (or usage) text, accessible by pressing the Alt key on the selected completion item. Because most of the completions are really short names, completions will not appear in automatic mode, so in order to see them one has to bring up the popup manually. There are a few expansion functions available for C++ code out-of-the-box. The most complicated one (but powerful compared to trivial snippets) is cl. It is capable of generating template or non-template classes, possibly with a constructor with zero (default) or more arguments, maybe with a defaulted or deleted copy and/or move constructor and/or assign operator, maybe with a virtual destructor, where the class name and the possible template and/or constructor parameters are (post)editable fields. All parameters, and also the expand function’s name, should be really short to reduce the typing effort and be memorable.

One example is worth more than 1K words… let’s generate a class with:

  • template parameter Iter followed by T0, T1, T2, T3
  • default constructor
  • constructor accepting two parameters
  • disabled copy constructor
  • defaulted move constructor
  • and virtual destructor
// cl(some,@c,c2,-cc,@mc,vd,t,Iter,T0..3)
// the result is:
/**
 * \brief Class \c some
 */
template <typename Iter, typename T0, typename T1, typename T2, typename T3>
class some
{
public:
    /// Default constructor
    some() = default;
    /// Make an instance using given parameters
    some(param0, param1)
    {
    }
    /// Copy constructor
    some(const some&) = delete;
    /// Copy assign
    some& operator=(const some&) = delete;
    /// Move constructor
    some(some&&) = default;
    /// Move assign
    some& operator=(some&&) = default;
    /// Destroy an instance of some
    virtual ~some()
    {
    }
};

I will not go into details about the other functions – it is easy enough to read the descriptions/sources and play with them. But I have one more, final, damn cool feature to tell about ;-)

Level 80: God Mode

This feature came into my mind after reminiscing about «Mortal Kombat» and/or «Doom». The way to use it came from these games: one has to press an exact key sequence, like cheat codes in a game ;-) The expand plugin tracks single key press events, and if it finds a magic cheat code among the last consecutively pressed keys, it will replace it with the result.

Important Usage Notes
  • every magic sequence starts and ends with a semicolon
  • if you type something wrong, you can't "fix" it with cursor movement keys + delete/backspace keys!
  • you have to type «cheat codes» without "errors" from start to end
  • so remove the wrong text and start over again… repeat until you can type them as if it were an unconditioned reflex ;-)

To add your own dynamic expand function one has to use the @expand.dynamic decorator, which accepts a compiled regular expression. If the latter matches the typed key sequence, the magic function will be executed and passed two parameters: the source “magic” sequence (a string) and the result of the match (a match object). Here is an example of that kind of function, expanding an enum definition:

@expand.dynamic(re.compile('^e(?P<is_class>c)?(?P<has_type>:)?'))
@expand.jinja('enum.cpp.tpl')
@expand.postprocess
def __dynamic_enum(params, match):
    data = {}
    data['is_class'] = match.group('is_class') is not None
    data['has_type'] = match.group('has_type') is not None
    return data

Yeah, it is quite similar to the “usual” (jinja templated) functions except for one more decorator ;-)

Below is a list of available dynamic snippets for C++. Every item starts with a simplified graphic image of a regex (for visualization and better remembering ;-) that is used to match that snippet.

;cl; Synopsis

Insert a class declaration. Depending on the options it is capable of adding a default, parameterized, copy and/or move constructor and/or a destructor. The class can also be templated with a desired number of parameters.


  • ;clcd; class with default constructor and destructor
  • ;clc2; class with constructor with 2 parameters
  • ;clc1@cc@mc; class with “conversion” constructor (single parameter), defaulted copy and move constructor/assign
  • ;cl@mv-cc; class with defaulted move constructor/assign and delete copy constructor/assign
  • ;clt2vd; class with two template parameters and a virtual destructor
;e; Synopsis


  • ;e; insert C++03 enum declaration
  • ;ec; insert C++11 strong typed enum declaration
  • ;ec:; insert C++11 strong typed enum declaration with the type specified
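To see how these three variants map onto the named groups, the regex from the __dynamic_enum function above can be exercised on its own (a quick sketch, assuming it is matched against the sequence without its delimiting semicolons):

```python
import re

# The regex used by the __dynamic_enum expand function shown earlier.
pattern = re.compile('^e(?P<is_class>c)?(?P<has_type>:)?')

m = pattern.match('e')    # plain C++03 enum
assert m.group('is_class') is None and m.group('has_type') is None

m = pattern.match('ec')   # C++11 strongly typed enum
assert m.group('is_class') == 'c' and m.group('has_type') is None

m = pattern.match('ec:')  # strongly typed enum with explicit type
assert m.group('is_class') == 'c' and m.group('has_type') == ':'
```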
;fn; Synopsis

Insert a function declaration, or a definition if the dynamic snippet ends with a { character. The function may have runtime and/or template parameters, as well as various modifiers.


  • ;fnt1; void function with one template parameter
  • ;fn2s; function with a static modifier and 2 parameters
  • ;fn1vfoc; virtual function with 1 parameter and final, override and const modifiers
;for; Synopsis

Generate a for statement in various flavors. It is capable of expanding into a range-based for or a more “traditional” iterator-based one. The latter can use either C++03 or C++11 semantics – i.e. begin() as a member or as a free function, possibly found by ADL. The type in the range-based for can be const and/or ref/ref-ref qualified.


  • ;fa; range-based for with the auto type and no body
  • ;fa&&{rng; range-based for with auto&& type over some rng and an empty body
  • ;fori3c{var; loop using const_iterators and C++03 syntax over var container
;ns; Synopsis

Generate an anonymous or named (possibly nested) namespace.


  • ;ns; insert anonymous namespace
  • ;nskate::details; insert a kate namespace and one nested details
;st; Synopsis

Generate a simple struct possibly with a given number of template parameters.

;sw; Synopsis

Generate a switch statement with a desired number of case statements and possibly with a default one.

;try; Synopsis

Insert a try block with a desired number of catch blocks and possibly a catch (...).

Dynamic Snippets Short Demo

Fun with n-grams, part 2: tightly packed tries

Thu, 2014/02/13 - 12:00am

Follow-up to part 1.

Last time, I talked a little bit about an implementation of a trie to store frequency information about n-grams. The problem is that while the naïve implementation of a trie is much more compact than the source data, it is still not small enough.

One approach to dealing with this problem is to do evil bit-twiddling hacks to reduce the size, but ultimately, this just gives you worse code and little real benefit.

Aside: a bit of information theory

In information theory, the entropy, also called Shannon entropy after Claude Shannon, is a measure of the information content of a random variable.

Information theory is pretty interesting, but for our purposes we’ll just consider two facts. First, if we have a random variable that takes values in an alphabet of size \(n\), then the entropy is maximized by a uniform distribution and here the entropy is \(\lg n\) bits. Second, Shannon’s source coding theorem tells us that, asymptotically speaking, the best compression rate possible is the entropy.

Now, let’s think about the space complexity of a naïve tree data structure, where we number all of the vertices and store for each vertex a list of its children. For \(n\) vertices, this representation takes \(\Omega(n\log n)\) bits. But suppose that our vertices are unlabeled. Let \(r(n)\) be the number of unlabeled trees on \(n\) vertices with a specified root vertex (OEIS:A000081). Then, as \(n \rightarrow \infty\), \[ r(n) \sim D \alpha^n / n^{3/2}, \] where \(D = 0.439\ldots\) and \(\alpha = 2.95\ldots\) are constants, so if we simply numbered these trees and used the number to identify the tree, we could use only \[ \lg r(n) = n \lg \alpha - \Theta(\log n) \approx 1.56\,n \] bits. Obviously, this isn’t a practical data structure (how can you perform tree operations on a number?), but the point is that the most compact representation is linear in \(n\). There’s a big gap between linear and \(n\log n\), so it’s an interesting question to ask how we could have practical succinct trees. For more on this, take a look at this paper, Succinct Trees in Practice.

The point, of course, isn’t that we should expect to reach this information-theoretic bound exactly (in fact, if we want to label nodes with up to \(\sigma\) different labels, we need an additional \(O(n\log \sigma)\) bits for that), but just that we shouldn’t be surprised if we can do much better than a naïve approach.

Tightly packed tries

It turns out that this problem has been thought about before: for instance, there’s a 2009 paper by Germann, Joanis, and Larkin, called Tightly Packed Tries: How to Fit Large Models in Memory, and Make them Load Fast, Too. I implemented their data structure, which encodes the nodes in depth-first order as follows. For each node, write its frequency in LEB128, then write the frequency of its children, also in base-128. If it is not a leaf node, then we have already written all of its children, since the encoding is depth-first, so we can compute the byte offset of the child node from the current node. Finally, we write a list of (key, offset) pairs, with an index size in front so we know when the index ends.

This is maybe a little confusing, but there’s an annotated version of the binary format for a simple example here that makes it more clear.

The TPT data structure is basically the same as the original trie structure: you have nodes, which store some frequency counts, and a list of child nodes. But it’s much more efficient, for two reasons.

The first is the use of a variable-length encoding for integers. Zipf’s Law is the observation that much data in linguistics and other sciences follows a power-law distribution. From Wikipedia: “Zipf’s law states that given some corpus of natural language utterances, the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc.”

The LEB128 code has a slight overhead: 1 bit per byte. But the vast majority of the frequency counts will be small, with only a few large numbers, so it’s much more efficient.
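LEB128 itself is simple enough to sketch in a few lines (my own toy implementation for unsigned integers, not code from the paper): each byte carries 7 payload bits, and the high bit says whether more bytes follow, which is where the 1-bit-per-byte overhead comes from.

```python
def leb128_encode(n):
    """Encode a non-negative integer as unsigned LEB128: 7 data bits
    per byte, high bit set on every byte except the last."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)  # more bytes follow
        else:
            out.append(byte)
            return bytes(out)

def leb128_decode(buf, pos=0):
    """Decode one LEB128 integer from buf starting at pos.
    Returns (value, next_position)."""
    value, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            return value, pos

assert leb128_encode(5) == b'\x05'        # small counts cost one byte
assert leb128_encode(300) == b'\xac\x02'  # 300 = 0b10_0101100
assert leb128_decode(leb128_encode(300))[0] == 300
```

Since Zipf-distributed counts are overwhelmingly small, most of them fit in a single byte under this scheme.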

The second reason is that instead of storing pointers to the child nodes, we store relative byte offsets, which are LEB128-encoded. By using relative addressing instead of absolute addressing, the numbers tend to be smaller, since we usually write child nodes close to their parents in the file. Smaller numbers mean fewer bytes written. Moreover, relative offsets mean that we don’t need to put things in physical memory: to deal with large data sets, just mmap and call it a day.
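As a toy illustration of why depth-first layout keeps relative offsets small, here is a much-simplified sketch of my own (not the actual TPT binary format: it uses single-byte fields instead of LEB128 and omits the index size):

```python
def serialize(node, buf):
    """node = (frequency, {char: child}). Children are written before
    their parent (depth-first), so each parent can store a small
    relative offset back to each child. Returns the node's start offset."""
    child_index = []
    for ch, child in sorted(node[1].items()):
        child_index.append((ch, serialize(child, buf)))
    start = len(buf)
    buf.append(node[0])                  # toy: 1-byte frequency
    buf.append(len(child_index))         # number of children
    for ch, child_start in child_index:
        buf.append(ord(ch))
        buf.append(start - child_start)  # small relative offset
    return start

def lookup(buf, root, word):
    """Walk the relative offsets from the root; return the frequency
    stored at the end of the path, or None if the path is absent."""
    pos = root
    for ch in word:
        for i in range(buf[pos + 1]):
            entry = pos + 2 + 2 * i
            if buf[entry] == ord(ch):
                pos = pos - buf[entry + 1]
                break
        else:
            return None
    return buf[pos]

buf = bytearray()
trie = (0, {'a': (3, {'b': (2, {}), 'c': (1, {})})})  # counts: a=3, ab=2, ac=1
root = serialize(trie, buf)
assert lookup(buf, root, 'ab') == 2
assert lookup(buf, root, 'ax') is None
```

Because children sit just before their parents in the buffer, the stored offsets stay small, which is exactly what makes the variable-length encoding pay off in the real format.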

In the toy example I linked, we write the whole trie in 53 bytes, instead of 280, so it’s more than five times smaller. For a bigger data set, like the English 1-million 2-grams, writing the trie this way takes 710 MB, compared to about 7 GB with the previous method (the original data set is ~80 GB).

AppStream/Listaller FOSDEM slides

Wed, 2014/02/12 - 7:21pm

FOSDEM is over, and it has been a great time! I talked to many people (thanks to full devrooms ;-) Graph Databases and Config Management were crazy, JavaScript as well…) and there might be some cool new things coming soon, which we discussed there :-)
I also got some good and positive feedback on the projects I work on, and met many people from Debian, Kubuntu, KDE and GNOME (I hadn’t seen some of them for almost 3 years) – one of the best things about being at FOSDEM is that you not only see people “of your own kind” – for example, for me as a Debian developer it was great to see Fedora people and discuss things with them, something which rarely happens at Debian conferences. Also, having GNOME and KDE closely together again (literally, their stands were next to each other…) is something I have missed since the last Desktop Summit in 2011.

My talks were also good, except for the beamer-slides-technical-error at the beginning, which took quite some time (I blame KScreen ;-) ).

In case you’re interested in the slides, here they are: slides for FOSDEM’14 AppStream/Listaller talks.

The slides can likely be understood without the talk, they are way too detailed (usually I only show images on slides, but that doesn’t help people who can’t see the talk ^^)

I hope I can make it to FOSDEM’15 as well – I’ve been there only once, but it already is my favourite FOSS-conference (and I love Belgian waffles) ;-)

In the bag!

Wed, 2014/02/12 - 4:01pm

Yesterday, the Muses DVD arrived from the printers. Today, all pre-orders got stuffed in jiffy bags and sent on to the post office! Irina Rempt did all the hard work of creating address labels, stuffing the bags, pasting on the stamps, and then we bagged the bags in Official Post Office bags! So, if you had ordered one, it's on its way!

And if you haven't ordered one yet...

Support Krita development by getting your very own copy!

KDE India conference 2014

Wed, 2014/02/12 - 11:20am

Hello planet! The KDE India team is going to organize the KDE India conference 2014 at DAIICT, Gandhinagar. The KDE India conference has many awesome talks. In fact every talk is interesting, but I find these talks the most interesting.

And of course my own talk, Introduction to Plasma Next by KDE

Introduction to Plasma Next by KDE

In this talk I will give an introduction to Plasma Next and the changes in Plasma Next, and if there is enough time I will demonstrate the installation of KDE Frameworks 5 and Plasma Next.

I am already excited about the KDE India conference 2014, because it is the first KDE event that I am going to attend and my first talk at any KDE event (well, no stage fright).

I am going to 2014