Planet KDE

Improved data fitting in 2.5

Sun, 2017/11/19 - 8:05pm

Continuing the introduction of the new features coming with the next release of LabPlot (see the previous blogs here and here), today we want to share some news about the work we did on data fitting (linear and non-linear regression analysis) in the last couple of months.

Data fitting, one of the most common and frequently used data analysis tasks, got a lot of improvements. As already mentioned in the previous blog, all analysis functions benefited from the recent general UX improvements. Instead of going through the many manual steps, the final fit result can now be quickly produced via the context menu of the data spreadsheet or directly in the plot in the context menu of the data curve:

analyze and plot data context menu

Until now, the fit parameters could in principle take any values allowed by the fit model that would lead to a reasonable description of the data. However, sometimes realistic ranges for the parameters are known in advance, and it is desirable to set mathematical constraints on them. LabPlot now provides the possibility to define lower and/or upper bounds for the fit parameters and to limit the internal fit algorithm to these regions only. It is also now possible to fix parameters to certain already-known values:

New fit parameters widget

Some consistency checks were implemented to immediately notify the user about wrong inputs (upper bound smaller than the lower bound, start value outside of the bounds, etc.).
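LabPlot's fit dialog aside, the idea of bounded and fixed parameters can be sketched with SciPy's curve_fit (purely illustrative; the model, data, and bounds below are made up and have nothing to do with LabPlot's internals):

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic data: y = a * exp(-b * x) with a = 2.0, b = 1.3 plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 5.0, 60)
y = 2.0 * np.exp(-1.3 * x) + rng.normal(0.0, 0.02, x.size)

def model(x, a, b):
    return a * np.exp(-b * x)

# lower/upper bounds per parameter: 0 <= a <= 10, 0.5 <= b <= 2
popt, _ = curve_fit(model, x, y, p0=[1.0, 1.0],
                    bounds=([0.0, 0.5], [10.0, 2.0]))

# "fixing" a parameter to a known value amounts to fitting a reduced model
fixed_a = 2.0
popt_b, _ = curve_fit(lambda x, b: model(x, fixed_a, b), x, y, p0=[1.0])
```

The optimizer then only ever evaluates parameter values inside the given box, which is exactly the behaviour the new widget exposes.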

The internal parser for mathematical expressions learned to recognize arbitrary user-defined parameters. With this, the fit parameters of custom models are determined automatically, and the user no longer needs to specify the parameter names again when providing the start values and constraints.

When the measurement error is not constant across the data points, fitting with weights is one of the usual methods to account for such unequal error distributions and obtain improved parameter estimates. LabPlot now supports fitting with weights. Different weighting methods are available to ensure the appropriate level of influence of the different errors on the final estimation of the fit parameters. Furthermore, the errors of both the x- and y-data points can be accounted for.
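Independently of LabPlot, the effect of weighting can be sketched with NumPy, using instrumental weights w = 1/σ (an illustrative example; the data and weighting scheme are assumptions, not LabPlot internals):

```python
import numpy as np

# synthetic straight line y = 3x + 2 where the error grows with x
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 40)
sigma = 0.1 * x                          # per-point y-uncertainty
y = 3.0 * x + 2.0 + rng.normal(0.0, sigma)

# unweighted fit vs. instrumental weighting with w = 1/sigma
unweighted = np.polyfit(x, y, 1)         # returns [slope, intercept]
weighted = np.polyfit(x, y, 1, w=1.0 / sigma)
```

The weighted fit gives the precise low-x points more influence, which is what yields the improved parameter estimates mentioned above.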

The representation of the fit results was extended. In addition to the goodness-of-fit measures already available in the previous release of LabPlot, new criteria were added: t and p values (the probability that the null hypothesis in the t-test is true), confidence intervals, and the Akaike and Bayesian information criteria. The screenshot below shows the current version of the fit dock widget:

New fit dock widget

Though quite a lot of features are already available in LabPlot in this area, many important and useful features, like the support for different fitting algorithms or the subtraction of a baseline from a spectrum, still need to be implemented. We hope to close these gaps in one of the next releases.

Atelier and the social networks

Sun, 2017/11/19 - 2:01pm

This post is only a catch-up. I think that any project needs social network accounts and a website so people can easily find information about it. To achieve that, the first thing that I put some work into was the website. Atelier isn't only for Linux users, so we [...]

Monitoring 3DPrinters with Atelier

Sat, 2017/11/18 - 2:15pm

One of the features requested many times on our Telegram groups was the ability to monitor the 3D printer via a stream feed. Since we released the beta version of AtCore a couple of weeks ago, we are now trying to get more work done on Atelier. In our project, Atelier is the [...]

Applications 17.12 Beta available for testing with KDE neon dev stable

Fri, 2017/11/17 - 1:34pm

Please join us in testing the 17.12 release of KDE Applications!

They are ready in Dev Stable.

Happy testing!

CI for Windows installer and macOS bundle generation: KDE Binary Factory

Fri, 2017/11/17 - 9:00am

For some time now the KDE community has had a separate Continuous Integration system running which repeatedly generates Windows installers and macOS app bundles (DMG) for a specific subset of KDE projects.

For starters, all the KDevelop on Windows releases (read: 32-bit and 64-bit NSIS-based installers) are nowadays generated on this CI, and the binary blob which falls out of this process is used as the official Windows release installer on the KDevelop website.

So, what exactly does KDE's Binary Factory do, and how does it work -- and why was it created to begin with?


With the move to KF5 it became easier to get KDE applications running on non-Linux systems. With the much-needed split of kdelibs into smaller components, it is now easier to pick & choose what to use on these platforms -- so arguably it is easier to cut out unwanted dependencies (think of DBus, KDE service daemons, ...) on these platforms and stop packaging them at all.

Still, the process of getting a reproducible build of KDE project X on either Microsoft Windows and/or macOS has always been a daunting task. Lots of energy went into KDE's Craft (an open-source meta build system and package manager, primarily but not only focused on C++ projects) lately, which also helped streamline this process.

It has never been easier to generate installers or app bundles for any KDE project (for which Craft has a recipe) using Craft. The missing bit was automating all this by running Craft on a Continuous Integration system.

Introducing the KDE Binary Factory

KDE's Binary Factory is a Continuous Integration system primarily created for generating Microsoft Windows installers and macOS app bundles. It has no relation to KDE's CI system other than sharing some of the CI worker machines. The jobs on the Binary Factory are mostly created manually -- adding projects is very easy though.

How does it work?

First of all, the Binary Factory doesn't really know how to build a particular project. The logic for that -- i.e. which dependencies need to be there, which CMake arguments should be used, etc. -- is all stored in Craft blueprints.

The Binary Factory has a set of projects for which Jenkins triggers jobs on a nightly basis. It does not do much more than call Craft like this each night:

# exemplary Windows job for kbruch
# rebuild kbruch
python "C:/Packaging/craftroot32/craft/bin/" -v --fetch --unpack --compile --install --qmerge kbruch
# after that, package kbruch (create a Windows installer)
python "C:/Packaging/craftroot32/craft/bin/" -v --package kbruch

If dependencies for kbruch were missing or out of date, Craft would automatically install or update them, respectively.

After the package has been created, Jenkins is instructed to archive the result. You can easily grab the freshly generated installer from the job page, which usually shows something like this:

Last Successful Artifacts
kbruch-17.08.3-windows-msvc2017_32-cl.exe (32.65 MB)

One click there and the download of the installer starts.

How do I add my pet project?

If you do want your project to be on the Binary Factory, add a recipe (documentation here) and then notify me.

I'd urge you to try building your pet project on Windows yourself first, via Craft, so we don't need to play ping-pong with the CI too much. You (the "project owner") are responsible that the Craft blueprint for your project is up-to-date and works.

Now what to do with the installer/bundle?

The Binary Factory generates Windows installers and macOS bundles at this point. What I would not like to see (it has already happened) is individual projects simply linking to the Binary Factory job pages and telling people 'here, this is the official release of project X'. You shouldn't link to untested binaries.

Instead I'd like to establish this workflow for project owners:

If there's a release of, say, Kate:

  1. Project owner waits for or triggers a build of Kate on the Binary Factory
  2. Project owner verifies that the installer/bundle works(!)
  3. Project owner uploads the installer/bundle to the KDE FTP into the correct project folder
  4. Project owner then adds links to the newly uploaded files on the FTP to the project homepage

Note: Linking to the Binary Factory to point users to 'nightly builds of project X' is of course fine -- we do that for KDevelop, too.

Why should I use the Binary Factory?

It has some nice benefits:

  • Automated nightly installer/bundle generation for your project, if set up properly
  • Automated signing of installers/bundles with a KDE-wide code signing certificate
    • Avoids false positives in AV scanners, warnings by Windows SmartScreen, etc.
  • No need to run a Windows or macOS CI yourself, for package generation
  • Kept up-to-date implicitly via Craft features
    • I.e. right now we're using Qt 5.9.1, KF5 5.37.0
  • It's all there already -- you just need to use it!
Future plans

macOS bundles

We have a macOS worker set up, but unfortunately the DMG package generation for macOS is still somewhat broken in Craft. We need to sit down and work on this. I don't think there's a lot of work left; we can actually create macOS bundles just fine for KDevelop, but there are a few problems with missing shared libraries and/or wrongly set up library metadata (RPATH, etc.).

For the time being, the main focus is on providing Windows installers, but we'll have a look into providing macOS bundles soon-ish now.

Final words

I'm happy to see this finally working in an automated way. At least for the KDevelop team, the automated installer generation has been a major step in the right direction. After a couple of attempts at doing this on personal machines, where we literally always managed to break our setup, we now have a clean, automated process of generating the installers on an isolated machine.

I hope the Binary Factory can be useful to other KDE projects as well. I'm happy to help you set up jobs on the CI so your personal projects can be built.

Last Week's Activity in Elisa

Thu, 2017/11/16 - 10:03pm

Elisa is a music player designed to be simple and nice to use.

Alexander Stippich did several fixes in the interface while I focused on bugs related to the music database.

The KDE community has been working on Flatpak. For that, a runtime including Qt5 and the KF5 frameworks has been created. Aleix Pol has written a recipe to build Elisa under Flatpak; it is now built regularly from the git repository.

I am very enthusiastic about this, since it should allow easy testing of the application, thanks to the work of many people on Flatpak and its support inside the KDE community.

Currently, some features do not work well and some audio codecs are missing from the current runtime. Use it at your own risk and keep in mind that some parts are known not to work correctly. Please report any bugs or suggestions to the tracker.

In order to test, you can follow the steps from the KDE community wiki.

I am afraid I have no snapshots to show today.

The following things have been integrated in Elisa git repository:

  • Fix context menu for MediaAllTrackView and MediaAlbumView, by Alexander Stippich;
  • Improvements to the single entry in the playlist, by Alexander Stippich;
  • Fixes in the playlist for different albums with the same title;
  • Fix mouse interaction and dragging while LabelWithTooltip is truncated, by Alexander Stippich;
  • Preliminary work to allow storing multiple artists for a track or an album;
  • Report database errors and check for them during tests. I fail to see how to report such an error to a user (being a developer or a non-technical user makes no difference here);
  • Preliminary work to allow database versioning and evolution;
  • Fix several bugs around the music database;
  • Slightly modify the way tracks are tested for uniqueness.

I am still planning to do an alpha release soon. This is an important milestone to check that everything needed to make a proper release after review is in place.

Kubuntu Most Wanted

Wed, 2017/11/15 - 10:12pm

Kubuntu Cafe Live is our new community show, styled after a magazine format. We created lots of space for community involvement by breaking the show into multiple segments, and we want to get you involved. We are looking for presenters, trainers, writers and hosts.

  • Are you looking for an opportunity to present your idea or application?
  • Would you like to teach our community about an aspect of Kubuntu or KDE?
  • Would you like to be a show, article or news writer?
  • Interested in being a host on Kubuntu Cafe Live?

Contact Rick Timmis or Valorie Zimmerman to get started.

The Kubuntu Cafe features a broad variety of show segments. These include free-format unconference segments which can accommodate your ideas, Dojo segments for teaching and training, Community Feedback, Developers Update, and News & Views.

For the upcoming show schedule, please check the Kubuntu calendar.

Check out the show to see the new format.


Qt WebGL: Cinematic Experience

Tue, 2017/11/14 - 6:11pm

Following the previous blog posts related to the Qt WebGL plug-in development updates, we have some features to show.

As a quick recap: the Qt WebGL QPA plug-in is a new platform plug-in which allows your Qt Quick applications to be streamed and run directly in the browser, by streaming the OpenGL calls and rendering them on the client using WebGL. For more info, check the related posts below.

After some months of bug-fixing and other improvements, we managed to accomplish some cool stuff. Take a look at one of our good old demos – Qt Cinematic Experience, running remotely on a TV Web Browser:

In the video, you can see the demo running remotely on the built-in web browser of an ordinary TV. For interaction with the application, I’m using the ‘extremely fast’ TV remote, which is interpreted as a mouse device. This shows that if your application targets a smart device with a browser, you can use the WebGL streaming plug-in to try your application without building and deploying it for the device, which can save a lot of development time.

Remember, you’ll be able to test the plug-in in the upcoming Qt 5.10 release. If you find any bug don’t hesitate to report it.

NOTE: If your application is using Text.NativeRendering and your text is not displayed properly, try removing this option.

The post Qt WebGL: Cinematic Experience appeared first on Qt Blog.

A Giant UPS for the Power Grid

Tue, 2017/11/14 - 5:30pm

I've never really written much about my job, since I've always done pretty mundane stuff. Yes, I always had really cool jobs (well, internships), except the one other full-time job that I had, which I'd rather not talk about; but as far as the tech goes, nothing was really out of the ordinary. That's changed, however, and the full extent of the coolness of my current job is starting to hit me so hard that I thought I just had to write about it.

Power Grids, Explained.

The world runs on electricity. Everything that you see around you runs on electricity - in some sense having power is almost as important as having drinking water. As it turns out though, if everyone generated their own electrical power according to their own needs, it would be massively inefficient. Power generation works on economies of scale, so it's much cheaper - and more efficient in a thermodynamic sense - to generate a massive amount of power in a specialised location and distribute that power using cabling to all the consumers.

So a nation's electricity infrastructure will consist of a couple of million households or offices or factories and stuff that consume power, a few hundred electricity generation sites, and a tangle of cables that carry power from these power plants to the consumer. This tangle of cables (okay, they're not really a tangle - they're far more organised and heavily engineered) form a nation's power grid.

An ideal power grid is a fully connected graph. That's why it's a grid - there's always a path from every single consumer to every single electricity producer in the system, so that if one power plant fails the consumer is always connected to all the other ones to be able to draw power from them.

However, with so many things connected to the power grid, things start getting complicated.

Let's begin with some high school physics. Joule's Law1 - the first one - says that the heat dissipated by a conductor is directly proportional to the square of the current that passes through it. That means that 1 ampere of current passing through a wire with 1 ohm of resistance will produce 1 joule of heat per second, but 2 amps of current passing through the same conductor will produce 4 joules of heat. A thousand amps of current will... produce enough heat to melt the wire and there will soon be no power grid.

The power grid has to carry tens to hundreds of gigawatts of power.

Notice that I said power, not current. An electrical appliance only cares about how much power it consumes. Power is the product of voltage and current, which means you can carry the same amount of power through a wire by using a low current at an insanely high voltage. High voltages don't melt wires. Also, wires used in power grids have pretty low resistances - on the order of 10⁻⁹ ohms per meter, but that's still 0.001 ohms for a 1000 km stretch of wire. If you do the math, a thousand amps of current will produce 1,000 joules of heat every second - or 1 kW of heat. That's a lot of wastage.

So electricity transmission happens at even lower amperages, and insanely high voltages. Common transmission lines operate anywhere between 345 and 750 kilovolts, but there are transmission systems that operate at 1000 kV. A full megavolt. At 1 MV, the current needed to transmit one megawatt of power is just one ampere.
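To make the numbers concrete, here is a small sketch (illustrative values only, not real grid data) of how the current and the I²R loss change when the same power is carried at mains voltage versus at a full megavolt:

```python
# Illustrative only: carry 1 MW through a line with a total
# resistance of 0.001 ohms, at mains voltage vs. one megavolt.
power = 1_000_000.0        # watts to deliver
resistance = 0.001         # ohms for the whole line

for voltage in (230.0, 1_000_000.0):
    current = power / voltage          # P = V * I  =>  I = P / V
    loss = current ** 2 * resistance   # Joule's law: P_loss = I^2 * R
    print(f"{voltage:>9.0f} V: {current:>8.1f} A, {loss:.4f} W lost as heat")
```

Raising the voltage by a factor of ~4300 cuts the ohmic loss by that factor squared, which is the whole point of high-voltage transmission.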

Remember what I said about electrical appliances caring only about the power they consume? I lied... sort of. You can't just connect your television to a one megavolt power line. Potential differences (voltages) have real implications in the physical world - most importantly, if you bring a conductor at a +1 MV potential within a few metres of anything at ground potential (such as yourself), you'll get a brilliant flash of lightning and pretty soon that thing at ground potential will cease to exist. Air is a good insulator, but not good enough to prevent an electrical arc between two objects at a million volts of difference.

So just before you deliver power to the consumers, you have to step the voltage down to something a little more sane. In Europe, India and most of the world, that sane voltage is 230 volts. At that voltage, the amount of power that a common household consumes does not need a lot of amps to transmit. In the US that voltage is 110 volts, but the USA is a strange country, so I will not attempt to explain that.

And here's where things start getting interesting. It turns out - due to something called inductive coupling2 - that you can pretty much convert between low voltage and high current to high voltage and low current, without (theoretically, anyway) any loss in power (i.e., the multiplication product of the voltage and current) by just wrapping the wires around a piece of metal in a specific way. The device used to do this is called a transformer, and transformers are an integral part of every power grid.

There's just one problem. Inductive coupling only works if the voltage in the conductor keeps changing all the time. And this is where alternating current (AC) comes in. With alternating current, the voltage constantly cycles from +230V to -230V, 50 times every second (in the EU, India and other not-strange countries). Okay, it cycles from +325V to -325V because 230V is the RMS voltage, not peak voltage, but that's just splitting hairs.
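As a quick numerical check of the two claims above (peak voltage is RMS × √2, and an ideal transformer trades voltage for current at constant power), here is a small sketch; the `transform` helper and the 400 kV example values are hypothetical:

```python
import math

# peak voltage of 230 V RMS mains (the +/-325 V swing mentioned above)
v_rms = 230.0
v_peak = v_rms * math.sqrt(2)          # ~325.3 V

def transform(v_in, i_in, turns_ratio):
    """Ideal transformer (hypothetical helper): voltage scales with
    turns_ratio = N_secondary / N_primary, current scales inversely,
    so the power v * i is conserved."""
    return v_in * turns_ratio, i_in / turns_ratio

# step 400 kV at 1 A down to mains voltage
v_out, i_out = transform(400_000.0, 1.0, 230.0 / 400_000.0)
```

Real transformers lose a percent or two to heat and magnetics, but the voltage/current trade-off works just like this.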

Remember these numbers. These are national standards. Every wall socket in a European country has to output 230V of AC electricity at a frequency of 50Hz. If it stops doing that, then Bad Things™ will happen.

Power Grids, Part 2

In part 1, we learnt that the power grid distributes power from power plants to homes and stuff. We also learnt that the power grid carries power at very high voltages and very low currents, and because it's so efficient to step down voltages at the consumer site using inductive coupling, the whole system uses alternating current.

In part 2, we learn that the power grid has one major drawback - it can only transmit power, not store it. So at any given point in time, all the power plants in the system together produce just as much power as all the consumers together need. No more, no less. All the power that is being produced has to be consumed.

So why can't we just produce more power than is required? Well, here's the deal. Most power generation in power plants is done using Synchronous Alternators. That's a fancy term for generators which generate alternating current at the exact same frequency that they're turning at. This means all the generators in power plants in Europe keep turning at 3000 RPM (50 rotations per second, for AC electricity at 50Hz).

Imagine you are driving your car on a level road, doing 50 KPH. Suddenly, you start going up an incline. Your speed starts dropping, because your wheels now start to turn more slowly. You need to press down harder on the accelerator to make your wheels go faster now, if you want to maintain that 50 KPH speed.

Synchronous Alternators work the same way. The more power you draw from them, the harder it gets to turn them. If whatever is driving that alternator - a diesel engine, a nuclear reactor, a gas turbine or something - does not step up its power output, the alternator will start turning more slowly, and the frequency of the power output will drop. Similarly, if you draw too little power, the driving engine will be turning the alternator with too much power, and it will turn faster than it needs to.

The load on the electrical grid changes every moment, because at any given moment someone is turning something off and someone else is turning something on. Power plants have to adjust their power output every millisecond. Unfortunately, a nuclear reactor, a steam boiler or a gas turbine is not like a car engine. You can't just press an accelerator pedal and make it instantly go vroom vroom. Power plants need a lot of time to react to load changes. Tens of seconds to full minutes, sometimes.

Frequency Containment Reserves

This is where the work that I do comes in.

Devices that hook up to the power grid tend to be pretty tolerant about voltage fluctuations, but not about frequency fluctuations. Indeed, some devices (including critical medical devices) rely on the power grid cycling exactly 50 times a second to count time. They measure one second by counting 50 cycles on their input current.

A 100 millihertz deviation of frequency is therefore considered a power grid emergency. This means that (by EU standards) the power grid frequency can never drop below 49.9Hz and never go above 50.1Hz.
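The cycle-counting behaviour described above can be quantified in a few lines; the helper below is an illustrative sketch, not real metering code:

```python
NOMINAL_HZ = 50.0

def clock_drift_seconds(actual_hz, hours):
    """Seconds gained (+) or lost (-) over `hours` of wall time by a
    clock that counts 50 grid cycles as one second, assuming the grid
    frequency stays constant at actual_hz (illustrative sketch)."""
    real_seconds = hours * 3600.0
    counted_seconds = real_seconds * actual_hz / NOMINAL_HZ
    return counted_seconds - real_seconds

# at the 49.9 Hz emergency limit the clock loses about 7.2 s per hour
print(round(clock_drift_seconds(49.9, 1), 1))
```

Even a deviation at the very edge of the allowed band adds up quickly for anything that keeps time by counting cycles.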

Frequency Containment is the process of controlling the frequency of the power grid. If we see that the frequency is too high, we create load to consume more power from the system and bring the frequency down. If we see that the frequency is too low, we deliver more power into the system (i.e., create a negative load) to bring the frequency of the grid up.
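A frequency-containment controller of the kind just described is, at its simplest, a clamped proportional ("droop") response. The sketch below is a toy model; the 200 mHz activation band and the 10 MW capacity are invented for illustration and are not real FCR parameters:

```python
# Toy droop controller: parameters are invented for illustration.
NOMINAL_HZ = 50.0
FULL_ACTIVATION_HZ = 0.2   # full power at +/-200 mHz deviation
CAPACITY_MW = 10.0         # assumed battery converter rating

def fcr_setpoint_mw(grid_hz):
    """Positive = inject power into the grid, negative = draw from it."""
    deviation = NOMINAL_HZ - grid_hz             # > 0 when frequency is low
    fraction = max(-1.0, min(1.0, deviation / FULL_ACTIVATION_HZ))
    return CAPACITY_MW * fraction

print(round(fcr_setpoint_mw(49.9), 2))   # under-frequency: inject
print(round(fcr_setpoint_mw(50.1), 2))   # over-frequency: absorb
```

Real FCR control adds dead bands, ramp limits and state-of-charge management on top, but the proportional core is the same.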

So how does FC become FCR? The R in FCR stands for Reserves. Batteries.

If you bring enough batteries together, you have the capacity to charge and discharge them really fast. Enough Lithium-Ion batteries in one place can be used to draw a massive amount of power from the grid in a very short amount of time. They can also inject a massive amount of power into the grid in just as short a period of time.

And so that's what FCR is. It's a giant uninterruptible power supply for the power grid. It draws or injects power from or into the grid for the short spans of time it takes the power plants to react to changes in power demand and spin their generators faster or slower.

All national power grids have FCR units, typically operated by the power generation companies or the grid operators. In the continental European power grid, however, the FCR services market is open to all players, so there are private players with very innovative control technology competing with established publicly owned infrastructure service providers.

In fact, in the EU, even you as a private person can be an FCR provider. Do you have a giant battery (like a Tesla PowerWall) at home? Install some hardware that reacts to the grid frequency, hook it up to your power line with a grid-tie inverter, and tell your utility company that you'd like to provide local FCR services. TSOs (Transmission Service Operators, a fancy term for power grid operators) theoretically provide daily contracts to individuals who wish to provide FCR services with batteries in their homes. It's not really effective - one PowerWall can't really make much of a dent in the local grid, but in the current EU legal framework it's already possible to do it.

Best of all, as Europe switches to renewable energy, the FCR market is here to stay. Wind turbines and solar panels all generate fluctuating amounts of DC power depending on the sun and the wind, and need grid-tie inverters to convert that into AC power at grid voltages. Electronic inverters have their own set of problems that make maintaining a proper sinusoidal frequency even more difficult under constantly fluctuating loads, so FCR operators will need to provide more and more power balancing capacity.

I hope this was a good explanation of what FCR is. If you have questions or comments, please get in touch! Till next time, tschau!

Announcing KTechLab 0.40.0

Tue, 2017/11/14 - 10:00am

KTechLab, the IDE for microcontrollers and electronics, has reached a new milestone: its latest release, 0.40.0, does not depend on KDE3 and Qt3, but on KDE4 and Qt4. This means that KTechLab can be compiled and run on current operating systems.

OpenStack Summit Sydney - Slides and Videos

Tue, 2017/11/14 - 8:59am

Back from the OpenStack Summit in Sydney. It seems all my sessions were recorded this time. Here you can find the slides and videos:
  • Multicloud requirements and implementations: from users, developer, service providers (Panel Discussion) [video]
  • Email Storage with Ceph (Lightning Talk) [slides | video]
  • Vanilla vs OpenStack Distributions - Update on Distinctions, Status, and Statistics (Talk) [slides | video]


Tue, 2017/11/14 - 3:38am

Hi there,

It's been a while since my last post over here. After being drained with a lot of work on the very first edition of QtCon Brasil, we all had to take some rest to recharge our batteries and get ready for some new crazinesses.

This post is a short summary of the talk I presented at Akademy 2017, in the quite sunny Almería in Spain. Akademy is always a fascinating experience and it's actually like being at home, meeting old friends and getting recurrently astonished by all the awesomeness coming out of the KDE community :).

My talk was about architecting Qt mobile applications (slides here | video here). The talk started with a brief report on our Qt mobile development experiences at IFBa in the last two years and then I explained how we've been using lean QML-based architectures and code generators to leverage the productivity and provide flexible and reusable solutions for Qt mobile applications.

Our approach ‒ named Meg ‒ is defined by two major components: a lean, dynamic QML-based architecture for Qt mobile applications and a code generator for creating modular RESTful servers and Qt mobile applications. Meg provides a Ruby-based CLI, built on top of the Thor CLI framework, with templates specified in ERB (Embedded RuBy). We also designed a nice architecture for the generator itself, so that new templates can be defined as new Thor modules. The framework that implements the lean architecture we designed for Qt mobile applications already provides a JSONListModel QML object, making it easier to implement RESTful Qt clients.

The following templates are currently available:

  • Sinatra RESTful server with modular architecture (sinatra-server)
  • Ruby Sinatra RESTful service plug-in (sinatra-service-plugin)
  • Simple Qt mobile app with plugin-based architecture (simple-app)
  • Qt mobile RESTful app with plugin-based architecture (restful-app)
  • Qt mobile RESTful client plug-in (restful-client-plugin)

You're welcome to try it :) Here is what you need to do:

* git clone
* install ruby
* run ‘gem install bundler’
* run ‘bundle install’
* the Meg CLI is available in ‘./bin/meg’

Creating a simple plugin-based QML application

So, these are the steps to create a simple plugin-based QML application using Meg:

1. Create a new project using simple-app as the project type (template):

./bin/meg new SimpleApp -t simple-app

2. Create a new plugin, for example, to display students information:

./bin/meg generate plugin students -t basic-plugin -a simpleapp/

3. Create another plugin, for example, to display teachers information:

./bin/meg generate plugin teachers -t basic-plugin -a simpleapp/

In step 1, we create a new project in the simpleapp directory, using the simple-app template (-t option). In steps 2 and 3, we create two plugins (students and teachers) using the basic-plugin template and install them into the simpleapp directory. After that, your application is available in the simpleapp directory and can be built by a normal qmake/make run.

Here is what you get out of those steps:

Creating a Sinatra-based RESTful server

Besides automating the creation of Qt-based mobile clients, Meg also provides templates for generating RESTful servers based on the Sinatra Ruby microframework. As on the client side, we also designed a plugin-based architecture for the RESTful server, where new services are provided by separate modules. Here are the steps to create a new RESTful server:

1. Create the server project:

./bin/meg new myserver -t sinatra-server

2. Create a plugin for handling CRUD operations for an application for visualizing a conference program:

./bin/meg generate plugin Conference -t sinatra-service-plugin -a myserver/ acronym:string name:string city:string country:string venue:string start_date:datetime end_date:datetime

3. Create a similar plugin for handling CRUD operations for speakers:

./bin/meg generate plugin Speaker -t sinatra-service-plugin -a myserver/ name:string affiliation:string shortbio:string

To run the server you first need to set up the database:

$ cd myserver
$ rake db:migrate

And populate some data:

$ sqlite3 db/development.sqlite3
insert into conferences values (1, 'QtCon-BR', 'QtCon Brasil', 'São Paulo', 'Brasil', 'Espaco Fit', '2017-08-18 09:00:00', '2017-08-20 18:00:00');
insert into conferences values (2, 'Akademy', 'Akademy', 'Berlin', 'Germany', 'BCC', '2017-03-01 09:00:00', '2017-03-03 18:00:00');
insert into conferences values (3, '', '', 'Guwahati', 'India', 'IIT', '2017-03-10 09:00:00', '2017-03-12 18:00:00');
insert into speakers values (1, 'Our beloved Konqi', 'KDE', 'Konqi is awesome');
insert into speakers values (2, 'Dirk Gently', 'Adams', 'He is a holistic detective');

Now, you can start the server:

$ ruby myserver.rb

Creating a Qt-based RESTful client

Creating a Qt-based RESTful client for our server is also quite simple:

$ ./bin/meg new MyApp -t restful-app
$ ./bin/meg generate plugin conferences -t restful-client-plugin -a myapp -i name -c university acronym:string name:string city:string venue:string start_date:datetime end_date:datetime
$ ./bin/meg generate plugin speakers -t restful-client-plugin -a myapp -i name -c microphone name:string affiliation:string shortbio:string

These commands create a new Qt mobile project using the restful-app template. Then two RESTful client plugins for the aforementioned services are created and installed in the project directory. You can now build your Qt mobile client and, once the server is running, this is what you get:


So, that's all folks! We hope this improves the development workflow of RESTful-based Qt mobile applications. This is a one-year project being carried out at IFBa by myself and Eliakin Costa. Hopefully, we'll be back with more news soon.

See you!

Latte Dock v0.7.2 arrives in KDE and Kubuntu backports PPA

Mon, 2017/11/13 - 5:44pm

Latte Dock, the very popular dock/panel app for the Plasma Desktop, has released its new bugfix version 0.7.2. This is also the first stable release since Latte Dock became an official KDE project at the end of August.



Version 0.7.1 was added to our backports PPA in a previous round of backports for Kubuntu 17.10 Artful Aardvark.

Today that has been updated to 0.7.2, and a build added for Kubuntu 17.04 Zesty Zapus users.

The PPA can be enabled by adding the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications and Plasma releases, so please be aware that enabling the backports PPA for the first time and doing a full upgrade would result in a substantial amount of upgraded packages in addition to Latte Dock.

~ The PPA will also continue to receive further bugfix updates when they become available, and further updated releases of Plasma and applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list:
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
3. Kubuntu PPA bugs:

Latte bug fix release v0.7.2

Mon, 2017/11/13 - 12:57pm

Latte Dock v0.7.2 has been released, containing many important fixes and improvements!

KDE Project

Latte managed to pass the KDE review process and become an official KDE project! It can be found in Extragear (where projects with an independent release schedule land). By becoming a KDE project, Latte has already benefited in many areas: more translations through the KDE localization teams, Plasma devs sharing their knowledge in Qt, QML, and much more.

Please, everyone using 0.7.1, update to 0.7.2, as this should fix any crashes related to Qt >= 5.9.2 and at the same time provide you with a more solid experience.

Go get v0.7.2 from:


  • fix crashes introduced with Qt 5.9.2 when the user hovers the dock after deleting some applets, etc.
  • greatly improve the attention bouncing animation
  • fix coloring for shortcut badges
  • various fixes for animations and glitches
  • hide the internal tasks separator at the edges
  • improvements for when the window manager's compositing is disabled
  • fix small issues with title tooltips
  • pass the KDE review process
  • move the source to KDE infrastructure
  • more translations from the KDE localization teams

Interview with Lars Pontoppidan

Mon, 2017/11/13 - 8:00am
Could you tell us something about yourself?

Yes certainly! I’m Lars Pontoppidan; a 36 year old, self-employed programmer, game developer, musician and artist.

I’ve been drawing and painting since I could put my pen to the paper – so about 35 years.

I made my first recognizable painting when I was around 3 or 4 – my mom still has it framed at her house.

I’ve always wanted to end up at some level where I could combine all my skills and hobbies. Somewhere along the way I found out that game development demands a lot of the skills I possess – so 1.5 years ago, I decided to cancel all my contracts with my clients and go for a new path in life as an “indie game developer”. I’ve now found out that it’s probably the worst time I could ever have gotten into indie game development. The bubble has more or less already burst. There are simply too many game releases for consumers to cope with at the moment. But hey, I’ve tried worse, so it doesn’t really bother me – and I get to make art with Krita!

Do you paint professionally, as a hobby artist, or both?

Both I’d say. I’ve always been creating things on a hobby level – but have also delivered a lot of designs, logos and custom graphics as self-employed. I like the hobby work the most – as there are no deadlines or rules for when the project is done.

What genre(s) do you work in?

Cartooning, Digital painting, Animation and Video game art. All these (and maybe more) blend in when producing a game. I also like painting dark and gloomy pictures once in a while.

I think I’ve mostly done cartoon styled work – but with a grain of realism in it. My own little mixture.

I started out with pencil and paper – moved to the Deluxe Paint series, when I got my first Amiga – and ended up with Krita (which is an absolute delight to work with. Thanks to you guys!). I still occasionally do some sketching with pencil and paper – depending on my mood.

Whose work inspires you most — who are your role models as an artist?

* A list too long for me to compile here, of sci-fi and fantasy artists. Peter Elson is the first that comes to mind. These artists, in my opinion, lay the very foundation of what’s (supposedly) possible with human technology – and currently offer the only glimpse of how life might look in other places in the vast universe that surrounds us. It’s mind-blowing how they come up with all the alien designs they do.

* Salvador Dalí – It’s hard to find the right words for his creations – which, I think, is why his works speak to me.

* “vergvoktre” He’s made some really dark, twisted and creepy creations that somehow get under my skin.

How and when did you get to try digital painting for the first time?

The very first digital painting program I’ve ever tried was KoalaPainter for the Commodore 64. I had nothing but a joystick and made, if I recall correctly, a smiley face in black and white.

Thankfully my Amiga 500 came with a copy of Deluxe Paint IV, a two-button mouse and the luxury of a 256+ color palette.

What makes you choose digital over traditional painting?

The glorious “Undo” buffer. I mean… it’s just magic. Especially in the first part of the day (before the first two cups of coffee), when your hand just won’t draw perfect circles, nor any straight lines.

How did you find out about Krita?

I read an article online about the Calligra office suite, describing how Calligra compared to OpenOffice. I eventually installed it to see for myself, and boom – there was Krita as part of the package. This was my first encounter – unfortunately it ended with an uninstall, because of stability issues with the Calligra suite in general.

What was your first impression?

The first impression was actually really good – unfortunately it ended up a bit in the shadow of the Calligra suite’s combined impression, which wasn’t so positive after a few segfaults in the different applications. Luckily I tried Krita again later, when the Qt5-based versions arrived. I haven’t looked back since.

What do you love about Krita?

The brush engines and the “Layers” docker.

The brushes, and most of the default settings for them, just feel right. Also the many options to tweak the brushes are really awesome.

The layers docker was actually what gave me the best impression of the program – you had working group layers – and you could give two layers the same name! None of the graphics applications I used a few years back had these basic, fundamental features done right (Inkscape and GIMP – I’m looking at you). Krita’s layers didn’t feel broken or hacked-on, and had no naming-scheme limitations. A small thing that has made a big difference to me.

What do you think needs improvement in Krita? Is there anything that really annoys you?

Uhm… I was going to write ‘speed’ – but everybody is screaming for more of that already. I know how the developers are doing their best to get more juice.

Some great overall stability would be nice. I’ve only ever had 2 or 3 crashes with GIMP over a long period of time – the count is a bit higher with Krita – on a shorter time scale.

My biggest feature request would be: cut’n’paste functionality through multiple layers, that also pastes into separate layers. This would greatly improve my workflow. I’ve always worked with a group layer containing separate layers for outline, color, texture, shadow etc. – one for each e.g. movable part in a character rig. So I would really benefit from a (selection-based) cut’n’paste that could cut through all the selected layers – and paste all these separate selections and layers elsewhere in the layer tree.

What sets Krita apart from the other tools that you use?

I find that most of Krita’s tools actually do what you expect them to do – without any weird limitations or special cases. Plus the different brushes, brush engines and all the flexibility to tweak them, are real killer features.

The non-destructive masks (Transparency, Filter and Transform) are also on my list of favourite features. I use these layer types a lot when creating game art – to make them blend in better with the game backgrounds.

And maybe the single most important thing: it’s free and open source. So I’m quite certain I will be able to open up my old Krita files many years into the future.

… and speaking of the future; I really look forward to getting my hands dirty with the Python scripting API.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

It would have to be the opening scene of my upcoming 2D game “non”. It’s using a great variety of Krita’s really awesome and powerful features. The scenes in the game feature full day and night cycles where all the lighting and shadows change dynamically – this makes it especially hard to get beautifully painted scenes in all the states each scene has between day and night. Krita’s tool set makes it easier and quicker for me to test out a specific feature for an object or sprite – before throwing it into the game engine.

The biggest scene I have so far is 10200×4080 pixels – Krita was actually performing decently up to a certain point, where I had to break the scene into smaller projects. I’m not blaming Krita for this!

What techniques and brushes did you use in it?

For cartoon styled work I use a Group layer containing:
* background blend (Transparency Mask)
* shadows (Paint layer)
* outlines (Paint layer)
* textures (Group layer)
* solid base color(s) (Paint layer)

For outlines I use the standard Pixel brush ‘Ink_gpen_10’ – it has a really nice sharp edge at small tip sizes. For texturing I mostly use the ‘Splatter_thin’ Pixel brush – with both standard and custom brush tips and settings depending on the project at hand. For shadowing I really like the ‘Airbrush_pressure’ and ‘Airbrush_linear_noisy’ Pixel brushes. I use a selection mask based on the solid base color layer (Layer name -> Right mouse click -> Select Opaque) – and start shadowing the object.

Where can people see more of your work?

In my games:
On my band album covers:

Anything else you’d like to share?

I’d like to thank everyone involved with Krita for making this great open source and free software available to the world. I hope to soon get enough time on my hands to help the project grow.

Take care and be nice to each other.

KDevelop 5.2 released

Sun, 2017/11/12 - 3:30pm

KDevelop 5.2 released

A little more than half a year after the release of KDevelop 5.1, we are happy to announce the availability of KDevelop 5.2 today. Below is a summary of the significant changes -- you can find some additional information in the beta announcement.

We plan to do a 5.2.1 stabilization release soon, should any major issues show up.


With 5.1, KDevelop got a new menu entry Analyzer which features a set of actions to work with analyzer-like plugins. During the last 5.2 development phase, we merged more analyzer plugins into kdevelop.git which are now shipped to you out of the box:


Heaptrack is a heap memory profiler for C/C++ Linux applications.

Screenshot of Heaptrack run from the KDevelop plugin, visualizing memory usage of KDevelop.

cppcheck

cppcheck is a well-known static analyzer for C++, and can now also be run from within KDevelop by default, showing issues inline.

KDevelop with Cppcheck integration

Improved C++ support

A lot of work was done on stabilizing and improving our clang-based C++ language support. Notable fixes include:

  • Properly pass on some categories of compiler flags from the build system to the analyzer, fixing e.g. parse errors in some Qt header files, which cannot be parsed if a certain compiler configuration is not respected
  • Improve performance of C++ code completion in some situations
  • Restore some completion features from 4.x, such as automatic insertion of semicolons in some cases

More improvements, such as better handling of template class member functions, are already being worked on and will be in one of the next versions of KDevelop.

Improved PHP language support

Thanks to Matthijs Tijink we've got many improvements for the PHP language support. The number of syntax warnings with modern PHP code should be greatly reduced, and the type inference is better. The improvements include added support for new language features, work on the type system, as well as bug fixes.

PHP support in KDevelop 5.2

PHP support in KDevelop 5.2


Improved Python language support


Mostly thanks to Francis Herne, some cleanup has been done in the Python language plugin as well.

  • Fixed a false-positive warning when a name used in a closure was defined later in the file.
  • Fixed highlighting of local variables in comprehensions and of parameters in lambda definitions.
  • Infer the correct type when slicing a tuple with constant integers.
  • Infer the correct type from `and` or `or` expressions (Nicolás Alvarez).
  • Internal code cleanups.
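The inference improvements in the list above can be illustrated with a short Python snippet (my own example, not taken from the plugin's test suite); the runtime behavior shown is what the improved plugin now infers statically:

```python
t = (1, "two", 3.0)

# Slicing a tuple with constant integers yields a smaller tuple,
# whose element types are known exactly (here: int and str).
head = t[0:2]

# `and` and `or` evaluate to one of their operands, so the result
# type is drawn from the operand types rather than being a bool.
value = "" or 42        # falsy left operand, evaluates to 42
other = "text" and 3.5  # truthy left operand, evaluates to 3.5

print(type(head).__name__, type(value).__name__, type(other).__name__)
# prints: tuple int float
```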
Ongoing support for other platforms

We're continuously improving the Windows version of KDevelop. For the Windows version, we upgraded Qt to 5.9.1, KF5 to 5.37 and LLVM/Clang to 5.0.0. Also noteworthy: on Windows, we now ship QtWebEngine instead of QtWebKit for the documentation browser.

Get it

Together with the source code, we again provide a prebuilt one-file-executable for 64-bit Linux, as well as binary installers for 32- and 64-bit Microsoft Windows. You can find them on our download page.

The 5.2.0 source code and signatures can be downloaded from here.

Should you find any issues in KDevelop 5.2, please let us know in the bug tracker.

sbrauch Sun, 2017/11/12 - 16:30

GCompris available from the Windows Store

Fri, 2017/11/10 - 8:18pm

As the title says, you can now buy the full version of GCompris for Windows 10 from the Windows Store.

This is a good way to make it more visible and easy to find for new users. Also, if you buy GCompris from the Windows Store, you get automatic updates, you can install it easily on all your Windows systems, and it will run in a sandbox.

Spread the word!

GCompris in Windows Store

If you prefer to not use the store, or if you only want the free demo version, you can still download it from our Download page, and buy the activation code to unlock all the activities. The store is just one more way to distribute GCompris, and to provide some income to support the project.

As usual, the full version is free on Free-Software operating systems like GNU/Linux, but for proprietary operating systems like Windows, the full version has a cost. Of course, the source code of GCompris is and will always be under a Free-Software license.

Note: the package on the store contains the version using the software renderer instead of OpenGL, since it’s the only way we have for now to make sure it will work on any computer. If you really want the version using OpenGL, get it from our Download page.

Plasma 5.11.3 bugfix release now in backports PPA for Artful Aardvark 17.10

Thu, 2017/11/09 - 6:03pm

The 3rd bugfix update (5.11.3) of the Plasma 5.11 series is now available for users of Kubuntu Artful Aardvark 17.10 to install via our Backports PPA. This update also includes an update for Krita to

To update, add the following repository to your software sources list:


or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications, so please be aware that enabling the backports PPA for the first time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to Plasma 5.11.3.

~ The PPA will also continue to receive bugfix updates to Plasma 5.11 when they become available, and further updated applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list:
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on
3. Kubuntu PPA bugs:

Kdenlive 17.08.3 released

Thu, 2017/11/09 - 5:22pm

The last dot release of the 17.08 series is out with minor fixes. We continue to focus on the refactoring branch, with steady progress towards a stable release.


  • Set a proper desktop file name to fix an icon under Wayland. Commit.
  • Sort clip zones by position instead of name. Commit.
  • Fix melt.exe detection on Windows. Commit.
  • Revert “Windows: terminate KDE session on window close”. Commit.
  • Make KCrash optional. Commit.

Learn Digital Painting with Krita in Bogota, Colombia

Thu, 2017/11/09 - 10:54am

Lina Porras and David Saenz from the Ubuntu Colombia user group wrote to tell us that they will give an introduction to digital painting with Krita, starting this Saturday. David will be teaching Krita over four Saturday sessions.

It will be an introductory course where people aged 14 and older will learn the basics of digital painting and start painting in Krita. Here is more information:

And you can follow them on Twitter (@ubuntco) and on Facebook as well:

If you are thinking of organizing a Krita course for your local user group, community art college or similar, contact us so we can help you spread the word, too!