Planet KDE

#30: GSoC with KDE Now – 8

1 hour 39 min ago

Hey! I’m making KDE Now, an application for the Plasma Desktop. It will help the user see important stuff from his email on a plasmoid. It’s similar to what Google Now does on Android. To know more, click here.


Time sure passes fast when you are enjoying something. And so, another week has passed and I’m back with another status update. Things look very, very good from here on. At least, until we decide to change something drastic that I hadn’t proposed earlier 😄.

Last week was rough. I was struggling with a viral fever and a throat infection for the better part of the week, plus I had some other things to attend to. Nonetheless, I managed to devote the needed time to my project. I created the UI of the Flight and Restaurant cards. They look good and work as expected (the dynamic loading of content and all).

Here are the obligatory screenshots:

[Screenshots: dark and light theme variants of the cards]

Other than that, I’m afraid I don’t have much to talk about. I still have not made the database fix I talked about in my earlier post, but I’ll do it next. In a way, I have managed to successfully deliver everything I had planned initially in my proposal :). This was a real confidence boost.

For the next few weeks, I plan to work on some tidbits I had left earlier (like the database thingy I mentioned). I also might refine the UI. Plus, I need to figure out a convenient way for the end user to give the plasmoid his email credentials. I might look into KWallet, but my mentor and I still have to talk about it.

Ending with a request: if you are a graphics designer with experience making vector graphics and want to help me out, feel free to contact me. Just comment your email address below (it’s private and I moderate comments :)) and I’ll contact you. I could really use your help.

Till next time



Update on my work at GCompris

6 hours 40 min ago

Two months into GSoC, I must say that this summer is the best of my life. I have learned a great deal about programming, gone jogging every day and had fun overall during my vacation.

If you followed my blog posts, you already know that before the actual start of the GSoC coding period, to better familiarize myself with the programming languages GCompris uses and to prove my commitment and dedication, I contributed to GCompris by solving various bugs and developing two activities. “PhotoHunter” is an activity ported from the previous GTK+ version of GCompris; its goal is to find the differences between two similar images.

The second activity is “Share”, an original creation of mine. In this activity, the user is presented with a problem about a child who has some candies and wants to share them equally with his friends. The user has to drag the exact number of candies from the left panel to the friends’ areas on the main screen.

Now for the GSoC part of my work: I am happy to announce that “Crane” is already merged, and “Alphabetical Order” is close behind – everything is finished, but it needs some more testing.


In this post I will just mention the updates brought to Crane. For a more detailed presentation on the functionality of Crane, please check my previous posts.

To add a pedagogical side to this activity, I decided to use, as starter levels, some easy words built from letter images connected to one another. The user’s goal is to reproduce the model, and by doing so he automatically learns new words.


To better teach the association between the two models, the levels are carefully designed to gradually increase the difficulty: for the first levels, the two boards are at the same height and the grid is displayed; then, for some levels, the grid is removed; for others, the height of the two boards is changed; and at the end, both changes are made: the grid is removed and the heights differ.


At last, there are levels with small images that the user must match. As the levels increase, there are more and more images on the board, making it more difficult to match.

Alphabetical Order:


The second activity on my list for Google Summer of Code was “Alphabetical Order” – an educational game in which the user learns the alphabet through practice. Its goal is to arrange letters in alphabetical order: at the top of the screen is the main area, where some letters are displayed, some of which are missing. The user has to drag the missing letters from the bottom part of the screen to their right place in the top area.

As the difficulty increases, the levels become more and more complicated:

  • more letters,
  • more missing letters,
  • the letters on top are not always in the right order.
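As an aside, the core correctness check behind such a level is tiny. Here is a hypothetical Python illustration of the idea (GCompris activities are actually written in QML/JavaScript, so this is only a sketch):

```python
def is_correct(arrangement):
    """True when the letters sit in alphabetical order."""
    return arrangement == sorted(arrangement)

print(is_correct(["A", "B", "C", "D"]))  # True
print(is_correct(["A", "C", "B", "D"]))  # False
```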


Pressing any letter triggers a sound and the letter is read aloud, so children learn how it sounds as well. In the configuration panel, these sounds can be turned on or off with a single button. The language used by the activity can be changed in the same panel.


Here we can find two more buttons: “Easy mode” and “Ok button”.

In “Easy mode”, when the user drags a letter to its place, if the answer is correct, a sparkle is triggered. If the answer is wrong, the letter will turn red for a few seconds.

If the “Ok button” is activated, the level is passed only when the user presses it. If the answer is correct, a sparkling effect appears on all the letters; otherwise, the wrong letters glow red.

If the user’s answers are mostly correct on the first level, an adaptive algorithm will increase the difficulty, and as he passes more and more levels, the algorithm will dictate the difficulty of the game. On the other hand, if his answers are mostly incorrect, the difficulty is lowered so he can better learn the alphabet with easier levels.
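Such an adaptive scheme might look roughly like this (a hypothetical Python sketch; the 0.75/0.25 thresholds and the recent-answer window are my own illustrative choices, not GCompris’s actual values):

```python
def next_level(level, recent_answers, max_level=10):
    """Adjust the difficulty based on the share of correct answers
    among the most recent attempts (illustrative thresholds)."""
    if not recent_answers:
        return level
    accuracy = sum(recent_answers) / len(recent_answers)
    if accuracy >= 0.75:                 # mostly correct: raise difficulty
        return min(level + 1, max_level)
    if accuracy <= 0.25:                 # mostly wrong: ease off
        return max(level - 1, 1)
    return level                         # otherwise, stay put

print(next_level(3, [True, True, True, False]))   # 4
print(next_level(3, [False, False, True, False])) # 2
```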


After finishing “Crane” and “Alphabetical Order”, I went back to “PhotoHunter” and “Share”. For the first one, I added a new feature: a Help button. Pressing it once moves the two pictures to the center, one on top of the other. A sliding bar appears, and as the user drags it to the right, the two images combine and reveal the differences. In this “Help mode”, the user cannot press on the differences; he has to press the Help button to exit “Help mode”, so the images return to their initial positions, and then press on the difference again for it to be counted.


This is how the slider bar works:


A portrait view:

Share:


For “Share”, I added a new type of level and a new feature: “Easy mode”. In “Easy mode”, the user can only use the maximum number of candies given in the problem. If he gives too many candies to one friend on the board, he won’t have enough left for the others. On the other hand, if “Easy mode” is deactivated, the user can drag more than the exact number of candies each friend on the board needs. This addition forces the user to solve the given problem and find its answer instead of guessing it by dragging the candies from one child to another.





The new levels consist of candies already placed in some friends’ areas before the user starts playing. This makes him take into consideration the candies already on the board and recompute to find the new solution.

I am currently working on porting TuxPaint, a paint activity in which children can have fun drawing lines, rectangles and circles or free-drawing their own creation. The next post will mainly cover the development of TuxPaint.

Interview with Liz de Souza

12 hours 3 min ago
Could you tell us something about yourself?

Hi! I’m 32 years old, Brazilian, I’m a full-time wife and mother, and also an illustrator.

Do you paint professionally, as a hobby artist, or both?

Both, but after having children I paint mostly professionally. I still have sketchbooks to carry in my backpack when I have to go to the doctor or do something where I’ll stay waiting – while I wait, I draw. At home, I honor my daughters’ requests for specific drawings or drawing lessons.

What genre(s) do you work in?

I work mostly on portraits, character designs, concept art and illustration. People call me especially for portraits/illustrations for wedding invitations and family drawings. Another favorite genre of mine is illustrating Catholic themes; my faith is always portrayed in my personal works.

Whose work inspires you most — who are your role models as an artist?

I admire artists from all periods of history. I love Giotto, Fra Angelico, Michelangelo, Leonardo da Vinci, Caravaggio, Renoir, Ingres, Monet… Great masters always inspire me.

I really have as role models the Eastern Orthodox iconographers. The Eastern icons are so full of meaning and an inexplicable beauty.

And I have lots of artists I admire that work with digital painting. Some of them I follow for the technique, others because of the use of colors, others because of the way they illustrate abstract concepts… But I can list some: Yuko Shimizu, Lois Van Baarle (Loish), Charlie Bowater, Vicktoria Ridzel (Viria), Cyarine, Bobby Chiu, David Revoy, well… and hundreds more. My favs list is huge 🙂

As a Brazilian, my role model in my country is the artist Maurício de Sousa. He has been an inspiration for me since I was a toddler. I love comics. Oh, I love Will Eisner also. Well… I love lots of artists. Did I mention I like manga too?

How and when did you get to try digital painting for the first time?

When I was in my 3rd or 4th semester of College (2001), I had a class called “Electronic Art” (yes, creepy name). I bought my first tablet, a huge Genius model I can’t even remember the name of. I did works scanning my lineart and coloring with Photoshop 5.

After that, I only really got into digital painting, and started to research and practice seriously, two years after I finished College, in 2007, when I bought a Genius Wizard Pen and tried hard to make this thing happen. Bobby Chiu was a great mentor and friend that year. I listened to all his podcasts and drew a lot when I was not at my job.

In 2009 I got married and had the opportunity to leave my job to dedicate my professional efforts to what I love: illustrating and digital painting. It took some years to get somewhere, but I can tell that having children helped so much to make my brain work quickly to learn new techniques and improve, since I never have too much spare time.

In 2013 I started using social media to post free drawings, and got a new tablet, a Wacom Intuos. After that, I’ve always had commissions, thank God!

What makes you choose digital over traditional painting?

I work mostly with digital media because it’s easy to correct problems, the client can ask for changes without making me do everything from scratch again, and because I’ve never actually learned how to paint with real paint (even if I tried hard in College — with no result).


I like the opportunity digital painting gives me to share my work and get commissions from anywhere. I’ve done commissions for the USA and Germany and had a lot of feedback about my free drawings from several countries!

I still love traditional drawing, especially black and white drawings with pen, brush and India ink.

How did you find out about Krita?

My husband and I started using only Linux on our computers when we got married and I installed all paint programs I had available to test and find something that was close or better than Photoshop. I used GIMP for a couple of years, but more or less in 2012 I found Krita at the Ubuntu Software Center and tried it. And liked it. And never left it.

What was your first impression?

Krita seemed to me very similar to Photoshop. It took several months to get used to it. At that time it had many bugs that shut down the app without warning, which annoyed me a lot. But after I changed my OS from Ubuntu to Kubuntu, it worked a lot better for me.

What do you love about Krita?

It’s great software, and I love that such a great project has been made free software (of course there are paid versions, but the free one is the most popular). All functionalities and features are fantastic and work so well for  the digital artist. But what I admire the most is the fact that the team is so available to answer questions, and work so hard to make Krita better and better. My husband is a software engineer and I know how much work it is to build a program, how much time you spend on it, how many nights you lose due to the project deadline, and how great it is to hear the feedback from people who use your app and help you to make it better. If all human beings had this inner good will, so many good things would happen in the world. God bless the Krita Foundation.


What do you think needs improvement in Krita? Is there anything that really annoys you?

Well… actually there are some issues with importing brushes I would like to see addressed (like importing MyPaint brushes and Photoshop .TPL brushes), but I believe I should only thank the team for all their hard work, and try to help them with the bugs so Krita becomes the great software for digital painting. Sometimes I ask if the team plans to implement this or that feature, but when they answer with the expression “reverse engineering” I get goosebumps. I know what it is and how hard it is. I saw my husband doing that once. It was a nightmare. So, I feel that my duty is to be thankful to them and do something to help the Krita Foundation (like the Krita training in Portuguese I’m doing right now).

What sets Krita apart from the other tools that you use?

It is high-quality open-source software. Runs in my dear Kubuntu. That’s happiness for me. I have other tools installed in my OS, such as GIMP and MyPaint. But Krita does everything, it has all the features a professional digital artist needs. I still like MyPaint, but only for sketches.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

I’m always drawing something new, and the newest drawings are always the best ones, because we learn something every day. I don’t know if I can pick only one drawing! Mmmmm… maybe the portrait I made for the family of a dear friend in 2015, the first real digital painting I made, is still my favourite. The result was awesome, I was so proud of myself when I finished it, and my friend loved it. Happiness everywhere.


What techniques and brushes did you use in it?

I used one of the three techniques I’m used to: making a layer with a very realistic drawing, painting a basic color on a layer below, and then completing the painting with a third layer above, making all the lines disappear. It gives the look of a real portrait, and people love it. I also use two other techniques, painting below and using lots of layers, but those are for simple drawings, usually cartoon portraits, memes and T-shirt illustrations. In fact, I love trying other people’s techniques; there are always many ways to solve digital painting problems. I love learning from other artists!

Where can people see more of your work?

My commissions and best drawings (and very old stuff) are in my DeviantArt –
Best quality memes and other drawings you can see in my Tumblr –
News, updates, memes, and thoughts at my Facebook –
Random photos, paper sketches and sometimes memes at Instagram –
Recently I’ve joined Twitter, but I use it to talk with the developers of Krita and MyPaint.
My YouTube channel has some speedpaints –
I have a Patreon page, which isn’t really active yet but people can follow me to receive news when I make it happen –

Anything else you’d like to share?

I’m doing a Krita training in Brazilian Portuguese right now, and it has taken a lot of the little spare time I have. But if English, Spanish or other Portuguese speakers want access to it, I’m running a closed class where I post the videos and share knowledge with the students. For more information, please join my crowdfunding (to feed my family while I work) by sending me an email to – you can write in Portuguese, English or Spanish, and I’ll answer. With the help of my husband I can answer German and Italian speakers too.

I also take commissions – to ask for it and get information, email me:

And… Thank you Krita Foundation! God bless you all!


We’ve come a long way from where we began!

Sun, 2016/07/24 - 6:43pm
“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” ― Bill Gates

After working for several weeks on our WikiRating Google Summer of Code project, Davide, Alessandro and I have slowly reached the level where we can visualize the entire project in its final stages. It has been quite a while since we wrote something, so after seeing this:


We knew it was time to summarize what we have been busy doing in these 50 commits.

I hope you have already understood our vision from the previous blog post. So after a fair amount of planning, it was time to start coding the actual engine – or, more importantly, to start working on the ❤ of the project. This was the time when my brain was buzzing with numerous design patterns and coding paradigms, and I was finding it a bit overwhelming. I knew it was a difficult phase, but I needed some force (read: perseverance) to get over it. I planned and re-planned, but eventually, with the soothing advice of my mentors – “Don’t worry about the minor intricacies now, focus on the easiest thing first” – I began to code!

How stuff works

I understand that there are numerous things to talk about, and it’s easy to lose track of the main theme. Therefore, we are going to tour the engine as it actually functions – that is, we will see what happens under the hood as we run the engine. Let me make it easier for you; have a look at this:

The main method for the engine run


You can see there are some methods and some parent classes involved in the main run of the engine; let’s inspect them.

Fetching data (online):

The initial step is to create all the classes for the database to store the data. After this we fetch the required data, like Pages, Users and Revisions, via the queries listed here.

{
  "batchcomplete": "",
  "limits": { "allpages": 500 },
  "query": {
    "allpages": [
      { "pageid": 148, "ns": 0, "title": "About WikiToLearn" },
      { "pageid": 638, "ns": 0, "title": "An Introduction to Number Theory" },
      { "pageid": 835, "ns": 0, "title": "An Introduction to Number Theory/Chebyshev" },
      { "pageid": 649, "ns": 0, "title": "An Introduction to Number Theory/Primality Tests" },
      { "pageid": 646, "ns": 0, "title": "An Introduction to Number Theory/What are prime numbers and how many are there?" },
      …

This is a typical response from the Web API, giving us info about the pages on the platform.
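For illustration, consuming such a response once it has been fetched might look like this (a minimal Python sketch working on a trimmed-down sample shaped like the response above; it is not the project’s actual code):

```python
import json

# A trimmed sample shaped like the MediaWiki "allpages" response above.
sample = """
{
  "batchcomplete": "",
  "limits": { "allpages": 500 },
  "query": {
    "allpages": [
      { "pageid": 148, "ns": 0, "title": "About WikiToLearn" },
      { "pageid": 638, "ns": 0, "title": "An Introduction to Number Theory" }
    ]
  }
}
"""

data = json.loads(sample)
# Map page IDs to titles, ready to become Page nodes in the database.
pages = {p["pageid"]: p["title"] for p in data["query"]["allpages"]}
print(pages)  # {148: 'About WikiToLearn', 638: 'An Introduction to Number Theory'}
```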

Similarly, we fetch all the other components (Users and Revisions) and simultaneously store them too.

Code showing the construction of Page nodes

After fetching the data for Pages and Users, we work on linking the contributions with their corresponding contributors. Here we make edges from the user nodes to the respective revisions of the pages. These edges also contain useful information, like the size of the contribution.

We also need to link the pages with other pages via backlinks for calculating the PageRank (we will discuss these concepts in a short while).

Once we have all the data via the online API calls, we move toward our next pursuit: offline computation on the fetched data.


Since this feature is new to the WikiToLearn platform, there were no initial user votes on any of the page versions, so we wrote a class to select random users and make them vote for various pages. Later we will write a MediaWiki extension to gather actual votes from the users, but for now we have sample data to perform further computations.

So after generating votes, we need to calculate various parameters like user credibility, ratings, PageRank and badges (Platinum, Gold, Silver, Bronze, Stone). The calculation of credibility and ratings is described here, but badges and PageRank are new concepts.


We will display various badges based on a percentile analysis of the page ratings. That is, we lay down thresholds for the various badges – say, the top 10% for the Platinum badge – then filter out the top 10% of pages on the basis of their page rating and assign them the suitable badge. The badges will give readers an immediate visual sense of the quality of the pages.
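A percentile-based badge assignment can be sketched like this (a Python illustration only; the top-10% Platinum cut-off comes from the example above, while the remaining cut-offs are invented for the sake of the sketch):

```python
def assign_badges(ratings, thresholds=((0.10, "Platinum"), (0.30, "Gold"),
                                       (0.55, "Silver"), (0.80, "Bronze"))):
    """Map each page to a badge based on its rating percentile.
    `ratings` is {page: rating}; `thresholds` are cumulative top-fractions
    (only the top-10% Platinum cut-off is from the post; the rest are
    illustrative). Everything below the last cut-off gets Stone."""
    ranked = sorted(ratings, key=ratings.get, reverse=True)
    n = len(ranked)
    badges = {}
    for rank, page in enumerate(ranked):
        frac = (rank + 1) / n          # page's top-percentile position
        for cutoff, badge in thresholds:
            if frac <= cutoff:
                badges[page] = badge
                break
        else:
            badges[page] = "Stone"
    return badges

# Ten pages rated 10 (best) down to 1 (worst).
ratings = {page: rating for page, rating in zip("ABCDEFGHIJ", range(10, 0, -1))}
print(assign_badges(ratings)["A"])  # Platinum
```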

Two other very important concepts are PageRank and backlinks; let’s talk about them too.

PageRank & Backlinks:

Let’s consider a scenario:

Page interconnections

There are 5 pages in the system; the arrows denote hyperlinks from one page to another. These are called backlinks. Whenever an author decides to cite another user’s work, a backlink is formed from that page to the other. It’s easy to see that the more backlinks a page has, the more reliable it becomes (since each time authors decide to link someone else’s work, they presumably think it is of good quality).

So the current graph :

Page 0: 4, 3, 2
Page 1: 0, 4
Page 2: 4
Page 3: 4, 2
Page 4: 3

Here we have connections like Page 0 being pointed to by 3 pages (4, 3 and 2), and so on.

Now we calculate a base rating of all the pages with respect to the page having the maximum number of backlinks. We see that Page 0 has the maximum number of backlinks (3). Then we divide the number of backlinks of every other page by this maximum. This gives us the importance of the pages based on their backlinks.

We used this equation:

Base Weight = (number of backlinks) / (max backlinks)

So Base Weight of Page 0 = (1 + 1 + 1)/3 = 1

Base weights: 1, 0.666667, 0.333333, 0.666667, 0.333333 (of Page 0, Page 1 and so on)

There is a slight problem here: let’s assume that we have 3 pages A, B and C, where A has more backlinks than B. According to the above computation, a link from A to C is equivalent to a link from B to C. But that shouldn’t be the case: Page A’s link carries more importance than Page B’s link, because A has more backlinks than B. Therefore we need a way to make our computation reflect this.

We can fix this problem by running the computation one more time, but now, instead of counting 1.0 for an incoming link, we take the source page’s Base Weight, so the more important pages automatically contribute more. The refined weights are:

Revised Base Weight of Page 0 = (0.333 + 0.667 + 0.333)/3 ≈ 0.444444

Page weights: 0.444444, 0.444444, 0.111111, 0.222222, 0.222222

So we see that the anomaly is resolved :)
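The two passes above fit in a few lines; this Python snippet (an illustration only, not the engine’s actual code) reproduces the numbers from the example:

```python
# Incoming links from the example graph: page -> pages linking to it.
inlinks = {0: [4, 3, 2], 1: [0, 4], 2: [4], 3: [4, 2], 4: [3]}
max_backlinks = max(len(links) for links in inlinks.values())  # 3 (Page 0)

# First pass: every incoming link counts as 1.0.
base = {p: len(links) / max_backlinks for p, links in inlinks.items()}

# Second pass: an incoming link now contributes its source page's base
# weight, so links from more important pages count for more.
revised = {p: sum(base[q] for q in links) / max_backlinks
           for p, links in inlinks.items()}

print([round(base[p], 6) for p in range(5)])
# [1.0, 0.666667, 0.333333, 0.666667, 0.333333]
print([round(revised[p], 6) for p in range(5)])
# [0.444444, 0.444444, 0.111111, 0.222222, 0.222222]
```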

This completes our engine analysis. And finally our graph in OrientDB looks like this:

sample graph

Right now I am developing an extension for user interaction with the engine and will return soon with the latest updates. Till then, stay tuned 😀

LabPlot 2.3.0 released

Sat, 2016/07/23 - 7:53pm

Less than four months after the last release, and after a lot of activity in our repository during this time, we’re happy to announce the next release of LabPlot with a lot of new features. So, be prepared for a long post.

As already announced a couple of days ago, starting with this release we provide installers for Windows (32-bit and 64-bit) in the download section of our homepage. The Windows version is not as well tested and maybe not as mature as the Linux version yet, and we’ll spend more time in the future to improve it. Any feedback from Windows users is highly welcome!

With this release we make the next step towards providing a powerful and user-friendly environment for data analysis and visualization. Last summer, Garvit Khatri worked during GSoC 2015 on the integration of Cantor, a frontend for different open-source computer algebra systems (CAS). Now the user can perform calculations in his favorite (open-source) CAS directly in LabPlot, provided the corresponding CAS is installed, and do the final creation, layouting and editing of plots and curves, as well as the navigation in the data (zooming, shifting, scaling), in the usual LabPlot way within the same environment. LabPlot recognizes CAS variables holding array-like data and allows the user to select them as the source for curves. So, instead of providing columns of a spreadsheet as the source for x- and y-data, the user provides the names of the corresponding CAS variables.

Currently supported CAS data containers are Maxima lists and Python lists, tuples and NumPy arrays. The support for R and Octave vectors will follow in one of the next releases.

Let’s demonstrate the power of this combination with the help of three simple examples. In the first example we use Maxima to generate commonly used signal forms – square, triangle, sawtooth and rectified sine waves (“imperfect waves” because of the finite truncation used in the definitions):

Maxima Example

In the second example we solve the differential equation of the forced Duffing oscillator, again with Maxima, and plot the trajectory, the phase space of the oscillator and the corresponding Poincaré map with LabPlot to study the chaotic dynamics of the oscillator:

Maxima Example

Python, in combination with NumPy, SciPy, SymPy, etc., has become a serious alternative in the scientific community to many other established commercial and open-source computer algebra systems. Thanks to the integration of Cantor, we can do the computation in the Python environment directly in LabPlot. In the example below we generate a signal, compute its Fourier transform and illustrate the effect of Blackman windowing on the Fourier transform. Only the data is generated in Python; the plots are done in LabPlot.

FFT with Python

In this release we greatly increased the number of analysis features.

A Fourier transform of the input data can now be carried out in LabPlot. There are 15 different window functions implemented, and the user can decide which relevant value to calculate and plot (amplitude, magnitude, phase, etc.). Similarly to the last example above carried out in Python, the screenshot below demonstrates the effect of three window functions, where the calculation of the Fourier transform was now done directly in LabPlot:

FFT with LabPlot

For basic signal processing LabPlot provides a Fourier filter (a linear filter in the frequency domain). To remove unwanted frequencies from the input data, such as noise or interfering signals, low-pass, high-pass, band-pass and band-reject filters of different types (Butterworth, Chebyshev I+II, Legendre, Bessel-Thomson) are available. The example below, inspired by this tutorial, shows the signal for “SOS” in Morse code superimposed with white noise across a wide range of frequencies. The Fourier transform reveals a strong contribution at the actual signal frequency. A narrow band-pass filter positioned around this frequency helps to make the SOS signal clearly visible:
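The principle behind such a filter is easy to demonstrate: transform the signal, zero the frequency bins outside the pass band, and transform back. A toy pure-Python sketch of the idea (not LabPlot’s implementation, which provides the proper filter types listed above rather than this brick-wall cut-off):

```python
import cmath
import math

def dft(x, sign=-1):
    """Naive O(n^2) discrete Fourier transform; sign=-1 forward, +1 inverse."""
    n = len(x)
    out = [sum(x[t] * cmath.exp(sign * 2j * math.pi * k * t / n)
               for t in range(n)) for k in range(n)]
    return out if sign == -1 else [v / n for v in out]

n = 64
# A wanted 3-cycle tone plus an unwanted 10-cycle interfering tone.
signal = [math.sin(2 * math.pi * 3 * t / n) + math.sin(2 * math.pi * 10 * t / n)
          for t in range(n)]

# Band-pass: keep only bins 2..4 and their mirror images n-4..n-2,
# i.e. a narrow band around the wanted frequency.
spectrum = dft(signal)
passed = [v if 2 <= k <= 4 or n - 4 <= k <= n - 2 else 0
          for k, v in enumerate(spectrum)]
filtered = [v.real for v in dft(passed, sign=+1)]

# The interfering tone is gone; only the 3-cycle tone remains.
wanted = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
print(max(abs(f - w) for f, w in zip(filtered, wanted)) < 1e-9)  # True
```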

Fourier Filter Example

Another technique (actually a “reformulation” of low-pass filtering) to remove unwanted noise from the true signal is smoothing. LabPlot provides three methods to smooth the data – moving average, Savitzky-Golay and percentile filter. The behavior of these algorithms can be controlled by additional parameters like weighting, padding mode and polynomial order (for the Savitzky-Golay method only).
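Of the three, the moving average is the simplest to illustrate (a Python sketch of the concept only; LabPlot’s weighting options and padding modes go beyond this, and the truncated-window edge handling here is just one possible choice):

```python
def moving_average(data, window=3):
    """Centered moving average; at the edges the window is simply
    truncated (one of several possible padding strategies)."""
    half = window // 2
    result = []
    for i in range(len(data)):
        chunk = data[max(0, i - half):i + half + 1]
        result.append(sum(chunk) / len(chunk))
    return result

noisy = [1.0, 2.5, 2.0, 4.5, 4.0, 6.5, 6.0]
print(moving_average(noisy))
```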

Smoothing Example

To interpolate the data, LabPlot provides several interpolation methods (linear, polynomial, splines of different types, piecewise cubic Hermite polynomials, etc.). To simplify the workflow for many different use cases, the user can select what to evaluate and plot at the interpolation points – the function, first derivative, second derivative or the integral. The number of interpolation points can be determined automatically (5 times the number of points in the input data) or provided by the user explicitly.

More new analysis features and the extension of the already available feature set will come in the next releases.

A couple of smaller features and improvements were added, too. The calculation of many statistical quantities was implemented for columns and rows in LabPlot’s data containers (spreadsheet and matrix):

Column Statistics

Furthermore, the content of the data containers can be exported to LaTeX tables. The appearance of the final LaTeX output can be controlled via several options in the export dialog.

LaTeX Export

To further improve the usability of the application, we implemented filter and search capabilities in the drop-down box for the selection of data sources. In projects with a large number of data sets it’s much easier and faster now to find and use the proper data set for the curves in plots.

A new small widget for taking notes was implemented. With this, the user’s comments and notes on the current activities in the project can be stored in the same project file:

Notes Example

To perform better with a large number of data points, we implemented double-buffering for curves. Currently, applying this technique in our code slightly worsens the quality of the plotted curves, so we decided to introduce a configuration parameter to control this behavior at run-time. By default, double buffering is used and the user benefits from the much better performance. Users who need the best quality can switch off this parameter in the application settings dialog. We’ll fix this problem in the future.

The second performance improvement coming in version 2.3.0 is the much faster generation of random values in the spreadsheet.

There are still many features in our development pipeline, a couple of which are already being worked on. Apart from this, this summer we again have contributions from three Google Summer of Code students, working on support for FITS, on a theme manager for plots, and on histograms.

You can count on many new cool features in the near future!

Randa Meetings 2016 Fundraiser Campaign

Ubuntu tablet and smartphone: a personal "mini" review

Sat, 2016/07/23 - 1:06pm

Big-ass Disclaimer: What follows is purely personal opinion as a geek, technophile, and free and open source software (FOSS) and Linux user. It does not, in any way, reflect the opinions of my employer.

As a geek writing about gadgets and technology, I often find myself drooling over the latest innovations and insanity in the industry. But as an open source and Linux user, I lament how the majority, if not all, of those are pretty but caged gardens. There are, of course, existing “open” systems, but each one represents a compromise for me. Android is basically an open source code dump, with development happening behind closed doors. And it is so far removed from the Linux that we’ve grown to know and love that it’s barely recognizable as Linux. Sailfish OS has technology closer to my heart (like Qt) and is indeed closer to a traditional Linux system, but its availability on actual commercial devices with modern, decent specs leaves much to be desired. Mer/Nemo is far from being something usable even if you install it on, say, a Nexus device.

So when Ubuntu and Canonical revealed they were partnering with actual, big manufacturers for Ubuntu mobile devices, a spark of hope was rekindled in my heart. Let it be clear, I am by no means an Ubuntu user, not even a fan. I left the fold nearly a decade ago, after having spent quite some time using and contributing to Kubuntu (to the point of becoming a certified “member” even, though I never ascended to the Council). In terms of loyalties and usage, I am a KDE user (and “helper”) foremost. I use Fedora because it just works for me, for now. So, yes, an Ubuntu Touch device would be another compromise for me, but it would be the smallest one. Or so I hoped.

When opportunity knocked offering the chance to review two of the latest commercial Ubuntu Touch devices, well, let’s just say it didn’t have to knock twice. To be specific, I got my hands on a bq Aquaris M10 first, and then a Meizu PRO 5. I’ve already written thousands of words on both (not exaggerating on the numbers), so I’m not going to regurgitate them here. For the curious, here are the links to those reviews:

In terms of design and hardware, the two couldn’t be more different. The bq Aquaris M10 tablet is plastic, mid-range, and, at a glance, quite plain. The Meizu PRO 5 is metal and boasts 2015 flagship hardware. It also looks nice to boot. But surprisingly, the Aquaris M10 was able to perform admirably, despite choking a few times on more resource-intensive work. Hardware-wise, I really have no complaints, as both work as advertised. And those happen to also be the least interesting aspects of the devices.

The App Situation

I’m here to write more about the software experience, the Ubuntu Touch experience: the defining feature of these two otherwise fully Android devices. I wish it were otherwise, but I have been sorely disappointed with the end result of my tour. Perhaps I set my expectations too high, or perhaps I put too much faith in the marketing spiel. The good news is that the journey isn’t over yet and the story might very well still change.

While Ubuntu Touch might look and feel like a regular Linux system on top, albeit with the new, more touch-friendly version of Unity 8, beneath it has a few things in common with Android, apart from using Android drivers. For better or for worse, Ubuntu Touch adopts the Android convention of a read-only system partition. In a nutshell, even though you can actually gain root, or sudo rather, far more easily than on an Android system, you cannot effect permanent changes. In short, you can’t really install software via the age-old apt-get (just apt now) method. Actually you can, but only by explicitly making the system partition read-write. But if you do that, you will no longer be able to receive OTA updates, which, based on experience, are very, very desirable. For example, the most recent OTA-11 added Miracast support, which means that the Meizu PRO 5, whose USB-C port doesn’t support HDMI out, can finally use Convergence. The next OTA will even add fingerprint scanner support.

So you have a choice between hacking your way to be able to install regular packages the regular way (as long as they are built for ARM) while depriving yourself of OTAs, or sticking to the default settings and only the apps available from the Ubuntu Store. Neither is ideal and both are unacceptable.

The selection of apps in the Store is dismal, to be blunt. Of the few that are there, around 80% are simply wrappers around web apps or web pages. And while there are some who would rally behind web apps, and in this case they do actually save Ubuntu Touch some space, the ugliness of their unoptimized experiences is easily seen and painfully felt. On the Meizu PRO 5, the situation is a bit worse. In regular mobile/phone mode, the browser’s user agent is fixed as an Android device, which means web services will try to force you to use the Android app instead and will go no further. Yes, those web services are partly to blame, but the default browser app has no easy feature to change the user agent. Conversely, if you switch to desktop mode, it identifies itself as a desktop browser and things work as intended. To cut to the chase, there is such a severe lack of usable native apps, written in Ubuntu Touch’s preferred QML style, that it would make even Windows 8’s store seem like a jam-packed party.


But wait, there’s light at the end of the tunnel, right? There’s that Convergence that Ubuntu, Canonical, and Mark Shuttleworth have been singing about, right? Well, yes and no. For the unfamiliar, Convergence is a nifty concept that, in a nutshell, means you can use your phone as your desktop if you give it the proper peripherals, which, in this case, means an external display, a keyboard, and a mouse. Microsoft later revealed a similar concept called Continuum (for Mobile), but the difference between the two is that Convergence doesn’t limit you in what type of software you can run. On Microsoft’s Continuum, only blessed “Universal” apps can run when in desktop mode. On Convergence, it’s a free-for-all. In fact, you can even run those conventional desktop apps while in mobile mode. At least that’s the theory.

In practice, Convergence has a few gaping holes that all sum up to one thing: it blocks the very productivity it promises to enable by turning your smartphone into a desktop.

First, there is the fact that you can’t even install those desktop apps in the first place. There is Firefox, LibreOffice, GIMP, and Gedit preinstalled on the bq Aquaris M10 and only those. The Meizu PRO 5 is worse off, as it doesn’t even have those. Those are coming, I was told, in a future update. Perhaps the reasoning was that the Meizu PRO 5 initially didn’t support Convergence anyway, but now that it does, the apps should be installed. But good luck installing any other desktop app. Remember the above? You can’t. At least not directly and not easily. It’s possible, but you’ll have to work for it and even then you will be sacrificing some things to gain other things.

And then there’s my pet peeve: copy and paste. Those don’t work. Or rather, it only works between native Ubuntu Touch apps. It doesn’t work between Ubuntu Touch and desktop apps. Heck, it doesn’t even work between desktop apps. I’m not completely 100% sure of the rationale, but I have a hunch that it all ultimately boils down to the fact that Ubuntu Touch uses Mir for its windowing system, not X11. Those two, especially their clipboards, just don’t mix well. In order to allow desktop apps to run in a Mir environment, Ubuntu had to implement a sandboxing environment, which also has a nasty side effect. Each sandbox, therefore each desktop app, has its own “view” of the file system, its own “view” of the user’s home directories. Saving a file in one app is no assurance you’ll see the file in a different app. Sometimes it works, sometimes it doesn’t.

So yes, I blame Mir, both for the clipboard and for the necessity to sandbox desktop apps. To be fair, Wayland has a similar clipboard interoperability problem with X11, but probably a saner solution. I have more confidence, at least, that the Plasma developers, particularly those working on KWin, already have this problem in mind.

A Never Ending Story

I am, fortunately, not going to end on a sad note, because I still have hope. Somewhat. Ubuntu Touch practically has rolling releases and, from what I’ve gathered, the next OTA will bring quite a few goodies: Libertine (Convergence) apps, fingerprint scanner support, etc. I have no idea if the copy/paste problem will also be resolved by then. Suffice it to say, I haven’t been this excited about an Ubuntu release since 2008.

That said, it’s not going to fix all my gripes. There will still be a great lack of native apps, web apps will still work terribly, if at all, and you still can’t install apps through apt-get. The latter probably isn’t going to happen unless Ubuntu and Canonical change direction a bit. I’m not that averse to doing the work needed to actually get a somewhat regular Linux system working underneath Ubuntu Touch, as long as I can be assured I can go back to a pristine copy should I want to get OTA updates again. I’ll figure it out soon enough, but only after OTA 12!

Dreaming of Kogs and Gears

That adventure in Ubuntu Touch land has had me once again pining for a KDE-made solution (KDE being the community now, not the desktop software). There have been, and continue to be, efforts in that area, but not yet to the same extent as Ubuntu Touch. The community, and even the companies supporting its development, just don’t have the same resources as Canonical to put out a commercial device as well. Until that happens, which has its own disadvantages, we’ll have to rely on sheer community power.

Sadly, my experience with even the latest Plasma Mobile project has been rather depressing. Not because of the state of the software but because how I seem to hit corner cases of things not working when it works for others. I guess I just have terrible luck. I do want to try again, and maybe I’ll have a different and better experience this time. But that will be another story for another time.

Krita 3.0.1 Development Builds

Fri, 2016/07/22 - 2:21pm

Because of unforeseen circumstances, we had to rejig our release schedule, so there was no release last week. Still, we wanted to bring you a foretaste of some of the goodies that are going to be in the 3.0.1 release, which is now planned for September 5th. There’s lots to play with here, from bug fixes (the double dot in file names is gone, the crash with cheap tablets is gone, a big issue with memory leaks involving the graphics card is solved), to features (soft-proofing, among others). There may also be new bugs, and not all new features may be working correctly. Export to animated gif or video clips is still in development, and probably will not work well outside the developers’ computers.

After all, this is a development snapshot. Still, please experiment with it and test it, and we’re pretty sure that for many purposes it might be better already than the 3.0 stable release!

On the OSX front we’ve made progress, too, and this build should run fine on OSX 10.9 upwards, including 10.10. All libraries are updated to the latest version, too.


On Windows, Krita supports Wacom, Huion and Yiynova tablets, as well as the Surface Pro series of tablets. The portable zip file builds can be unzipped and run by double-clicking the krita link.

Krita on Windows is tested on Windows 7, Windows 8 and Windows 10. There is only a Windows 64 bits build for now. There is now also a debug build that together with the DrMingw debugger can help with finding the cause of crashes. See the new FAQ entry.


For Linux, we offer AppImages that should run on any reasonably recent Linux distribution. You can download the AppImage, make it executable and run it in place. No installation is needed. At the moment, we only have AppImages for 64-bit versions of Linux.

You can also get Krita from Ubuntu’s App Store in snap format, thanks to Michael Hall’s help. Note that you cannot use the snap version of Krita with the NVidia proprietary driver, due to a limitation in Ubuntu and that there are no translations yet.

OSX and MacOS

Krita on OSX will be fully supported with version 3.1. Krita 3.0 for OSX is still missing Instant Preview and High Quality Canvas scaling. There are also some issues with rendering the image; these issues follow from Apple’s decision to drop support for the OpenGL 3.0 compatibility profile in their display drivers, as well as issues with touchpad and tablet support. We are working to reimplement these features using the OpenGL 3.0 Core profile. For now, we recommend disabling OpenGL when using Krita on OSX for production work. Krita for OSX runs on 10.9, 10.10, 10.11 and is reported to work on macOS too.

Kubuntu 16.04.1 LTS Update Out

Fri, 2016/07/22 - 2:57am

The first point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes.

See the 16.04.1 release announcement.

Download 16.04.1 images.

Double Post – Lakademy and Randa 2016

Fri, 2016/07/22 - 12:34am


I have a few favorite KDE conventions that I really love to participate in.

Randa and Lakademy are always awesome, both are focused on hacking, and I surely do love to hack.

At LaKademy I spent my days working on Subsurface, reworking the interface, trying to make it more pleasant to the eye.

At Randa I worked on KDevelop and Marble, but oh my…

I spent a few days working on KDevelop, on one of the bugs that were preventing the 5.0 release. I tried a bunch of things with help from Kevin Funk and Aleix Pol, but everything I fixed created another two corner cases. On the third day of trying I stopped and went to work on Marble instead so I could clear my head. My patch had almost 500 lines already, and more than 20 commits. Something told me that there was something wrong with the approach, but I didn’t know what else to try.

The problem was a widget being deleted inside Qt’s event loop when a focus change occurred; the code should have prevented the focus loss, but didn’t, and a crash occurred instead.

When I got back to Brazil, I realized that the bug was fixed, so someone had worked on it in the meantime and my patch had to be discarded… I went to look at how the other developer fixed it, and at first I didn’t understand. He was working with invokeMetaMethod instead of calling the method directly (this plus a few other checks). To summarize: he fixed in 20 lines what I hadn’t managed to fix in 500.

It was a really good learning experience for me; I would never have thought to use invokeMetaMethod inside the event loop.


GammaRay 2.5 release

Thu, 2016/07/21 - 2:48pm

GammaRay 2.5 has been released, the biggest feature release yet of our Qt introspection tool. Besides support for Qt 5.7, and in particular the newly added Qt 3D module, a slew of new features awaits you, such as access to QML context property chains and type information, object instance statistics, support for inspecting networking and SSL classes, and runtime switchable logging categories.

We also improved much existing functionality, such as the object and source code navigation and the remote view. We enabled recursive access to value type properties and integrated the QPainter analyzer into more tools.

GammaRay is now also commercially available as part of the Qt Automotive suite, which includes integration with QtCreator for convenient inspection of embedded targets using Linux, QNX, Android or Boot2Qt.

The post GammaRay 2.5 release appeared first on KDAB.

KDStateMachineEditor 1.1.0 released

Thu, 2016/07/21 - 2:47pm

KDStateMachineEditor is a Qt-based framework for creating Qt State Machine metacode using a graphical user interface. It works on all major platforms and is now available as part of the Qt Auto suite.

The latest release of KDAB’s KDStateMachineEditor includes changes to the view, the API and the build system.

View

  • Button added to show/hide transition labels
  • Now using native text rendering
  • Status bar removed

API

  • API added for context menu handling (cf. StateMachineView class)

Build system

  • Toolchain files added for cross-compiling (QNX, Android, etc.)
  • Compilation with namespaced Qt enabled
  • Build with an internal Graphviz build allowed (-DWITH_INTERNAL_GRAPHVIZ=ON)

KDStateMachineEditor works on all major platforms and has been tested on Linux, OS X and Windows.

Prebuilt packages for some popular Linux distributions can be found here.

Homebrew recipe for OSX users can be found here.

The post KDStateMachineEditor 1.1.0 released appeared first on KDAB.

KDAB contributions to Qt 5.7

Thu, 2016/07/21 - 2:46pm

Hello, and welcome to the usual appointment with a new release of Qt!

Qt 5.7 has just been released, and once more, KDAB has been a huge part of it (we are shown in red on the graph):


Qt Project commit stats, up to June 2016. From

In this blog post I will show some of the outstanding contributions by KDAB engineers to the 5.7 release.

Qt 3D

The star of Qt 5.7 is the first stable release of Qt 3D 2.0. The new version of Qt 3D is a total redesign of its architecture into a modern and streamlined 3D engine, exploiting modern design patterns such as entity-component systems, and capable of scaling thanks to its heavily threaded design. This important milestone was the result of a massive effort by KDAB in coordination with The Qt Company.


If you want to know more about what Qt 3D can do for your application, you can watch this introductory webinar recorded by KDAB’s Dr. Sean Harmer and Paul Lemire for the 5.7 release.

Qt on Android

Thanks to KDAB’s BogDan Vatra, this release of Qt saw many improvements to its Android support. In no particular order:

  • Qt can now be used to easily create Android Services, that is, software components that perform background tasks and are kept alive even after the application that started them exits. See here for more information.
  • The QtAndroidExtras module gained helper functions to run Runnables on the Android UI thread. They are extremely useful for accessing Android APIs from C++ code when that must be done on the Android UI thread. More info about this is available in this blog post by BogDan.
  • Another addition to the QtAndroidExtras module is the QtAndroid::hideSplashScreen function, which allows a developer to programmatically hide the splash screen of their applications.
  • The QtGamepad module gained Android support.

Performance and correctness improvements

A codebase as big as Qt needs constant fixes, improvements and bugfixes. Sometimes these come from bug reports, sometimes by reading code in order to understand it better, and in some other cases by analyzing the codebase using the latest tools available. KDAB is committed to keeping Qt in a great shape, and that is why KDAB engineers spend a lot of time polishing the Qt codebase.

Some of the results of these efforts are:

  • QHash gained equal_range, just like QMap and the other associative containers. This function can be used to iterate over all the values of a (multi)hash that have the same key without performing any extra memory allocation. In other words, this code:

        // BAD!!! allocates a temporary QList
        // for holding the values corresponding to "key"
        foreach (const auto &value, hash.values(key)) {
        }

    can be changed to

    const auto range = hash.equal_range(key);
    for (auto i = range.first; i != range.second; ++i) {
    }

    which never throws (if hash is const), expands to less code and does not allocate memory.

  • Running Qt under the Undefined Behavior Sanitizer revealed dozens of codepaths where undefined behaviour was accidentally triggered. The problems ranged from potential signed integer overflows and shift of negative numbers to misaligned loads, invalid casts and invalid calls to library functions such as memset or memcpy. KDAB’s Senior Engineer Marc Mutz contributed many fixes to these undefined behaviours, fixes that made their way into Qt 5.6.1 and Qt 5.7.
  • Some quadratic loops were removed from Qt and replaced with linear or linearithmic ones. Notably, an occurrence of such loops in the Qt Quick item views caused massive performance degradations when sorting big models, which was fixed in this commit by KDAB’s engineer Milian Wolff.
  • Since Qt 5.7 requires a C++11 compiler, we have started porting foreach loops to ranged for loops. Ranged for loops expand to less code (because there is no implicit copy taking place), and since compilers recognize them as a syntactic structure, they can optimize them better. Over a thousand occurrences were changed, leading to savings in Qt both in terms of library size and runtime speed.
  • We have also started using C++ Standard Library features in Qt. While Qt cannot expose STL types because of its binary compatibility promise, it can use them in its own implementation. A big advantage of using STL datatypes is that they’re generally much more efficient, have more features and expand to a lot less code than their Qt counterparts. For instance, replacing some QStack usages with std::stack led to 1KB of code saved per instance replaced; and introducing std::vector in central codepaths (such as the ones in QMetaObjectBuilder) saved 4.5KB.
  • While profiling Qt3D code, we found that the mere act of iterating over resources embedded in an application (by means of QDirIterator) uncompressed them. Then, reading a given resource via QFile uncompressed it again. This was immediately fixed in this commit by KDAB’s Director of Automotive, Volker Krause.

Other contributions

Last but not least:

  • It is now possible to use the Qt Virtual Keyboard under QtWayland compositors.
  • The clang-cl mkspec was added. This mkspec makes it possible to build Qt using the Clang frontend for MSVC. Stay tuned for more blog posts on this matter. 🙂
  • A small convenience QFlag::setFlag method was added, to set or unset a flag in a bitmask without using bitwise operations.

About KDAB

KDAB is a consulting company dedicated to Qt, offering a wide variety of services and providing training courses.

KDAB believes that it is critical for our business to invest in Qt3D and Qt, in general, to keep pushing the technology forward, ensuring it remains competitive.

The post KDAB contributions to Qt 5.7 appeared first on KDAB.

Four Habit-Forming Tips to Faster C++

Thu, 2016/07/21 - 2:46pm

Are you a victim of premature pessimisation? Here’s a short definition from Herb Sutter:

Premature pessimization is when you write code that is slower than it needs to be, usually by asking for unnecessary extra work, when equivalently complex code would be faster and should just naturally flow out of your fingers.

Despite how amazing today’s compilers have become at generating code, humans still know more about the intended use of a function or class than can be specified by mere syntax. Compilers operate under a host of very strict rules that enforce correctness at the expense of faster code. What’s more, modern processor architectures sometimes compete with C++ language habits that have become ingrained in programmers from decades of previous best practice.

I believe that if you want to improve the speed of your code, you need to adopt habits that take advantage of modern compilers and modern processor architectures—habits that will help your compiler generate the best-possible code. Habits that, if you follow them, will generate faster code before you even start the optimisation process.

Here are four habit-forming tips that are all about avoiding pessimisation and that, in my experience, go a long way toward creating faster C++ classes.

1) Make use of the (named-) return-value optimisation

According to Lawrence Crowl, (named-) return-value optimisation ((N)RVO) is one of the most important optimisations in modern C++. Okay—what is it?

Let’s start with plain return-value optimization (RVO). Normally, when a C++ method returns an unnamed object, the compiler creates a temporary object, which is then copy-constructed into the target object.

MyData myFunction()
{
    return MyData(); // Create and return unnamed obj
}

MyData abc = myFunction();

With RVO, the C++ standard allows the compiler to skip the creation of the temporary, treating both object instances—the one inside the function and the one assigned to the variable outside the function—as the same. This usually goes under the name of copy elision. But what is elided here is the temporary and the copy.

So, not only do you save the copy constructor call, you also save the destructor call, as well as some stack memory. Clearly, elimination of extra calls and temporaries saves time and space, but crucially, RVO is an enabler for pass-by-value designs. Imagine MyData was a large million-by-million matrix. The mere chance that some target compiler could fail to implement this optimisation would make every good programmer shy away from return-by-value and resort to out parameters instead (more on those further down).

As an aside: don’t C++ Move Semantics solve this? The answer is: no. If you move instead of copy, you still have the temporary and its destructor call in the executable code. And if your matrix is not heap-allocated, but statically sized, such as a std::array<std::array<double, 1000>, 1000>, moving is the same as copying. With RVO, you mustn’t be afraid of returning by value. You must unlearn what you have learned and embrace return-by-value.

Named Return Value Optimization is similar but it allows the compiler to eliminate not just rvalues (temporaries), but lvalues (local variables), too, under certain conditions.

What all compilers these days (and for some time now) reliably implement is NRVO in the case where there is a single variable that is passed to every return, and declared at function scope as the first variable:

MyData myFunction()
{
    MyData result;     // Declare return val in ONE place
    if (doing_something) {
        return result; // Return same val everywhere
    }
    // Doing something else
    return result;     // Return same val everywhere
}

MyData abc = myFunction();

Sadly, many compilers, including GCC, fail to apply NRVO when you deviate even slightly from the basic pattern:

MyData myFunction()
{
    if (doing_something)
        return MyData(); // RVO expected
    MyData result;
    // ...
    return result;       // NRVO expected
}

MyData abc = myFunction();

At least GCC fails to use NRVO for the second return statement in that function. The fix in this case is easy (go back to the first version), but it’s not always that easy. It is an altogether sad state of affairs for a language that is said to have the most advanced optimisers available to it for compilers to fail to implement this very basic optimisation.

So, for the time being, get your fingers accustomed to typing the classical NRVO pattern: it enables the compiler to generate code that does what you want in the most efficient way enabled by the C++ standard.

If diving into assembly code to check whether a particular pattern makes your compiler drop NRVO isn’t your thing, Thomas Brown provides a very comprehensive list of compilers tested for their NRVO support, and I’ve extended Brown’s work with some additional results.

If you start using the NRVO pattern but aren’t getting the results you expect, your compiler may not automatically perform NRVO transformations. You may need to check your compiler optimization settings and explicitly enable them.

2) Return parameters by value whenever possible

This is pretty simple: don’t use “out-parameters”. The result for the caller is certainly kinder: we just return our value instead of having the caller allocate a variable and pass in a reference. Even if your function returns multiple results, nearly all of the time you’re much better off creating a small result struct that the function passes back (via (N)RVO!):

That is, instead of this:

void convertToFraction(double val, int &numerator, int &denominator)
{
    numerator = /* calculation */;
    denominator = /* calculation */;
}

int numerator, denominator;
convertToFraction(val, numerator, denominator); // or was it "denominator, numerator"?
use(numerator);
use(denominator);

You should prefer this:

struct fractional_parts
{
    int numerator;
    int denominator;
};

fractional_parts convertToFraction(double val)
{
    int numerator = /* calculation */;
    int denominator = /* calculation */;
    return {numerator, denominator}; // C++11 braced initialisation -> RVO
}

auto parts = convertToFraction(val);
use(parts.numerator);
use(parts.denominator);

This may seem surprising, even counter-intuitive, for programmers that cut their teeth on older x86 architectures. You’re just passing around a pointer instead of a big chunk of data, right? Quite simply, “out” parameter pointers force a modern compiler to avoid certain optimisations when calling non-inlined functions. Because the compiler can’t always determine if the function call may change an underlying value (due to aliasing), it can’t beneficially keep the value in a CPU register or reorder instructions around it. Besides, compilers have gotten pretty smart—they don’t actually do expensive value passing unless they need to (see the next tip). With 64-bit and even 32-bit CPUs, small structs can be packed into registers or automatically allocated on the stack as needed by the compiler. Returning results by value allows the compiler to understand that there isn’t any modification or aliasing happening to your parameters, and you and your callers get to write simpler code.

3) Cache member-variables and reference-parameters

This rule is straightforward: take a copy of the member-variables or reference-parameters you are going to use within your function at the top of the function, instead of using them directly throughout the method. There are two good reasons for this.

The first is the same as the tip above—because pointer references (even member-variables in methods, as they’re accessed through the implicit this pointer) put a stick in the wheels of the compiler’s optimisation. The compiler can’t guarantee that things don’t change outside its view, so it takes a very conservative (and in most cases wasteful) approach and throws away any state information it may have gleaned about those variables each time they’re used anew. And that’s valuable information that can help the compiler eliminate instructions and references to memory.

Another important reason is correctness. As an example provided by Lawrence Crowl in his CppCon 2014 talk “The Implementation of Value Types”, instead of this complex number multiplication:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a)
{
    real = real * a.real - imag * a.imag;
    imag = real * a.imag + imag * a.real;
    return *this;
}

You should prefer this version:

template <class T>
complex<T> &complex<T>::operator*=(const complex<T> &a)
{
    T a_real = a.real, a_imag = a.imag;
    T t_real = real, t_imag = imag;
    real = t_real * a_real - t_imag * a_imag;
    imag = t_real * a_imag + t_imag * a_real;
    return *this;
}

This second, non-aliased version will still work properly if you use value *= value to square a number; the first one won’t give you the right value because it doesn’t protect against aliased variables.

To summarise succinctly: read from (and write to!) each non-local variable exactly once in every function.

4) Organize your member variables intelligently

Is it better to organize member variables for readability or for the compiler? Ideally, you pick a scheme that works for both.

And now is a perfect time for a short refresher about CPU caches. Of course data coming from memory is very slow compared to data coming from a cache. An important fact to remember is that data is loaded into the cache in (typically) 64-byte blocks called cache lines. The cache line—that is, your requested data and the 64 bytes surrounding it—is loaded on your first request for memory absent in the cache. Because every cache miss silently penalises your program, you want a well-considered strategy for ensuring you reduce cache misses whenever possible. Even if the first memory access is outside the cache, trying to structure your accesses so that a second, third, or fourth access is within the cache will have a significant impact on speed. With that in mind, consider these tips for your member-variable declarations:

  • Move the most-frequently-used member-variables first
  • Move the least-frequently-used member-variables last
  • If variables are often used together, group them near each other
  • Try to reference variables in your functions in the order they’re declared

Nearly all C++ compilers organize member variables in memory in the order in which they are declared. And grouping your member variables using the above guidelines can help reduce cache misses that drastically impact performance. Although compilers can be smart about creating code that works with caching strategies in a way that’s hard for humans to track, the C++ rules on class layout make it hard for compilers to really shine. Your goal here is to help the compiler by stacking the deck on cache-line loads that will preferentially load the variables in the order you’ll need them.

This can be a tough one if you’re not sure how frequently things are used. While it’s not always easy for complicated classes to know what member variables may be touched more often, generally following this rule of thumb as well as you can will help. Certainly for the simpler classes (string, dates/times, points, complex, quaternions, etc) you’ll probably be accessing most member variables most of the time, but you can still declare and access your member variables in a consistent way that will help guarantee that you’re minimizing your cache misses.


The bottom line is that it still takes some amount of hand-holding to get a compiler to generate the best code. Good coding habits are by no means the end-all, but they are certainly a great place to start.

The post Four Habit-Forming Tips to Faster C++ appeared first on KDAB.

GSoC Update: Tinkering with KIO

Thu, 2016/07/21 - 9:48am
I'm a lot closer to finishing the project now. Thanks to some great support from my GSoC mentor, my project has turned out better than what I had written about in my proposal! Working together, we've made a lot of changes to the project.

For starters, we've changed the name of the ioslave from "File Tray" to "staging" to "stash". I wasn't a big fan of the name change, but I see the utility in shaving off a couple of characters in the name of what I hope will be a widely used feature.

Secondly, the ioslave is now completely independent from Dolphin, or any KIO application for that matter. This means it works exactly the same way across the entire suite of KIO apps. Given that at one point we were planning to make the ioslave fully functional only with Dolphin, this is a major plus point for the project.

Next, the backend for storing stashed files and folders has undergone a complete overhaul. The first iteration of the project stored files and folders by saving the URLs of stashed items in a QList inside a custom "stash" daemon running on top of kded5. Although this was a neat little solution that worked well for most purposes, it had some disadvantages. For one, you couldn't delete or move files around on the ioslave without affecting the source, because they were all linked to their original directories. Moreover, with the way 'mkdir' works in KIO, this solution would never work without each application being specially configured to use the ioslave, which would entail a lot of groundwork laying out QDBus calls to the stash daemon. With these problems looming large, somewhere around the midterm evaluation week I got a message from my mentor about ramping up the project using a "StashFileSystem", a virtual file system in Qt that he had written just for this project.

The virtual file system is a clever way to approach this, as it solved both of the problems with the previous approach right off the bat: mkdir could be mapped to a virtual directory, and making volatile edits to folders is now possible without touching the source directory. It did have its drawbacks too: since it needs to stage every file in the source directory, it requires more memory than the previous approach. Plus, it would still be at the whims of kded5 if a contained process went bad and crashed the daemon.
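To make the idea concrete, here is a toy model of such a virtual file system. This is NOT the actual StashFileSystem API; the names (`StashModel`, `StashEntry`) are hypothetical. It just sketches why a virtual tree decouples stash edits from the source files: virtual paths map to entries that are either virtual directories or links back to a source file on disk, so renames and deletes inside the stash only mutate the map.

```cpp
#include <map>
#include <string>

// Hypothetical sketch of a stash backend (not the real StashFileSystem).
struct StashEntry {
    bool isDir;               // true for a virtual directory
    std::string sourcePath;   // set only for stashed files
};

class StashModel {
    std::map<std::string, StashEntry> entries_;  // virtual path -> entry
public:
    // mkdir creates a purely virtual directory: no disk I/O at all.
    void mkdir(const std::string &vpath) { entries_[vpath] = {true, ""}; }

    // Stashing a file records a link back to its source on disk.
    void stashFile(const std::string &vpath, const std::string &src) {
        entries_[vpath] = {false, src};
    }

    // Removing an entry only edits the virtual tree; the source file
    // on disk is untouched.
    void remove(const std::string &vpath) { entries_.erase(vpath); }

    bool contains(const std::string &vpath) const {
        return entries_.count(vpath) != 0;
    }
    std::string sourceOf(const std::string &vpath) const {
        return entries_.at(vpath).sourcePath;
    }
};
```

The memory cost is one small map entry per stashed item, which matches the observation that a virtual tree trades a little memory for complete isolation from the source directories.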

Nevertheless, the benefits in this case far outweighed the potential cons and I got to implementing it in my ioslave and stash daemon. Using this virtual file system also meant remapping all the SlaveBase functions to corresponding calls to the stash daemon which was a complete rewrite of my code. For instance, my GitHub log for the week of implementing the virtual file system showed a sombre 449++/419--. This isn't to say it wasn't productive though - to my surprise the virtual file system actually worked better than I hoped it would! Memory utilisation is low at a nominal ~300 bytes per stashed file and the performance in my manual testing has been looking pretty good.

With the ioslave and other modules of the application largely complete, the current phase of the project involves integrating the feature neatly with Dolphin and writing a couple of unit tests along the way. I'm looking forward to a good finish with this project.

You can find the source for it here: (did I mention it's now hosted on a KDE repo? ;) )

Programmation Qt Quick (QML)

Thu, 2016/07/21 - 9:39am

Paris, France, 22–26 August 2016

This August, treat yourself to a Qt training course in French with an expert.

Learn the techniques of modern graphical application development, using Qt Quick technology (based on the QML language) as well as Qt/C++ object-oriented technology.

“My C++ team was delighted with this training. I hope we can implement Qt in our apps ASAP.” CGG Veritas, Massy, France

Find out more!

See other customer testimonials.


The post Programmation Qt Quick (QML) appeared first on KDAB.

Kubuntu Podcast #14 – UbPorts interview with Marius Gripsgard

Wed, 2016/07/20 - 9:56pm

Show Hosts

Ovidiu-Florin Bogdan

Rick Timmis

Aaron Honeycutt

Show Schedule Intro

What have we (the hosts) been doing ?

  • Aaron
    • Working a sponsorship out with Linode
    • Working on uCycle
  • Rick
    • #Brexit – It would be Rude Not to [talk about it]
    • Comodo – Let’s Encrypt Brand challenge
Sponsor 1 Segment

Big Blue Button

Those of you that have attended the Kubuntu parties, will have seen our Big Blue Button conference and online education service.

Video, Audio, Presentation, Screenshare and whiteboard tools.

We are very grateful to Fred Dixon and the team at BigBlueButton. Go check out their project.

Kubuntu News Elevator Picks

Identify, install and review one app each from the Discover software center and do a short screen demo and review.

In Focus

Joining us today is Marius Gripsgard from the UbPorts project.

Sponsor 2 Segment


We’ve been in talks with Linode, an awesome VPS with super fast SSDs, Data connections, and top notch support. We have worked out a sponsorship for a server to build packages quicker and get to our users faster. BIG SHOUT OUT to Linode for working with us!

Kubuntu Developer Feedback
  • Plasma 5.7 is unlikely to hit Xenial Backports in the short term, as it still depends on Qt 5.6.1, for which there is currently no build for Xenial.
    There is an experimental build that Acheronuk has been working on, but there are still stability issues.
Game On

Steam Group:

Review and gameplay from Shadow Warrior.


How to contact the Kubuntu Team:

How to contact the Kubuntu Podcast Team:

Plasma’s Publictransport applet’s porting status

Wed, 2016/07/20 - 9:10pm

You might remember that I spoke about Plasma’s Publictransport applet getting some reworking during the summer. It’s been over a month since I made that announcement on my blog and while, ideally, I’d have liked to blog every week about my work, I haven’t really been able to. This is largely down to the… Read the full post »

KDE Brasil Telegram group and IRC United

Wed, 2016/07/20 - 1:06pm

We shouldn’t look like a bunch of old IRC geeks all the time, and new users and new programmers don’t really like to use IRC; it’s just too text-based for them.

That’s why the KDE IRC channel now has a bot that forwards all messages to our Telegram channel and vice versa. This way, all the cool new kids can talk to the old geeks around and continue to make KDE awesome on their platform of choice.

Thanks to the KDE Sysadmin Team for making this possible.

KDE Community Working Group 2016

Tue, 2016/07/19 - 9:31pm


The KDE Community Working Group has existed for quite a while, helping develop the KDE community from behind the scenes.

Since we have so many different cultures working together, from many different countries, it’s a bit hard to find common ground on what counts as polite and correct behavior in many situations. We have Germans, Brazilians, Indians, Mexicans, North Americans, Canadians, Romanians, Kenyans, and more, and quite a lot of us don’t know what the correct behavior is. We know that we should be polite and treat each other with respect, but what is respectful from one point of view may not be from another.

And that’s why we exist: if you think someone in the community is being abusive, please talk to us. We will be just like your mom and dad, asking the kids to hug and not fight anymore.

You can reach us at community-wg at kde dot org.

Also, people: follow the rules.

Wiki, what’s going on? (Part 7)

Tue, 2016/07/19 - 4:14pm



Tears followed by joy and happiness, discussions followed by great moments all together, problems followed by their solution and enthusiasm. Am I talking about my family? More or less, because actually I am talking about a family: the WikiToLearn community!

This last period was full of ups and downs, but that is inevitable in such a project. We are a big family and we do have to face problems, but with willingness and devotion to what we are doing we can manage to overcome such problems and make things go as we want to – or, at least, try to do so.

We are putting our best efforts into what we are doing: our devs are working hard to release the 0.8 version, a first step toward our main goal, the 1.0; the promo team is now trying to start local hives (or groups) in different countries; editors have their summer plans to review existing content and create new material; and the importing group is ready, so very soon we will have more and more high-quality books available. Members of our family are working together to make WikiToLearn great and to give you (yes, you!) the best place we can to study and to create collaborative textbooks!

We are focused on September: with the beginning of the new academic year we have to fully exploit our potential and move toward #operation1000; moreover, in September we will also celebrate our first birthday!

Guys, great things are coming: stay tuned!

The post Wiki, what’s going on? (Part 7) appeared first on Blogs from WikiToLearn.