Planet KDE - http://planetKDE.org/

Latest attacks on privacy...

2 hours 32 min ago
With the EU (in this case France and Germany) gearing up for another attack on privacy I'm quite happy and proud to have been part of the release of Nextcloud 10!

Privacy

It is the usual story: we should disallow companies from using perfect end-to-end encryption and force them to insert backdoors, to fight terrorists.

Not that it would help - that's been discussed extensively already but in short:
  • If you have nothing to hide, you'll use a backdoored app and you're vulnerable to foreign (and your own) governments, terrorists (!), criminals and others who can abuse your data in more ways than you can imagine.
  • If you have something to hide, you can use 1000 different tools to do so and there is nothing the government can do about that, so you won't use a backdoored app.
  • And note that governments have failed to use even fully unencrypted information to stop terrorist attacks, so perhaps we should first see if they can actually get their act together there.
Now yes, backdooring all commonly used encryption apps will help a BIT, essentially only with low-level, common crime. So you might catch the dude who broke into your house and bragged about it to his friends over WhatsApp. You won't catch the terrorists plotting with Al Qaida (or whatever the terrorist organization du jour is) to blow up a train, because they can simply get one of the many solutions out there to protect themselves.

Nor will you catch corrupt politicians or big companies doing nasty stuff, though I am quite certain the laws will be written in such a way that you can use them to go after people who actually try to expose such politicians or companies.

And I'm also quite certain companies will use this as an excuse to not implement proper protection in their products so you can continue to stop pacemakers remotely or disable the brakes in cars over the internet.

Generally, laws targeting encryption and terrorism do more to harm whistleblowing than terrorism, and thus promote corruption and bad, insecure products.

These laws will literally cost lives. Not save any.

And this is exactly why Frank started ownCloud and why we continue to develop that vision at Nextcloud, and keep developing new features like the File Access Control app, which can provide an extra protective layer around your data. I for one can certainly use that app, exactly in the way described in that blog! So much for 'enterprise only features'.


Get it and migrate today. You and your data deserve it!

Wiki, what’s going on? (Part 8-Akademy2016)

Wed, 2016/08/24 - 5:19pm

WTLGoinOn

 

Hello everybody,

It’s August and you are probably on holiday; life seems beautiful and you hope this period never ends, but… happy or not, September is about to arrive, and your daily routine with it. Don’t be afraid though: over these months the WikiToLearn community has been working hard to provide you with the best WikiToLearn you’ve seen so far.

 

From a brand new homepage to a better organization of news and social pages: you’re going to love it! September is not that sad after all: why? If the new WikiToLearn isn’t enough for you, Akademy probably is: the annual world summit of KDE, this year happening in Berlin as part of QtCon, is one of the greatest events for FOSS and we are taking part in it! Why is it so special for us? First of all because we’re part of the KDE community and we are looking forward to meeting other members, sharing opinions and helping each other, but also because this period is going to be special: KDE is celebrating its 20th birthday, while Free Software Foundation Europe and VideoLAN both celebrate their 15th. Not over yet: you know who’s celebrating a birthday in the same period too? WikiToLearn! 😀

 

During these months we worked hard to create local communities, to spread the word about our project, to give more attention and help to new users, and to come up with a better communication plan that allows you to always be up to date on what’s going on in our community. September is not that far away and it’s full of great news: get ready!

Watch out: #wtlatakademy, #wtlbirthday and others may go viral on our social pages in a few weeks, we’re going to Akademy! 😉

Bye

 

akademy

The article Wiki, what’s going on? (Part 8-Akademy2016) appeared first on Blogs from WikiToLearn.

Going to Akademy

Wed, 2016/08/24 - 1:27pm



I’m going to Akademy! Akademy 2016, as part of QtCon, that is. I missed last year in A Coruña because it conflicted with my family summer vacation, but this year is just fine (although if I was a university student I’d be annoyed that Akademy was smack-dab in the middle of the first week of classes — you can’t please everyone).

Two purely social things I will be doing are baking cookies and telling stories about dinosaurs. I have a nice long train ride to Berlin to think of those stories. But, as those of you who have been following my BSD posts know, the dinosaurs are not so backwards anymore. Qt 5.6 is doing an exp-run on FreeBSD, so it will be in the tree Real Soon Now ™, and the Frameworks are lined up, etc. etc. For folks following the plasma5 branch in area51 this is all old hat; that tends to follow the release of new KDE software — be it Frameworks, or Plasma, or Applications, or KDevelop — by a few days. The exciting thing is having this all in the official ports tree, which means that it becomes more accessible to downstreams as well.

Er .. yeah, dinosaurs. Technically, I’m looking forward to talking about Qt on BSD and KDE Plasma desktop and other technologies on BSD, and about the long-term effects of this year’s Randa meeting. I have it on good authority that KDE Emerge^WRunda^W KDE Cauldron is being investigated for the BSDs as well.

Multiscreen in Plasma: Improved tools and debugging

Wed, 2016/08/24 - 1:16pm

cube-small
Plasma 5.8 will be our first long-term supported release in the Plasma 5 series. We want to make this release as polished and stable as possible. One area we weren’t quite happy with was our multi-screen user experience. While it works quite well for most of our users, there were a number of problems which made our multi-screen support sub-par.
Let’s take a step back to define what we’re talking about.

Multi-screen support means connecting more than one screen to your computer. The following use cases give a good idea of the scope:

  • Static workstation A desktop computer with more than one display connected, the desktop typically spans both screens to give more screen real estate.
  • Docking station A laptop computer that is hooked up to a docking station with additional displays connected. This is a more interesting case, since different configurations may be picked depending on whether the laptop’s lid is closed or not, and how the user switches between displays.
  • Projector The computer is connected to a projector or TV.

The idea is that the user plugs in or starts up with one of these configurations. If the user has already configured this hardware combination, that setup is restored; otherwise, a reasonable guess is made to give the user a good starting point to fine-tune the setup.
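Restoring a saved setup implies some stable key per hardware combination, so the right configuration can be found again when the same displays are plugged in. As a purely illustrative sketch (the identifier format and hashing scheme below are assumptions, not the actual implementation), such a key could be derived from the connected outputs:

#include <QByteArray>
#include <QCryptographicHash>
#include <QString>
#include <QStringList>

// Illustrative only: derive a stable identifier for a combination of
// connected displays, independent of the order they were plugged in.
QString configIdForOutputs(QStringList connectedOutputIds)
{
    connectedOutputIds.sort();                       // plug-in order must not matter
    const QByteArray key = connectedOutputIds.join(QLatin1Char(',')).toUtf8();
    return QString::fromLatin1(
        QCryptographicHash::hash(key, QCryptographicHash::Md5).toHex());
}

// Hypothetical usage: the saved setup for
// configIdForOutputs({"eDP-1:<edid-serial>", "HDMI-2:<edid-serial>"})
// is looked up and restored when that combination reappears.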

kcm-videowall
This is the job of KScreen. At a technical level, kscreen consists of three parts:

  • system settings module This can be reached through system settings
  • kscreen daemon Running as a background process, this component saves, restores and creates initial screen configurations.
  • libkscreen This is the library providing the screen setup reading and writing API. It has backends for X11, Wayland, and others that allow talking to the exact same programming interface, independent of the display server in use.

At an architectural level, this is a sound design: the roles are clearly separated, the low-level bits are suitably abstracted to allow re-use of code, the API presents what matters to the user, implementation details are hidden. Most importantly, aside from a few bugs, it works as expected, and in principle, there’s no reason why it shouldn’t.

So much for the theory. In reality, we’re dealing with a huge amount of complexity. There are hardware events such as suspending and waking up with different configurations; the laptop’s lid may be closed or opened (and when that happens, we don’t even get an event that it closed); displays come and go depending on their connection; the same piece of hardware might support completely different resolutions; hardware comes with broken EDID information; display connectors come and go, and so do display controllers (CRTCs). And on top of all that, the only way we get to know what actually works in reality for the user is the “throw stuff against the wall and observe what sticks” tactic.

This is the fabric of nightmares. Since I prefer to not sleep, but hack at night, I seemed to be the right person to send into this battle. (Coincidentally, I was also “crowned” kscreen maintainer a few months ago, but let’s stick to drama here.)

So, anyway, as I already mentioned in an earlier blog entry, we had some problems restoring configurations. In certain situations, displays weren’t enabled, were positioned unreliably, or kscreen failed to restore configurations altogether, making it “forget” settings.
kscreen-doctor

Better tools

Debugging these issues is not entirely trivial. We need to figure out at which level they happen (for example in our xrandr implementation, in other parts of the library, or in the daemon). We also need to figure out what happens exactly, and when it does. A complex architecture like this brings a number of synchronization problems with it, and these are hard to debug when you have to figure out what exactly goes on across log files. In Plasma 5.8, kscreen will log its activity into one consolidated, categorized and time-stamped log. This rather simple change has already been a huge help in getting to know what’s really going on, and it has helped us identify a number of problems.
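For readers unfamiliar with how such a categorized, timestamped log can be produced with Qt, here is a minimal generic sketch using QLoggingCategory; it illustrates the mechanism, not necessarily kscreen's exact setup (the category name below is an assumption):

#include <QLoggingCategory>

// Define a logging category once per component.
Q_LOGGING_CATEGORY(KSCREEN_KDED, "kscreen.kded")

void restoreConfig()
{
    qCDebug(KSCREEN_KDED) << "Restoring known configuration";
    // ...
    qCWarning(KSCREEN_KDED) << "Output vanished while restoring";
}

// Timestamps and the category can be added to every message with a pattern:
//   qSetMessagePattern(QStringLiteral(
//       "%{time yyyy-MM-dd hh:mm:ss.zzz} %{category}: %{message}"));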

A tool which I’ve been working on is kscreen-doctor. On the one hand, I needed a debugging helper tool that can give system information useful for debugging. Perhaps more importantly, I knew I’d be missing a command-line tool to futz around with screen configurations from the command line or from scripts as Wayland arrives. kscreen-doctor allows changing the screen configuration at runtime, like this:

Disable the hdmi output, enable the laptop panel and set it to a specific mode
$ kscreen-doctor output.HDMI-2.disable output.eDP-1.mode.1 output.eDP-1.enable

Position the hdmi monitor on the right of the laptop panel
$ kscreen-doctor output.HDMI-2.position.0,1280 output.eDP-1.position.0,0

Please note that kscreen-doctor is quite experimental. It’s a tool that lets you shoot yourself in the foot, so user discretion is advised. If you break things, you get to keep the pieces. I’d like to develop this into a more stable part of kscreen, but for now: don’t complain if it doesn’t work or eats your hamster.

Another neat testing tool is Wayland. The video wall configuration you see in the screenshot is unfortunately not real hardware I have around here. What I’ve done instead is run a Wayland server with these “virtual displays” connected, which in turn allowed me to reproduce a configuration issue. I’ll spare you the details of what exactly went wrong, but this kind of trick allows us to reproduce problems with much more hardware than I’d ever want or need in my office. It doesn’t stop there: I’ve added this hardware configuration to our unit-testing suite, so we can make sure that this case is covered and keeps working in the future.

SSH and complex configs

Wed, 2016/08/24 - 6:58am

Hi!

Today I want to talk about the .ssh/config file; for those who don’t know about it, it is the configuration file for SSH where you customize your connection options.

The issue with this file is that it doesn’t support any kind of “include”, which can be a problem if you have to write a long config file.

I wrote a bit of shell script to work around this (you can see the script here: https://quickgit.kde.org/?p=scratch%2Ftomaluca%2Fssh-build-config.git).

This script creates .ssh/config by reading slices of configuration from .ssh/config.d/ in order and recursively.

I hope this is helpful for someone.

Try this handy tool to convert a Web site into a native app with Electron

Tue, 2016/08/23 - 9:13pm

The Electron framework lets you write cross-platform desktop applications using JavaScript, HTML and CSS. It is based on Node.js and Chromium and is used by the Atom editor and many other apps.

There is a handy tool to take advantage of Electron technology and make “native” apps from a URL, Nativefier:

npm install nativefier -g
nativefier "https://example.com"

(you just need the npm package manager to install it)

It will create a folder in your home directory with the executable in it:

how Nativefier works

This is how Google+ looks for example:

Google+Electron

Apparently there are zero differences compared to the Chrome/Chromium web app, but the RAM usage is much lower:

  • Google+ as Chromium web app: ~350 MB
  • Google+ as Electron app: ~100 MB

You also get better integration with the desktop environment: for example, clicking on a link opens the default browser (not necessarily Chrome/Chromium), and you get native notifications:

Native notification with Electron

And my favorite feature, CSS and JavaScript injection: you can specify some CSS/JavaScript code to include before building the app with --inject.

For example I used some CSS rules to add a Breeze style to Diaspora*:

Diaspora* with Breeze style

I’m going to build more apps. Ciao!

What's new in KDevelop 5.0?

Tue, 2016/08/23 - 6:15pm

Almost two years after the release of KDevelop 4.7, we are happy to announce the immediate availability of KDevelop 5.0!

Screenshot showing KDevelop 5.0 under Linux

While the release announcement on kdevelop.org is kept short intentionally, this blog post is going more into depth, showing what's new in KDevelop 5.0.

Read on...

Changes in language support C++ support powered by Clang

We replaced our legacy C++ parser and semantic analysis plugin with a much more powerful one that is based on Clang from the LLVM project.

A little bit of history: KDevelop always prided itself on its state-of-the-art C++ language support. We introduced innovative code browsing functionality, semantic highlighting and advanced code completion, features that our user base has come to rely upon for their daily work. All of this was made possible by a custom C++ parser, and an extremely complex semantic analyzer for the C++ language. Adding support for all the quirky corner cases in C++, as well as maintaining compatibility with the latest C++ language standards such as C++11, drained our energy and stole time needed to improve other areas of our IDE. Furthermore, the code was so brittle that it was close to impossible to improve its performance or add bigger new features such as proper C language support.

Now, after close to two years of work, we finally have a solution to this dilemma: a Clang based language plugin. Not only does this give us support for the very latest C++ language standard, it also enables true C and Objective-C language support. Furthermore, you get all of the immensely useful compiler warnings directly inside your editor. Even better, fixing these warnings is now often just a matter of pressing a single button to apply a Clang provided fix-it!

Screenshot of KDevelop showing Clang fixits

There are, however, a few caveats that need to be mentioned:

  • On older machines the performance may be worse than with our legacy C++ support. But the new Clang based plugin finally scales properly with the number of cores on your CPU, which can lead to significantly improved performance on more modern machines.
  • Some features of our legacy C++ support have not yet been ported to the new plugin. This includes special support for Qt code, most notably signal/slot code completion using the old Qt 4 macro syntax. We will be working on improving this situation and welcome feedback from our users on what we should focus on.
  • The plugin works fine with Clang 3.6 and above, but some features, such as auto-type resolution in the code browser, or argument code completion hints within a function call, require newer versions of Clang. The required changes have been contributed upstream by members of our community and we intend to continue this effort.

Another screenshot to make you want to try KDevelop 5.0 instantly:

Screenshot of KDevelop analyzing doxygen-style code comments (KDevelop analyzing doxygen-style code comments)

For the best C++ experience in KDevelop, we recommend at least Clang 3.8.

CMake support

We removed the hand-written CMake interpreter and now leverage metadata provided by upstream CMake itself. The technology we're building upon is a so-called JSON compilation database (read more about it in this insightful blog post). Technically, all you need to do is to run cmake with the -DCMAKE_EXPORT_COMPILE_COMMANDS flag, and CMake will take it from there, emitting a compile_commands.json file into your build directory.

KDevelop now supports reading those files, which is way more reliable than parsing CMake code ourselves.

But this step also means that we had to remove some of the useful advanced CMake integration features, such as the wizards to add files to a target. We are aware of this situation, and plan to work together with upstream to bring back the removed functionality in the future. Hopefully, you agree that correctness and much improved performance, where opening even large CMake projects is now close to instant, make up for the loss of functionality.

QML/JavaScript support

With KDevelop 5, we decided to officially include support for QML and JavaScript code. This functionality has been worked on for years in our playground and now, we finally incorporated these experimental plugins and will start to officially support them.

Screenshot showing KDevelop's QML support

Our thanks go to the Qt Creator community here, as we leverage their QML and JavaScript parser (QmlJS, see here) for our language support plugin.

Screenshot showing KDevelop's QML support

QMake support

With KDevelop 5, we decided to officially include support for QMake projects, too. Same story here: this functionality has been worked on for years and we are now starting to officially support it.

The new KDevelop plugin for QMake is simplistic but already super useful for many projects. If you are using more complicated QMake features and want to improve our interpreter, please get in touch with us!

Python, PHP, ...

Together with all this, KDevelop 5 will continue to officially support Python 3 and PHP. In our playground we also have support for Ruby, and there are plans to integrate basic Go and Rust support. If you are interested in improving support for any of these languages in KDevelop, please step up and start contributing!

Screenshot of KDevelop's Python support

Other changes Remove assistant overlay in favor of navigation widget

Another major thing we worked on was rethinking KDevelop’s assistant popup; especially in the current 5.0 betas, it tended to be a bit annoying and got in the way a lot. We thus removed the assistant overlay in favor of offering the execution of assistants from the navigation widget.

Here's a screenshot of the assistants in form of a navigation widget:

Screenshot of KDevelop's new assistant widget

Key changes:

  • No longer automatically popup a widget whenever there's a problem (distracting!)
  • Only popup when invoked (via Alt, or via mouse hover)
  • Show problems on keyboard activation (via Alt, wasn't possible before)
  • We can use more text in the solution assistant descriptions (since we requested them, we can cover more space implicitly)
  • No longer creates an OpenGL context each time there's an error (this was slow at times with the old assistant popup; there was a noticeable lag while typing under heavy load)
Per-project widget coloring

Thanks to Sebastien Speierer we got a super useful feature into KDevelop 5.0: widget coloring based on an item's affinity to a project.

A picture is worth more than a thousand words, see it in action here:

Screenshot showing KDevelop's per project widget coloring

As you can see, both the project explorer rows as well as the document tab bar items are colored based on the project affinity. This is useful to quickly decide which project a specific file belongs to.

(Note this feature is optional; it can be enabled/disabled in the settings.)

Progress reporting of make/ninja jobs

We added support for tracking the progress of make/ninja jobs in KDevelop; we do so by simply parsing the first few characters of the output of make and ninja. For make, this will only work for Makefiles generated by CMake so far, as those contain proper progress information. Thus, this feature won't work when make is invoked on Makefiles generated by QMake.
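As a rough illustration of the kind of parsing this involves (a sketch only, not KDevelop's actual code): CMake-generated Makefiles prefix their output lines with a percentage such as "[ 42%]", while ninja prefixes them with "[finished/total]".

#include <QRegularExpression>
#include <QString>

// Return a progress percentage for one output line, or -1 if the line
// carries no progress information.
int parseProgress(const QString &line)
{
    static const QRegularExpression makeRe(QStringLiteral("^\\[\\s*(\\d+)%\\]"));
    static const QRegularExpression ninjaRe(QStringLiteral("^\\[(\\d+)/(\\d+)\\]"));

    QRegularExpressionMatch m = makeRe.match(line);
    if (m.hasMatch())
        return m.captured(1).toInt();

    m = ninjaRe.match(line);
    if (m.hasMatch() && m.captured(2).toInt() > 0)
        return 100 * m.captured(1).toInt() / m.captured(2).toInt();

    return -1;
}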

Screenshot:

Screenshot showing KDevelop reporting ninja's progress

The progress bar on the bottom right indicates the progress of the ninja invocation. Extra gimmick: starting with Plasma 5.6, this progress is also indicated in the task bar entry of your task switcher in the Plasma shell.

Welcome Page redesign

The welcome page (the widget which is shown whenever you have no tabs open in KDevelop) got redesigned to better match the current widget style in use. Screenshot:

Screenshot of KDevelop's welcome page plugin

Various debugger related improvements

Debugger support is KDevelop's unloved child, but it got some improvements in 5.0, and will get quite a few improvements in the upcoming 5.1 release (due to the LLDB GSoC happening, which also touches lots of debugger-agnostic code).

Debugger support in 5.0 was improved by simply streamlining the debugger related widgets where possible.

Screenshot of KDevelop's frame stack tool view

Changes:

  • Frame stack model: Non-existing files are now rendered in gray
  • Frame stack model: Pretty URLs for file paths (e.g. myproject:src/main.cpp), now elided in the middle
  • The crashed thread is now highlighted properly
  • A lot more
Splash screen removal

For performance reasons the splash screen got removed in 5.0. There's been a short discussion on the KDevelop development mailing list about the pros and cons, in the end we decided to drop it.

The reasons for dropping it were:

  • Performance: Our QML-based splash screen actually had a noticeable impact on the start time of KDevelop (kind of defeating its purpose)
  • Feels old-fashioned: Showing a splash screen always makes me feel a bit nostalgic; it's just not a modern way to indicate that your application is starting up. All modern DEs provide a way to indicate this (e.g. a bouncing cursor in Plasma, the good old hourglass on Windows -- and OS X has animations for this as well).
  • Startup time got improved significantly (see more about that below) during 4.x -> 5.x, so it no longer felt necessary
Under the hood

Just an excerpt:

  • We have ported our huge code base to Qt 5 and KDE frameworks 5 (KF5).
  • We cleaned up many areas of our code base and improved the performance of some work flows significantly.
  • (Cold) start performance of KDevelop got improved significantly due to changes in KDevelop and libraries below (KF5 icon loading, KF5 plugin loading, etc.) -- something in the order of several seconds on my test machine (Lenovo T450s).

Just to get you an idea how much work was put into the 5.0 release over the years:

kdevplatform% git diff --stat origin/1.7 v5.0.0 | tail -n1
 1928 files changed, 65668 insertions(+), 73882 deletions(-)
kdevelop% git diff --stat origin/4.7 v5.0.0 | tail -n1
 1573 files changed, 131850 insertions(+), 30347 deletions(-)

Get it

Linux AppImage

If you're on Linux you can start using KDevelop right away, by downloading & running the new KDevelop 5.0 AppImage.

Other platforms

With KF5, overall cross-platform support of KDE applications got better by orders of magnitude. Tons of hours have been spent improving OS X and Windows support.

We hope to release an official OS X app bundle & a Windows installer package soon.

Read more about other installation instructions.

Verdict

We're super proud to finally release KDevelop 5.0 to the public! We think it's a solid foundation for future releases.

With the use of Clang as the C++ support backend we hope to be able to put more energy into the IDE itself as well as other plugins instead of playing catchup with the C++ standard!

Happy to hear your opinions about KDevelop 5.0. What do you like/dislike?

KDevelop 5.0.0 release

Tue, 2016/08/23 - 6:00pm

KDevelop 5.0.0 release

Almost two years after the release of KDevelop 4.7, we are happy to announce the immediate availability of KDevelop 5.0. KDevelop is an integrated development environment focusing on support of the C++, Python, PHP and JavaScript/QML programming languages. Many important changes and refactorings were done for version 5.0, ensuring that KDevelop remains maintainable and easy to extend and improve over the next years. Highlights include much improved new C/C++ language support, as well as polishing for Python, PHP and QML/JS.

KDevelop 5.0 screenshot

 

C/C++ language support now backed by Clang

The most prominent change certainly is the move away from our own, custom C++ analysis engine. Instead, C and C++ code analysis is now performed by clang. Aside from being much easier to maintain, this has a number of advantages:

  • Even the most complex C++ code constructs are now parsed and highlighted correctly and reliably. In the end there's a compiler in the background -- KDevelop will complain exactly when the code wouldn't compile.
  • Diagnostics are a lot more accurate and reliable.  For example, KDevelop can now detect whether or not there is an overload of a function available with the parameters you are passing in.
  • For many problems (e.g. misspelled variable names, missing parentheses, missing semicolon, ...), we get suggestions on how to correct the problem from clang, and offer the user a shortcut key (Alt+1) to apply the fix automatically.
  • There is now a C parsing mode, which enables the analysis engine to correctly parse C code.

Work on getting all our old utilities for C++ to work nicely with the new infrastructure is still ongoing in some areas, but most of the important things are already in place. In contrast to the C++ support, the Python support has not undergone any significant refactoring, but has instead seen further stabilization and polishing. The same is true for the PHP and QML/JS language support components.

Qt 5, KDE Frameworks 5, and other platforms

Apart from those changes, KDevelop 5 has of course been ported to KDE Frameworks 5 and Qt 5. This will for the first time enable us to offer an experimental version of KDevelop for Microsoft Windows in the near future, in addition to support for Linux. Additionally, we offer experimental stand-alone Linux binaries, which make it much easier for you to try KDevelop 5 before upgrading your system-wide installation.

Read more

This release announcement is kept short intentionally; to check out what's new in KDevelop 5.0, please read this blog post by Kevin.

Download

You can download the source code from here. The archives are signed with the following key ID: AC44AC6DB29779E6.

Along with KDevelop 5.0, we also release version 2.0 of the kdevelop-pg-qt parser generator utility; download it from here.

We also provide an experimental pre-built binary package which should run on any moderately recent Linux distribution: Download AppImage binary for Linux (any distribution). After downloading the file, just make it executable and run it.

Thanks to everyone involved in preparing the release!

kfunk Tue, 08/23/2016 - 20:00

Krita 3.0.1 Beta Builds

Tue, 2016/08/23 - 5:30pm

We’ve made a new set of development builds in the road to Krita 3.0.1. Here are the most important bug fixes:

  • Saving and exporting is much more robust (but make sure your temp folder has space)
  • Using the threshold filter as a mask and the threshold filter preview for colorspaces other than 8 bits RGBA have been fixed.
  • The 3_texture brush tip has been fixed
  • Saving templates works again
  • The color labels in the layer box look better
  • The layout of the grids and guides docker is fixed
  • Multi-threading in general has been improved, thanks to a patch by Andrew Savonichev
  • Scaling is done before transformation of a brush so the new ratio option now looks more correct
  • Some display issues (black screens) when using assistants on NVidia GPUs have been fixed
  • Switching brushes is much faster and doesn’t leak memory

In other news, the Krita team will get together in Deventer, the Netherlands, this weekend to meet in person! We’ll be fixing bugs, discussing and planning new features and plans for new training videos and more!

Windows

On Windows, Krita supports Wacom, Huion and Yiynova tablets, as well as the Surface Pro series of tablets. The portable zip file builds can be unzipped and run by double-clicking the krita link.

Krita on Windows is tested on Windows 7, Windows 8 and Windows 10. There is only a 64-bit Windows build for now. Also, there is a debug build that, together with the DrMingw debugger, can help with finding the cause of crashes. See the new FAQ entry. The Windows builds can be slower than usual because vectorization is disabled.

Linux

For Linux, we offer AppImages that should run on any reasonably recent Linux distribution. You can download the appimage, make it executable and run it in place. No installation is needed. At this moment, we only have appimages for 64-bit versions of Linux. This appimage has experimental desktop integration.

OSX and MacOS

Krita on OSX will be fully supported with version 3.1. Krita 3.0 for OSX is still missing Instant Preview and High Quality Canvas scaling. There are also some issues with rendering the image (these follow from Apple's decision to drop support for the OpenGL 3.0 compatibility profile in their display drivers) and issues with touchpad and tablet support. We are working to reimplement these features using the OpenGL 3.0 Core profile. For now, we recommend disabling OpenGL when using Krita on OSX for production work. Krita for OSX runs on 10.9, 10.10 and 10.11, and is reported to work on macOS too.

Source

Marble: GSoC 2016 wrap-up

Tue, 2016/08/23 - 5:15pm

Today is the deadline for submitting the final evaluations for Google Summer of Code 2016, which gives me the opportunity to write a wrap-up post about my project this summer.

About my project

The aim of my GSoC project this summer was to bring fluent rendering to Marble's OSM Vector Tile map theme. The idea for this map theme is to render the vector data taken from the openstreetmap and natural earth databases. These are merged and cut into many, many tiles that are stored in .o5m format on a server and downloaded by Marble. To achieve this in the first place we needed a tool that handles all this for us. Creating that tool was my main objective.

osm-simplify tool

This little program's purpose is to generate the data used by the OSM Vector Tile map theme. It does many different things regarding map data manipulation, but one of its most important jobs is to create the tiles. That means cutting a huge world map into tiny little tiles that can be rendered in Marble.
osm-simplify tool

The tool uses Marble's data handling and parsing, which I demonstrated in my previous post. That way you can even use this tool as an example if you want to use Marble for your own map data manipulation project, but this tool is far from just an example program.
Tile cutting
More precisely, this turned out to be a very resource-hungry program. If you look at this table, you can guess why that is:


The hardest part of the tile cutting algorithm was the processing of polygons. This turned out to be a little challenging, but in the end the solution was to inject another algorithm into Marble's clipping algorithm. The Weiler-Atherton polygon clipping algorithm works on concave polygons too, which solves the borderline issue I described in my previous post, caused by the Sutherland-Hodgman polygon clipping algorithm. The tool now uses the simpler Sutherland-Hodgman algorithm for clipping non-polygons, and Weiler-Atherton kicks in when dealing with polygons.
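For readers unfamiliar with these algorithms, here is a generic, simplified sketch of a single Sutherland-Hodgman clipping stage (against one tile edge); a full clip applies one such stage per edge, and Marble's actual implementation works on geodetic coordinates and is more involved. Presumably the "borderline issue" mentioned above is the known Sutherland-Hodgman limitation that a concave polygon which splits into several pieces comes out as one polygon with degenerate edges along the clip boundary, something Weiler-Atherton avoids by tracking intersection points on both the subject and the clip polygon.

#include <QPointF>
#include <QVector>

// Clip a polygon against the half-plane x <= xMax (e.g. the right edge of a
// tile). Simplified types for illustration only.
QVector<QPointF> clipAgainstRightEdge(const QVector<QPointF> &polygon, qreal xMax)
{
    QVector<QPointF> result;
    const int n = polygon.size();
    for (int i = 0; i < n; ++i) {
        const QPointF &current = polygon.at(i);
        const QPointF &previous = polygon.at((i + n - 1) % n);
        const bool currentInside = current.x() <= xMax;
        const bool previousInside = previous.x() <= xMax;

        if (currentInside != previousInside) {
            // The edge crosses the boundary: emit the intersection point.
            const qreal t = (xMax - previous.x()) / (current.x() - previous.x());
            result << QPointF(xMax, previous.y() + t * (current.y() - previous.y()));
        }
        if (currentInside)
            result << current;
    }
    return result;
}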

Here is a nice demonstration of the results of the osm-simplify tool. I loaded 20 separate map files, each being a level 5 tile generated with this tool. I marked the corners of the tiles in red to show that they are generated without any gaps between them.


Turning on polygon debugging gives us a nice view of each tile and the underlying cutting process:


To be continued...
I've done a little benchmark to check the performance of the tool, and the results were not that promising. Just for zoom level 9 it needed 6 hours to generate all of the tiles from a single map. The good thing is, there are many ways to improve the performance: parallel processing and tile tessellation come to help. That will be my autumn project, because in the end, Google Summer of Code is about encouraging students to get involved with open-source projects.


Final words about GSoC 2016

This year I learned a lot about time management and programming; it was a really great experience. I don't know if I'll have time for GSoC next year, because that will be my last year at university, but I encourage anyone who wants to participate in it, especially for KDE and the Marble team.


LabPlot: Theme Manager – GSoC’16 Final Evaluations

Tue, 2016/08/23 - 12:51pm

Hi folks!

GSoC’16 has come to an end and it’s time to report the current status of my project ‘LabPlot: Theme Manager’. As the name suggests, the project aimed to develop a theme manager for LabPlot, which is a well-known open source application mainly used for analyzing and visualizing scientific data.

There have been a lot of developments after the project’s mid-term evaluation period (previous results are discussed in my earlier blogs), as follows:

  • In addition to the previous 4 themes, Bright, Dark, Creme and GreySlate (screenshot below: OldThemes), I created 9 new simple and sophisticated themes: Solarized, SolarizedLight, DarkPastels, BlackOnLightYellow, BlackOnWhite, BlueOnBlack, GreenOnBlack, GreyOnBlack and RedOnBlack. These were created to follow the primary idea of this project, i.e. to provide a wide range of themes to LabPlot users. There are now a total of 13 themes available in LabPlot which can be used according to one's taste. (screenshots below)

NewThemes

  • Theme preview panel: It showcases a list of theme previews (QPixmaps) along with their names. A user can choose a suitable theme by simply double-clicking on the theme's icon, or by selecting the theme's icon followed by a click on the 'Apply' button. This panel can be accessed in two ways:
  1. Directly, by clicking on the 'Apply theme' button available at the bottom of the Cartesian Plot dock widget (screenshot below: ApplyTheme01)
  2. Through the Cartesian Plot's context menu, by choosing the 'Apply theme' option (screenshot below: ApplyTheme02)
  • Another addition to the project was the implementation of a save-theme functionality. This gives users more flexibility and control over the look and feel of their plots. One can save a theme by clicking on the 'Save theme' button and providing a name for the theme. (screenshot below: SaveTheme)
  • Also, the application now keeps track of the last theme's color palette applied to the current plot, so that when you add a new curve to the plot, a color is chosen and applied in accordance with the current theme.
  • I have also initiated work on providing upload/download functionality for LabPlot themes: I have used KDE's GetHotNewStuff service, through which LabPlot users will be able to share (by uploading) their own themes with everyone else via a public web server. In addition, they will be able to download and use themes created and shared by other users. (This work is in progress!)

Final Result:

As a result, LabPlot is now equipped with a well-developed Theme Manager providing a wide variety of 13 themes, from basic to more advanced ones such as Solarized, SolarizedLight, DarkPastels, BlackOnWhite, BlueOnBlack, etc. (as mentioned above), which is comparable to other existing applications. Currently, the Theme Manager has many useful functionalities such as applying themes to plots, saving personalized themes, and uploading/downloading themes to share them via a web server (in progress). I believe the current functionalities will make LabPlot a powerful tool for visualizing scientific data and will enrich the overall user experience and usability. Last but not least, I am glad to say that I have successfully met all the proposed goals and deadlines as per my GSoC project proposal.

Overall Experience:

GSoC has been an amazing experience for me. As it was my first attempt at contributing to an open source project, I was amazed by the variety and quality of work carried out by the KDE community. A large number of developers and contributors working in parallel to produce interesting and extremely useful applications… It’s really motivating!

I would also like to thank my mentor Alexander Semke, who has been very responsive and supportive throughout my journey of GSoC. Along with the technical skills (Git, C++ concepts, understanding of Qt and KDE frameworks, visualization of themes + color palettes and color schemes), he also helped me to improve on my communication skills.

Future scope:

  • Currently, this project provides the flexibility of creating, saving and applying themes on 2D plots; in the future this scope can be extended to accommodate 3D plots as well.
  • To improve the theme preview panel, functionality could be developed to temporarily create a copy of the current plot and apply the properties of a theme to it, showing a preview of the user's plot with the theme applied.

If interested, you can find the developments and code at this Github link: https://github.com/KDE/labplot/commits/thememanager?author=prakritibhrdwj

Please don’t forget to give me your feedback :)


Minuet 0.2: massive refactoring and Android version available

Tue, 2016/08/23 - 5:32am


Hi there,

It's been a while since my last blog post about Minuet but that doesn't mean we aren't moving it forward. Actually a lot of work has been done lately, mostly related to architecture improvements, UX revamping, refactoring, code convergence, and its availability on Android devices. Minuet is a quite recent KDE project (it's been developed since November, 2015) and I'm really delighted with what we achieved so far, given we are a small team made up of only two developers (including a GSoC student) and a designer.

So, keep reading for an overview on the improvements delivered in Minuet 0.2 (desktop version) and our journey towards the very first release of Minuet for Android devices :).

Architectural improvements and general refactoring

Minuet 0.1 already presented a somewhat nice architecture, where all ear training exercises are defined in multiple JSON files, which are automatically merged by Minuet's core to make up the navigation menu. That makes it easy to maintain a huge number of exercises and add new ones with no changes in source code. Some technical debt regarding QML source code was identified, though, and, of course, challenges introduced by the advanced features we expect to address and the need to have Minuet running on other platforms (such as Android, iOS, and Windows) had to be properly tackled with a stronger and more flexible architecture.

The architecture improvements released in Minuet 0.2 address three fundamental aspects: JSON specification of ear training exercises, sound infrastructure, and UI/UX improvements.

New JSON structure for exercises specification

In Minuet 0.1, exercises were defined in multiple JSON files, where intervals/chords/scales/rhythms were defined along with the exercises where they appeared as possible answers:

{"exercises":[{ "name":"Intervals", "children":[{ "name":"Ascending Melodic Intervals", "root":"21..104", "children":[{ "name":"Seconds", "options":[{ "name":"Minor Second", "sequenceFromRoot":"1" },{ "name":"Major Second", "sequenceFromRoot":"2" }] }, ... ] }]} ] } Excerpt of exercise specification JSON file in Minuet 0.1

Although that allowed for defining new exercises with no changes in source code, music concept definitions (e.g. the "Minor Second" and "Major Second" intervals) had to be duplicated in any other exercise category where they appear (e.g. in the "Seconds and Thirds" and "Second to Octave" categories). That was a burden, since it caused a lot of duplicated entries for those concepts appearing in multiple categories.

In Minuet 0.2, exercise specification JSON files were split into two different types: definitions JSON files and exercises JSON files. Definitions JSON files specify music content (scales, intervals, chords, and rhythm patterns) regardless of the exercise categories they appear in:

{ "definitions": [ { "tags": ["interval", "ascending", "2", "minor"], "name": "Minor Second", "sequence": "1" }, { "tags": ["interval", "ascending", "2", "major"], "name": "Major Second", "sequence": "2" }, ... ] } Excerpt of definitions JSON file in Minuet 0.2

In the new architecture, music content definitions are marked with any number of tags. Those tags are used by exercises JSON files to collect the definitions which will make up a given exercise category. That makes the definition of new exercises as simple as querying definitions by the tags they were marked with:

{ "exercises": [ { "name": "Intervals", ... "children": [ { "name": "Ascending Melodic Intervals", "and-tags": ["interval", "ascending"], "children": [ { "name": "Seconds", "or-tags": ["2"] }, ... { "name": "Seconds and Thirds", "or-tags": ["2", "3"] }, ... { "name": "Second to Octave", "or-tags": ["2", "3", "4", "tritone", "5", "6", "7", "8"] }, ... ] } ] } ] } Excerpt of exercises JSON file in Minuet 0.2

Note how any (sub-)category uses and-tags and/or or-tags to select the definitions marked with, respectively, all and/or any of the provided tags. Now, changes in definitions JSON files are propagated to all exercises JSON files. You can define any number of definitions and exercises JSON files, since both are merged into a single JSON file for each type.
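To make the tag mechanics concrete, here is a minimal sketch of how such a query could be evaluated against one definition. This is illustrative only, not Minuet's actual implementation; in the real files the and-tags come from the parent category and the or-tags from the subcategory, but the sketch reads both from a single object for simplicity.

#include <QJsonArray>
#include <QJsonObject>
#include <QJsonValue>

// Does this definition belong to the given exercise (sub-)category?
bool matchesCategory(const QJsonObject &definition, const QJsonObject &category)
{
    const QJsonArray tags = definition.value(QStringLiteral("tags")).toArray();

    // Every and-tag must be present on the definition...
    const QJsonArray andTags = category.value(QStringLiteral("and-tags")).toArray();
    for (const QJsonValue &tag : andTags) {
        if (!tags.contains(tag))
            return false;
    }

    // ...and at least one of the or-tags, if any are given.
    const QJsonArray orTags = category.value(QStringLiteral("or-tags")).toArray();
    if (orTags.isEmpty())
        return true;
    for (const QJsonValue &tag : orTags) {
        if (tags.contains(tag))
            return true;
    }
    return false;
}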

Sound infrastructure

Minuet 0.1 relied on the Drumstick library to implement the MIDI capabilities required to play exercises. That not only yielded a high coupling between Minuet's core and Drumstick but also added a number of run-time dependencies, such as TiMidity++ and freepats. As a consequence, and in spite of a basic system sanity check executed at Minuet's first run, we still ended up with broken audio infrastructure in some installations.

The new architecture released with Minuet 0.2 totally decouples the sound infrastructure from Minuet's core, and sound backends for different platforms are now implemented as Qt plugins. That enabled the move to using Fluidsynth + the GeneralUser GS soundfont as Minuet Desktop's sound backend, with no run-time dependencies. Minuet Android's sound backend was built on top of CSound + the sf_GMbank soundfont.

New Minuet sound backends can be easily created by implementing the Minuet::ISoundBackend interface:

class MINUETINTERFACES_EXPORT ISoundBackend : public IPlugin
{
    Q_OBJECT
    ...
public:
    ...
public Q_SLOTS:
    virtual void setPitch(qint8 pitch) = 0;
    virtual void setVolume(quint8 volume) = 0;
    virtual void setTempo(quint8 tempo) = 0;
    virtual void prepareFromExerciseOptions(QJsonArray selectedExerciseOptions) = 0;
    virtual void prepareFromMidiFile(const QString &fileName) = 0;
    virtual void play() = 0;
    virtual void pause() = 0;
    virtual void stop() = 0;
    virtual void reset() = 0;
    ...
};

Minuet sound backends must implement the Minuet::ISoundBackend interface

The API is quite straightforward. The most important service is provided by prepareFromExerciseOptions(), where all the steps required to initialize/create the audio representation of a given exercise option (a specific interval, scale, chord, or rhythm pattern) should be executed. After that, the exercise can be played by invoking the play() method. The methods setPitch(), setVolume(), and setTempo() should also be implemented to adjust, respectively, the overall pitch deviation, the playing volume, and the playing speed.
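To make the plugin surface concrete, here is a purely hypothetical skeleton of a backend; the class name, include path and member variables below are made up, and the real Fluidsynth and CSound backends are considerably more involved.

#include <QJsonArray>
#include <QString>

#include <interfaces/isoundbackend.h>   // assumed include path

class DummySoundBackend : public Minuet::ISoundBackend
{
    Q_OBJECT
public Q_SLOTS:
    void setPitch(qint8 pitch) override { m_pitch = pitch; }
    void setVolume(quint8 volume) override { m_volume = volume; }
    void setTempo(quint8 tempo) override { m_tempo = tempo; }

    void prepareFromExerciseOptions(QJsonArray selectedExerciseOptions) override
    {
        // Translate the selected interval/chord/scale/rhythm into whatever
        // representation the synthesizer consumes, honoring pitch and tempo.
        Q_UNUSED(selectedExerciseOptions);
    }
    void prepareFromMidiFile(const QString &fileName) override { Q_UNUSED(fileName); }

    void play() override  { /* hand the prepared events to the synthesizer */ }
    void pause() override { /* ... */ }
    void stop() override  { /* ... */ }
    void reset() override { /* discard any prepared events */ }

private:
    qint8 m_pitch = 0;
    quint8 m_volume = 100;
    quint8 m_tempo = 90;
};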

UI/UX improvements

Minuet 0.2 is now totally based on QtQuickControls2 and, therefore, requires Qt 5.7 to build. The move to QtQuickControls2 is justified not only by its clean and simple API, but also because it allows for enhanced productivity and leverages code convergence. Enhanced UX and code convergence are ongoing efforts in Minuet, even though the codebases for Minuet Desktop and Minuet Android are already quite similar, differing only in the adopted sound backends.

Minuet for Android available in Google Play Store

Minuet for Android: splashscreen

We are glad to announce that Minuet 0.2 is available on Google Play Store.

Minuet for Android is the result of the nice work performed by Ayush Shah in Google Summer of Code 2016. After three months of intense work, struggling with different sound libraries for Android and diverse UX patterns for mobile applications, we are happy to make Minuet for Android available with all the features already present in its desktop counterpart.

The sound backend in Minuet for Android was implemented on top of CSound: a powerful domain-specific language for sound synthesis which works on Windows, OS X, Linux, Android, and iOS. Instead of using all those oscillator combinations and other complicated sound synthesis machinery, we again adopted soundfonts as audio samplers for CSound on Android.

Initial dashboard

Minuet for Android's initial screen provides the user with a simple dashboard of all top-level exercise categories. That allows rapidly jumping to the subcategories representing chord, interval, rhythm, and scale ear training exercises. The UI is currently restricted to portrait mode on smartphones, although we're rethinking the UX strategy for tablets and larger devices.

The navigation drawer

A typical navigation drawer allows the user to navigate through all exercise categories resulting from merging the available exercises JSON files. The source code responsible for loading and merging JSON files, creating the navigation menu, and dynamically exhibiting the exercise screen is 100% shared with Minuet Desktop.

Adapting the virtual piano keyboard to small form factors was a particularly challenging task. After some unsuccessful attempts, we ended up using labels to identify the piano octaves, constrained the visualization to a single octave, and then implemented an automatic horizontal scroll to the keyboard region which encompasses the exercise currently being played.

Chord, scale modes, and rhythm patterns recognition exercises

The UI for running interval, chord, scale, and rhythm pattern exercises is quite similar to Minuet Desktop's. In order to keep it scalable for exercises with many available answers, we implemented a vertical scroll inside the "Available Answers" group box. The answer(s) selected by the user for a given exercise are presented in the "Your Answer(s)" group box. Incorrect answers are shown with a red rectangle and can be clicked to reveal the associated right answer.

Minuet for Android: about dialog

What's next?

As I mentioned before, Minuet is still in its infancy, but I guess it's looking quite promising already :). Now we'll concentrate our energies on merging the remaining divergent code, stabilizing the architecture, doing the final UI polishing, and then working hard on providing really amazing music content. Addressing new platforms? Yes, that's also in our roadmap, hopefully :).

Minuet for Android wouldn’t be possible without the support of the KDE community. Many thanks to Ayush Shah for the courage to brave this road, to Alessandro Longo for the amazing category icons, to the VDG team for valuable UI feedback, and to Aleix Pol for helping with the Android CMake buildsystem.

Yeah! We are nearly one week away from QtCon. Dude, I'm excited to meet old friends and make new ones :) If you are heading to Berlin and want to learn more about Minuet, I'll present a talk about it on Day 3 (Saturday, 3rd September), at 3pm, in room A08.


See you!

The How part (6) - Concatenating OSM way chunks

Mon, 2016/08/22 - 11:30pm

( This post is related to my GSoC project with Marble. For more information you can read the introductory post )

Problem

As of now, if you load an OpenStreetMap file containing linestrings such as highways, lanes or streets in Marble and zoom to level 11, you will find that the highways or streets are not contiguous and appear broken.

You will find the roads/streets as broken

roads_broken

Instead of getting contiguous roads

road_joined

Cause

One of the primary reasons for this discontinuity is that a particular highway/lane/street is often described using multiple contiguous OSM way elements instead of a single OSM way element. Marble treats each of these way elements as a linestring object. However, Marble omits rendering any object which is smaller than 2px, in order to improve rendering performance. Due to this, many of the OSM way elements which are smaller than 2px don't get rendered. Hence the street appears broken, since only those of its OSM way elements which are larger than 2px are rendered.

One reason I can think of which justifies describing a highway using multiple elements is that a street might be called by a certain name for a specific stretch and by some other name for the remaining stretch. However, at level 11 we are mostly concerned with the type of highway (primary, secondary, motorway, residential) rather than specifics such as names.

Usually, the multiple OSM way elements of a single highway share their first and last node ID references. For example, consider <1n…2n> as an OSM way element where 1n and 2n correspond to the first and last node IDs of the way. A highway described by an ordered list of node IDs 1 to 56 can then usually be represented by the following OSM way elements: <1…5>, <5…13>, <13…28>, <28…36>, <36…56>

I exploited this way of representation to create a way-concatenating module in the existing osm-simplify tool. For the above example, the module would concatenate all 5 OSM way elements into a single OSM way element <1…56>

osm-simplify -t highway=* -w input.osm

The above command concatenates all the highways present in the input file and produces a new osm file as output.

Apart from solving the problem of discontinuity, way concatenation also results in data reduction, since it eliminates redundant node references and way elements. This data reduction in turn results in faster file parsing as well as improved rendering performance, since rendering a single stretch of highway now only requires creating and rendering a single linestring object as opposed to multiple linestring objects.

Algorithm

The tricky part of coding this OSM way concatenator is actually coming up with an algorithm which concatenates all the linestrings present in an input file. Eventually my mentors Dennis Nienhüser and Torsten Rahn and I were able to come up with a working algorithm for concatenating the OSM ways of a file in reasonable time (approximately O(n)).

The algorithm involves a made-up data structure called WayChunk, which basically is a list of contiguous ways. It also stores a GeoDataVisualCategory which represents the type of linestring. For example, in the case of highways the GeoDataVisualCategory contains the kind of highway: motorway, primary, secondary or residential.

The algorithm utilizes a multi-hash-map to bundle together OSM way elements which share a common starting or terminating node. This multi-hash-map has node IDs as keys and WayChunks as values. The idea is that at any given instant, the map contains the starting and ending points of the ways evaluated so far as keys, and the corresponding way chunks as the values pointed to by these keys. Whenever we encounter a new way whose starting or ending node matches one of the existing way chunks, and whose type (i.e. GeoDataVisualCategory) matches the type of that way chunk, the way is added to the chunk and the map is adjusted so that its keys are always the starting or ending nodes of some way chunk and not intermediary ones. This way, eventually, we are able to concatenate the many small OSM chunks of highways and railroads into single way elements.

The reason we are using multi-hash-maps instead of regular hash maps is that at a particular node, two or more highways (linestrings) of different types may emanate or terminate. Hence a given node may be associated with two or more way chunks having different types (GeoDataVisualCategory).

The algorithm in more detail is described below:

Concat(input_osm_file):
    Iterate over all of the way elements having the key=value tags specified during input
        Check if the first or the last node ID of the way is present in the multi-hash-map.
        If neither of the IDs is present
            CreateWayChunk(way)
        If only the first ID is present in the map
            Check if any chunk exists in the map which has the key as that of the first ID
            and the GeoDataVisualCategory as that of the way.
            If such a chunk exists
                Append the way to this chunk accordingly (reverse the way if required)
                Delete the first ID from the map
                Insert a new entry into the multi map having the key as the last node ID
                of the way and the value as that of the found chunk
            If not
                CreateWayChunk(way)
        If only the last ID is present in the map
            Check if any chunk exists in the map which has the key as that of the last ID
            and the GeoDataVisualCategory as that of the way.
            If such a chunk exists
                Append the way to this chunk accordingly (reverse the way if required)
                Delete the last ID from the map
                Insert a new entry into the multi map having the key as the first node ID
                of the way and the value as that of the found chunk
            If not
                CreateWayChunk(way)
        If both of the IDs are present in the map
            Check if any chunk exists in the map which has the key as that of the first ID
            and the GeoDataVisualCategory as that of the way.
            Check if any chunk exists in the map which has the key as that of the last ID
            and the GeoDataVisualCategory as that of the way.
            If both chunks exist
                Append the way and the second chunk to the first chunk and modify the map accordingly
            If only the first chunk exists
                Append the way to the first chunk and modify the map accordingly
            If only the last chunk exists
                Append the way to the last chunk and modify the map accordingly
            If none of the chunks exist
                CreateWayChunk(way)
    Finally iterate over all the WayChunks and merge the ways in each of their lists
    to form one single OSM way per chunk

CreateWayChunk(way):
    Create a new WayChunk
    Append the way to this new way chunk
    Set the GeoDataVisualCategory of the chunk as that of the way
    Insert two new entries into the multi map having the starting and ending node IDs of
    the way as keys and the created WayChunk as the value.
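Below is an illustrative C++ sketch of the central bookkeeping. The types are simplified stand-ins (the actual Marble code uses its own classes), but it shows why a multi-hash is needed for the endpoint lookup:

#include <QHash>
#include <QList>

// Simplified stand-in types for the sketch; not the actual Marble classes.
using NodeId = qint64;
using VisualCategory = int;   // stands in for GeoDataVisualCategory
struct Way { QList<NodeId> nodes; VisualCategory category; };

// A chunk of contiguous ways, plus the node IDs at its two open ends.
struct WayChunk {
    QList<Way> ways;
    VisualCategory category;
    NodeId firstNode;
    NodeId lastNode;
};

// Keyed by the node IDs at the ends of each chunk. A multi-hash is used
// because chunks of *different* categories may start or end at the same node.
QMultiHash<NodeId, WayChunk *> chunksByEndpoint;

WayChunk *findChunk(NodeId nodeId, VisualCategory category)
{
    const QList<WayChunk *> candidates = chunksByEndpoint.values(nodeId);
    for (WayChunk *chunk : candidates) {
        if (chunk->category == category)
            return chunk;     // matching open end of the same linestring type
    }
    return nullptr;           // caller falls back to CreateWayChunk(way)
}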
Results

The first image, with discontiguous highways, represents the raw input OSM file. This file has 2704 ways in total and a size of 4.7 MB.

The way concatenator reduced the number of ways to 812 and the size to 2.9 MB. The second image, with contiguous roads, represents the output of the way concatenator.

If we remove the redundant nodes from the above output using the node-reducing module described in the previous post, we get a resulting file with a size of 2.5 MB. The node reducer removes 15146 redundant nodes (keeping the resolution at level 11).

If you suspect that node reduction causes a loss in rendering quality, look at the rendered image below and compare it with the ones above.

node_road_joined

The node reducer and the way concatenator have resulted in a size reduction of approx 46% for the above sample without any significant loss in rendering quality.

Cantor gets support of Julia language

Mon, 2016/08/22 - 6:22pm
You may have noticed that GSoC 2016 is about to end, and I'm glad to share the results of my work with KDE and the worldwide open-source community. The Google-sponsored program is a very good stimulus to get into open-source development. Students aren't simply fixing bugs in existing software, but creating new functionality and enhancing existing projects. Participants become part of the community by working together with their mentors to deliver the best they can during this short period of summer.

My project was to implement a Julia backend for Cantor -- a free-software mathematics application for scientific statistics and analysis, part of the KDE Edu project. What is Julia and why would you need it? If your studies or work are closely connected with mathematical computation, words like Matlab, R and Python have already reached your ears. You won't use fundamental languages like C/C++/Pascal/Fortran when you need to get from idea to working prototype as fast as possible, setting computation speed aside. This is where languages like Python/R/Matlab/Octave kick in, with easy development, thousands of libraries and visualization of results in a few lines. Julia is a language intended to fill this gap, offering easy development with fast execution based on JIT compilation:


If you are a data miner and spend most of your time working with your favourite language in an interactive environment, you will like Cantor, which allows you to run pieces of code, keep variables between runs and visualize results. It has more features besides, like auto-completion, variable management, wizards and so on.

The goal of my GSoC project was to implement a Julia backend for Cantor, giving KDE users the ability to use the latest progress in scientific computing -- the Julia programming language. And I can say that all the features declared in my proposal have been accomplished:
Julia backend for Cantor in action
Here is a slideshow demonstrating some of the implemented features.

Slideshow captions: syntax highlighting based on known symbols (it works correctly in hard cases; the new algorithm was also ported to the Python highlighter); variable management in action; plotting wizard and inline plots; auto-completion.

Full list of features with links to Phabricator revisions (everything stated in the proposal is done):
At the moment only one revision awaits review, and then the backend will be ready to be pushed to the master branch. Write to me with your opinions/found bugs/wanted features at ivan.lakhtanov <at> gmail <dot> com.

I want to thank everyone who helped me with the project: the KDE community, the KDE GSoC administrators and the KDE admins. Special thanks go to my mentor Filipe Saraiva for his patience during this summer.

Applications 16.08.0 and Frameworks 5.25.0 available in Chakra

Sun, 2016/08/21 - 11:14pm


The latest updates for KDE's Applications and Frameworks series are now available to all Chakra users, together with other package updates.

As always with a new series, Applications 16.08.0 ship with many changes, the most important ones being:

  • Kolourpaint, Cervisia and KDiskFree are now ported to the Frameworks 5 libraries.
  • kdepimlibs has been split into akonadi-contacts, akonadi-mime and akonadi-notes.
  • kdegraphics-strigi-analyzer, kdenetwork-strigi-analyzers, kdesdk-strigi-analyzers, kdeedu-libkdeedu and kdemultimedia-mplayerthumbs have been discontinued.
  • The kontact suite, marble, ark, konsole and kate received significant enhancements, new features and bug fixes.

    Frameworks 5.25.0 include bugfixes and improvements to the breeze icons, kactivities, plasma framework and kio, among others.

    Other notable package upgrades:

    [core]
    vim 7.4.2207
    ruby 2.3.1

    [desktop]
    virtualbox 5.1.4, fixing the issue with installing guest-utils.

    [lib32]
    wine 1.9.16

    It should be safe to answer yes to any replacement question by Pacman. If in doubt or if you face another issue in relation to this update, please ask or report it on the related forum section.

    Most of our mirrors take 12-24h to synchronize, after which it should be safe to upgrade. To be sure, please use the mirror status page to check that your mirror synchronized with our main server after this announcement.
    The How part (5) - Removing redundant nodes from OSM files

    Sun, 2016/08/21 - 9:00pm

    (This post is related to my GSoC project with Marble. For more information you can read the introductory post.)

    The requirements for medium-level tiles are a little different from those of lower-level tiles. The biggest difference is that the medium levels use OpenStreetMap data as opposed to Natural Earth data. Another difference is that for lower-level tiles the major data pre-processing steps were concatenation and conversion, since the data was in shapefile format and OSM was required, whereas for medium levels the major pre-processing steps are reduction, simplification, filtration and merging.

    As already mentioned in a previous post, OpenStreetMap describes data at a pretty high resolution. Such resolution is fine for higher levels such as street level, but cannot be used as-is for medium (and lower) levels. Hence pre-processing steps are required, such as reduction to a lower resolution, filtering out features like trees and lamp posts which are not visible at medium levels, and merging buildings and lanes which are very close to each other.

    To pre-process the data in this way (reduce, filter, combine), various tools need to be built which modify the raw OpenStreetMap data to make it suitable for a particular Marble zoom level.

    The first tool which I built removes redundant nodes from the geographical features present in the input OSM file. Redundant nodes are nodes which are simply not required at a particular zoom level, because the resolution with which they describe their parent geographic feature exceeds the maximum resolution visible at that zoom level.

    Consider this particular patch of lanes at zoom level 11.

    kpiece1

    Now observe the number of nodes with which this patch has been described.

    kpiece1

    Each small square depicts a single node. The squares are colored translucently so as to show overlaps: the higher the intensity of the color, the greater the number of overlapping nodes.

    As you can clearly see, this many nodes are not required for proper rendering of OSM data at medium zoom levels. To overcome this problem, I wrote a node-reducing module. The module is part of osm-simplify, an OSM data preprocessor jointly developed by Dávid Kolozsvári and me, which supports many kinds of OSM data preprocessing such as clipping, tiling and way-merging (other simplification tools are still being developed).

    osm-simplify -z 11 -n input.osm

    The above command removes the redundant nodes according to the resolution of level 11 of Marble and produces a new reduced OSM file as output.

    The underlying algorithm of this node reduction module is pretty simple (a C++ sketch follows the list below):

    • Create a new empty list of nodes called reducedLine
    • nodeA = first node of the linestring, ring, or a polygon which is under consideration.
    • add nodeA to reducedLine
    • Iterate from the second to the second last node (so as to retain the first and last nodes) of a linestring, ring, or a polygon
      • If the great circle distance between the nodeA and the currentNode is greater than the minimum resolution defined for that level
      • Then add currentNode to reducedLine and change value of nodeA to that of currentNode
    • Add the last node to reducedLine
    • return reducedLine
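
    A minimal C++ sketch of this reduction is below, under stated assumptions: the Node type, the haversine helper and the level-11 resolution value are illustrative stand-ins, not the actual osm-simplify code, which works on Marble's GeoData classes.

    #include <cmath>
    #include <cstddef>
    #include <iostream>
    #include <vector>

    // Illustrative node type (an assumption): coordinates in degrees.
    struct Node { double lon, lat; };

    // Great-circle distance between two nodes on the unit sphere (haversine).
    double greatCircleDistance(const Node& a, const Node& b) {
        const double deg = std::acos(-1.0) / 180.0;   // degrees -> radians
        const double dLat = (b.lat - a.lat) * deg;
        const double dLon = (b.lon - a.lon) * deg;
        const double h = std::sin(dLat / 2) * std::sin(dLat / 2) +
                         std::cos(a.lat * deg) * std::cos(b.lat * deg) *
                         std::sin(dLon / 2) * std::sin(dLon / 2);
        return 2.0 * std::asin(std::sqrt(h));         // radians on the unit sphere
    }

    // Keep the first and last node; drop intermediate nodes closer than
    // 'resolution' (in radians) to the last node that was kept.
    std::vector<Node> reduceLine(const std::vector<Node>& line, double resolution) {
        if (line.size() <= 2)
            return line;
        std::vector<Node> reducedLine;
        Node nodeA = line.front();
        reducedLine.push_back(nodeA);
        for (std::size_t i = 1; i + 1 < line.size(); ++i) {
            if (greatCircleDistance(nodeA, line[i]) > resolution) {
                reducedLine.push_back(line[i]);
                nodeA = line[i];
            }
        }
        reducedLine.push_back(line.back());
        return reducedLine;
    }

    int main() {
        // Hypothetical minimum resolution for zoom level 11 (assumed value).
        const double resolution = 1e-4;
        const std::vector<Node> line = {{13.4000, 52.5200}, {13.4001, 52.5201},
                                        {13.4100, 52.5300}, {13.4200, 52.5400}};
        std::cout << reduceLine(line, resolution).size() << " nodes kept\n";  // 3
    }

    Running it on the four-node sample keeps 3 nodes: the two nearly coincident nodes at the start collapse into one, while the first and last nodes are always retained.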

    This simple tool removes a significant number of redundant nodes. The goal is that this node reduction module, along with the other modules of the osm-simplify tool, will significantly reduce the size of the OSM data for a particular zoom level, improving Marble's rendering performance without compromising much of the visual quality of the rendered data.

    In the next post, I will describe the way-merging module and give an objective comparison of the reduction in OSM data size achieved by these modules.

    Boost dependencies and bcp

    Sun, 2016/08/21 - 8:48pm

    Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

    The monster diagram is here:

    Edges and Incidental Modules and Packages

    The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

    In practice, packagers do not split Boost into packages like that. At least for debian packages they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package libboost1.58-dev. Perhaps the reason packagers put it all together is that there is little value in splitting the header-only repositories of the monolithic Boost from each other if it will all be packaged anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher level libraries (if at all).

    Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules‘. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.

    As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

    My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.
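
    As a rough sketch of that kind of scan, here is a hedged C++17 illustration, not the actual script; it assumes the first path component under boost/ loosely approximates the owning repository, which is only a rough mapping.

    #include <filesystem>
    #include <fstream>
    #include <iostream>
    #include <regex>
    #include <set>
    #include <string>

    namespace fs = std::filesystem;

    int main(int argc, char** argv) {
        if (argc < 2) {
            std::cerr << "usage: scan <path-to-repo-include-dir>\n";
            return 1;
        }
        // Matches e.g. '#include <boost/container/set.hpp>' and captures 'container'.
        const std::regex inc(R"(#\s*include\s*[<"]boost/([A-Za-z0-9_]+)[/.])");
        std::set<std::string> deps;

        for (const auto& entry : fs::recursive_directory_iterator(argv[1])) {
            if (!entry.is_regular_file())
                continue;
            std::ifstream in(entry.path());
            std::string line;
            while (std::getline(in, line)) {
                std::smatch m;
                if (std::regex_search(line, m, inc))
                    deps.insert(m[1].str());   // first path component under boost/
            }
        }
        for (const auto& d : deps)
            std::cout << d << '\n';            // candidate dependency edges
    }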

    Dependency Analysis and Reduction

    One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as evidenced it appears to not be useful from a packaging point of view. So, are these diagrams useful for anything?

    While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

    Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

    I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

    $ bcp --boost=$HOME/dev/src/boost icl myicl
    $ rm -rf boostdir/libs/icl/{doc,test,example}
    $ rm -rf boostdir/boost/mpl/aux_/preprocessed
    $ du -hs myicl/
    15M     myicl/

    Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

    Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

    For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

    boost/libs/icl$ git grep -Pl -e include --and \
        -e "thread|spirit|pool|serial|date_time" include/
    include/boost/icl/gregorian.hpp
    include/boost/icl/ptime.hpp

    Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

    I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

    boost/libs/icl$ git grep -l gregorian include/
    include/boost/icl/gregorian.hpp
    boost/libs/icl$ git grep -l ptime include/
    include/boost/icl/ptime.hpp

    As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.

    boost/libs/icl$ git grep -l -e include \
        --and -e date_time include/boost/icl/ | xargs rm
    boost/libs/icl$

    and run the bcp tool again.

    $ bcp --boost=$HOME/dev/src/boost icl myicl
    $ rm -rf myicl/libs/icl/{doc,test,example}
    $ rm -rf myicl/boost/mpl/aux_/preprocessed
    $ du -hs myicl/
    12M     myicl/

    I’ve saved 3M just by understanding the dependencies a bit. Not bad!

    Mostly the size difference is accounted for by no longer extracting boost::mpl::vector, and secondly the Boost.DateTime headers themselves.

    The dependencies in the graph are now so few that we can consider them and wonder why they are there and can they be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

    include/boost/icl$ git grep -C2 -e include \
        --and -e boost/container
    #if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
    # include <boost/container/set.hpp>
    #elif defined(ICL_USE_STD_IMPLEMENTATION)
    # include <set>
    --
    #if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
    # include <boost/container/map.hpp>
    # include <boost/container/set.hpp>
    #elif defined(ICL_USE_STD_IMPLEMENTATION)
    # include <map>
    --
    #if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
    # include <boost/container/set.hpp>
    #elif defined(ICL_USE_STD_IMPLEMENTATION)
    # include <set>

    So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

    I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, eg:

    index 6f3c851..cf22b91 100644
    --- a/include/boost/icl/map.hpp
    +++ b/include/boost/icl/map.hpp
    @@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
    -#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
    -# include <boost/container/map.hpp>
    -# include <boost/container/set.hpp>
    -#elif defined(ICL_USE_STD_IMPLEMENTATION)
     # include <map>
     # include <set>
    -#else // Default for implementing containers
    -# include <map>
    -# include <set>
    -#endif

    This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

    I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

    You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

    Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

    In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

    Examining Proposed Standard Libraries

    We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

    Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

    Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency:

    I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

    Using a bcp-extracted library with Modern CMake

    After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

    add_library(boost_mpl INTERFACE)
    target_compile_definitions(boost_mpl INTERFACE
        BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
    )
    target_include_directories(boost_mpl INTERFACE
        "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
    )

    add_library(boost_icl INTERFACE)
    target_link_libraries(boost_icl INTERFACE boost_mpl)
    target_include_directories(boost_icl INTERFACE
        "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
    )
    add_library(boost::icl ALIAS boost_icl)

    Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but they will remain and they are used by default when using GCC and that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

    By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

    MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

    The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

    target_link_libraries(myexe boost_icl)
    target_link_libraries(myexe boost::icl)

    Using the ALIAS version has a different effect however: If the boost::icl target does not exist an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name and ALIAS makes that possible for any library.


    The How part (4) - Automating tile generation for lower levels

    Sun, 2016/08/21 - 6:00pm

    (This post is related to my GSoC project with Marble. For more information you can read the introductory post.)

    After revamping the shp2osm tool and adding the missing styles, the only thing remaining for supporting lower-level tiles was automating the process of tile generation. Before the process was automated, in order to create the lower-level tiles one needed to manually download the Natural Earth shapefiles, combine and convert them into a single OSM file using the shp2osm tool, and then create the tiles using Marble's existing vector tile creator (which divides a large OSM file into small chunks of OSM vector tiles).

    I wrote a script which simplifies and automates the process of generating lower level tiles for Marble. The script takes a text file as input. This text file includes a listing of the zoom levels and the Natural Earth geographical features which are to be included in those zoom levels.

    *0 ne_110m_land ne_110m_geographic_lines
    *1 ne_110m_land ne_110m_geographic_lines ne_110m_admin_0_boundary_lines_land
    *3 ne_110m_land ne_110m_geographic_lines ne_110m_admin_0_boundary_lines_land ne_110m_admin_1_states_provinces_lines ne_110m_admin_0_pacific_groupings ne_110m_rivers_lake_centerlines ne_110m_lakes ne_10m_bathymetry_K_200 ne_10m_bathymetry_G_4000

    As you can see, the above input file describes which Natural Earth features are to be included in the 0th, 1st and 3rd levels of Marble's vector OSM theme.

    Using the above input file, the script checks the input directory for the Natural Earth geographical features specified in the file. If any geographical feature is not present, the script downloads the missing files in zipped form and then unzips them. Once all the required Natural Earth features are present in the input directory, it combines all the data for a particular level using the shp2osm tool and creates a single OSM file containing all the Natural Earth features specified in the input file. Then, using the existing vector tile creator, it creates all the required OSM tiles, properly arranged in the respective folders and sub-folders for the levels specified in the input file.

    With this tool in place, we can now easily generate the lower level vector tiles for Marble.

    level_3 level 3

    level_5 level 5

    level_7 level 7

    level_9 level 9

    video

    As you can see in the video, since all the data is rendered dynamically there is a performance lag and the navigation is not very smooth. This is because of improper tile cutting as well as the huge amount of data being rendered at any given instant. For creating the tiles, the script itself uses Marble's pre-existing vector tile cutter. This vector tile cutter in turn depends on osmconvert to actually clip the data and create smaller OSM tiles. However, osmconvert's clipping is not very exact, resulting in data redundancy. This redundancy causes Marble to render the same geographical feature 3 or 4 times, causing severe performance penalties. The clipping is currently being improved by Dávid Kolozsvári, who is also a GSoC student working on Marble. Apart from the clipping, there are resolution issues, i.e. a particular feature is described by a much greater number of nodes than required.

    After all the work related to lower-level tiles was done, the second part of my project began, which dealt with creating tools for simplification and reduction of OSM data so that it is rendered smoothly and according to the visual requirements of medium zoom levels. I will discuss these tools in future posts.

    The How part (3) - Concatenating the Natural Earth geographical features into a single OSM file

    Sun, 2016/08/21 - 4:00pm

    (This post is related to my GSoC project with Marble. For more information you can read the introductory post.)

    Until now, the polyshp2osm tool, which I had modified to support various kinds of geometries as well as OSM tag mappings for Natural Earth metadata, supported conversion of only a single shapefile into OSM.

    python3 ~/a/marble/tools/shp2osm/polyshp2osm.py ne_110m_rivers_lake_centerlines.shp

    The above command will convert the given shp file containing rivers and lakes into its OSM equivalent.

    One of the major aims of my project was to create a tool for automatically generating the OSM tiles for lower zoom levels using Natural Earth data. In my previous post, I described how I added styling for many geographical features so as to enable rendering of Natural Earth features such as reefs and bathymetries. Before creating a tool which can output the tiles, I needed a tool which can concatenate all the different Natural Earth features such as roads, rails, rivers and reefs into one single OSM file, which can then be broken down into tiles for lower zoom levels. Basically, I needed to modify the tool so that it can take multiple shp files as input and produce a combined OSM file.

    The straightforward way of doing this is to just concatenate the XML elements produced for the different shp files. This obviously works, but the result contains far too much redundant data, especially in the form of redundant nodes. I solved this redundancy by using a dictionary which maps the (longitude, latitude) of a node to a unique node ID. Whenever the tool iterates over the nodes of a polygon or a linestring, it checks whether that node is already present in the dictionary. If it is not present, we assign a new ID to the node and add it to the dictionary; if it is present, we use the ID of the existing node instead of creating a redundant node with the same longitude and latitude but a different ID.
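
    As a hedged illustration of that deduplication (the real tool is written in Python; the types here are simplified stand-ins), the coordinate-to-ID bookkeeping could look like this in C++:

    #include <iostream>
    #include <map>
    #include <utility>
    #include <vector>

    // Simplified stand-in for the tool's node bookkeeping (an assumption).
    using Coord = std::pair<double, double>;   // (longitude, latitude)

    int main() {
        std::map<Coord, long> nodeIds;         // exact coordinate -> unique node ID
        long nextId = 1;

        auto idForNode = [&](double lon, double lat) {
            auto [it, inserted] = nodeIds.try_emplace({lon, lat}, nextId);
            if (inserted)
                ++nextId;                      // a brand new node was registered
            return it->second;                 // otherwise the existing ID is reused
        };

        // Two features sharing a coordinate reuse the same node ID.
        const std::vector<Coord> riverbank = {{13.40, 52.52}, {13.41, 52.53}};
        const std::vector<Coord> lakeShore = {{13.41, 52.53}, {13.42, 52.54}};

        for (const auto& [lon, lat] : riverbank)
            std::cout << "riverbank node " << idForNode(lon, lat) << '\n';
        for (const auto& [lon, lat] : lakeShore)
            std::cout << "lake shore node " << idForNode(lon, lat) << '\n';
        // Prints IDs 1, 2 for the riverbank and 2, 3 for the lake shore:
        // the shared coordinate is emitted under a single ID.
    }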

    Now we can give multiple shp files as input to the tool and it will generate a single OSM file containing all the features of the input shp files without any redundant node data.

    python3 ~/a/marble/tools/shp2osm/polyshp2osm.py ne_50m_land ne_50m_geographic_lines ne_50m_lakes ne_50m_glaciated_areas

    The above command will concatenate the Natural Earth shp files containing land, geographic line, lakes and glaciated areas into a single OSM file.

    The How part (2) - Adding styling for various geographical features

    Sun, 2016/08/21 - 12:00pm

    (This post is related to my GSoC project with Marble. For more information you can read the introductory post.)

    As I mentioned in my earlier post, I modified the polyshp2osm tool to support conversion of Natural Earth shapefiles to OSM format so that Marble is able to render the geographical features (roads, rivers, reefs, airports) provided by the Natural Earth data set. The primary problem was that Marble never supported the rendering of many of the geographical features provided by Natural Earth. In order to enable rendering of a new geographic feature (reefs, boundaries), one needs to describe how the feature must be styled: the colors to be used, the width of the boundaries, and any geometric patterns or icons which need to be associated with the geometry.

    Before describing my work, let me first describe how Marble handles styling for the vector OSM theme. Marble reads the OSM file and assigns each geographical feature (roads, rivers, reefs, airports, shops, buildings, etc.) a GeoDataVisualCategory value. The geometry associated with the feature (point, polygon, linestring) is primarily styled according to this GeoDataVisualCategory. Depending on the presence of other OSM key=value tags and the current zoom level of the map, Marble may dynamically change the styling.
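
    As a rough, hedged sketch of that idea (plain C++ with made-up types and colors, not Marble's actual StyleBuilder or GeoDataVisualCategory API), the tag mapping and a tag-based style override could look like this:

    #include <iostream>
    #include <map>
    #include <string>

    // Made-up stand-ins for the real Marble types (assumptions for illustration).
    enum class VisualCategory { NaturalWater, NaturalReef, Landmass };

    struct Style {
        std::string fillColor;
        std::string outline;   // e.g. "solid" or "dashed"
    };

    int main() {
        // OSM key=value tag -> visual category (a tiny, invented subset).
        const std::map<std::string, VisualCategory> tagMapping = {
            {"natural=water",        VisualCategory::NaturalWater},
            {"natural=reef",         VisualCategory::NaturalReef},
            {"marble_land=landmass", VisualCategory::Landmass},
        };

        // Default style per visual category (colors are placeholders).
        const std::map<VisualCategory, Style> defaultStyle = {
            {VisualCategory::NaturalWater, {"blue", "solid"}},
            {VisualCategory::NaturalReef,  {"cyan", "solid"}},
            {VisualCategory::Landmass,     {"beige", "solid"}},
        };

        // Primary styling comes from the category; extra tags (here salt=yes)
        // and, in Marble, the zoom level can then override it dynamically.
        auto resolve = [&](const std::string& primaryTag, bool saltTag) {
            Style s = defaultStyle.at(tagMapping.at(primaryTag));
            if (saltTag) {                     // salt lake: yellowish fill, dashed outline
                s.fillColor = "yellow";
                s.outline = "dashed";
            }
            return s;
        };

        const Style lake = resolve("natural=water", false);
        const Style saltLake = resolve("natural=water", true);
        std::cout << lake.fillColor << '/' << lake.outline << '\n';         // blue/solid
        std::cout << saltLake.fillColor << '/' << saltLake.outline << '\n'; // yellow/dashed
    }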

    In fact, this ability to dynamically change the styling was recently added by my mentor Dennis Nienhüser during the Randa meetings.

    In most cases, to add styling for a new geographic feature in Marble, one needs to do the following steps:

    • Create a new GeoDataVisualCategory
    • Assign a default style to this GeoDataVisualCategory by describing the colors, width, pattern, icon which are associated with the feature
    • Decide the order of this new visual category w.r.t other visual categories. Visual categories which have a higher priority will be drawn on top of visual categories which have a lower priority.
    • Decide the minimum zoom level at which this new visual category starts appearing.
    • Map this visual category to some corresponding OSM key-value pair.
    • Decide if you want to change the default styling associated with this GeoDataVisualCategory on some particular zoom level or when some particular OSM key=value tags are encountered.
    Natural Reefs

    Reefs have the OSM equivalent natural=reef, which I used for tag mapping them to the styling of reefs.

    reefs

    Landmass

    Natural Earth provides the data for countries, continents, urban areas and islands as polygons. In order to provide a common tag mapping for these land polygons, I constructed a custom tag, marble_land=landmass, which maps the above land polygons to the styling of rendered land areas in Marble.

    landmass

    Administrative Boundaries

    The OSM administrative boundaries have an admin_level=x tag which defines the size/importance of a particular administrative boundary. I tag mapped these admin_level=(1..11) tags to 11 different GeoDataVisualCategories.

    admin_boundaries

    Salt Lakes

    Instead of creating a new visual category, I added an additional OSM tag, salt=yes, to the list of key=value tags of salt lakes while keeping the visual category as NaturalWater. Now when a salt lake is encountered during parsing, Marble assigns it the default styling of NaturalWater, which is a shade of blue with a blue outline. However, due to the tag salt=yes the styling changes to a shade of yellow with a dashed blue outline.

    salt_lakes

    Antarctic Ice Shelves

    I created a new visual category NaturalIceShelf for styling the Antarctic ice shelves and tag mapped it to natural=glacier and glacier:type=shelf.

    ice_shelves

    glacier_ice_shelves

    Marine Indicators

    I tag mapped the OSM tag maritime=yes to a new visual category BoundaryMaritime, and used the tags (boundary=administrative, admin_level=2, maritime=yes, border_type=territorial) to dynamically change the styling of disputed marine boundaries.

    marine_indicators

    International Date Line

    Created a custom OSM tag marble_line=date and tag mapped it to a new visual category InternationalDateLine.

    idl

    Bathymetries

    Styling bathymetries was quite tricky, since I was allowed to use only a single visual category for the different kinds of bathymetries. Initially I thought this was going to be straightforward: I would only need to pass the elevation info of the bathymetry and, depending on this info, dynamically change the styling. However, I eventually found that to do this I needed to go down a layer of abstraction and hack a few things at the graphics level. The problem was that the bathymetries at levels 200 and 4000 were getting mixed up. Ideally bathymetries at level 4000 should appear above the bathymetry at level 200, but since the z-value (which decides which feature is rendered on top of others) is fixed for a particular GeoDataVisualCategory, rendering of the bathymetries was getting messed up. I had to make special cases in the graphics-related methods of Marble to circumvent this problem.

    bathymetry1

    bathymetry2
