Planet KDE - http://planetKDE.org/

KStars Observers Management patched

5 hours 47 min ago

This update is a little break from my current GSoC project, so I won't talk about my progress just yet. Instead, I will talk about the observers management dialog that is currently in KStars. Basically, an observation session requires observer information such as first name, last name and contact. Until now, an observer could only be added from the settings menu, so I thought it would be more intuitive if this functionality were placed in a more appropriate place and a proper GUI were implemented for a better user experience.

This is how the new observers management dialog looks:

[Screenshot: the new observer management dialog]

Now the user has a heads-up display of how many observers are currently in the database and can manage that information.

Regarding GSoC, I am now working on the main Scheduler logic and will post an update as soon as possible. Stay tuned :D


Convergence through Divergence

Wed, 2015/07/01 - 10:53pm

It’s that time of the year again, it seems: I’m working on KPluginMetaData improvements.

In this article, I am describing a new feature that allows developers to filter applications and plugins depending on the target device they are used on. The article targets developers and device integrators and is of a very technical nature.

Different apps per device

This time around, I'm adding a mechanism that allows us to list plugins, applications (and the general "service") specific to a given form factor. In normal-people-language, that means that I want to make it possible to specify whether an application or plugin should be shown in the user interface of a given device. Let's look at an example: KMail. KMail has two user interfaces, the desktop version, a traditional fat client offering all the features that an email client could possibly have, and a touch-friendly version that works well on devices such as smartphones and tablets. If both are installed, which should be shown in the user interface, for example the launcher? The answer is, unfortunately, that we can't really tell: there is currently no reliable scheme to derive this information from. With the current functionality offered by KDE Frameworks and Plasma, we'd simply list both applications: they're both installed, and there is no metadata that could possibly tell us the difference.

The same problem applies not only to applications but also, for example, to settings modules. A settings module (a "KCM" in Frameworks terms) can be useful on the desktop but irrelevant on a media center. There may also be modules which provide similar functionality, but for a different use case. We don't want to create a mess of overlapping modules, however, so again we need some kind of filtering.

Metadata to the rescue

Enter KPluginMetaData. KPluginMetaData gives information about an application, a plugin or something like this. It lists name, icon, author, license and a whole bunch of other things, and it lies at the base of things such as the Kickoff application launcher, KWin’s desktop effects listing, and basically everything that’s extensible or uses plugins.

I have just merged a change to KPluginMetaData that allows all these things to specify which form factors they're relevant and useful for. This means that you can install, for example, KDevelop on a system that can be either a laptop or a media center, and an application listing can be adapted to show KDevelop only in desktop mode and to skip it in media center mode. This is of great value when you want to unclutter the UI by filtering out irrelevant "stuff". As this mechanism is implemented at the base level, KPluginMetaData, it's available everywhere, using the exact same mechanism. When listing or loading "something", you simply check if your current form factor is among the suggested useful ones for an app or plugin, and based on that you decide whether to list it or skip it.

With increasing convergence between user interfaces, this mechanism allows us to adapt the user interface and its functionality in a fully dynamic way, and reduces clutter.

Getting down and dirty

So, how does this look exactly? Let's take KMail as an example, and assume for the sake of this example that we have two executables, kmail and kmail-touch. Two desktop files are installed, which I'll list here in short form.

For the desktop fat client:

[Desktop]
Name=Email
Comment=Fat-client for your email
Exec=kmail
FormFactors=desktop

For the touch-friendly version:

[Desktop]
Name=Email
Comment=Touch-friendly email client
Exec=kmail-touch
FormFactors=handset,tablet

Note that the "FormFactors" key does not take just one fixed value; it allows specifying a list of values, since an application may support more than one form factor. This is reflected throughout the API with the plural form being used. Now the only thing the application launcher has to do is check whether the current form factor is among the supplied ones, for example like this:

foreach (const KPluginMetaData &app, allApps) {
    if (app.formFactors().count() == 0 || app.formFactors().contains("desktop")) {
        shownAppsList.append(app);
    }
}

In this example, we check whether the plugin metadata specifies any form factors at all by counting the elements, and if it does, we check whether "desktop" is among them. For the above example files, this means that the fat client will be added to the list and the touch-friendly one won't. I'll leave it as an exercise to the reader how one could filter only applications that are specifically suitable for, say, a tablet device.
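One possible answer to that exercise: a tablet launcher could keep only the applications that explicitly declare themselves suitable for that form factor. This is a variation of the snippet above; note that, unlike the desktop case, apps that specify no form factors at all are skipped here, which is a deliberately stricter policy:

foreach (const KPluginMetaData &app, allApps) {
    // Only keep apps that explicitly list "tablet" among their form factors.
    if (app.formFactors().contains("tablet")) {
        shownAppsList.append(app);
    }
}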

What devices are supported?

KPluginMetaData does not itself check if any of the values make sense. This is done by design because we want to allow for a wide range of form-factors, and we simply don’t know yet which devices this mechanism will be used on in the future. As such, the values are free-form and part of the contract between the “reader” (for example a launcher or a plugin listing) and the plugins themselves. There are a few commonly used values already (desktop, mediacenter, tablet, handset), but in principle, adding new form-factors (such as smartwatches, toasters, spaceships or frobulators) is possible, and part of its design.

For application developers

Application developers are encouraged to add this metadata to their .desktop files. Simply adding a line like the FormFactors one in the above examples will help to offer the application on different devices. If your application is desktop-only, this is not really urgent: for the desktop launchers (Kickoff, Kicker, KRunner and friends) we'll likely use a mechanism like the above, where no form factors specified means the application gets listed. For devices where most of the available applications will likely not work, marking your app with a specific form factor will increase the chances of it being found. As applications are adapted to respect the form factor metadata, its usefulness will increase. So if you know your app works well with a remote control, add "mediacenter"; if you know it works well on touch devices with a reasonably sized display, add "tablet"; and so on.

Moreover…

We now have the basic API, but nobody uses it yet (a chicken-and-egg situation, really). I expect that one of the first users will be Plasma Mediacenter. Bhushan is currently working on the integration of Plasma widgets into its user interface, and he has already expressed interest in using this exact mechanism. As KDE software moves onto a wider range of devices, this functionality will be one of the cornerstones of the device-adaptable user interface. If we want to use device UIs to their full potential, we do not just need converging code, we also need divergence features that let us benefit from the differences between devices.

Hello Red Hat

Wed, 2015/07/01 - 10:35pm

As I mentioned in my last post, I left my previous employer after quite some years – since July 1st I work for Red Hat.

In my new position I will be a Solutions Architect – so basically a sales engineer: the one talking to customers on a more technical level, providing details or proofs of concept where they need them.

Since it's my first day I don't really know how it will be – but I'm very much looking forward to it, it's an amazing opportunity! =)


Filed under: Business, Fedora, Linux, Politics, Technology, Thoughts

The Kubuntu Podcast Team is on a roll

Wed, 2015/07/01 - 8:39pm


Building on their UOS Hangout, the Kubuntu Podcast Team has created their second Hangout, featuring Ovidiu-Florin Bogdan, Aaron Honeycutt, and Rick Timmis, discussing What is Kubuntu?

The Earth, on Android

Wed, 2015/07/01 - 5:26pm

Over the previous month I worked on compiling the Marble widget for Android. It was a long and hard road, but it is here:



(I took this screenshot on my phone.)

The globe can be rotated, and the user can zoom with the usual zooming gesture. Here is a short video example:


The hardest part was to figure out how to compile everything with CMake instead of qmake and Qt Creator. There are some very basic things that can sabotage your successfully packaged and deployed app – for example, if you did not set a version number in CMake for your library...

As you may know, Marble also uses some elements of QtWebKit, but QtWebKit is not supported on Android. So I introduced some dummy classes to substitute for these (not in their usability, of course) in order to be able to compile Marble for Android.

You can find step-by-step instructions on how to compile Marble Maps for Android here:
https://techbase.kde.org/Projects/Marble/AndroidCompiling

The next steps:
We have decided to split Marble's functionality into two separate apps: let me introduce Marble Maps and Marble Globe. As their names suggest, Marble Maps will essentially be a map application with navigation, and Marble Globe will be an app where you can switch to other planets, view historical maps, etc., which can also be used for teaching purposes.

The main goal for the summer is to bring Marble Maps to life, but if everything goes well, Marble Globe can be expected too.

To close this article, here are some screenshots:




Road so far

Wed, 2015/07/01 - 4:32pm

As GSoC's mid-term is closing in, I thought I'd share what's been done so far! In case you haven't seen my earlier posts, here's a quick reminder of what I'm working on: implementing an OpenStreetMap (OSM) editor for Marble that allows the user to import ".osm" files, edit them with OSM-specific tools, and finally export them as ready-for-upload files, all inside Marble's existing Annotate Plugin (an editor for ".kml" maps).


What's been done so far?  
As one would imagine, OSM (http://wiki.openstreetmap.org/wiki/OSM_XML) has noticeable differences from KML (https://developers.google.com/kml/documentation/kmlreference), the schema upon which Marble is built. From an OSM perspective, these differences mainly consist of server-generated data such as id, changeset and timestamp, but also of core data elements such as the <relation> and <tag> tags.

Up until now, I've developed a way to store this server-generated data, mainly by saving it as KML's ExtendedData. Exporting to ".osm" files is now possible as well, so that pretty much makes Marble a KML-to-OSM (and vice versa) translator at the moment (with some drawbacks, of course).

What was the main challenge?
Not everything can be translated perfectly from OSM to KML and vice-versa, so while translating, I had to ensure as little data as possible is lost.

Here are some pictures to show a map's journey through Marble's editor even though data parsing isn't really that picture-worthy:

[Image: the OSM version of a highway, "sample highway"]
[Image: the KML version of it after being parsed through Marble; the OSM data (irrelevant from a KML point of view) is stored within an ExtendedData block]








Reproducible testing with docker

Wed, 2015/07/01 - 3:22pm

Reproducible testing is hard, and doing it without automated tests is even harder. With Kontact we're unfortunately not yet in a position where we can cover all of the functionality with automated tests.

If manual testing is required, being able to bring the test system into a “clean” state after every test is key to reproducibility.

Fortunately, we now have a lightweight virtualization technology available with Linux containers, and docker makes them fairly trivial to use.

Docker

Docker allows us to create, start and stop containers very easily based on images. Every image contains the current file system state, and each running container is essentially a chroot containing that image content, and a process running in it. Let that process be bash and you have pretty much a fully functional linux system.
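Getting such a shell is a one-liner (illustrative commands; pick whatever image you need):

# Fetch an image and start a throwaway container running bash.
docker pull ubuntu:12.04
docker run -it --rm ubuntu:12.04 bash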

The nice thing about this is that it is possible to run an Ubuntu 12.04 container on a Fedora 22 host (or whatever suits your fancy), and whatever I'm doing in the container is not affected by what happens on the host system. So, for example, upgrading the host system does not affect the container.

Also, starting a container is a matter of a second.

Reproducible builds

There is a large variety of distributions out there, and every distribution has its own unique set of dependency versions, so if a colleague is facing a build issue, it is by no means guaranteed that I can reproduce the same problem on my system.

As an additional annoyance, any system upgrade can break my local build setup, meaning I have to be very careful with upgrading my system if I don’t have the time to rebuild it from scratch.

Moving the build system into a docker container therefore has a variety of advantages:
* Builds are reproducible across different machines
* Build dependencies can be centrally managed
* The build system is no longer affected by changes in the host system
* Building for different distributions is a matter of having a couple of docker containers

For building I chose to use kdesrc-build, so building all the necessary repositories takes the least amount of effort.

Because I’m still editing the code from outside of the docker container (where my editor runs), I’m simply mounting the source code directory into the container. That way I don’t have to work inside the container, but my builds are still isolated.

Further I’m also mounting the install and build directories, meaning my containers don’t have to store anything and can be completely non-persistent (the less customized, the more reproducible), while I keep my builds fast and incremental. This is not about packaging after all.

Reproducible testing

Now we have a set of binaries that we compiled in a docker container using certain dependencies, so all we need to run the binaries is a docker container that has the necessary runtime dependencies installed.

After a bit of hackery to reuse the host's X11 socket, it's possible to run graphical applications inside a properly set up container.
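That hackery essentially boils down to handing the host's X11 socket and DISPLAY into the container, roughly like this (a sketch; the image name is a placeholder, and depending on your setup you may also need to sort out X authentication, e.g. via xhost or a shared Xauthority file):

docker run --rm -ti \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$HOME/install:/install" \
    kontact-testenv /install/bin/kontact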

The binaries are directly mounted from the install directory, and the prepared docker image contains everything from the necessary configurations to a seeded Kontact configuration for what I need to test. That way it is guaranteed that every time I start the container, Kontact starts up in exactly the same state, zero clicks required. Issues discovered that way can very reliably be reproduced across different machines, as the only thing that differs between two setups is the used hardware (which is largely irrelevant for Kontact).

...with a server

Because I'm typically testing Kontact against a Kolab server, I of course also have a docker container running Kolab. I can again seed the image with various settings (I have, for instance, a John Doe account set up, with the account and credentials already configured in the client container), and the server is completely fresh on every start.

Wrapping it all up

Because a bunch of commands is involved, it's worthwhile writing a couple of scripts to make the usage as easy as possible.

I went for a python wrapper which allows me to:
* build and install kdepim: “devenv srcbuild install kdepim”
* get a shell in the kdepim dir: “devenv srcbuild shell kdepim”
* start the test environment: “devenv start set1 john”

When starting the environment the first parameter defines the dataset used by the server, and the second one specifies which client to start, so I can have two Kontact instances with different users for invitation handling testing and such.

Of course you can issue any arbitrary command inside the container, so this can be extended however necessary.

While that would of course have been possible with VMs for a long time, there is a fundamental difference in performance. Executing the build has no noticeable delay compared to simply issuing make, and that includes creating a container from an image, starting the container, and cleaning it up afterwards. Starting the test server + client also takes all of 3 seconds. This kind of efficiency is really what enables us to use this in a lather, rinse, repeat approach.

The development environment

I’m still using the development environment on the host system, so all file-editing and git handling etc. happens as usual so far. I still require the build dependencies on the host system, so clang can compile my files (using YouCompleteMe) and hint if I made a typo, but at least these dependencies are decoupled from what I’m using to build Kontact itself.

I also did a little bit of integration in Vim, so my Make command now actually executes the docker command. This way I get seamless integration and I don’t even notice that I’m no longer building on the host system. Sweet.

While I’m using Vim, there’s no reason why that shouldn’t work with KDevelop (or whatever really..).

I might dockerize my development environment as well (vim + tmux + zsh + git), but more on that in another post.

Overall I'm very happy with the results of investing in a couple of docker containers, and I doubt we could have done the work we did without that setup – at least not without a bunch of dedicated machines just for that. I'm likely to invest more in it.

In any case, sources can be found here:
https://github.com/cmollekopf/docker.git


Web Open Font Format (WOFF) for Web Documents

Wed, 2015/07/01 - 2:55pm

The Web Open Font Format (WOFF for short; here using the Aladin font) is several years old. Still, it took some time to get to the point where WOFF is almost painless to use on the Linux desktop. WOFF is based on OpenType-style fonts and is in some ways similar to the better-known TrueType Font (.ttf) format. TTF fonts are widely known and used on the Windows platform. These feature-rich fonts are used for high-quality text display by the system and in local office and design documents. WOFF aims to close the gap by making those features available on the web. With these fonts it becomes possible to show nice-looking type on paper and in web presentations in almost the same way. To make WOFF a success, several open source projects joined forces, among them Pango and Qt, and contributed to HarfBuzz, an OpenType text shaping engine. Firefox and other web engines can handle WOFF inside SVG web graphics and HTML web documents using HarfBuzz. Inkscape, at least since version 0.91.1, uses HarfBuzz too for text inside SVG web graphics. As Inkscape is able to produce PDFs, designing for both the web and the print world at the same time becomes easier on Linux.

Where to find and get WOFF fonts?
Open Font Library and Google host huge font collections, and there are more out on the web.

How to install WOFF?
To use them inside Inkscape, one needs to install the fonts locally. Just copy the fonts to your personal ~/.fonts/ path and run

fc-cache -f -v

After that procedure the fonts are visible inside a newly started Inkscape.

How to deploy SVG and WOFF on the Web?
Thankfully, using WOFF in SVG documents is similar to HTML documents. However, simply uploading an Inkscape SVG to the web as is will not be enough to show WOFF fonts. While viewing the document locally is fine, Firefox and friends need to find those fonts independently of the locally installed ones. Right now you need to manually edit your Inkscape SVG to point to the online location of your fonts. To do that, open the SVG file in a text editor and place a CSS @font-face reference right after the <svg> element, like:

<svg ...>
  <style type="text/css">
    @font-face {
      font-family: "Aladin";
      src: url("fonts/Aladin-Regular.woff") format("woff");
    }
  </style>

How to print an Inkscape SVG document containing WOFF?
Just convert to PDF from Inkscape's File menu. Inkscape takes care of embedding the needed fonts and creates a portable PDF.
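The same conversion also works non-interactively from the command line with Inkscape 0.91 (option names may differ in other versions):

inkscape --export-pdf=document.pdf document.svg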

In case your preferred software is not yet WOFF-ready, try the woff2otf python script for converting to the old TTF format.

Hope this small post gets some of you on the font fun path.

Qt3D Technology Preview Released with Qt 5.5.0

Wed, 2015/07/01 - 1:02pm

KDAB are pleased to announce that the Qt 5.5.0 release includes a Technology Preview of the Qt3D module. Qt3D provides a high-level framework to allow developers to easily add 3D content to Qt applications using either QML or C++ APIs. The Qt3D module is released with Technology Preview status. This means that Qt3D will continue to see improvements across API design, supported features and performance before its final release. It is provided to start collecting feedback from users and to give a taste of what is coming with Qt3D in the future. Please grab a copy of the Qt 5.5.0 release, give Qt3D a test drive, and report bugs and feature requests.

Qt3D provides a lot of functionality needed for modern 3D rendering backed by the performance of OpenGL across the platforms supported by Qt with the exception of iOS. There is work under way to support Qt3D on iOS and we expect this to be available very shortly. Qt3D allows developers to not only show 3D content easily but also to totally customise the appearance of objects by using the built in materials or by providing custom GLSL shaders. Moreover, Qt3D allows control over how the scene is rendered in a data-driven manner. This allows rapid prototyping of new or custom rendering algorithms. Integration of Qt3D and Qt Quick 2 content is enabled by the Scene3D Qt Quick item. Features currently supported by the Qt3D Technology Preview are:

  • A flexible and extensible Entity Component System with a highly threaded and scalable architecture
  • Loading of custom geometry (using built in OBJ parser or assimp if available)
  • Comprehensive material, effect, render pass system to customise appearance
  • Data-driven renderer configuration – change how your scene is rendered without touching C++
  • Support for many rendering techniques – forward, deferred, early z-fill, shadow mapping etc.
  • Support for all GLSL shader stages (excluding compute at present)
  • Good support for textures and render targets including high-dynamic range
  • Support for uniform buffer objects where available
  • Out of the box support for simple geometric primitives and materials
  • Keyboard input and simple camera mouse control
  • Integration with Qt Quick 2 user interfaces

Beyond rendering, Qt3D also provides a framework for adding additional functionality in the future for areas such as:

  • Physics simulation
  • Skeletal and morph target animation
  • 3D positional audio
  • Stereoscopic rendering
  • Artificial intelligence
  • Advanced input mechanisms

To learn more about the architecture and features of Qt3D, please read KDAB’s series of blogs and the Qt3D documentation.

KDAB and The Qt Company will continue to improve Qt3D over the coming months, adding support for more platforms, input handling and picking, import of additional 3D formats, instanced rendering, more materials and better integration points with the rest of Qt. If you wish to contribute, whether with code, examples, documentation or time, then please contact us on the #qt-3d channel on freenode IRC or via the mailing lists.


Sponsor our digiKam team for Randa meeting

Wed, 2015/07/01 - 8:57am

Dear digiKam Users


Good bye credativ

Tue, 2015/06/30 - 10:37pm

As you might know, 7 years ago I joined a company called credativ. credativ was and is a German IT company specializing in Open Source support around Debian solutions.

And it was a great opportunity for me: having no business or enterprise experience whatsoever, there was much for me to learn. I dealt with various enterprise and public customers, learned and executed project management, supported sales as a technician/pre-sales, and so on. Without credativ I wouldn't be who I am today. So thanks, credativ, for 7 wonderful years!

However, everything must come to an end. Recently I realized that it's time for me to try something different: to see what else I am capable of, to explore new and different opportunities, and to dive into more aspects of the ever-growing open source ecosystem.

And thus I decided to look for a new job. My future is still with Linux – which might not be that surprising for some readers – but more about that in another post.

Today, I’d just like to say thanks to credativ. Good bye, and all the best for the future! =)


Filed under: Business, Debian, Linux, Politics, ProjectManagement, Technology, Thoughts

KStars GSoC 2015 Project

Tue, 2015/06/30 - 6:36pm

This year marks my first year as a Google Summer of Code (GSoC) mentor, and it has been an exciting experience thus far. I have been a KStars developer for the last 12 years and it is amazing what KStars has accomplished in all those years.

Since KStars caters to both casual and experienced astronomy enthusiasts, the KStars 2015 GSoC projects reflect this direction. For the casual, educational fun side, I proposed the inclusion of constellation artwork to be superimposed on the sky map. KStars currently draws constellation lines, names, and boundaries, but constellation art is missing. We required that the data structure support multiple sky cultures (e.g. Western, Chinese, etc.) and that the artwork itself be available under a permissive license. New constellation artwork should be available for download using the KNewStuff framework.

Here is a very early look at the constellation art in KStars. The student still needs to work on scaling and rotation, among other things, but it looks promising! By the end of the project, all 88 Western constellations should be viewable within KStars, in addition to another cultural group.



For the more advanced users who utilize KStars to perform astrophotography, I proposed a simple Ekos Scheduler tool.

Ekos is an advanced astrophotography tool for Linux. It utilizes INDI for device control. With Ekos, the user can use the telescope, CCD, and other equipment to perform astrophotography tasks. However, the user has to be present to configure the options and to command the actions for all the astrophotography-related tasks; hence a scheduler is required to automate observations, constrained by certain limitations such as a required minimum angular separation from the moon, weather conditions, etc. Furthermore, observations should be triggered when certain conditions are met, such as observation time, the object's altitude, etc.

The Ekos scheduler is still at a very early stage, but the workhorse algorithm responsible for dispatching observation jobs is in the works and should be completed soon. Even though the scheduler is currently an Ekos module, it operates entirely through the Ekos DBus interface.



Fortunately for KStars, both projects were accepted in GSoC 2015 and I am glad to be working with two very talented and highly motivated students:
The students have made good progress on the project objectives and have been great when it comes to communication. Being introduced to a new framework and a new paradigm of thinking is a shock to newcomers, who need time to adjust and get the wheels rolling.

I certainly hope the projects stay on track and get completed on time!

Global shortcut handling in a Plasma Wayland session

Tue, 2015/06/30 - 3:02pm

KDE Frameworks contain a framework called KGlobalAccel. This framework allows applications to register key bindings (e.g. Alt+Tab) for actions. When the key binding is triggered, the action gets invoked. Internally this framework uses a DBus interface to communicate with a daemon (kglobalaccel5) to register the key bindings and to get notified when an action is triggered.
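From the application side, registering such a key binding looks roughly like this (a minimal sketch against the public KGlobalAccel API; the action name and the Meta+T binding are made up for illustration):

#include <KGlobalAccel>
#include <QAction>
#include <QKeySequence>
#include <QObject>

void registerGlobalShortcut(QObject *parent)
{
    // The action's objectName (together with the application's component name)
    // identifies the binding in kglobalaccel5.
    auto *action = new QAction(QStringLiteral("Toggle Something"), parent);
    action->setObjectName(QStringLiteral("toggle_something"));

    const QList<QKeySequence> keys{QKeySequence(Qt::META + Qt::Key_T)};
    KGlobalAccel::self()->setDefaultShortcut(action, keys);
    KGlobalAccel::self()->setShortcut(action, keys);

    // The daemon invokes the action whenever Meta+T is pressed,
    // no matter which window currently has keyboard focus.
    QObject::connect(action, &QAction::triggered, [] { /* react to the shortcut */ });
}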

On X11 the daemon uses X11 core functionality to get notified whenever key events it is interested in happen. Basically, it is a global key logger. Such an architecture has the disadvantage that any process could set up the same infrastructure, and it would be possible for multiple processes to grab the same global shortcut. In such a case the behavior is undefined: either multiple actions are triggered at the same time, or only one action is triggered while the others are not informed at all.

In addition the X11 protocol and the X server do not know that kglobalaccel5 is a shortcut daemon. It doesn’t know that for example the shortcut to lock the screen must be forwarded even if there is an open context menu which grabbed the keyboard.

In Wayland the security around input handling got fixed. A global key logger is no longer possible. So our kglobalaccel5 just doesn't get any input events (sad, sad kglobalaccel5 cannot do anything), and even when started on Xwayland with the xcb plugin it's pretty much broken: only if key events are sent to another Xwayland client will it be able to intercept them.

This means global shortcut handling needs support from the compositor. Now it doesn't make much sense to keep the architecture with a separate daemon process, as that would introduce a possible security vulnerability: it would mean there is a way to log keys, and one only needs to become the global shortcuts daemon and there you go. Also, we don't want to introduce a round trip to another application to decide where to deliver the key event.

Therefore the only logical place is to integrate global shortcut handling directly into KWin. Now this is a little bit tricky. First of all, kglobalaccel5 gets DBus-activated when the first application tries to access the DBus interface, and of course KWin itself is using the DBus interface. So KWin starts up and ends up launching a useless kglobalaccel5, which means one of our tasks is to prevent kglobalaccel5 from starting.

Of course we do not want to duplicate all the work which was done in kglobalaccel. We want to make use of as much work as possible. Because of that kglobalaccel5 got a little surgery and the platform specific parts got split out into loadable runtime plugins depending on the QGuiApplication::platformName(). This allows KWin to provide a plugin to perform the “platform specific” parts. But the plugin would still be loaded as part of kglobalaccel5 and not as part of KWin. So another change was to turn the functionality of kglobalaccel into a library and make the binary just a small wrapper around the library. This allows KWin to link the library and start kglobalaccel from within the KWin process and feed in its own plugin.

Starting the linked KGlobalAccel is one of the first things KWin needs to do during startup. It's essential that KWin takes over the DBus interface before any process tries to access it (the good part: it happens so early that the Wayland sockets do not accept connections yet and Xwayland is not even started). We will also try to make kglobalaccel5 a little bit more robust about it, so that it does not launch at all in a Plasma/Wayland session.

Now the reader might think: wait, that still gives me the possibility to install a stealth key logger, I just need to create shortcuts for all keys. Nope, doesn’t work. As key events get filtered out a user would pretty quickly notice that something is broken.

Integrating KGlobalAccel into KWin on Wayland brings an obvious disadvantage: it's linked to KWin. If one wants to use applications using KGlobalAccel on other compositors, some additional work might be needed to use their local global shortcut system – if there is one. For most applications this is no problem, though, as they are part of the Plasma workspace. Also, for other global shortcut systems to work with KWin, they need to be ported to use KGlobalAccel internally when running in a Plasma/Wayland session (that's also a good idea for X11 sessions, as KGlobalAccel can provide additional features like checking whether the key is already taken by another process).

GSoC Midterm Update

Tue, 2015/06/30 - 2:36pm

So, Google Summer of Code midterms are here. I want to thank my mentor, Jasem, for helping me out. I am now able to successfully display one constellation image, that of Andromeda. Currently the image is displayed, but a lot of work still needs to be done on positioning and rotating it on the sky map. Here is a brief summary of the changes I made.

I declared an abstract function in SkyPainter, virtual bool drawConstellationArtImage(ConstellationsArt *obj, bool drawConstellationImage), implemented it as an empty function in SkyGLPainter, and gave it a real implementation in SkyQPainter. In skymapcomposite.cpp, the ConstellationArtComponent class is added via the addComponent() method, and its draw function is called like m_ConstellationArt->draw(skyp); this draw function then calls the drawConstellationArtImage() function described above. Lastly, I edited data/CMakeLists.txt to include skycultures.sqlite and all the constellation images. At present skycultures.sqlite includes only one record, the one for the Andromeda constellation. Here is a screenshot:
[Screenshot: the Andromeda constellation art displayed in KStars]
Here's the plan for the next few days: get the button to toggle constellation art on/off working, and position/rotate/scale the image appropriately for Andromeda. Once that is done, make all constellations appear in the sky. There will be 85 of them instead of 88, because I have the image file for Argo Navis, which was later divided into Carina, Puppis and Vela. More to come soon!

Interview with Livio Fania

Tue, 2015/06/30 - 2:02pm
Could you tell us something about yourself?

I’m Livio Fania. I’m Italian Illustrator living in France.

Do you paint professionally, as a hobby artist, or both?

I paint professionally.

What genre(s) do you work in?

I make illustrations for the press, posters and children's books. My universe is made of geometrical shapes, stylized characters and flashy colors.

Whose work inspires you most — who are your role models as an artist?

I like the work of João Fazenda, Riccardo Guasco and Nick Iluzada among many others.

What makes you choose digital over traditional painting?

I have not made a definite choice. Even if I work mainly digitally, I still have a lot of fun using traditional tools such as colored pencils, brush pens and watercolors. Besides, in 90% of cases I draw by hand, scan the drawing, and only at the end of the process grab my graphics tablet stylus.

I do not think that working digitally means being faster. On the contrary, I can work more quickly by hand, especially in the first sketching phases. What digital art allows is CONTROL over the whole process. If you keep your layer stack well organized, you can always edit your art without losing the original version, which is very useful when your client asks for changes. If you work with traditional tools and you drop your ink in the wrong place, you can't press Ctrl+Z.


How did you find out about Krita?

I discovered Krita through a video conference posted on David Revoy’s blog. Even if I don’t particularly like his universe, I think he is probably the most influential artist using FLOSS tools, and I’m very grateful to him for sharing his knowledge with the community. Previously, I used to work with MyPaint, mainly for its minimalist interface which was perfect for the small laptop I had. Then I discovered that Krita was more versatile and better developed, so I took some time to learn it and now I could not do without it.

What was your first impression?

At first I thought it was not the right tool for me. Actually, most digital artists use Krita for its painting features, like blending modes and textured brushes, which make it possible to obtain realistic light effects. Personally, I think that realism can be very boring, which is why I paint in a stylized way with uniform tints. Besides, I like to restrict my range of possibilities to a limited set of elements: palettes of 5-8 colors and 2-3 brushes. So at the beginning I felt like Krita had too many options for me. But little by little I adapted the GUI to my workflow. Now I really think everybody can find their own way to use Krita, no matter what painting style they use.

What do you love about Krita?

Two elements I really love:
1) The favourite presets docker, which pops up with a right click. It contains everything you need to keep painting, and it is a pleasure to control everything at a glance.
2) The Composition tab, which allows me to completely change the color palette or experiment with new effects without losing the original version of a drawing.

What do you think needs improvement in Krita? Is there anything that really annoys you?

I think that selections are not intuitive at all and could be improved. When dealing with complex selections, it is time-consuming to check the selection mode in the options tab (replace, intersect, subtract) and proceed accordingly, especially considering that by default the mode is the same as when you last used the tool (but in the meantime you have probably forgotten it). I think it would be much better if every time a selection tool is picked it started in "normal" mode by default, and one could then switch to a different mode by pressing Ctrl/Shift.

What sets Krita apart from the other tools that you use?

Krita is by far the most complete digital painting tool developed on Linux. It is widely customizable (interface, workspaces, shortcuts, tabs) and it offers a very powerful brush engine, even compared to proprietary applications. Also, a very important aspect is that the Krita Foundation has a solid organization and develops Krita continuously thanks to donations, Kickstarter campaigns, etcetera. This is particularly important in the open source community, where we sometimes have well-designed projects that disappear because they are not supported properly.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

The musicians in the field.

What techniques and brushes did you use in it?

As I said, I like to have a limited set of presets. In this illustration I mostly used the "pastel_texture_thin" brush, which is part of the default set of brushes in Krita. I love its texture and the fact that it is pressure sensitive. Also, I applied a global bitmap texture on an overlay layer.

Where can people see more of your work?

www.liviofania.com
https://www.facebook.com/livio.fania.art

Anything else you’d like to share?

Yes, I would like to add that I also release all my illustrations under a Creative Commons license, so you can download my portfolio, copy it and use it for non-commercial purposes.

Midterm update

Tue, 2015/06/30 - 1:34pm

As we have reached the midway point of our journey, I think some updates are in order. All I can say is that I had a really good time this last month; coding and watching the project grow is just awesome. But enough talking, let's get to the interesting part.

During this month, with the assistance of my mentor (big thanks here to Jasem), I designed a GUI for the Scheduler and implemented the simplest scenario for an observation schedule (I will explain this in a minute). This was done to test, with the intent of further use, the DBus call functionality. As a possible stand-alone program, the Scheduler must be as independent as possible.

Here you can see how the GUI turned out:

[Screenshot: the Scheduler GUI]

And now, back to the current scheduler logic. I implemented the functionality for the "Now" scenario. Basically, after a user selects an object, they can specify that the observation should start right now by checking the "Now" checkbox. After the scheduler starts, it begins to make DBus calls through the Ekos interface, such as slewing the telescope, loading the sequence file and starting the sequence. The next order of business will be to figure out an algorithm which can determine which object should be prioritised. Adding the "Specific time" functionality to this algorithm will complete the basic scheduler logic that needs to be implemented.
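To give a rough idea of what such a call looks like from C++, here is a minimal sketch using Qt's QDBusInterface. The service, object path, interface and method names below are placeholders for illustration, not the actual Ekos DBus API:

#include <QDBusInterface>
#include <QString>

void runNowScenario()
{
    // Connect to a (hypothetical) Ekos DBus service on the session bus.
    QDBusInterface ekos(QStringLiteral("org.kde.kstars"),       // service name (placeholder)
                        QStringLiteral("/KStars/Ekos"),         // object path (placeholder)
                        QStringLiteral("org.kde.kstars.Ekos")); // interface (placeholder)

    // Each scheduler step is just a DBus method call; method names are illustrative.
    ekos.call(QStringLiteral("slewTelescope"), QStringLiteral("M42"));
    ekos.call(QStringLiteral("loadSequenceQueue"), QStringLiteral("/home/user/m42.esq"));
    ekos.call(QStringLiteral("startSequence"));
}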

This is it for now. I will return with further updates. Stay tuned :D


Animated Keyframe widget

Tue, 2015/06/30 - 12:15pm

After Dan Dennedy implemented the Mlt::Animation API for use, I have made a separate widget for the new animation keyframes.

Now I've tested this with the Volume effect in Kdenlive, with the 'level' property set to the value "0=0.5;100|=1;200~=0.5", where '|' represents a discrete keyframe, '=' represents a linearly interpolated keyframe and '~' represents a smooth spline keyframe.

We now have an animated keyframe widget, where we can see the keyframes, add or remove them, and even edit the values of existing ones.

Editing the keyframe type is in progress as well. After that, the next step is to display these keyframes on the clip in the timeline, and to be able to edit them directly from the track, at least their positions.

back in the habit

Mon, 2015/06/29 - 7:43pm


Last year was a long year. It was a year of many transitions for me and many new adventures, most of which I haven't really shared outside of my close "inner circle" of friends and family. During that time, I let my practice of writing slip by and my blog went very quiet. In general, my online activity dropped to a quiet hum. It was actually quite enjoyable after many years of rather more public interaction.

I've slowly started attending tech events again, though. And I've made up my mind to blog more often. At first I will probably have to make it a purposeful, weekly exercise before it all starts flowing without effort as it once did. There's a good backlog of thoughts, ideas, projects and meanderings that have piled up and it feels like the right time to start sharing them.

Brace yourself. ;)

p.s. Hell, I actually wrote some postcards today. Some of you will know what I'm referring to. :)

Roundcube Next crowdfunding success and community

Mon, 2015/06/29 - 7:36pm


A couple days ago, the Roundcube Next crowdfunding campaign reached our initial funding goal. We even got a piece on Venture Beat, among other places. This was a fantastic result and a nice reward for quite a bit of effort on the entire team's part.

Reaching our funding goal was great, but for me personally the money is secondary to something even more important: community.

You see, Roundcube has been an Internet success for a decade now, but when I sat down to talk with the developers about who their community was and who was participating in it, there wasn't as much to say as one might hope for such a significant project used by that many people.

Unlike the free software projects born in the 90s, many projects these days are not very community focused. They are often much more pragmatic, but also far less idealistic. This is not a bad thing, and I have to say that the focus many of them have on quality (of various sorts) is excellent. There is also a greater tendency to have a company founded around them, and a greater tendency to be hosted on the mostly-proprietary Github system with little in the way of community connection other than pull requests. Unlike the Free software projects I have spent most of my time with, these projects hardly try at all to engage with people outside their core team.

This lack of engagement is troubling. Community is one of the open source[1] methodology's greatest assets. It is what allows mutual interests to create a self-reinforcing cycle of creation and support. Without it, you might get a lot of software (though you just as well might not), but you are quite unlikely to get the buy-in, participation, and thereby the amplifiers and sustainability of the open source of the pre-Github era.

So when we designed the Roundcube Next campaign, we positioned no less than 4 of the perks to be participatory. There are two perks aimed at individual backers (at $75 and $100) which get those people access to what we're calling the Backstage Pass forums. These forums will be directed by the Roundcube core team, and will focus on interaction with the end users and people who host their own instance of Roundcube. Then we have two aimed at larger companies (at $5,000 and $10,000) who use Roundcube as part of their services. Those perks gain them access to Roundcube's new Advisory Committee.

So while these backers are helping us make Roundcube Next a reality, they are also paving a way to participation for themselves. The feedback from them has been extremely good so far, and we will build on that to create the community Roundcube deserves and needs. One that can feed Roundcube with all the forms of support a high profile Free software product requires.

So this crowdfunding campaign is really just the beginning. After this success, we'll surely be doing more fund raising drives in future, and we'd still love to hit our first stretch goal of $120,000 ... but even more vitally this campaign is allowing us to draw closer to our users and deployers, and them with us until, one hopes, there is only an "us": the people who make Roundcube happen together.

That we'll also be delivering the most kick-ass-ever version of Roundcube is pretty damn exciting, too. ;)

p.s. You all have 3 more days to get in on the fun!

[1] I differentiate between "Free software" as a philosophy and "open source" as a methodology; they are not mutually exclusive, but they are different beasts in almost every way, most notably in how one is an ideology and the other is a practice.

Four years later

Mon, 2015/06/29 - 1:40pm

At the beginning of June 2011 I made my first blog post about KWin supporting Wayland clients, featuring a screenshot of the Desktop Grid effect with a Wayland window shown on each desktop.

[Screenshot from 2011: Desktop Grid effect with Wayland windows]

Now it’s almost four years later and I show once again such a screenshot:

Desktop Grid effect in a Plasma on Wayland session

A few things have changed; for example, the screenshot shows a KWin running on DRM. And there's one huge difference: the KWin instance is using Wayland internally and no longer X11 (it still uses X11 for X11 applications, of course). Also, KWin is able to properly integrate the Wayland windows. They are not rendered on top of the scene but take up normal spots like all other windows, and otherwise they also behave much more like "normal" windows. The plasmashell in this screenshot is also a Wayland client, including all the panels and windows it creates (not visible here, as the Desktop Grid effect hides panels).

A good question to ask is how much of the initial code written in 2011 ended up in today's KWin, and the honest answer is: none. The branch never got merged and pretty quickly bitrotted. I got the rendering done pretty quickly, but at that time Wayland hadn't had a stable release yet, so we simply couldn't merge the code into master, as that would have been rather inconvenient for development. With two moving targets (Wayland and KWin) the code diverged too quickly, broke too often and made development difficult. Thus, when I heard a stable release of Wayland was planned, it didn't make much sense to continue the work on the branch.

But there was of course one thing which the branch provided: experience. The branch showed that our compositor is up to the task of integrating non-X11 windows, but the rest of KWin wasn't. This triggered a refactoring to make KWin more usable with multiple platforms. There were side effects from that development which already went into some releases: e.g. the reworked scripting helped to identify the interface of a managed Client. Very recently this got split out into a new abstract base class which is now inherited by the good old X11 Client and the new Wayland ShellClient. A different area is screen edge handling, which got reworked not only to allow multiple backends but also to support more fine-grained per-screen edges.

The work to get Wayland integrated into KWin took much longer than expected; part of it was the problem described above of missing stable Wayland releases. Of course there was also a Qt5 port which I didn't expect. The Qt5 port required a port to xcb – again something I didn't expect. But most of all I underestimated how much work would be needed to get the window manager ported to Wayland. Still, it will be quite some time till all features provided by KWin are available on Wayland. Too much code is too X11-specific to be directly reusable.

And to get a complete session up and running there is much more to be ported than just KWin. We have Plasma, which needs to access window management information provided by KWin (a taskbar should be able to show tasks, after all). Some parts need to be moved into KWin for better security: the screen locker and global shortcut handling (almost finished) are obvious candidates. We have functionality provided in our libraries which needs adjustments: examples are KWindowSystem and KIdleTime. Our power management code needs to learn how to talk with KWin to turn the screen off. KScreen needs to learn to interact with KWin to configure screens. And much, much more.

Also, many of our applications need to be adjusted to work properly on Wayland. Obviously, if one uses low-level X11 calls those won't work any more. But even when you use higher-level abstractions your feature might not work any more; there are so many things which are X11-specific that you might not even know of them. I plan to host a session on "Applications on Wayland" at this year's Akademy. Make sure to go there. I'll show all the "don'ts" (and most are don'ts on X11, too ;-) ) and show how one can easily set up a Wayland session to test an application.

Now I know that this blog post sounds a little bit like one shouldn't expect Wayland on Plasma anytime soon. That's not the case. I'm quite optimistic that I will shift my systems to Wayland as the primary work system before Akademy to properly dogfood it. And I'm sure that many Plasma developers will soon follow. After all, I'm editing this blog post in a full Plasma on Wayland session (though my Firefox is using XWayland).

And of course there are many, many tasks to work on. Most are really small and easy to get into. Every contribution matters and helps us to embrace Wayland faster. Setting up a KWin development environment to test the Wayland progress is easy: all you need is to build kwayland and kwin through kdesrc-build (all other dependencies can be provided by your distribution). Once it's installed, just run:

kwin_wayland --windowed --xwayland

and a nice nested Wayland server will be started in your X session.

And if you want to go all the way down to running KWin directly on DRM, like in the screenshot above:

kwin_wayland --drm --libinput --xwayland

In all cases you can specify applications to be launched once KWin has fully started. Those can be either X11 or Wayland applications, e.g.:

kwin_wayland --drm --libinput --xwayland "konsole --platform wayland"

will start a konsole as a Wayland client once KWin has started.

A recent addition as of today is that you no longer have to specify options like --windowed or --drm. KWin will automatically pick the correct backend depending on the environment variables which are exported. But be careful: if you start kwin_wayland on an X session without DISPLAY exported, it will start the drm backend and that might break your X session!
