Planet KDE



Tue, 2020/11/24 - 11:00pm

With Wake-on-LAN (WoL) it can be slightly easier to manage machines in-house. You can fire up the workstation and start the day’s compile jobs (to catch up with overnight work by the KDE community, say) while drinking an espresso downstairs and doomscrolling.

Wake-on-LAN relies on three things:

  • When a workstation is off, it is “off” in name only. Many machines still draw 5 W or more when “off”, which is enough to drive an entire Pine64 board. That power allows, among other things, the network chip to keep monitoring the LAN.
  • The LAN can carry specially-crafted “wake” frames, which fit in the Ethernet protocol.
  • The network chip and workstation BIOS cooperate to react to those special frames.
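The special frame (the so-called magic packet) has a simple layout: six bytes of 0xFF followed by the target’s MAC-address repeated sixteen times, 102 bytes in all. wake(8) assembles and broadcasts it for you; purely as an illustration, the payload can be built by hand in shell (using the example MAC-address that appears later in these notes):

```shell
# Sketch only: build the 102-byte Wake-on-LAN payload as a hex string.
# Six bytes of 0xff, then the MAC-address repeated sixteen times.
mac="22ff556600cc"
payload="ffffffffffff"
i=0
while [ $i -lt 16 ]; do
    payload="${payload}${mac}"
    i=$((i + 1))
done
# 102 bytes -> 204 hex digits
echo "${#payload}"
```

Sending it is the part that needs root: the payload goes out as a broadcast Ethernet frame (or, in some implementations, as a UDP broadcast), which is exactly what wake(8) does on your behalf.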

The Wikipedia article on WoL is extensive. From a practical in-house perspective, FreeBSD gives you the tools to use WoL if you have a FreeBSD machine next to (or built-in to) the espresso machine. My notes here assume a simple one-network in-home LAN.

Wake up, little server

If all the administrative bits are in place, then the simple way to wake up a machine is wake <hostname>. This requires root, since it sends specially-crafted (broadcast) Ethernet packets, which isn’t something that regular users can do.

If only some of the administrative bits are in place, then wake <mac-address> will do the trick.

The administrative bits

To wake up a machine, you need to know the MAC-address (of a network chip in that machine, that supports WoL).

  • If the machine is on, and you can log in on it: ifconfig will tell you. Look for the ether line in the output, and check that the interface options include WOL_MAGIC.

    Linuxes which don’t ship ifconfig can use ip link list in the same way.

  • If the machine is on and on the network, arp -a elsewhere in the network will tell you what is on the network and what MAC-addresses there are. Hopefully you can pick the right machine out of an entry like this one (hostname stripped out here):
    ( at 22:ff:55:66:00:cc on re0 expires in 72 seconds [ethernet]
  • If the machine is on, or has been on, and does DHCP, the /var/db/dhcpd.leases file on the DHCP server contains entries for the DHCP leases, and you can look for the MAC-address and identifying host information there (some lines stripped out in the middle here).
    lease { hardware ethernet 22:ff:55:66:00:cc; client-hostname "wakeme"; }
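Those lease entries lend themselves to scripting. The helper below is purely hypothetical (not a standard tool), but it shows how a MAC-address can be pulled out of dhcpd.leases by hostname with grep and sed:

```shell
# Hypothetical helper: read dhcpd.leases-style input on stdin and print the
# MAC-address recorded for the given client-hostname.
mac_for_host() {
    grep "client-hostname \"$1\"" \
        | sed 's/.*hardware ethernet \([0-9a-fA-F:]*\);.*/\1/'
}

# With the lease entry quoted above:
echo 'lease { hardware ethernet 22:ff:55:66:00:cc; client-hostname "wakeme"; }' \
    | mac_for_host wakeme
# prints 22:ff:55:66:00:cc
```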

Armed with the knowledge of the MAC-address of the machine to wake up, wake can do its thing. But if you are not into remembering MAC-addresses, the file /etc/ethers provides a persistent database of MAC-to-hostname mappings; adding a line to that file puts all the administrative bits in place:

22:ff:55:66:00:cc wakeme

The end-game is, clearly, to have Mycroft running on something near the espresso machine, so you can shout “hey, Mycroft, wake the workstation” and we can wake up in parallel.

Pages from the manual: wake(8), ethers(5)

Kdenlive 20.08.3 is out

Mon, 2020/11/23 - 11:41am

The third and last minor release of the 20.08 series is out with the usual round of fixes and improvements. Focus is now set on finishing the same-track transitions and the subtitler features for the next major release, due in December. Please help test the Beta release and report any issues.


  • Fix on monitor displayed fps with high fps values. Commit.
  • Ensure timeline ruler is correctly updated on profile switch. Commit.
  • When switching project profile and there is only 1 clip in timeline, update the timeline clip duration accordingly to profile change. Commit.
  • Project archiving: check after each file if archiving works, add option to use zip instead of tar.gz. Commit. See bug #421565
  • Fix opening project files with missing version number. Commit. See bug #420494
  • Fix duplicated audio from previous commit. Commit.
  • Fix playlist clips have no audio regression. Commit.
  • Fix keyframeable effect params left enabled when selecting a clip, leading to possible crash. Commit.
  • Don’t allow removing the only keyframe in an effect (was possible from the on monitor toolbar and crashing). Commit.
  • Fix crash inserting zone over grouped clips in same track. Commit.
  • Fix previous commit. Commit.
  • Check ffmpeg setting points to a file, not just isn’t empty. Commit.
  • Qtcrop effect: make radius animated. Commit.
  • Render widget: avoid misuse of parallel processing. Commit.
  • Fix resizing clip loses focus if mouse cursor did not get outside of clip boundaries. Commit.
  • Fix rounding error sometimes hiding last keyframe in effectstack. Commit.

Kubuntu is not Free, it is Free

Sun, 2020/11/22 - 4:32pm

Human perception has never ceased to amaze me, and in the context of this article, it is the perception of value, and the value of contribution that I want us to think about.

Photo by Andrea Piacquadio from Pexels

It is yours in title, deed and asset

A common misperception with Open Source software is the notion of free. Many people associate free in its simplest form, that of no monetary cost, and unfortunately this ultimately leads to a second conclusion: ‘cheap’, and low quality. Proprietary commercial vendors and their corporate marketing departments know this, and use that knowledge to focus their audience on ‘perceived value’. In some ways, being free of cost is a significant disadvantage in the open source software world, because it means there are no funds available to pay for a marketing machine to generate ‘perceived value’.

Think, for a moment, how much of a disadvantage that is when trying to develop a customer/user base.

Kubuntu is completely and wholly contribution driven. It is forged from passion and enthusiasm, built with joy and above all love. Throughout our community, users use it because they love it; supporters help users, and each other; maintainers fix issues and package improvements; developers extend functionality and add features; bloggers write articles and documentation; YouTubers make videos and tutorials. All these people do this because they love what they’re doing, and it brings them joy.

Photo by Tima Miroshnichenko from Pexels

Today Linux is cloud native, ubiquitous, and utterly dominant in the internet space. It is both general purpose and highly specialised; robust and extensive, yet focused and detailed.

Kubuntu is a general purpose operating system designed and developed by our community to be practical and intuitive for a wide audience. It is simple and non-intrusive to install, and every day it continues to grow a larger user base of people who download it, install it and, for some, love it! Furthermore, some of those users will find their way into our community; they will see the contributions given so freely by others and be inspired to contribute themselves.

Image from Wikipedia

Anyone who has installed Windows 10 recently will attest to the extent of personal information that Microsoft asks users of its operating system to ‘contribute’. This enables the Microsoft marketing teams to further refine their messaging to resonate with your personal ‘perceived value‘, and indeed to enable that across the Microsoft portfolio of ‘partners‘!
The story is similar with Apple: the recently announced M1 silicon seeks not only to lock Apple users into the Apple software ecosystem and its ‘partners‘, but also to lock down and tie the software to the hardware.

With this background understanding, we are able to return full circle to the subject of this article, ‘Kubuntu is not Free, it is Free‘, and furthermore, Kubuntu users are free.
Free from intrusion, profiling, targeting, and marketing; Kubuntu users are free to share, modify and improve their beloved software however they choose.


Let us revisit that last sentence and add some clarity. Kubuntu users are free to share and improve ‘their’ beloved software however they choose.
The critical word here is ‘their’, and that is because Kubuntu is YOUR software; not Microsoft’s, not Apple’s, and not even Canonical’s or Ubuntu’s. It is yours in title, deed and asset, and that is the value that the GNU GPL license bequeaths to you.

This ownership also empowers you, and indeed puts you as an individual in a greater place of power than the marketeers from Microsoft or Apple. You can share, distribute, promote, highlight or low-light Kubuntu wherever, and whenever, you like. Blog about it, make YouTube videos about it, share it, change it, give it away and even sell it.

How about that for perceived value?

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of, and open contributor to, Kubuntu for over 10 years, and a KDE user and contributor for 20.

KStars v3.5.0 is Released

Sat, 2020/11/21 - 3:03pm

Glad to announce the release of KStars v3.5.0 for Windows, macOS, and Linux. This release marks a significant milestone for KStars with the integration of StellarSolver, the cross-platform sextractor and internal astrometric solver.

Check out the Change log for more details.

StellarSolver

Robert Lancaster spent a significant portion of 2020 developing and refining this amazing library. Here is an excerpt from the StellarSolver GitHub page on the motivations behind its development:

Astrometry.net is a fantastic astrometric plate solver, but it is command line only and has many dependencies such as python, netpbm, libjpeg, cfitsio, and many other things. It is fairly easy to install on Linux and works incredibly well in that environment. With the correct recipes in homebrew, craft, macports, or other package managers, it can be installed and run from the command line on Mac OS X as well. On Windows, however, it must be run within a compatibility layer such as the Windows Subsystem for Linux, Cygwin, or Ansvr. None of these things will stop the program from running, but it does make it more difficult to set up and use, and all of the files, dependencies and configuration files must be set up properly in order to get it to work.

StellarSolver major features:
  • An astrometric plate solver for Mac, Linux, and Windows, built on Astrometry.net and SEP (sextractor)
  • Meant to be an internal library for use in a program like KStars for internal plate solving on all supported operating systems
  • Python is not required for the library.
  • Netpbm is not required for the library.
  • Internal Library, so calls to external programs are not required.
  • No Astrometry.cfg file is needed; the settings are internal to the program.
  • Directly loads the image data into SEP and then passes the generated xy-list internally to Astrometry.net, so there is no need to save any files.
  • No temporary files need to be created for solving, and no WCS file needs to be created to read the solved information. (Astrometry.net does monitor for the creation of two files indicating that a field is solved or a cancel was made, so these are created for now.)
  • The Index Files are still required for solving images, but the program or the user can specify the folder locations rather than putting them in the config file.

It took significant re-tooling inside KStars to integrate StellarSolver, but we are confident in the benefits it will bring to all our users. For the first time since Ekos was developed back in 2012, KStars does not require any external applications to perform its astrophotography related tasks. The solver is completely built in and supported on all platforms equally. Windows users will be glad to learn they do not have to download and install extra programs to get the astrometry functionality to work.

But StellarSolver is not limited to astrometry; its major function in Ekos is actually star detection. It is therefore now used in the Capture, Guide, and Focus modules as well. Future versions should pave the way to performing photometric calculations right within the FITS Viewer.

FITS Viewer

While the name remains FITS Viewer, we also added support for loading JPG/PNG and RAW files from DSLR cameras. Granted, not all the features of a FITS file are going to be available, but this feature has long been requested by users and now we finally have it baked in!

Analyze Module

Hy Murveit introduced a new Ekos module to help analyze the imaging session in detail. The Analyze Module records and displays what happened in an imaging session. That is, it does not control any of your imaging, but rather reviews what occurred.

Sessions are stored in an analyze folder, a sister folder to the main logging folder. The .analyze files written there can be loaded into the Analyze tab to be viewed. Analyze can also display data from the current imaging session.

Testing Framework

This release culminates three months of continued incremental improvements to the KStars testing framework, spearheaded by Eric Dejouhanet. These tests cover both unit tests and user-interface tests. While we are still far from covering sufficient tests for all KStars uses, it paves the way to a Test Driven development approach, in which tests are created to illustrate an issue and fixes are then developed against them.

MR #114 by Wolfgang Reissenberger perfectly highlights Test Driven development and we hope to follow this model in upcoming releases.

Qt Creator 4.14 Beta2 released

Fri, 2020/11/20 - 12:02pm

We are happy to announce the release of Qt Creator 4.14 Beta2!

Qt Creator 4.13.3 released

Fri, 2020/11/20 - 12:02pm

We are happy to announce the release of Qt Creator 4.13.3!

Making an AppImage in Nitrux

Thu, 2020/11/19 - 10:42am

AppImages are a focus of our Linux distribution. We already include several AppImage-related tools that improve their user experience, from desktop integration to sandboxing and management. We also include one particularly important AppImage by default: Wine (see Using Wine in Nitrux).

In today’s tutorial, we will make an AppImage file using a tool called appimage-builder. appimage-builder makes it very easy to create AppImages of your favorite applications. It works by using files called recipes; these are simple text files in the YAML format that contain the information from which appimage-builder will make our AppImage.

One of the main features of appimage-builder is building an AppImage from existing, pre-compiled traditional packages like Debian packages, RPM packages, etc. Currently, only Debian packages are supported; however, more package managers will be supported in the future, such as Pacman.

appimage-builder is developed by Alexis Lopez Zubieta, a former Nitrux developer and an AppImage collaborator.

Getting started

To make our AppImage for this tutorial, we will only need Docker. We won’t be compiling anything at all (of course, if you want to compile the program instead, you can go ahead and do that. I’d still recommend that you use a container, though). We will be using Docker to keep our root filesystem free of build and package dependencies (libraries, development libraries, headers, and whatnot). This will ensure that our builds are reproducible every time (see Continuous Integration) without breaking or bloating our installation.

Since Docker is already installed in Nitrux, there’s no need to use the package manager. Please note that the Docker daemon is not started by default, but it’s easy to start: open Station and run the following commands.

sudo cgroupfs-mount
sudo dockerd &

For this tutorial, we will make an AppImage of mpv, the default multimedia player in Nitrux. For reference, mpv is a free (as in freedom) media player for the command line. It supports a wide variety of media file formats, audio and video codecs, and subtitle types.

Now that the daemon is started, we will be using the Docker container provided by the appimage-builder documentation. To create the container, run the following commands.

sudo docker pull appimagecrafters/appimage-builder:latest
sudo docker run --name appimage -it appimagecrafters/appimage-builder /bin/bash

Now that we’re in the container (which uses Ubuntu 18.04.5), we will create our working directory to put the recipe and the contents of the AppDir. For the sake of simplicity, I will create my structure as /builder/mpv/.

mkdir -p /builder/mpv/
cd /builder/mpv/
  • Please note that my directory structure is not mandatory; you can use your own conventions.

And we also need a text editor to use in the container; I will use nano.

Making a recipe

For appimage-builder to create our AppImage, we will need to provide a recipe. Using your preferred text editor, we can use one of the documentation examples as a template; here’s my version.

  • Although the recipe is self-explanatory, please refer to the appimage-builder documentation to know more about what each section does.
version: 1
script: []
AppDir:
  path: ./AppDir
  app_info:
    id: mpv
    name: mpv
    icon: mpv
    version: 0.32.0-1ubuntu1
    exec: usr/bin/mpv
  apt:
    arch: amd64
    sources:
      - sourceline: 'deb [arch=amd64] focal main restricted universe multiverse'
      - sourceline: 'deb [arch=amd64] focal-security main restricted universe multiverse'
      - sourceline: 'deb [arch=amd64] focal-updates main restricted universe multiverse'
        key_url: ''
    include:
      - mpv
      - libdc1394-22
      - libwbclient0
    exclude: [adwaita-icon-theme, appmenu-gtk-module-common, appmenu-gtk2-module,
      appmenu-gtk3-module, baloo-kf5, bolt, breeze, breeze-cursor-theme, breeze-icon-theme,
      bubblewrap, cpp, cpp-9, dconf-cli, dconf-gsettings-backend, dconf-service, dirmngr,
      dmidecode, drkonqi, e2fsprogs, fdisk, fuse, gamin, gcc-10-base, gcc-9-base, gdb, gdisk,
      gir1.2-atk-1.0, gir1.2-freedesktop, gir1.2-gdkpixbuf-2.0, gir1.2-glib-2.0, gir1.2-gtk-3.0,
      gir1.2-ibus-1.0, gir1.2-pango-1.0, glib-networking, glib-networking-common,
      glib-networking-services, gnupg, gnupg-l10n, gnupg-utils, gpg, gpg-agent, gpg-wks-client,
      gpg-wks-server, gpgconf, gpgsm, gpgv, gsettings-desktop-schemas, gtk-update-icon-cache,
      hicolor-icon-theme, humanity-icon-theme, hwdata, ibus, ibus-data, ibus-table,
      ibus-table-emoji, liba52-0.7.4, libaa1, libacl1, libaom0, libapparmor1,
      libappmenu-gtk2-parser0, libappmenu-gtk3-parser0, libappstream4, libappstreamqt2,
      libarchive13, libaribb24-0, libasn1-8-heimdal, libasound2, libasound2-data,
      libasound2-plugins, libass9, libassuan0, libasyncns0, libatasmart4, libatk-bridge2.0-0,
      libatk1.0-0, libatk1.0-data, libatspi2.0-0, libaudit-common, libaudit1, libavahi-client3,
      libavahi-common-data, libavahi-common3, libavc1394-0, libavcodec58, libavformat58,
      libavutil56, libbabeltrace1, libbasicusageenvironment1, libblkid1, libblockdev-fs2,
      libblockdev-loop2, libblockdev-part-err2, libblockdev-part2, libblockdev-swap2,
      libblockdev-utils2, libblockdev2, libbluetooth3, libbluray2, libbrotli1, libbsd0,
      libbz2-1.0, libcaca0, libcairo-gobject2, libcairo2, libcanberra-pulse, libcanberra0,
      libcap-ng0, libcap2, libcddb2, libchromaprint1, libcodec2-0.9, libcolorcorrect5,
      libcolord2, libcom-err2, libcrypt1, libcups2, libcurl3-gnutls, libdatrie1, libdb5.3,
      libdbus-1-3, libdbusmenu-qt5-2, libdca0, libdconf1, libdevmapper1.02.1, libdmtx0b,
      libdouble-conversion3, libdvbpsi10, libdvdnav4, libdvdread7, libdw1, libebml4v5,
      libeditorconfig0, libelf1, libepoxy0, libevdev2, libevent-2.1-7, libexpat1, libext2fs2,
      libfaad2, libfdisk1, libffi7, libflac8, libfontenc1, libfribidi0, libfuse2, libgamin0,
      libgcrypt20, libgdbm-compat4, libgdbm6, libgdk-pixbuf2.0-0, libgdk-pixbuf2.0-common,
      libgirepository-1.0-1, libgit2-28, libglib2.0-0, libglib2.0-bin, libglib2.0-data, libgme0,
      libgmp10, libgnutls30, libgomp1, libgpg-error0, libgpgme11, libgpgmepp6, libgpm2, libgps26,
      libgraphite2-3, libgroupsock8, libgsm1, libgssapi-krb5-2, libgssapi3-heimdal,
      libgstreamer-plugins-base1.0-0, libgstreamer1.0-0, libgtk-3-0, libgtk-3-common,
      libgudev-1.0-0, libharfbuzz0b, libhcrypto4-heimdal, libheimbase1-heimdal,
      libheimntlm0-heimdal, libhogweed5, libhttp-parser2.9, libhunspell-1.7-0,
      libhx509-5-heimdal, libibus-1.0-5, libice6, libicu66, libidn11, libidn2-0, libinput10,
      libisl22, libixml10, libjack-jackd2-0, libjansson4, libjbig0, libjpeg-turbo8, libjpeg8,
      libjs-underscore, libjson-glib-1.0-0, libjson-glib-1.0-common, libk5crypto3, libkmod2,
      libkrb5-26-heimdal, libkrb5-3, libkrb5support0, libksba8, libkscreenlocker5, liblcms2-2,
      libldap-2.4-2, libldap-common, liblirc-client0, liblivemedia77, liblmdb0, libltdl7,
      liblua5.2-0, liblz4-1, liblzma5, libmad0, libmatroska6v5, libmbedcrypto3, libmbedtls12,
      libmbedx509-0, libminizip1, libmm-glib0, libmount1, libmp3lame0, libmpc3, libmpcdec6,
      libmpdec2, libmpeg2-4, libmpfr6, libmpg123-0, libmtdev1, libmtp-common, libmtp9,
      libmysofa1, libncurses6, libncursesw6, libndp0, libnettle7, libnewt0.52, libnfs13,
      libnghttp2-14, libnl-3-200, libnl-genl-3-200, libnl-route-3-200, libnm0,
      libnotificationmanager1, libnpth0, libnspr4, libnss3, libnuma1, libogg0, libopenconnect5,
      libopenjp2-7, libopenmpt-modplug1, libopenmpt0, libopus0, liborc-0.4-0, libp11-kit0,
      libpackagekitqt5-1, libpam-modules, libpam-systemd, libpam0g, libpango-1.0-0,
      libpangocairo-1.0-0, libpangoft2-1.0-0, libpangoxft-1.0-0, libparted-fs-resize0,
      libparted2, libpci3, libpcre2-16-0, libpcre2-8-0, libpcre3, libpcsclite1, libperl5.30,
      libphonon4qt5-4, libphonon4qt5-data, libpipewire-0.3-0, libpipewire-0.3-modules,
      libpixman-1-0, libplacebo7, libplasma-geolocation-interface5, libpng16-16, libpopt0,
      libpostproc55, libprocesscore9, libprocessui9, libprotobuf-lite17, libproxy1v5, libpsl5,
      libraw1394-11, libre2-5, libreadline8, libresid-builder0c2a, librest-0.7-0,
      libroken18-heimdal, librsvg2-2, librsvg2-common, librtmp1, libsamplerate0, libsasl2-2,
      libsasl2-modules-db, libscim8v5, libsdl-image1.2, libsdl1.2debian, libsecret-1-0,
      libsecret-common, libselinux1, libshine3, libshout3, libsidplay2, libslang2, libsm6,
      libsmartcols1, libsnapd-glib1, libsnappy1v5, libsndfile1, libsndio7.0, libsoup-gnome2.4-1,
      libsoup2.4-1, libsoxr0, libspa-0.2-modules, libspatialaudio0, libspeex1, libspeexdsp1,
      libsqlite3-0, libsrt1, libss2, libssh-4, libssh-gcrypt-4, libssh2-1, libssl1.1,
      libstemmer0d, libstoken1, libswresample3, libswscale5, libsystemd0, libtag1v5,
      libtag1v5-vanilla, libtaskmanager6, libtasn1-6, libtdb1, libteamdctl0, libthai-data,
      libthai0, libtheora0, libtiff5, libtinfo6, libtomcrypt1, libtommath1, libtss2-esys0,
      libtwolame0, libudev1, libudisks2-0, libunistring2, libupnp13, libusageenvironment3,
      libusb-1.0-0, libuuid1, libva-drm2, libva-wayland2, libva-x11-2, libva2, libvdpau1,
      libvlc5, libvlccore9, libvorbis0a, libvorbisenc2, libvorbisfile3, libvpx6, libvulkan1,
      libwacom-common, libwacom2, libwavpack1, libweather-ion7, libwebp6, libwebpdemux2,
      libwebpmux3, libwebrtc-audio-processing1, libwind0-heimdal, libwrap0, libx264-155,
      libx265-179, libxau6, libxaw7, libxdamage1, libxext6, libxfixes3, libxft2, libxi6,
      libxinerama1, libxkbcommon-x11-0, libxkbcommon0, libxkbfile1, libxml2, libxmu6, libxmuu1,
      libxpm4, libxslt1.1, libxss1, libxt6, libxtst6, libxv1, libxvidcore4, libxxf86dga1,
      libxxf86vm1, libyaml-0-2, libzstd1, libzvbi-common, libzvbi0, logsave, milou,
      mime-support, mobile-broadband-provider-info, ocl-icd-libopencl1, oxygen-sounds, parted,
      pci.ids, pciutils, perl, perl-base, perl-modules-5.30, pinentry-curses, pipewire,
      pipewire-bin, python3, python3-gi, python3-ibus-1.0, python3-minimal, python3.8,
      python3.8-minimal, qtchooser, qtvirtualkeyboard-plugin, readline-common, sed,
      sound-theme-freedesktop, sudo, tpm-udev, tzdata, ubuntu-mono, udev, udisks2, usb.ids,
      usbutils, vlc-data, vlc-plugin-base, vlc-plugin-video-output, wpasupplicant, x11-utils,
      x11-xserver-utils, xdg-desktop-portal, xdg-desktop-portal-kde, xkb-data, zlib1g]
  runtime:
    env:
      PATH: $APPDIR/usr/bin:$PATH
      APPDIR_LIBRARY_PATH: $APPDIR/usr/lib/x86_64-linux-gnu/:$APPDIR/usr/lib/x86_64-linux-gnu/pulseaudio:$APPDIR/usr/lib/x86_64-linux-gnu/samba/
  files:
    exclude:
      - usr/lib/x86_64-linux-gnu/gconv
      - usr/share/man
      - usr/share/doc/*/README.*
      - usr/share/doc/*/changelog.*
      - usr/share/doc/*/NEWS.*
      - usr/share/doc/*/TODO.*
AppImage:
  update-information: None
  sign-key: None
  arch: x86_64

If you notice, there are quite a lot of packages in the exclude section; that’s because I prefer to keep the AppImages that I generate relatively small. While this has the obvious benefit of a smaller file, it also means that the probability that the file works across multiple Linux distributions (from different families) is reduced, i.e., the file is less portable. You have to keep that in mind.

To put it another way, everything you don’t put inside the AppImage is something you’re expecting to find in the target system, and vice-versa, i.e., desktop Linux distributions are likely to include the adwaita-icon-theme package, so that’s something that we can exclude from the final binary.
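A rough way to inform that decision is to look at what a binary actually links against; everything on the list must come either from inside the AppImage or from the target system. A quick sketch (the output varies per system, and /bin/ls merely stands in for whichever binary you are packaging):

```shell
# List the shared libraries a binary is linked against. Each entry must be
# shipped inside the AppImage or be reliably present on the host system.
ldd /bin/ls | awk '{print $1}'
```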

  • In my recipe, I’m using Ubuntu repositories to tell appimage-builder from where to pull the packages. However, this is not mandatory; you can use any APT/dpkg repository (Debian, Devuan, Ubuntu, Trisquel, etc.), including Launchpad PPAs.

Now that we have a recipe for appimage-builder, we can copy it to the container using Station. Create an empty file in our working directory using your preferred editor (I’m using nano), paste the recipe, and save it.

Generating the AppImage

To generate the AppImage using appimage-builder, all we need to do is run a single command.

appimage-builder --recipe recipe.yml --skip-tests
  • In the command above, my recipe name is recipe.yml; like before, you can name your recipe any way you want. The --skip-tests flag tells appimage-builder not to perform the built-in tests, as we will be testing the AppImage directly on our installation.

By this point, the AppImage will begin to be generated from the recipe.

  • To verify the AppImage file name, you can use the command ls -l.
Testing the AppImage

To test that the AppImage works, we will need to copy the AppImage file from the container to our host. First, we need to know the container ID of the container that we used. To find the container ID, run the following command.

sudo docker ps

The output of the command will look similar to this.

CONTAINER ID   IMAGE                               COMMAND       CREATED
57293d54a61a   appimagecrafters/appimage-builder   "/bin/bash"   39 minutes ago

Now that we know our container ID, our file name, and the path, we can copy the AppImage (to our home directory, for example).

sudo docker cp 57293d54a61a:/builder/mpv/mpv-0.32.0-1ubuntu1-x86_64.AppImage ~/
  • Please note that the file will be copied with root ownership (because while inside the container, you’re root, and we’re using Docker as root); you may want to change that before executing it.

Now we just run the AppImage.
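One detail worth noting: AppImages must have the executable bit set to run, so if the copy lost it, restore it first. A small sketch (guarded so it is a no-op if the file is not where the docker cp step above put it):

```shell
app="$HOME/mpv-0.32.0-1ubuntu1-x86_64.AppImage"
# Make sure the AppImage is executable before launching it.
if [ -f "$app" ]; then
    chmod +x "$app"
fi
# Then simply launch it: "$app"
```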

And it works!

Lastly, if you’d like, you can remove the container. To stop and remove the container, run the following commands.

sudo docker container stop 57293d54a61a
sudo docker container rm 57293d54a61a

Troubleshooting

If your AppImage doesn’t work, it is usually because of one of two things: either your AppImage is missing a library, or the AppRun runtime cannot find the library path.

  • In the first scenario, you will want to continue fine-tuning your recipe by adding packages to the include section (and removing them from the exclude section if you added them there).
  • In the second scenario, you will want to add the library path to the runtime section.

If your recipe failed to generate an AppImage, it might be because of missing GPG keys, connectivity problems with the repositories used, or missing packages (i.e., a package that doesn’t exist).

That’s it; this concludes today’s tutorial.

The post Making an AppImage in Nitrux appeared first on Nitrux — #YourNextOS.

Educational Software GCompris is 20 Years Old Today

Thu, 2020/11/19 - 9:48am
... And version 1.0 is here!

GCompris is a popular collection of educational and fun activities for children from 2 to 10 years old. GCompris has become popular with teachers, parents and, most importantly, kids from around the world, and offers an ever-growing list of activities -- more than 150 at the last count. These activities have been translated into over 20 languages and cover a wide range of topics, from basic numeracy and literacy to history, art, geography and technology.

GCompris offers children between the ages of 2 and 10 more than 150 fun educational activities.

The newest version of GCompris also incorporates a feature that teachers and parents alike will find useful: GCompris 1.0 lets educators select the level of the activities according to the proficiency of each child. For example, in an activity that lets children practice numbers, you can select what numbers they can learn, leaving higher and more difficult numbers for a later stage. An activity for practicing the time lets you choose whether the child will practice full hours, half hours, quarters of an hour, minutes, and so on. And in an activity where the aim is to figure out the change when buying things for Tux, the penguin, you can choose the maximum amount of money the child will play with.

We have built the activities to follow the principles of "nothing succeeds like success" and that children, when learning, should be challenged, but not made to feel threatened. Thus, GCompris congratulates, but does not reprimand; all the characters the child interacts with are friendly and supportive; activities are brightly colored, contain encouraging voices and play upbeat, but soothing music.

GCompris now lets you select the level of some activities according to the child's proficiency.

The hardware requirements for running GCompris are extremely low and it will run fine on older computers or low-powered machines, like the Raspberry Pi. This saves you and your school from having to invest in new and expensive equipment and it is also eco-friendly, as it reduces the amount of technological waste that is produced when you have to renew computers to adapt to more and more power-hungry software. GCompris works on Windows, Android and GNU/Linux computers, and on desktop machines, laptops, tablets and phones.

Practicing additions and subtractions with GCompris.

GCompris is built, maintained and regularly updated by the KDE Community and is Free and Open Source Software. It is distributed free of charge, requires no subscriptions, and does not ask for personal details. GCompris displays no advertising and the creators have no commercial interest whatsoever. Any donations are pooled back into the development of the software.

Learning about electrical circuits with GCompris.

Seeking to engage more professional educators and parents, we are working on several projects parallel to our software and have recently opened a forum for teachers and parents and a chat room where users and creators can talk live to each other, suggest changes, share tips on how to use GCompris in the classroom or at home, and find out about upcoming features and activities being added to GCompris.

Apart from increasing the number and variety of activities, an upcoming feature is a complete dashboard that will provide teachers with better control over how pupils interact with GCompris. We are also working with teachers and contributors from different countries to compile a "Cookbook" of GCompris recipes that will help you use GCompris in different contexts. Another area where we are working with contributors is on translations: if you can help us translate GCompris into your language (with your voice!), we want to hear from you! Your help and ideas are all welcome.

Visit our forum and chat and tell us how you use GCompris and we will share it with the world.

KDE is a community of volunteers that creates a wide range of software products, like the Plasma desktop, the Krita painting program, the Kdenlive video editor, the GCompris educational suite of activities and games, as well as dozens of other high-quality applications and utilities. Among them, KDE develops and maintains several educational programs for children and young adults.

All KDE's products are Free and Open Source Software and can be downloaded, used and shared without charge or limitations.

Bringing light to life

Thu, 2020/11/19 - 7:44am

Some of you may be wondering what I have been up to lately, since I took a break from my work in the KDE community. Well, it was time for a change, a change towards family, friends and a more local life. The result is a more balanced, more grown-up me. These changes in my life led to me having a small family and a group of new friends, both of whom I spend a lot of time with. They brought more light into my life, one could say.

That is not all I want to talk about, however. In the past 1.5 years I have worked on a new project of mine that combines my love for software with the physical world. I created a product and brought it to market last month. Now, we’re ready for the international launch of Organic Lighting. The product is a designer smart lamp for the living room. It combines unique, dynamic visual effects with natural, sustainable materials.
Meet our Lavalamp:

It’s a connected device that can be controlled either using the physical knob on its front or via its web UI (or REST interface). Effects can be changed and tweaked, and its firmware can be updated (nobody should want an IoT device that can’t get security or feature updates). The technical concept here is to do “light in software”. The lamp is run by a microcontroller embedded in its foot. Its roughly 600 LEDs produce ca. 4000 lumen and render effects at more than 200 frames per second.
The lamp is built with easy repairs in mind and designed for a long-lasting experience; it respects your privacy, and it creates a unique atmosphere in your living space.

With our products, we’re offering an alternative to planned obsolescence, throw-away materials and the hidden costs of cheap electronics that make you the product by consuming your data for breakfast.

In the future, we will build on these concepts and this technology and offer more devices and components that match our principles and enhance one another. Stay tuned!

If you want to go far, together is faster (II).

Thu, 2020/11/19 - 7:00am

This is the second of a series of two articles describing the idea of correlating software product delivery process performance metrics with community health and collaboration metrics as a way to engage execution managers so their teams participate in Open Source projects and Inner Source programs. The first article is called If you want to go far, together is faster (I). Please read it before this post if you haven’t already. You can also watch the talk I gave at InnerSource Commons Fall 2020 that summarizes these series.

In the previous post I provided some background and described my perception of what causes resistance from managers to involving their development teams in Open Source projects and Inner Source programs. I enumerated five not-so-simple steps to reduce such resistance. This article explains those steps in some detail.

Let me start enumerating again the proposed steps:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics in Open Source projects and Inner Source programs.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between those two groups of metrics.
4.- Create a reporting strategy for developers and managers based on that positive correlation.
5.- Use this strategy to turn your story around: it is about creating positive business impact at scale through open collaboration.

The solution explained

1.- Collaboration and community health metrics as well as product delivery process performance metrics.

Most Open Source projects and Inner Source programs focus their initial metrics efforts on measuring collaboration and community health. There is an Open Source project hosted by the Linux Foundation, called CHAOSS, focused on the definition of many types of metrics; collaboration and community health metrics are among the more mature ones. You can find plenty of examples of these metrics applied in a variety of Open Source projects there too.

Inner Source programs are taking the experience developed by Open Source projects in this field and applying it internally, so many of them use such metrics (collaboration and community health) as the basis for evaluating how successful they are. To expand our study of these collaboration environments to areas directly related to productivity, efficiency, etc., additional metrics should be considered.

Before getting into the core ones, I have to say that many projects pay attention to code review and defect management metrics to evaluate productivity or performance. These metrics go in the right direction, but they are only partial and, when it comes to demonstrating a clear relation between collaboration and productivity or performance, they do not work very well in many cases. Let me give a few examples of why.

Code review is a standard practice among Open Source projects, but at scale it is perceived by many as an inefficient activity compared to others when knowledge transfer and mentorship are not a core goal. Pair or mob programming, as well as code review restricted to team scale, are practices perceived by many execution managers as more efficient in corporate environments.

When it comes to defect management, companies have been tracking these variables for a long time, and it will be very hard for Open Source and Inner Source evangelists to convince execution managers that what you are doing in the open or in the Inner Source program is so much better, and especially cheaper, that it is worth participating. For many of these managers, cost control comes first and code sustainability later, not the other way around.

Unsurprisingly, I recommend focusing on the software product delivery process as a first step towards reducing the resistance from execution managers to embracing collaboration at scale. I pick the delivery process because it is deterministic, so it is simpler to apply process engineering (and thus metrics) to it than to any other stage of the product life cycle that involves development. Of all the potential metrics, throughput and stability are the essential ones.

Throughput and Stability

It is not the point of this article to go deep into these metrics. I suggest you refer to delivery or product managers at your organization who embrace Continuous Delivery principles and practices to get information about these core metrics. You can also read Steve Smith’s book Measuring Continuous Delivery, which defines the metrics in detail, characterizes them, and provides guidance on how to implement and use them. You can find more details about this and other interesting books in the Reads section of this site, by the way.

There are several reasons for me to recommend these two metrics. Some of them are:

  • Both metrics characterize the performance of a system that processes a flow of elements. Software product delivery can be conceived of as such a system, where the information flows in the form of code commits, packages, images, and so on.
  • Both metrics (sometimes in different forms/expressions) are widely used in other knowledge areas, in some cases for quite some time now, like networking, lean manufacturing and fluid dynamics. There is little magic behind them.
  • To me the most relevant characteristic is that, once your delivery system is modeled, both metrics can be applied at system level (globally) and at a specific stage (locally). This is extremely powerful when trying to improve the overall performance of the delivery process through local actions at specific points. You can track the effect of local improvements on the entire process.
  • Both metrics have simple units and are simple to measure. The complexity is operational, arising when different tools are used across the delivery process; using these metrics reduces it to a technical problem.
  • Throughput and Stability are positively correlated when applying Continuous Delivery principles and practices. In addition, they can be used to track how well you are doing when moving from a discontinuous to a continuous delivery system. Several of the practices promoted by Continuous Delivery are already very popular among Open Source projects. Some would claim that they were invented there, way before Continuous Delivery was a thing in corporate environments. I love chicken-and-egg debates… but not now.
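To make the two metrics tangible, here is a minimal sketch of my own (not taken from Smith’s book, and with an invented deployment log) that derives a throughput figure (deployments per week) and a stability figure (share of deployments that did not fail). A real implementation would also track lead time and time to restore:

```python
from datetime import datetime

# Hypothetical deployment log: (timestamp, succeeded)
deployments = [
    (datetime(2020, 11, 2), True),
    (datetime(2020, 11, 4), False),
    (datetime(2020, 11, 5), True),
    (datetime(2020, 11, 9), True),
    (datetime(2020, 11, 12), False),
    (datetime(2020, 11, 16), True),
]

def throughput(log, window_days=7):
    """Deployments per window, averaged over the span of the log."""
    span_days = (log[-1][0] - log[0][0]).days or 1
    return len(log) * window_days / span_days

def stability(log):
    """Fraction of deployments that succeeded (1 - change failure rate)."""
    return sum(ok for _, ok in log) / len(log)

print(round(throughput(deployments), 2))  # 3.0 deployments per week
print(round(stability(deployments), 2))   # 0.67
```

Because both functions take a plain log, the same code can be pointed at a single pipeline stage or at the whole delivery system, which is the local/global property mentioned above.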

Let’s assume from now on that I have convinced you that Throughput and Stability are the two metrics to focus on, in addition to the collaboration and community health metrics your Open Source or Inner Source project is already using.

If you are not convinced even after reading S. Smith’s book, by the way, you might want to check the most common references on Continuous Delivery. Dave Farley, one of the fathers of the Continuous Delivery movement, has a new series of videos you should watch. One of them deals with these two metrics.

2.- Correlate both groups of metrics

Let’s assume for a moment that you have implemented such delivery process metrics in several of the projects in your Inner Source initiative, or across the delivery pipelines in your Open Source project. The next step is to introduce an Improvement Kata process to define and evaluate the outcome of specific actions against pre-established high-level SMART goals. Such goals should aim for a correlation between both types of metrics (community health/collaboration and delivery process ones).

Let me give one example. It is widely understood in Open Source projects that being welcoming is a sign of good health. It is common to measure how many newcomers the project attracts over time and their initial journey within the community, looking for their consolidation as contributors. Similar thinking is followed in Inner Source projects.

The truth is that more capacity does not always translate into higher throughput or an increase in process stability; on the contrary, it is widely accepted among execution managers that the opposite is more likely in some cases. Unless the work structure (the teams and the tooling) is oriented to embrace flexible capacity, high rates of capacity variability lead to inefficiencies. This is an example of an expected negative correlation.

In this particular case, then, the goal is to extend the actions aimed at increasing our number of new contributors to our delivery process, ensuring that our system can absorb an increase of capacity at the expected rate and that we can track it accordingly.

What do we have to do to mitigate the risk of increasing the integration failure rate due to an increase of throughput at the commit stage? Can we increase our build capacity accordingly? Can our testing infrastructure digest the increase in builds derived from increasing our development capacity, assuming we keep the number of commits per triggered build?

In summary, work on the correlation of both groups of metrics, linking actions that would affect both community health and collaboration metrics and delivery metrics.
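The correlation step itself can be as plain as a Pearson coefficient over per-period samples of one metric from each group. The sketch below uses invented quarterly numbers purely for illustration; the point is the mechanism, not the data:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented quarterly samples for illustration
newcomers = [3, 5, 8, 12, 15]       # new contributors per quarter
throughput = [20, 22, 27, 31, 36]   # merged changes per week, quarterly average

r = pearson(newcomers, throughput)
print(round(r, 2))  # close to 1.0: a strong positive correlation
```

A value near +1 supports the story you want to tell execution managers; a value near zero or negative flags exactly the kind of capacity-variability problem described above.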

3.- Focus on decisions and actions that create a positive correlation between both groups of metrics.

Some executed actions designed to increase our number of contributors might lead to a reduction in throughput or stability, others might have a positive effect on one of them but not the other (spoiler alert: at some point both will decrease), and some will increase both of them (positive correlation).

If you work in an environment where Continuous Delivery is the norm, those behind the execution will understand which actions have a positive correlation between throughput and stability. Your job will simply be to link those actions with the ones you are familiar with in the community health and collaboration space. If not, your work will be harder, but still worth it.

For our particular case, you might find, for instance, that a simple measure to digest the increasing number of commits (bug fixes) is to scale up the build capacity, if you have remaining budget. You might find, though, that you have problems doing so when reviewing acceptance criteria because you lack automation, or that your current testing-on-hardware capacity is almost fixed due to limitations in the system that manages your test benches, and additional effort is required to improve the situation.

Establishing experiments that consider not just the collaboration side but also software delivery, and translating into production those experiments that demonstrate a positive correlation of the target metrics (increasing all of them), might bring you surprising results, sometimes far from the common knowledge of those focused on collaboration aspects only, but closer to that of those focused on execution.

4.- Create a reporting strategy for developers and managers based on such positive correlation.

A board member of an organization I was managing once told me something I have followed ever since. It was something like…

Managers talk to managers through reports. Speak up clearly through them.

As a manager, I used to put a lot of thought into the reporting strategy. I have some blog posts written about this point. Besides things like the language used or the OKRs and KPIs you base your reporting upon, understanding the motivations and background of the target audience of those reports is just as important.

I suggest you pay attention to how those you want to convince about participating in Open Source or Inner Source projects report to their managers, as well as how others report to them. Are those reports time-based or KPI-based? Are they presented and discussed in 1:1s or in a team meeting? Usually every senior manager dealing with execution has a consolidated way of reporting and being reported to. Adapt to it instead of keeping the format we are more used to in open environments. I love reporting through a team or department blog, but it might not be the best format for this case.

After creating and evaluating many reports about community health and collaboration activities, I suggest changing how they are conceived. Instead of focusing on collaboration growth and community health first and then on the consequences of such improvements for the organization (benefits), focus first on how product or project performance has improved while collaboration and community health have improved. In other words, change how cause and effect are presented.

The idea is to convince execution managers that by participating in Open Source projects or Inner Source programs, their teams can learn how to be more efficient and productive in short cycles while achieving long-term goals they can present to executives. Help those managers also to present both types of achievements to their executives using your own reports.

For engineers, move the spotlight away from the growth of interactions among developers and put it on the increase in stability derived from making those interactions meaningful, for instance. Or try to correlate diversity metrics with defect management results, or with reductions in change failure rates or detected security vulnerabilities, etc. Move your reporting focus partially away from team satisfaction (a common strategy within Open Source projects) and put it on team performance and productivity. They are obviously intimately related, but tech leads and other key roles within your company might be more sensitive to the latter.

In summary, you achieve the proposed goal if execution managers can take the reports you present to them and insert them into theirs without re-interpreting the language, the figures, the datasets, the conclusions…

5.- Turn your story around.

If you manage to find positive correlations between the proposed metrics and report on those correlations in a way that resonates with execution managers, you will have established a very powerful platform to create an unbeatable story around your Inner Source program or your participation in Open Source projects. Investment growth will meet less resistance and it will be easier to infect execution units with practices and tools promoted through the collaboration program.

Advocates and evangelists will feel more supported in their viral infection, and those responsible for these programs will gain an invaluable ally in their battles with legal, procurement, IP or risk departments, among others. Collaboration will be clearly good not just for the developers or the company, but also for the product portfolio or the services. And not just in the long run, but also in the shorter term. That is a significant difference.

Your story will be about increasing business impact through collaboration instead of about collaborating to achieve bigger business impact. Open collaboration environments increase productivity and have a tangible positive impact on the organization’s products and services, so they have a clear positive business impact.


To attract execution managers to promote the participation of their departments and teams in Open Source projects and Inner Source programs, I recommend defining a different communication strategy: one that relies on reports based on results from actions that show a positive correlation between community health and collaboration metrics and delivery process performance metrics, especially throughput and stability. This idea can be summarized in the following steps, explained in these two articles:

  • Collaboration within a commercial organization matters more to management if it has a measurable positive business impact.
  • To make decisions and evaluate their impact within your Inner Source program or the FLOSS community, combine collaboration and community health metrics with delivery metrics, fundamentally throughput and stability.
  • Prioritize those decisions/actions that produce a tangible positive correlation between these two groups of metrics.
  • Report, especially to managers, based on such positive correlation.
  • Adapt your Inner Source or Open Source story: increase business impact through collaboration.

In a nutshell, it all comes down to proving that, at scale…

if you want to go far, together is faster.

Check the first article of this series if you haven’t. You can also watch the recording of the talk I gave at ISC Fall 2020, where I summarized what is explained in these two articles.

I would like to thank the ISC Fall 2020 content committee and organizers for giving me the opportunity to participate in such an interesting and well-organized event.

Qt 3D Changes in Qt 6

Tue, 2020/11/17 - 10:00am

Qt 6 is nearly upon us. While this has not been addressed by other publications, Qt 3D is also introducing a number of changes with this major release. These include changes in the public API that bring a number of new features, and many internal changes to improve performance and leverage the new low-level graphics features introduced in QtBase. I will focus on the API changes now, while my colleague, Paul Lemire, will cover the other changes in a follow-up post.

Distribution of Qt 3D for Qt 6

Before looking at what has changed in the API, the first big change concerns how Qt 3D is distributed. Qt 3D has been one of the core modules that ship with every Qt release. Starting with Qt 6, however, Qt 3D will be distributed as a separate source-only package. Qt 6 will ship with a number of such modules, and will use Conan to make it easy for them to be built. This means that users interested in Qt 3D will need to compile it once for every relevant platform.

Since it ships independently, Qt 3D will also most likely be on a different release cycle than the main Qt releases. We will be able to make more frequent minor releases with new features and bug fixes.

Another consequence of this is that Qt 3D will not be bound by the same binary compatibility constraints as the rest of Qt. We do, however, aim to preserve source compatibility for the foreseeable future.

Basic Geometry Types

The first API change is minimal, but, unfortunately, it is source-incompatible. You will need to change your code in order to compile against these changes.

In order to make developing new aspects that access geometry data more straightforward, we have moved a number of related classes from the Qt3DRender aspect to the Qt3DCore aspect. These include QBuffer, QAttribute and QGeometry.

When using the QML API, the impact should be minimal: the Buffer element still exists, and importing the Render module implicitly imports the Core module anyway. You may have to change your code if you’ve been using module aliases, though.

In C++, this affects which namespace these classes live in, which is potentially more disruptive. So if you were using Qt3DRender::QBuffer (often written out in full to avoid a clash with the QBuffer class in QtCore), you now need to use Qt3DCore::QBuffer, and so on…

If you need to write code that targets both Qt5 and Qt6, one trick you can use to ease the porting is to use namespace aliases, like this:

#if QT_VERSION >= QT_VERSION_CHECK(6, 0, 0)
#include <Qt3DCore/QBuffer>
namespace Qt3DGeometry = Qt3DCore;
#else
#include <Qt3DRender/QBuffer>
namespace Qt3DGeometry = Qt3DRender;
#endif

void prepareBuffer(Qt3DGeometry::QBuffer *buffer)
{
    ...
}

The main reason this was done is so that all aspects could have access to the complete description of a mesh. Potential collision detection or physics simulation aspects don’t need to have their own representation of a mesh, separate from the one used for rendering.

So QBuffer, QAttribute and QGeometry are now in Qt3DCore. But this is not enough to completely describe a mesh.

Changes in Creating Geometry

A mesh is typically made of a collection of vertices. Each vertex has several properties (position, normal, texture coordinates, etc.) associated with it. The data for those properties is stored somewhere in memory. So in order to register a mesh with Qt 3D, you need:

  • A QGeometry instance that is simply a collection of QAttribute instances
  • Each QAttribute instance to define the details of a vertex attribute. For example, for the position, it would include the number of components (usually 3), the type of the components (usually floats), the name of the attribute as it will be exposed to the shaders (usually “position”, or QAttribute::defaultPositionAttributeName() if you are using Qt 3D’s built-in materials), etc.
  • Each QAttribute to also point to a QBuffer instance. This may be the same for all attributes, or it may be different, especially for attribute data that needs to be updated often.

But this is still incomplete. We are missing details such as how many points make up the mesh, what type of primitives these points form (triangles, strips, lines, etc.), and more.

Prior to Qt 6, these details were stored on the Qt3DRender::QGeometryRenderer class. The name is obviously very rendering-related (an understatement), so we couldn’t just move that class.

For these reasons, Qt 6 introduces a new class, Qt3DCore::QGeometryView. It includes a pointer to a QGeometry and completely defines a mesh. It just doesn’t render it. This is useful as the core representation of a mesh that can then be used for rendering, bounding volume specifications, picking, and much more.

Bounding Volume Handling

One of the very first things Qt 3D needs to do before rendering is compute the bounding volume of the mesh. This is needed for view frustum culling and picking. Internally, the render aspect builds a bounding volume hierarchy to quickly find objects in space. To compute the bounding volume, it needs to go through all the active vertices of a mesh. Although this is cached, it can take time the first time the object is rendered or any of its details change.

Furthermore, up to now, this was completely internal to Qt 3D’s rendering backend and the results were not available for use in the rest of the application.

So Qt 6 introduces a QBoundingVolume component which serves two purposes:

  • it has implicit minimum point and maximum point properties that contain the result of the bounding volume computations done by the backend. This can be used by the application.
  • it has explicit minimum point and maximum point properties which the user can set. This will prevent the backend from having to calculate the bounds in order to build the bounding volume hierarchy.

The minimum and maximum extent points are the corners of the axis-aligned box that fits around the geometry.
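Conceptually, the computation the backend performs here is just a component-wise minimum and maximum over all active vertex positions. This short Python sketch (an illustration only, not Qt 3D API) shows the idea:

```python
def aabb(vertices):
    """Axis-aligned bounding box: component-wise min/max over positions."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

verts = [(-0.5, 0.0, 0.2), (0.3, -0.4, 0.5), (0.1, 0.6, -0.5)]
min_point, max_point = aabb(verts)
print(min_point)  # (-0.5, -0.4, -0.5)
print(max_point)  # (0.3, 0.6, 0.5)
```

For a large mesh this touches every vertex, which is why the backend caches the result and why precomputed explicit bounds (below) can save time.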

But how does QBoundingVolume know which mesh to work on? Easy — it has a view property which points to a QGeometryView instance!

Reading bounding volume extents

So if you need to query the extents of a mesh, you can use the implicit values:

Entity {
    components: [
        Mesh {
            source: "..."
            onImplicitMinPointChanged: console.log(implicitMinPoint)
        },
        PhongMaterial { diffuse: "green" },
        Transform { ... }
    ]
}

Note that if the backend needs to compute the bounding volume, this is done at the next frame using the thread pool. So the implicit properties might not be immediately available when updating the mesh.

If you need the extents immediately after setting or modifying the mesh, you can call the QBoundingVolume::updateImplicitBounds() method, which will do the computations and update the implicit properties.

Setting bounding volume extents

But maybe you already know the extents. You can set them explicitly to stop Qt 3D from computing them:

Entity {
    components: [
        Mesh {
            source: "..."
            minPoint: Qt.vector3d(-.5, -.5, -.5)
            maxPoint: Qt.vector3d(.5, .5, .5)
        },
        PhongMaterial { diffuse: "green" },
        Transform { ... }
    ]
}

Note that, since setting the explicit bounds disables the computation of the bounding volume in the backend, the implicit properties will NOT be updated in this case.

Mesh Rendering

Now, before everyone goes and adds QBoundingVolume components to all their entities, one other thing: QGeometryRenderer, in the Qt3DRender module, now derives from QBoundingVolume, so it also has all the extent properties.

It also means you can provide it with a geometry view to tell it what to draw, rather than providing a QGeometry and all the other details.

It still, however, has all the old properties that are now taken care of by the QGeometryView instance. If that is defined, all the legacy properties will be ignored. We will deprecate them soon and remove them in Qt 7.

So what happens if you provide both a QBoundingVolume and a QGeometryRenderer component to an entity? In this case, the actual bounding volume component takes precedence over the geometry renderer. If it specifies explicit bounds, those will be used for the entity.

The main use case for that is to specify a simpler geometry for the purpose of bounding volume computation. If you don’t know the extents of a mesh but you know that a simpler mesh (with much fewer vertices) completely wraps the object you want to render, using that simpler mesh can be a good way of speeding up the computations that Qt 3D needs to do.

New Core Aspect

Most of these computations took place previously in the Render aspect. Since this is now in core, and in order to fit in with Qt 3D’s general architecture, we introduce a new Core aspect. This aspect will be started automatically if you are using Scene3D or Qt3DWindow. In cases where you are creating your own aspect engine, it should also automatically be started as long as the render aspect is in use via the new aspect dependency API (see below).

The core aspect will take care of all the bounding volume updates for the entities that use the new geometry-view-based API (legacy scenes using QGeometryRenderer instances without views will continue to be updated by the rendering aspect).

The Core aspect also introduces a new QCoreSettings component. Like the QRenderSettings component, a single instance can be created. It is, by convention, attached to the root entity.

Currently, its only purpose is to allow completely disabling bounding volume updates. If you are not using picking and have disabled view frustum culling, bounding volumes are actually of no use to Qt 3D. You can disable all the jobs related to bounding volume updates by setting QCoreSettings::boundingVolumesEnabled to false. Note that the implicit extent properties on QBoundingVolume components will then not be updated.

New Aspect and AspectJob API


The base class for aspects, QAbstractAspect, has gained a few useful virtual methods:

  • QAbstractAspect::dependencies() should return the list of aspect names that should be started automatically if an instance of this aspect is registered.
  • QAbstractAspect::jobsDone() is called on the main thread when all the jobs that the aspect has scheduled for a given frame have completed. Each aspect has the opportunity to take the results of the jobs and act upon them. It is called every frame.
  • QAbstractAspect::frameDone() is called when all the aspects have completed the jobs AND the job post processing. In the case of the render aspect, this is when rendering actually starts for that frame.


Similarly, jobs have gained a number of virtual methods on QAspectJob:

  • QAspectJob::isRequired() is called before a job is submitted. When building jobs, aspects build graphs of jobs with various dependencies. It’s often easier to build the same graph every frame, but not every job has something to do on a given frame. For example, the picking job has nothing to do if there are no object pickers or the mouse has not moved. The run method can test for this and return early, but that still causes the job to be scheduled onto a thread in the pool, with all the associated, sometimes expensive, locking. If QAspectJob::isRequired() returns false, the job will not be submitted to the thread pool and processing will continue with its dependent jobs.
  • QAspectJob::postFrame() is called on the main thread once all the jobs are completed. This is the place where most jobs can safely update the frontend classes with the results of the backend computations (such as bounding volume sizes, picking hits, etc.).
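The effect of isRequired() on a job graph can be sketched as a toy scheduler (plain Python, not the Qt API; all names are invented): a job that reports itself as not required is skipped entirely, while its dependencies and dependents still run.

```python
class Job:
    """Toy stand-in for an aspect job with dependencies."""
    def __init__(self, name, required=True, deps=()):
        self.name, self.required, self.deps = name, required, list(deps)
        self.done = False

    def is_required(self):
        # A real job would check its inputs, e.g. "has the mouse moved?"
        return self.required

def run(job, submitted):
    """Depth-first: run dependencies first, then submit the job if required."""
    if job.done:
        return
    job.done = True
    for dep in job.deps:
        run(dep, submitted)
    if job.is_required():
        submitted.append(job.name)  # would go to the thread pool

update = Job("update_transforms")
picking = Job("picking", required=False, deps=[update])  # mouse idle: skip
render = Job("render_prep", deps=[picking])

submitted = []
run(render, submitted)
print(submitted)  # ['update_transforms', 'render_prep']
```

The skipped job never pays the thread-pool scheduling and locking cost, which is the whole point of the new hook.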
Picking Optimization

We have introduced an optimization for picking: a QPickingProxy component, deriving from QBoundingVolume. If it has an associated geometry view, that mesh will be used instead of the rendered mesh for picking tests. This applies to QObjectPicker and to the QRayCaster and QScreenRayCaster classes. Since precise picking (when QPickingSettings is not set to use bounding volume picking) needs to look at every primitive (triangle, line, vertex), it can be very slow. Using QPickingProxy makes it possible to provide a much simpler mesh for the purpose of picking.

Entity {
    components: [
        GeometryRenderer { view: actualView },
        PickingProxy { view: simpleView },
        ...
    ]
    ...
}

So, for example, you can provide a downsampled mesh, such as the bunny on the right (which includes only 5% of the primitives of the original mesh), to get fast picking results.

Of course, the picking results (the local coordinate, the index of the picked primitive, etc) will all be defined relative to the picking proxy mesh (not the rendered mesh).


Finally, QRayCaster and QScreenRayCaster now have pick() methods, which perform a ray casting test synchronously, whereas the pre-existing trigger() methods schedule a test for the next frame. pick() blocks the caller until completed and returns the list of hits. This makes it possible for an application to implement event delegation. For example, if the user right clicks, the application can decide to do something different depending on the type of the closest object, or display a context menu if nothing was hit.
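The event-delegation pattern a synchronous pick enables can be sketched like this. This is plain, self-contained C++ for illustration, not the actual Qt 3D QRayCaster API; the Hit type, the pick() helper, and the object-type strings are all invented:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Hypothetical hit record: what a synchronous pick might hand back.
struct Hit {
    std::string objectType; // e.g. "mesh", "gizmo"
    float distance;         // distance from the ray origin
};

// Stand-in for a synchronous pick(): a real implementation would cast a ray
// through the scene; here we just sort precomputed hits, nearest first.
std::vector<Hit> pick(std::vector<Hit> hits) {
    std::sort(hits.begin(), hits.end(),
              [](const Hit& a, const Hit& b) { return a.distance < b.distance; });
    return hits;
}

// Event delegation: because pick() returns immediately, the right-click
// handler can branch on the nearest object before the event is consumed.
std::string onRightClick(const std::vector<Hit>& scene) {
    auto hits = pick(scene);
    if (hits.empty())
        return "context-menu";                       // nothing hit
    return "action:" + hits.front().objectType;      // delegate to nearest
}
```

With the asynchronous trigger() approach, the decision would instead arrive a frame later via a signal, which is awkward for input handling like this.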


As you can see, Qt 3D for Qt 6 has quite a few changes. My colleague, Paul Lemire, will go through many more internal changes in a later post. We hope this ensures an on-going successful future for Qt 3D in the Qt 6 series.

KDAB provides a number of services around Qt 3D, including mentoring your team and embedding Qt 3D code into your application, among others. You can find out more about these services here.

About KDAB

If you like this blog and want to read similar articles, consider subscribing via our RSS feed.

Subscribe to KDAB TV for similar informative short video content.

KDAB provides market leading software consulting and development services and training in Qt, C++ and 3D/OpenGL. Contact us.

The post Qt 3D Changes in Qt 6 appeared first on KDAB.

If you want to go far, together is faster (I).

Tue, 2020/11/17 - 7:00am

This is the first of a series of two articles describing the reasoning and the steps behind the idea of correlating software product delivery process performance metrics with community health and collaboration metrics, as a way to engage execution managers so that their teams participate in Open Source projects and Inner Source programs. What you will read in these articles is an extension of a talk I gave at the InnerSource Commons Fall 2020 event.


There is a very popular African proverb within the Free Software movement that says…

If you want to go fast, go alone. If you want to go far, go together.

Many of us have used it for years to promote collaboration among commercial organizations over fast, internal development, which risks reinventing the wheel, ignoring standards, reducing quality, etc.

The proverb describes an implicit OR relation between the traditional Open Source mindset, focused on longer term results obtained through extensive collaboration, and the traditional corporate mindset, where time-to-market is almost an obsession.

Early in my career I was exposed, as a manager (I do not code), to agile and, a little later, to Continuous Delivery. This second set of principles and practices had a big impact on me because of the tight, positive correlation it proposes between speed and quality. Until then, I had assumed that correlation was negative: when one increases, the other decreases, or vice versa.

For a long time, I also unconsciously assumed a negative correlation between collaboration and speed. It was not until I started working on projects at scale that I became aware of that unconscious assumption and began to question it first and challenge it later.

In my early years in Open Source I often found myself discussing the benefits of this “collaboration framework”, and why they should adopt it, with executives and managers. Like probably many of you, I found myself being more successful with executives and politicians than with middle-layer managers.

– “No wonder they are executives,” I thought more than once back then.

But time proved me wrong once again.

Problem statement: can we go faster by going together?

It was not until later in my career that I could relate to good Open Source evangelists, and especially to good sales professionals. I learned a bit about how different groups within the same organization are incentivized differently, and that you need to understand those incentives to tune your message in a way they can relate to.

Most of my arguments, and those of my colleagues back then, were focused on cost reduction and collaboration, on preventing silos, on shortening innovation cycles, on sustainability, on preventing vendor lock-in, etc. Those arguments resonate very well with those responsible for strategic decisions and with managers directly involved in innovation. But they did not work well with execution managers, especially senior ones.

When I was a manager myself in the software industry, my incentives frequently had little to do with those arguments. In some cases, my manager’s incentives had little to do with them either, despite us being an Open Organization. Open Source was part of the company culture, but management objectives had little to do with collaboration. Variables like productivity, efficiency, time-to-market, customer satisfaction, defect management, release cycles, yearly costs, etc., were the core incentives that drove my actions and those of the people around me.

If that was the case for the organizations I was involved in back then, imagine traditional corporations. Later on I engaged with such companies, which confirmed this intuition.

I found myself more than once arguing with my managers about priorities and incentives, goals and KPIs because, as an Open Source guy, I was for some time unable to clearly articulate the positive correlation between collaboration and efficiency, productivity, cost reduction, etc. In some cases, this inability was a factor in a collision that ended up with my bones out of the organization.

That positive correlation between collaboration and productivity was counter-intuitive for many middle managers I knew ten years ago. It still is for some, even in the Open Source space. Haven’t you heard from managers that if you want to meet deadlines you should not work upstream, because you will move slower? I have heard that so many times that, as mentioned before, for years I believed it was true. It might be true at small scale, but at big scale it is not necessarily so.

It was not until two or three years ago that I started paying attention to Inner Source. I realized that many there have inherited this belief. And since they live in corporate environments, the challenge of convincing execution-related managers is bigger than in Open Source.

Inner Source programs are usually supported by executives and R&D departments but receive resistance from middle management, especially those closer to execution units. Collaborating with other departments might be good in the long term, but it is perceived as less productive than developing in isolation. Somehow, in order to participate in Inner Source programs, they see themselves choosing between shorter-term and longer-term goals, between their incentives and those of the executives. It has little to do with their ability to “get it”.

So either their incentives are changed, with the organization demonstrating that it can still be profitable, or you need to adapt to those incentives. What I believe is that adapting to those incentives means, in a nutshell, providing a solid answer to the question: can we go faster by going together?

The proposed solution: if you want to go far, together is faster.

If we could find a positive correlation between efficiency/productivity and collaboration, we could replace the proverb above with something like: if you want to go far, together is faster.

And hey, technically speaking, it would still be an African proverb, since I am from the Canary Islands, right?

The idea behind the above sentence is to establish an AND relation between speed and collaboration, meeting both traditional corporate and Open Source (and Inner Source) goals.

Proving such a positive correlation could help to reduce the resistance offered by middle management to practicing collaboration at scale, whether within Inner Source programs or Open Source projects. They would perceive such participation as a path to meet those longer-term goals without contradicting many of the incentives they work with and promote among their managees.

So the following question is: how can we do that? How can we provide evidence of such a positive correlation in a language that is familiar to those managers?

The solution summarized: ISC Fall 2020

During ISC Fall 2020, I tried to briefly explain to people running Inner Source programs a potential path to establishing such a relation in five not-so-simple steps. The core slide of my presentation enumerated them as:

1.- In addition to collaboration and community health metrics, I recommend tracking product delivery process performance metrics.
2.- Correlate both groups of metrics.
3.- Focus on decisions and actions that create a positive correlation between them.
4.- Create a reporting strategy for developers and managers based on that positive correlation.
5.- Use that strategy to turn your Inner Source/Open Source story around: it is about creating positive business impact at scale through open collaboration.
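Step 2 above, correlating the two groups of metrics, can be as simple as computing a correlation coefficient over time series of both. The sketch below is purely illustrative: the metric names (deployment frequency, cross-team reviews) and the numbers are invented, and Pearson is just one of several statistics one could choose.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Pearson correlation between two equally sized metric series, e.g. weekly
// deployment frequency (delivery performance) vs. cross-team review count
// (collaboration). Returns a value in [-1, +1]; +1 means the two metrics
// move together, negative values mean they move in opposite directions.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const size_t n = x.size();
    double sx = 0, sy = 0;
    for (size_t i = 0; i < n; ++i) { sx += x[i]; sy += y[i]; }
    const double mx = sx / n, my = sy / n;
    double cov = 0, vx = 0, vy = 0;
    for (size_t i = 0; i < n; ++i) {
        cov += (x[i] - mx) * (y[i] - my);
        vx  += (x[i] - mx) * (x[i] - mx);
        vy  += (y[i] - my) * (y[i] - my);
    }
    return cov / std::sqrt(vx * vy);
}
```

A positive coefficient over enough observation periods is the kind of evidence, expressed in the managers' own vocabulary, that the five steps aim to produce.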

A detailed explanation of these five points can be found in the second article of this series:

If you want to go far, together is faster (II).

You can also watch the recording of my talk at ISC Fall 2020.

Calamares and Plasma Look-and-Feel

Mon, 2020/11/16 - 11:00pm

Calamares is a Linux installer. Bluestar Linux is a Linux distribution. KDE Plasma Desktop is KDE’s flagship desktop environment. Together, these three bits of software got into a spot of trouble but, more importantly, got out of trouble again with good communications, good bug reports and a “we can fix it” attitude.

When Calamares is run in a KDE Plasma Desktop environment, for a distro that uses KDE Plasma Desktop – and bear in mind, Calamares is a distro- and desktop-independent project, so it will just as gladly install a variant of Debian with i3 as a variant of openSUSE with GNOME as a variant of Fedora with KDE Plasma – one of the modules that the distro can use is the plasmalnf module. This configures the look-and-feel of KDE Plasma Desktop in the target system, so that after the installation is done you don’t have to set a theme again. You might think of this as one tiny part of a first-run “here’s some cool options for your desktop” tool.

Plasma Look-and-Feel Module

The distro is expected to provide a suitable screenshot on the live ISO for use with the Look-and-Feel module; since I don’t have one, the images used here are not representative of what Breeze actually looks like.

A minor feature of the module is that it also will update the live environment – if that is KDE Plasma Desktop – to follow the selection, so on an openSUSE variant you can try out Breeze, Breeze Dark and the openSUSE theme before installation.

Anyway, Bluestar Linux uses this module, and reported that the Look-and-Feel module was not correctly writing all of the keys needed to switch themes. And, here’s the thing, they reported it. In the right place (for me), which is the issue tracker for Calamares. And they described the problem, and how to reproduce the problem, and what they expected.

Give yourself a pat on the back for writing good bug reports: it’s amazing what a difference there is between “it doesn’t work” and something that I can work with.

I experimented a bit – most of Calamares works on FreeBSD as well, so I can check in my daily live environment, as well as in various Linux VMs – and it turned out there is a difference between what the lookandfeeltool writes as configuration, and what the Plasma Theme KDE Control Module (KCM) writes. It’s not the kind of thing I would spot, so I’m doubly glad for downstream distros that see things differently.

Having confirmed that there’s a difference, I took the problem to the KDE Plasma developers – this is where it is really useful to live in multiple communities at once.

The folks at Bluestar Linux had written a workaround already, and with the description of what was going on, the KDE Plasma folks spent maybe an hour from start to finish (including whatever else goes on on a Wednesday morning, so coffee, sending memes and debugging other stuff in the meantime) and we now have two results:

  • Look-and-Feel tool has a bugfix that will land in the next release (Plasma releases are monthly, if I recall)
  • Calamares has a workaround that landed in the latest release (Calamares releases are every-two-weeks-if-I-can-swing-it)

So, as the person in the middle, I’d like to say “thanks” to downstream for reporting well and upstream for acting quickly. And then I can put on my sidestream hat and port the fix to FreeBSD’s packaging of KDE Plasma Desktop, too.

Missing the Point, PinePhone KDE Community Edition

Mon, 2020/11/16 - 12:22pm
If you expect the PinePhone to match up to your current pocket supercomputer/surveillance device, you don't get it.

KSeExpr 4.0.0 Released!

Mon, 2020/11/16 - 8:46am

Pattern generated by KSeExpr

Today, we’re happy to announce the release of KSeExpr 4.0.0!

KSeExpr is the fork of Disney Animation’s SeExpr expression language library that we ship with Krita. It powers the SeExpr Fill Layer that was done in Amyspark’s Google Summer of Code 2020 project.

The main changes

This is a ginormous release, but these are the most important bits:

  • We’ve rebranded the fork. This allows us to ship the library without conflicting with upstream.
    • The library as a whole is now namespaced (both in CMake and in C++) as KSeExpr.
    • The base library is now KSeExpr, whereas the UI is now KSeExprUI. The include folders have been flattened accordingly, to e.g. <KSeExpr/Expression.h> and <KSeExprUI/ExprControlCollection.h>.
  • We’ve changed the license to GPL v3. The original code was (and is still released) under a tainted form of Apache 2.0, which has brought us many headaches. We’ve followed LibreOffice’s lead and our changes are now released under this license.
  • All code has been reformatted and upgraded with C++14 features.
  • We’ve dropped the Python bindings, as well as pthread. If you just need the library (like us), all you need now is Qt and a C++14 compiler.
  • The existing optional LLVM evaluator has reached feature parity with the interpreter. We’ve patched missing functionality, such as automatic casting from 1D vectors to 3D and string operators.
  • Our fork fully supports static LLVM and Android. No more linking or API level issues.
  • Arc trigonometric functions and rand(), previously documented but non-existing in the runtime, have been added.

Source code: kseexpr-

Release hashes:

  • md5sum: 52264980708826d4c38469d6571236e4 kseexpr-
  • sha256: 3b2bfad1a60afb5efcea2c16e424203696e0440401e7169dec1db6df27ef2228 kseexpr-

GPG signature: kseexpr-

The tarball is now signed by Amyspark’s GitHub GPG key (FC00108CFD9DBF1E). You can get the key from their GitHub profile.

The full changelog for v4.0.0.0 (November 12, 2020)

Added
  • Add implementation of rand() function (a84fe56)
  • Enable ECM’s automatic ASAN support (16f58e9)
  • Enable and fix skipped imaging and string tests (e8b8072)
  • Standardize all comment parsing (c12bdb4)
  • Add README for the fork (abc4f35)
  • Rebrand our fork into KSeExpr (97694c4)
  • Automagically deploy pregenerated parser files (0ae6a43)
  • Use SPDX license statements (83614e6)
  • Enable version detection (e79c35b)
  • Use STL-provided mutex and drop pthread dependency (1782a65)
  • Reimplement Timer (20a25bd)
  • Complete the relicensing process (b19fd13)
  • Enable arc functions (08af2ef)
  • Add abandoned test for nested expressions (2af1db3)
  • Add abandoned type check tests (65064ad)
  • Implement equality between ColorSwatchEditables (8d864ce)
  • Add the abandoned typePrinter executable (2171588)
  • Add BSD-3 release scripts (fe11265)
  • Automatically deploy version changes (1ebb54b)
  • Fix printf format validation (a77cbfd)
  • Fix LLVM’s support for string variables (13c1dcd)
  • Detect and link against shared libLLVM (b57c323)
  • Fix compilation on Android (3969081)
  • Only build KSeExprUI if Qt5 is enabled (63a0e3f)
  • Sort out pregenerated parser files (ee47a75)
  • Fix translation lookup (e37d5f0)
  • Fix path substitution with pregenerated files (46acc2e)
  • Restore compatibility with MSVC on Windows (9a8fa7c)
  • Properly trim range comments (6320439)
  • Fix Vec1d promotion with LLVM evaluator (cd9651d)
  • Fix interpreter state dump in MinGW (ee2ca3e)
  • Fix pointless negative check on demos (7328466)
  • Fix SpecExaminer and add abandoned pattern matcher tool (366e733)
  • Clean up various strings (8218ab3)
  • Remove Disney internal widgets (part 1) (a30cfe5)
  • Remove Disney internal widgets (part 2) (14b2610)
  • Remove Disney internal widgets (part 3) (d3b9d34)
  • Remove Disney internal widgets (part 4) (bc65b77)
  • Remove Disney-internal libraries (da04f96)
  • Remove Qt 4 compatibility (bdef3e2)
  • Drop unused demos (884a977)
  • Assorted cleanup (6c5134f)
  • Assorted linkage cleanup (18af7e6)
  • Clean up KSeExpr header install logic (98b4c50)
  • Assorted cleanup in KSeExpr (735958f)
  • Remove more unused documentation (8a2ac53)
  • Remove KSeExprUIPy (68baed1)
  • Remove Platform header (6d6db30)
  • Cleanup and remove the plugin system (b3c4d48)
  • Remove unused files in KSeExprUI (6229b88)
  • Remove last remnants of sscanf/old number parse logic (5717cd6)
  • Remove leftovers of Disney’s Python tests (df24cc4)
  • General cleanup of the package lookup system (d332d35)
  • Clean up last remaining warnings (36ea2d5)
  • Remove unused variable in the parser (813d1a0)
  • Remove redundant inclusion (fb55833)
  • Set Krita build (library and UI only) as default (2deb17a)
  • Update pregenerated files (2c8481c)
  • Update and clean Doxygen docs (7df9011)
  • Make performance monitoring an option (6253bcd)
  • clang-tidy: Curve (5584b30)
  • clang-tidy: Vec (b02a8b0)
  • clang-tidy: Utils (f9b89ae)
  • Update README (05212cb)
  • clang-tidy: ExprType (e07d9d1)
  • clang-tidy: ExprPatterns (03010ff)
  • clang-tidy: ExprEnv (a22d3a3)
  • Modernize imageSynth demo to C++14 (474e268)
  • Modernize imageEditor demo to C++14 (a9c7538)
  • Modernize asciiGraph demo to C++14 (ec103be)
  • Modernize asciiCalculator demo to C++14 (8939da6)
  • Modernize imageSynthForPaint3d demo to C++14 (7658d75)
  • clang-tidy in KSeExprUI (85860c0)
  • clang-tidy: Context (574b711)
  • clang-tidy: ErrorCode (74860fb)
  • constexpr-ize noise tables (7335fc7)
  • clang-tidy: VarBlock (935da03)
  • clang-tidy: Interpreter (83ed077)
  • Split tests by category and use GTest main() (933f0cc)
  • clang-tidy: ExprColorCurve (675f160)
  • clang-tidy: ExprBrowser (84e2782)
  • clang-tidy: ExprWalker (5d24b2b)
  • clang-tidy: ExprColorSwatch (c667d97)
  • clang-tidy: ExprControl (9313acf)
  • clang-tidy: ExprControlCollection (fd0693d)
  • clang-tidy: ExprCurve (efeff98)
  • clang-tidy: ExprEditor (338dc3c)
  • clang-tidy: Evaluator (LLVM disabled) (3927858)
  • clang-tidy: ExprBuiltins (part 1) (8e8fe4f)
  • clang-tidy: ExprBuiltins (part 2) (05c7e70)
  • clang-tidy unused variables (58aef1d)
  • Make Examiner::post pure virtual at last (e5cc038)
  • clang-tidy: ExprNode (7da56ba)
  • clang-tidy: ExprLLVM (LLVM disabled) (aa34f51)
  • clang-tidy: ExprFuncX (6715180)
  • Modernize tests to C++14 (455c3b6)
  • clang-tidy Utils (ec8c1f0)
  • clang-tidy: Evaluator (LLVM enabled) (9e82340)
  • clang-tidy: ExprLLVMCodeGeneration (f23aca9)
  • :gem: v4.0.0.0 (5f02791)

The post KSeExpr 4.0.0 Released! appeared first on Krita.

We’re working on Dolphin’s URL navigator teething issues

Mon, 2020/11/16 - 1:12am

The change to move Dolphin’s URL Navigator/breadcrumbs bar into the toolbar hasn’t been received as well as we were hoping, and I wanted to let people know that we’re aware and will find a way to address the concerns people brought up. Hang tight!

Linux on the Desktop

Sun, 2020/11/15 - 3:42pm

2020 has been a fascinating year, and an exciting one for Kubuntu. There seems to be a change in the market, driven by the growth in momentum of cloud native computing.

As markets shift towards creative intelligence, more users are finding themselves hampered by the daily Windows or MacOS desktop experience. Cloud native means Linux, and to interoperate seamlessly in the cloud space you need Linux.

Kubuntu Focus Linux Laptop

Here at Kubuntu we were approached in late 2019 by Mindshare Management Ltd (MSM), wanting to work with us to bring a cloud-native Kubuntu Linux laptop to the market, aimed directly at competing with the MacBook Pro. As 2020 has progressed, the company has continued to grow and develop the market, releasing their second model, the Kubuntu Focus M2, in October. Their machines are not just being bought by hobbyists and tech enthusiasts; the Kubuntu Focus team have sold several high-spec machines to NASA via their Jet Propulsion Laboratory.

Lenovo launches Linux range

Lenovo also has a vision for Linux on the Desktop, and as an enterprise class vendor they know where the market is heading. The Lenovo Press Release of 20th September announced 13 machines with Ubuntu Linux installed by default.

These include 13 ThinkStation™ and ThinkPad™ P Series Workstations and an additional 14 ThinkPad T, X, X1 and L series laptops, all with the 20.04 LTS version of Ubuntu, with the exception of the L series which will have version 18.04.

When it comes to desktops, at Kubuntu, we believe the KDE desktop experience is unbeatable. In October KDE announced the release of Plasma-Desktop 5.20 as “New and improved inside and out”. Shortly after the release, the Kubuntu team set to work on building out Kubuntu with this new version of the KDE Plasma desktop.

KDE Plasma Desktop on Linux

Our open build process means that you can easily get your hands on the current developer build of Kubuntu Linux ‘Hirsute Hippo’ from our Nightly Builds Repo.

It’s been an exciting year, and 2021 looks even more promising, as we fully anticipate more vendors to bring machines to the market with Linux on the Desktop.

Even more inspiring is the fact that Kubuntu Linux is built by enthusiastic volunteers who devote their time, energy and effort. Those volunteers are just like you, they contribute what they can, when they can, and the results are awesome!

About the Author:

Rick Timmis is a Kubuntu Councillor and advocate. Rick has been a user of, and open contributor to, Kubuntu for over 10 years, and a KDE user and contributor for 20 years.

Introducing the PinePhone - KDE Community edition

Sun, 2020/11/15 - 11:00am
Experience the future of KDE’s open mobile platform KDE and Pine64 are announcing today the imminent availability of the new PinePhone - KDE Community edition.

Wayland Status for Plasma 5.20

Sun, 2020/11/15 - 12:00am

The KDE community has made some great progress on Plasma Wayland support during this release cycle. Some people on the Internet have qualified the Plasma Wayland session as stable, but I wouldn't go that far yet. I would qualify it as a beta preview; we still have a long way to go. In some configurations and workflows it might suit you, but certainly not all users for now.

I am going to highlight this progress below, but first I'd like to explain the technical challenges the KDE Wayland community Goal faces.

Why Wayland migration takes time

Wayland-related issues have different origins. Here are the main ones:

  • Missing Wayland protocols or missing protocol implementations.
    Wayland defines a way to exchange data between an application and the compositor (in Plasma that's KWin). Those exchanges are formalized with protocols, and Wayland provides quite a few standard ones. For instance, we have a protocol for when a GUI application starts and asks the compositor for some memory to draw its GUI in, and another for when the compositor gives the application the keyboard focus. For each particular window interaction between applications and the compositor we need such a protocol.
    Standard protocols are not enough to build a Plasma session upon: they are generic and meant to be usable by desktop and embedded systems alike. So KWin and Plasma have some specific protocols. Those, for instance, allow the Plasma taskbar to manage other windows.
    Although Wayland is no longer a new technology, its protocols mature slowly. Their definition takes time, they are validated through a review process, and they are updated as needed. After that step, developers must implement support for them in compositors, and sometimes also in applications or frameworks.
    If you are interested in this subject, I can again recommend Drew DeVault's Wayland book.
    These issues are often the cause of missing features, whether because a protocol does not exist yet or because we lack an implementation. The Task Manager window thumbnails were in this category.

  • Fill-the-blank issues: the X.Org display server encompasses a lot of things, from keyboard input to screensavers to screen management. Those features need to be reimplemented somehow, whether through Wayland protocols or completely new solutions. We have made good progress on this, but a few missing cases remain.

  • Immaturity issues: Wayland implementations and related APIs are relatively young, especially when you compare them to the X.Org Server. On the desktop those implementations have not been used much, preventing issues from being discovered and fixed. Furthermore, clients, and especially compositors, have a lot more responsibility compared to what they had with X, meaning a lot of new code is written, and new code means less stability. With the X.Org display server now abandonware, this should help motivate more users and developers to test and stabilise things. These issues cause crashes, misbehaviour or missing features.

  • Compatibility issues: Wayland is very different from what we had in X, and we need to adapt to this new paradigm in a lot of places. For instance in Plasma, with X.Org the taskbar could simply access the X.Org API and manage windows directly. Now it must ask the compositor through its window management API, an API that KWin defines and implements. One way to mitigate those issues is to have a proxy: notably, we try to maintain compatibility with X applications through XWayland.

And some issues can be caused by several of those origins at once.
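To give a feel for what the protocol mechanism described above looks like in practice, here is a minimal, hypothetical fragment written in the style of Wayland's XML protocol definition files. The protocol, interface and event names here are invented for illustration and do not match any real protocol:

```xml
<!-- Hypothetical fragment in the style of wayland.xml; the names
     "example_focus", "example_seat" and "keyboard_enter" are invented. -->
<protocol name="example_focus">
  <interface name="example_seat" version="1">
    <!-- the compositor tells the client that a surface gained keyboard focus -->
    <event name="keyboard_enter">
      <arg name="surface" type="object" interface="wl_surface"/>
    </event>
  </interface>
</protocol>
```

Code generators turn definitions like this into client- and compositor-side stubs, which is why every new kind of interaction needs its own protocol to be designed, reviewed and implemented before a feature can appear.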

Major Plasma 5.20 Wayland improvements
  • The Task Manager now has window thumbnails on Wayland. (Aleix Pol Gonzalez) merge request

  • Screen recording and screencasting now works on Wayland for compatible applications (e.g. OBS Studio and more to come) (Aleix Pol Gonzalez) merge request
    That's a fill-the-blank and missing-protocol issue: X.Org allows any program to record the screen with its own API. We needed a new protocol to expose a screencasting feature.

  • Klipper now uses the Wayland clipboard and works as you would expect in a Wayland session (David Edmundson) merge request

  • Implemented the Wayland input-method-unstable-v1 protocol, which opens the door for proper virtual keyboard support on Plasma Mobile, among other benefits! (Aleix Pol Gonzalez) merge request

Bug fixes Stability improvement
  • In a Plasma Wayland session, XWayland no longer brings down the whole session when it crashes; it just restarts normally (Vlad Zahorodnii, Plasma 5.20) merge request

  • Clearing the clipboard history on Wayland no longer crashes Plasma (David Edmundson, Plasma 5.20) bug

  • On Wayland, clicking on a Task Manager entry while that entry’s tooltip is visible no longer crashes Plasma (Vlad Zahorodnii, Plasma 5.20) bug

  • KWin no longer sometimes crashes when exiting or re-launching (Vlad Zahorodnii, Plasma 5.20) merge request

Feature Parity
  • On Wayland, context menus on the desktop and throughout Plasma now close when they’re supposed to (Vlad Zahorodnii, Plasma 5.20) bug

  • On Wayland, Task Manager tooltip window thumbnails are no longer overlapped by the app’s icon (Nate Graham, Plasma 5.20) bug

  • On Wayland, pressing Ctrl+Alt+Esc twice no longer results in the “Click a window to kill it” message being re-positioned into the top-left corner of the screen (Vlad Zahorodnii, Plasma 5.20) bug

  • KRunner is now more responsive to typed text on Wayland (Alexander Lohnau, Plasma 5.20) bug

  • Fixed the initialization of dmabuf textures in KWin on Wayland, which in practical terms should ensure that videos played in Firefox no longer sometimes display garbage instead of the video (Vlad Zahorodnii, Plasma 5.20) merge request

  • Clicking on a Task Manager thumbnail now activates that window, as you would expect (Marco Martin, Plasma 5.20) bug

  • The window stacking order is now always correct (Vlad Zahorodnii, Plasma 5.20) merge request

  • Context menus now always have shadows, as expected (Vlad Zahorodnii, Plasma 5.20) merge request

  • Improved the graphics performance on Wayland (Gang Wu, Plasma 5.20) by allowing KWin not to draw windows placed behind opaque others. merge request

  • It’s now possible to drag windows on Wayland from their empty areas, just like on X11 (Vlad Zahorodnii, Plasma 5.20) merge request

  • Plasma no longer sometimes crashes when you hover the cursor over an auto-hide Panel (Andreas Haratzis, Plasma 5.20) merge request

  • Fixed a case where KWin could crash when logging out of a Wayland session (Andrey Butirsky, Plasma 5.20) bug

  • Edge swipe gestures and showing a hidden panel by tapping the screen edge now work on Wayland (Xaver Hugl, Plasma 5.20) bug bug

  • The System Settings Accessibility page is now available (Michael Weghorn, Plasma 5.20) bug

  • Fixed the “Windows can cover” panel setting on Wayland (Xaver Hugl, Plasma 5.20) merge request

  • The last-used keyboard layout is now remembered on Wayland (Andrey Butirsky, Plasma 5.20) bug

  • Fixed a crash on Wayland when waking up the computer while multiple screens are attached (Andreas Haratzis, Plasma 5.20) bug

  • You can now enter full screen mode in MPV by double-clicking on the video (Benjamin Port, Plasma 5.20.0) bug

  • Previews for cursor themes now correctly display real-time previews as you hover your cursor over them on Wayland (David Redondo, Plasma 5.20) bug

  • Spectacle now lets you take a screenshot on Wayland without needing to click first to confirm it (Méven Car, Spectacle 20.12) merge request

Special shoutout to our newest ambitious and prolific KWin contributor Xaver Hugl.

Thanks to Nate's blog, which makes putting this together so much easier.

We are on Telegram and in #kde-wayland-goal on Freenode IRC. More on

Maui 1.2.0 Release

Sat, 2020/11/14 - 1:20pm

Today, we are pleased to announce the release of MauiKit and Maui Apps 1.2!

Are you a developer and want to start developing cross-platform and convergent apps, targeting, among other things, the upcoming Linux mobile devices? Then join us on Telegram: If you are interested in testing this project and helping out with translations or documentation, you are also more than welcome.

The Maui Project is free and open-source software incubated by the KDE Community and developed by Nitrux Latinoamericana S.C.

We are present on Twitter and Mastodon:

Stable release

The 1.2.0 version brings updates, new features, bug fixes, and an improved cross-platform and convergent experience across the apps and the framework.

For this release, the packages will be distributed directly from the MauiKit official webpage.

Packages for Linux (AMD64 and ARM AppImages), Android APKs, Debian DEBs, and Windows EXEs will be released soon. We will keep you informed when this happens.

Some apps might be missing features or have some bugs. If you want to report a bug or make a feature request, you can open a ticket at the corresponding project repo; you can find the repositories at the Maui Invent Group.

Follow the instructions on how to file an issue.

For the upcoming point releases, we will have performance and bug fixes for 1.2.1 and 1.2.2 before moving on to the 1.3 cycles, which will include new features.

For more detailed information, check out the previous posts from Maui Weekly:

Maui Weekly Report 5

Maui Weekly Report 6

Maui Weekly Report 7


As of now, the Maui Project provides a set of applications covering the basic standard utilities you might need on your personal computer, whether a phone, tablet, or desktop:

A file manager (Index), a music player (VVave), an image viewer (Pix), a text editor (Nota), a notes app (Buho), a terminal emulator (Station), a contacts manager (Communicator), a document viewer (Shelf), a video player (Cinema), a camera app (Booth), and a web browser (Sol).

The apps are in different states of maturity: most are stable, some are more feature-rich, and others are in early stages.

We have been testing the apps on the PinePhone with Plasma Mobile, and they work quite nicely. There is still work to be done on startup times, but we are on the right path to get you covered.

For this release cycle, we provide packages for Index, VVave, Pix, Nota, Buho, Station, Communicator, Cinema, and Shelf.

For users, we provide packages for Linux x64 and ARM (aarch64, a.k.a arm64) devices like Plasma Mobile, APKs for Android, and Windows installers for x64.

And for distribution maintainers, we provide links to the applications and the framework source code.

Index 1.2.0

What’s new

The Index file manager gained support for compressed file types and an embedded previewer for fonts, archives, and other multimedia file types.

There are now more reliable configuration options in the settings dialog, responsive support for split views, and a cleaner user interface.

Thumbnail previews for video files and PDFs are now available when enabled; this is only available on Linux, and other platforms might get this feature in future releases.

The search field has been merged with the filter field, and searches now correctly recurse into subdirectories.
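The effect of a recursive search filter can be sketched in a few lines of Python; this is an illustration of the idea only, not Index's actual implementation (which is C++/QML), and the function name is invented:

```python
from pathlib import Path

def search(root: str, term: str) -> list[str]:
    """Recursively collect files whose names contain the term (case-insensitive)."""
    term = term.lower()
    return sorted(str(p) for p in Path(root).rglob("*")
                  if p.is_file() and term in p.name.lower())
```

An empty term degenerates into a plain recursive listing, which is one way merging the search and filter fields can work: a single code path, with filtering as the trivial case.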

For more details, you can read the previous MauiWeekly blog posts linked above.

Known issues
  • Thumbnail previews for PDF and video file types are only available for Linux.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Data syncing for cloud accounts using the NextCloud API; some work exists already, but further work is needed to bring it back.


Pix 1.2.0

What’s new

Optimized grid thumbnail preview sizes and shapes for small screens.

A better settings dialog with more options to tweak the browsing experience.

Paths are now correctly added to or removed from the collection sources.

The database system has been dropped in favor of loading images directly from the file system with MauiKit's FileLoader for asynchronous loading, meaning your collection is always up to date.
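The idea behind this kind of asynchronous loading can be sketched with Python's standard library (the names here are hypothetical; MauiKit's FileLoader is a C++ class with a different API): scan the collection sources on a worker thread so the UI thread never blocks, and hand the fresh listing back through a callback.

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

IMAGE_SUFFIXES = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def scan_images(root: str) -> list[str]:
    """Walk a collection source and collect image paths."""
    return sorted(str(p) for p in Path(root).rglob("*")
                  if p.suffix.lower() in IMAGE_SUFFIXES)

_pool = ThreadPoolExecutor(max_workers=1)

def load_collection_async(root: str, on_ready):
    """Scan off the main thread; deliver the listing via callback when done."""
    future = _pool.submit(scan_images, root)
    future.add_done_callback(lambda f: on_ready(f.result()))
    return future
```

Because the listing is rebuilt from the file system on every scan, there is no database that can fall out of sync with the files on disk.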

And overall, nicer browsing views for tags (the Albums and Folders views), with collage delegates that preview the contents and update dynamically.

A new information dialog.

Known issues
  • The basic editing tools are only available for testing via the KImageEditor component, which is Linux-only.
  • When AutoReload is enabled, and a new image is added to one of the collection sources, the whole collection is refreshed; this is not the desired behavior.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Add simple image editing options; thanks to KImageEditor, tools like rotation, flip, crop, and other transformations are already implemented.
  • Syncing of your image gallery across devices using NextCloud.
  • Face detection using AI.
  • Implement image metadata information with exiv2; some initial work is done and should be ready for the next point release.


VVave 1.2.0

What’s new

VVave gained MPRIS support on Linux, so you can now use multimedia keys from a remote control or keyboard function keys to control your music, as with any other service that implements the MPRIS D-Bus API.

The backend for loading music files and creating playlists has been refactored to be faster.

The album and artist artwork providers have been fixed, and images are now correctly retrieved from services like Spotify, iTunes, Last.fm, MusicBrainz, and Genius.

Added a complete settings dialog with more options, like sorting, collection sources, artwork fetching, and many others.

The playlists view has been cleaned up and now uses the MauiKit Tagging feature, so tracks added to a playlist, aka a tag, can also be browsed from Index, the file manager.
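The playlists-as-tags idea boils down to one shared mapping from tag names to file URLs that any app can query. A minimal Python sketch of the concept (the class and method names are invented; MauiKit's Tagging component is the real implementation):

```python
from collections import defaultdict

class TagStore:
    """Playlists are just tags: names mapped to sets of file URLs."""

    def __init__(self):
        self._tags = defaultdict(set)

    def tag(self, name: str, url: str) -> None:
        """Tag a file -- in a music player, 'add this track to playlist <name>'."""
        self._tags[name].add(url)

    def urls(self, name: str) -> list[str]:
        """Everything carrying a tag -- a playlist in the music player,
        a browsable tag folder in the file manager."""
        return sorted(self._tags[name])

    def tags_for(self, url: str) -> list[str]:
        """All tags attached to one file."""
        return sorted(t for t, urls in self._tags.items() if url in urls)
```

Because the store keys on plain file URLs rather than app-specific IDs, a music player and a file manager can read the same data without knowing about each other.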

Several different small paper cut fixes were introduced, from the playback bar to the focus view.

The YouTube support has been dropped and is planned to be migrated into its own application.

Known issues
  • The cloud view artists roll does not work for filtering.
  • The streaming of cloud music files is not fully optimized and can take some time to load.
  • VVave uses the NextCloud Music app API, so that app must be installed in your NextCloud instance.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Faster streaming of cloud music files and full usage of the available API.


Nota 1.2.0

What’s new

Faster loading times of text documents.

Correctly applying syntax highlighting styles when switching.

Added more options to configure the text document, like tab spacing, font sizes, and family.

A focus mode to hide distracting interface elements.

An improved workflow for split views and tabs.

Performance fixes on creation and deletion of tabs and splits.

The views for browsing the recent and local text documents have been reviewed and now use the newest MauiKit components and display more relevant information.

Known issues
  • When the places sidebar is disabled, it can still occasionally be peeked open by mistake.
  • Rich text is still not correctly supported.
  • Some text document file types might not be recognized.
  • Missing markup support.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Add support for rich text file types and markup support.
  • Support for loading external plugins to extend the functionality; initial work is already done, and some plugins are available.
  • Implement find-and-replace functionality.


Buho 1.2.0

What’s new

Redesigned books view with a bigger book and booklet pages.

The modal dialog popups for creating or editing notes have been replaced with stacked pages, freeing up much more space.

The syncing and updating of notes now handles internal and external changes better.

The links view for collecting links has been dropped and is planned to be moved to Sol, the new Maui web browser app.

Known issues
  • Note syncing can get messed up if the internet connection is lost while notes are being modified; after the app is restarted, the notes might be modified and out of sync.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Support for markup should come directly from the MauiKit Editor and DocumentHandler components, benefiting Nota and any other app using them.
  • Support for creating to-do lists.
  • Improve the syncing of notes.
  • Support for taking handwritten notes with MauiKit's Doodle component, with image processing to translate the image to plain text.


Communicator 1.2.0

What’s new

Redesigned views and contact pages; the modal dialog popups were dropped in favor of stacked pages.

The backend was cleaned up, and adding or editing existing contacts is now smoother.

The Contact page now has contextual actions depending on the field: email, phone, or address.

Known issues
  • Android support was dropped due to some refactoring and code cleaning from MauiKit; Android support should be restored in the next point release.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Syncing of contacts using the NextCloud API.
  • Support for online accounts from NextCloud.

Station 1.2.0

What’s new

Correct handling of split views and tabs.

Gained a settings dialog for basic configurations.

Known issues
  • Station uses QmlTermWidget; the app requires version 1.0 exactly, as later versions can cause it to crash. This issue is still being investigated.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Add more options to the settings dialog for further configuration.
  • Improve touch UX interaction patterns.

Shelf 1.0.0

What’s new

Preview thumbnails in the grid view for the documents collection.

Updated interface using the newest MauiKit components, with AltBrowser to switch between list and grid views.

The Poppler library is now used via prebuilt binaries instead of building from source for Android, macOS, and Windows.

Added support for CMake and added desktop file manifest for Linux.

Known issues
  • Some of the viewer functions do not work.
  • Bookmarks of document pages do not work.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Add support for other document file types, such as ePub.

Cinema 1.0.0

What’s new

Cinema is a new app for playing videos and for browsing and organizing your video collection by creating albums, aka tags.

It has support for not only quick-playing videos but also for queueing videos into a playlist.

Known issues
  • Thumbnail previews for videos are only available on Linux.
  • Some video file types might not have support, depending on the platform and available plugins installed.

To report new issues, bugs, or feature requests, file a ticket at the corresponding project repository.

What's next

  • Polish the playback experience by allowing different backends, such as VLC.
  • Add MPRIS support.

MauiKit 1.2

MauiKit 1.2 introduces new components and a few more utilities to help build convergent applications. Some parts have been cleaned up or dropped to make MauiKit lighter.

There is still a big load of work ahead of us, and this new release takes us a step closer to that convergent future.

Since May 2020, when we shipped the first 1.1 stable release, we have been busy working on the framework components and utilities. Over 200 commits, 11,360 additions, and 9,430 deletions later, we are ready to introduce the new 1.2 release.

What’s new

The updated components include SettingsDialog, Page, ToolBar, ToolActions, and AboutDialog, among other fixes for the rest.

New components:

  • Separator
  • SettingTemplate
  • Platform – platform-dependent integration classes for the different platforms
  • FileLoader – for asynchronous loading of files in the local filesystem
  • Thumbnailer – image provider for generating image previews of different file types from a file URL

The FMH namespace implementation, which includes many useful utilities for file management, data modeling functions, etc., has now been moved to its own C++ file, clearing many compile warnings.

Some of the removed parts include FilePreviewer and the Tagging TagsModel (in favor of MauiKit's abstract modeling classes), as well as some Android platform functions like contacts loading, which only make sense in Communicator, the contacts app.

Known issues
  • The Thumbnailer currently depends on KIO, so it won't be available on platforms other than Linux.
  • The Platform utility still lacks many needed functions and properties for getting relevant info like the presence or lack of input hardware, like a mouse, physical keyboard, or touch screens.
  • Multi-selection with touch gestures is still not perfect and can accidentally cause scrolling.
What's next for MauiKit & Maui Apps 1.3
  • Add data syncing classes and an improved API for them.
  • Drop qmake in favor of fully using CMake.
  • Enable syncing of data and images on Pix, files in the Index file manager, and of contacts in Communicator.
  • Make the initial beta releases of Booth and Sol.

Our plans for 2021 are:

  • Fully utilize CMake.
  • More feature-rich applications
  • Improve data synchronization using NextCloud.
  • Improve performance.
  • Improve the UI cohesion on all supported platforms.
  • Move beta apps to stable.

The post Maui 1.2.0 Release appeared first on Nitrux — #YourNextOS.