I’ve visited both FOSDEM and SCALE over the last few weeks, where I spoke with dozens of people and gave talks about ownCloud 8. We’ve been getting a lot of positive feedback on the work we’re doing in ownCloud (thanks!), and that has been very motivating.

Does it scale?
A question which comes up frequently is: “What hardware should I run ownCloud on?” This sounds like a simple question, but if you give it a second thought, it is actually not so easy to answer. I had a small CuBox at the booth as a demonstration that this is one way to run ownCloud. But development boards like the Raspberry Pi and the CuBox might give the impression that ownCloud is only suitable for very small installations – while in reality, the world’s largest on-premise sync and share deployment has brought ownCloud to 500,000 students! So, ownCloud scales, and that introduces the subject of this blog.
If you look up the term scalability on Wikipedia, you get the explanation that software scales well if you get double the performance out of it when you throw twice the hardware at the problem. This is called linear scalability, and it is rarely if ever achieved.

The secret to scalability
ownCloud runs on small Raspberry Pis for your friends and family at home, but also on huge clusters of web servers where it can serve hundreds of thousands of users and petabytes of data. The current Raspberry Pi doesn’t deliver blazing fast performance, but it works, and the new Raspberry Pi 2 announced last month should be great hardware for small ownCloud deployments. Big deployments like the one in Germany or at CERN are usually spread out over multiple servers, which brings us to the secret sauce that makes scalable software possible.
The secret to building great scalable software is to avoid central components that can become bottlenecks, and to use components that can easily be clustered by just adding more server nodes.

How ownCloud scales
The core ownCloud Server is written in PHP, which usually runs together with a web server like Apache or IIS on an operating system like Linux or Windows. There is zero communication needed between the application nodes, and the load can be distributed between different application servers by standard HTTP load balancers. This scales completely linearly: if you want to handle double the load because you have double the users, you can just double the number of application servers, making ownCloud perfectly scalable software.
Unfortunately, an ownCloud deployment still depends on a few centralized components that have the potential to become bottlenecks to scalability. These components are typically the file system, the database, the load balancer and sometimes session management. Let’s talk about each of those and what can be done to address potential performance issues in scaling them.

File system scalability
The file system is where ownCloud stores its data, and it is thus very important for performance. The good news is that file systems are usually fast enough not to slow down ownCloud. A modern SSD, RAID setup, NFS server or object store can deliver data rates that are a lot faster than typical internet uplinks, so you rarely bump into limits with data storage. And if you do, there are solutions like GlusterFS which help you scale performance quite easily. On the application server, a properly set up and configured temp directory is important to achieve good performance, as data has to flow via the ownCloud installation to the user (sync clients or web interface).
Sometimes, this isn’t enough. If you have to handle petabytes of data, ownCloud 8 offers a solution developed together with CERN. This solution lets ownCloud act as a storage broker that directs read and write requests straight to the storage node where the data resides. The result is that no actual payload flows through the ownCloud servers anymore, and the storage performance depends entirely on the data storage itself. Thanks to this solution, the file system storage should never be the bottleneck for ownCloud.

Database scalability
ownCloud uses a database to store all kinds of metadata, so it depends on a database which is very fast and scalable to keep performance up. ownCloud can use enterprise databases like MSSQL and Oracle DB, which offer all kinds of fancy clustering options. Unfortunately, they are proprietary and not necessarily cheap. Luckily, there are Open Source alternatives like PostgreSQL and MySQL/MariaDB which also offer impressive clustering and scalability options. Especially MySQL combined with a Galera cluster is a very nice and fast option that is used by a lot of the larger ownCloud installations.
Note that scalability doesn’t always mean big! Scalability also means that ownCloud should run fine on very tiny systems. Some embedded systems like the first Raspberry Pi had very limited RAM. In such situations it is nice to use SQLite, which is embedded in the PHP runtime and has a very tiny memory footprint, saving precious system resources. This is all about choosing the right database for the right system size!

Load balancer scalability
If you have more than one application server, then you need a way to distribute the incoming requests to the different servers. ownCloud uses a standard protocol like HTTP, so off-the-shelf solutions can be used for load balancing. There are standard and enterprise-grade appliances from companies like F5 that are very fast and reliable, especially if used redundantly with a heartbeat. Nowadays there are also very good and affordable options like the Open Source HAProxy on top of a standard Linux system. This also works very well and is very fast. If you really have a lot of traffic and don’t want to buy hardware appliances, you can combine several redundant HAProxy servers with DNS round robin. This has to be done very carefully so that you don’t compromise your high availability. There are several blogs and articles out there describing how to set up a system like this.
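As an illustrative sketch (server names and addresses are made-up placeholders, not something from the post), a minimal HAProxy configuration distributing traffic over two application servers with sticky sessions might look like this:

```
frontend owncloud_http
    bind *:80
    default_backend owncloud_app

backend owncloud_app
    balance roundrobin
    # Sticky sessions: pin each client to one application server
    cookie SRV insert indirect nocache
    server app1 192.0.2.11:80 check cookie app1
    server app2 192.0.2.12:80 check cookie app2
```

For real redundancy you would of course run at least two such HAProxy instances with a heartbeat between them.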
Session management scalability

There are two fundamentally different ways to do session management, and both are supported by ownCloud. One is local session management on the application servers; the other is centralized session management. I don’t want to discuss the pros and cons of both solutions here, because there are a lot of aspects to consider. But with regard to scalability, I want to mention that the simpler solution of local session management together with sticky connections has the benefit that it does not need a central component. This means that it provides linear scalability. If centralized session management is used, then something like memcached is recommended and supported by ownCloud, because it can also scale easily internally.
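If you go the centralized route, ownCloud’s config.php can point the distributed cache at a Memcached cluster. A rough sketch (the host names are placeholders; check the ownCloud admin manual for the exact keys supported by your version):

```php
<?php
// config/config.php (fragment)
$CONFIG = array(
  // Local cache per application server
  'memcache.local' => '\OC\Memcache\APCu',
  // Distributed cache shared by all application servers
  'memcache.distributed' => '\OC\Memcache\Memcached',
  'memcached_servers' => array(
    array('cache1.example.com', 11211),
    array('cache2.example.com', 11211),
  ),
);
```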
Summary

ownCloud has been designed to scale from tiny embedded systems like a Raspberry Pi for a few users, to a standard Linux server for a small workgroup, to a big cluster for several hundred thousand users. A wide variety of solutions and technologies can be used to make this possible, and if you are interested in ways to do this, then have a look at the ownCloud documentation for more information, and at the third-party resources and white papers available on owncloud.com.
I'm holidaying in Alaska for a few weeks around June. Has anyone been there who can share the things we should totally not miss when visiting?
Last week I had to work with Docutils, a Python library to turn reStructuredText (.rst) into documentation. I was using it to extend Sphinx-based documentation I am setting up. It was quite a frustrating experience: despite lots of searching, I could not find any simple, working examples demonstrating the Docutils API usage. To save me (and possibly you) some frustration next time I need to use this library, I am writing down a few examples.
My goal was to create a custom Docutils Directive. A Directive is a class which can be referred to from a .rst file to generate custom content. Its main method is run(), which must return a list of Docutils nodes, representing the custom content. Each node can itself contain other nodes, so run() actually returns a list of node trees.
Available nodes are listed in the Docutils Document Tree. reStructuredText is powerful and expressive, which means creating simple text structures can require quite a lot of nodes, as we shall see.
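As a minimal sketch (the directive name and its content are made up for illustration), a custom Directive can be as small as this:

```python
from docutils import nodes
from docutils.parsers.rst import Directive

class HelloDirective(Directive):
    """A made-up example directive emitting a single paragraph node."""

    def run(self):
        # run() must return a list of nodes; each node may be a tree.
        return [nodes.paragraph(text='Hello from a directive')]
```

To actually use it from a .rst file, it would still have to be registered, e.g. with `directives.register_directive()` or via a Sphinx extension.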
Let's start with a "Hello World", a simple paragraph:
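A minimal sketch of such a paragraph node:

```python
from docutils import nodes

# A paragraph node; `text` becomes a child Text node.
para = nodes.paragraph(text='Hello World')
```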
An error I made a lot when starting was to pass the text of the paragraph as a positional argument instead of as the text keyword argument. That does not work because the first positional argument of paragraph() is the raw source: the string which would produce the paragraph if it came from a .rst document.
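A small sketch contrasting the two calls:

```python
from docutils import nodes

# Wrong: the first positional argument is the raw source,
# not the displayed text, so this paragraph renders empty.
broken = nodes.paragraph('Hello World')

# Right: pass the displayed text via the `text` keyword argument.
working = nodes.paragraph(text='Hello World')
```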
Next example, let's create some sections, the equivalent of this .rst source:
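A sketch of one such section (the title and body text are placeholders):

```python
from docutils import nodes

# Rough equivalent of this .rst source:
#
#   My section
#   ==========
#
#   Section contents.
section = nodes.section()
section += nodes.title(text='My section')
section += nodes.paragraph(text='Section contents.')
```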
Let's now create a bullet list, like the one which would be created by this .rst:
This is done with a bullet_list node, which contains list_item nodes, which themselves contain paragraph nodes.
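A sketch of the structure just described (item texts are placeholders):

```python
from docutils import nodes

# Rough equivalent of this .rst source:
#
#   - Item 1
#   - Item 2
bullet_list = nodes.bullet_list()
for text in ('Item 1', 'Item 2'):
    item = nodes.list_item()
    item += nodes.paragraph(text=text)
    bullet_list += item
```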
And now for something a bit crazier, what about a table? The rough equivalent of:
This one is a bit more involved:
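A sketch of a small 2x2 table with one header row (cell texts are placeholders); note how much scaffolding a table needs: a tgroup, colspecs, a thead and a tbody, each row made of entry nodes containing paragraphs:

```python
from docutils import nodes

def build_row(cells):
    # Each entry must contain body elements such as paragraphs.
    row = nodes.row()
    for text in cells:
        entry = nodes.entry()
        entry += nodes.paragraph(text=text)
        row += entry
    return row

table = nodes.table()
tgroup = nodes.tgroup(cols=2)
table += tgroup
for _ in range(2):
    tgroup += nodes.colspec(colwidth=1)

thead = nodes.thead()
thead += build_row(('Header 1', 'Header 2'))
tgroup += thead

tbody = nodes.tbody()
tbody += build_row(('Cell 1', 'Cell 2'))
tgroup += tbody
```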
That's it for today, hope this was helpful for some of you. If you want to experiment with this, here is the source code for all these examples: docutils_snippets.py.
PS: I am no Docutils expert; this article may suggest wrong ways to do things, so please leave a comment if you notice any errors.
We have just opened the registration for Akademy-es 2015.
This year we are piggy-backing on the Akademy 2015 registration system, since Akademy-es 2015 happens in the same place just 2 days before, so we thought it made sense to have a common registration for both.
More info at https://www.kde-espana.org/akademy-es2015/registro.php
See you at A Coruña!
On March 23, Wrishiraj Kaushik announced SuperX 3.0, a Linux operating system for personal computers. This major version includes a great number of features, updated applications, new artwork, and lots of under-the-hood improvements.
Dubbed Grace, SuperX 3.0 is based on Kubuntu 14.04 LTS (Long Term Support). It features Linux kernel 3.13.0 and the KDE 4.13.3 desktop environment, showcasing a flat theme with bright contrasting colors. SuperX 3.0 includes the latest available versions of the Mozilla Firefox and Chromium web browsers, the Mozilla Thunderbird email and news client, the LibreOffice office suite, the GIMP image editor, the FileZilla file transfer client, the VLC media player, the Musique music player, the Minitube YouTube downloader, the OpenShot video editor, and the Telegram messaging client. Apt-fast has also been integrated into SuperX 3.0 for faster apt-get functionality. The USB Modem Manager has been added in order to manage USB dongles, Indian language input support is provided via the IBus keyboard input method, an on-screen keyboard supporting both English and Indian languages has been added, and there is a beautiful splash screen.
A great slideshow is available at SOFTPEDIA.
We plan to close the Doodle for the Randa Meetings date selection at the end of this week. So if you plan to participate please vote on the date that best fits you! And keep in mind two things:
- You might bring your partner or family with you to Randa. We started this last year and people really liked it (and Randa is a nice holiday place in the Alps too – near the world-famous Zermatt).
- If you see a lot of well-known names on the Doodle don’t think you shouldn’t be part of this. We always want to see new people with fresh energy and inspiring thoughts and ideas in Randa.
So please add yourself as quickly as possible or write me an email (fux AT kde) or ping me on IRC (unormal).
After a long period of silence I’m back with some news: the Telepathy-Morse project is still alive, and the first release is on its way.
- TelegramQt — Qt-based library which supports messaging, contact-list operations and other Telegram client capabilities. The purpose of this library is to abstract away the Telegram protocol details and provide a convenient API for client application development.
- Telepathy-Morse — Qt-based Telegram client (connection manager) for the Telepathy communication framework. It uses TelegramQt under the hood.
Note: In order to use Morse, you need to have a complementary Telepathy Client application, such as KDE-Telepathy or Empathy.
Note: Telepathy-Morse depends on the latest telepathy-qt version (0.9.6), which might not be available in your distribution yet.
Now, let’s talk about the development, current progress and plans for the near future.
What is expected to work (at a high level):
- Contact list with first/last names
- Contact avatars
- Contact management (you can add/delete contact by its phone number)
- Personal messaging (one to one)
- User typing events
- Message acknowledgment
- Own presence (online, offline, hidden)
- Loading unread messages on connect
- DBus activation
- Sessions (meaning you don’t have to enter a confirmation code again and again)
- Restoring the connection on network problems
Known issues:

- The initial low-level encryption sometimes generates bad random values, which leads to a “connection doesn’t work” issue.
- Long messages can not be sent (the TelegramQt gzip packing implementation is missing; the limit is about 400 characters, while the Telegram protocol limit is 4095 characters)
Both TelegramQt and Telepathy-Morse are Qt4 and Qt5 compatible.
Information about the CMake build: by default, CMake looks for a Qt5 build. You can pass the USE_QT4 option (-DUSE_QT4=true) to build against Qt4.
Telepathy-Morse almost works on Sailfish devices, but there is one show-stopper: the authentication dialog invocation doesn’t work. We do basically the same thing as Telepathy-Gabble (which is known to work), but it has no effect. Sadly, I don’t have a Sailfish device and couldn’t test it myself. Big thanks go to Teemu Varsamaki, who managed to build Morse for Sailfish, find out about the authentication issue and, nonetheless, use the client with an uploaded copy of the auth-info file from a desktop.
Telegram Blackberry Contest, Teletonne
I’ve been contacted by the “Telegram client for Blackberry” developers. They used TelegramQt in their Teletonne client and won the second prize. More details at https://telegram.org/blog/bb-results. Now it looks like they’re giving up on further competition, while it would be really easy to get the next prize for “the developers who make the most progress”, as there are many major improvements in TelegramQt. Sadly again, I have no good phone :-) to take part.
There are no tarballs yet. I’m not familiar with the KDE release process, but I’m going to tackle it soon. (I hope to fix the Sailfish issue first.)
I see three basic directions in the Telegram client development:
- Group chat
- Secure chat
- Multimedia messages, files receiving/upload
Group chat support is mostly implemented in the TelegramQt library, but the TelepathyQt service bindings require changes to support this type of group chat. Of course, the TelepathyQt client bindings have everything we need, so clients (e.g. KDE Telepathy) work well with group chat (e.g. paired with telepathy-gabble, the Jabber connection manager).
In turn, Telepathy-Morse is the first “more than a proof of concept” Qt-based Telepathy connection manager, so the TelepathyQt services were previously not used much. Telegram group chat has “non-addressable” rooms and, as I see it, the services might need a little redesign to support them.
Secure chat: as far as I know, it technically looks like an embedded Telegram session with the same cryptography methods which are already implemented for the basic Telegram connection. This is not a priority task for me (at least at the moment).
Multimedia messages: file download capability is already implemented and used for avatars and initial photo-message support. I need to tune it a bit to support “big file” operations. Outgoing media messages need some significant work, because one can not just upload files “as is”, but has to meet Telegram requirements, such as format, resolution, etc. I have no firm decision on the API and TelegramQt’s responsibility yet.
Implementing the last task means an almost automatic “done” for the self-avatar changing (uploading) capability.
I have never heard of an open source Telepathy stack component with multimedia message support, so there will probably be a lot of work. The Telepathy specification has some hints on this subject.
Short note about TelegramQt test application
The test application is not intended to be a Telegram client for end users, but is supposed to be a feature-full, developer-oriented application with easy access to some artificial operations, just to make sure that TelegramQt works as expected. Because TelegramQt documentation is nonexistent, the test app source can be useful as an API usage example as well.
Group chat with multimedia messages in TelegramQt TestApp.
Morse and the TelegramQt TestApp in the process of one-to-one chatting. As you can see, there are avatars in the contact list, typing status in ktp-text-ui, message timestamps, delivery reports and a little allusion.
- KDE Project and especially David Edmundson for making this project possible.
- David Edmundson again, for moral and technical support and for the ktp-accounts-kcm plugin for Morse.
- Matthias Gehre for his work on the TelepathyQt services.
- The commentators on my previous post last autumn, especially Alberto, who eventually made me continue this project instead of delaying it again and again under the pressure of everyday cares.
- Teemu Varsamaki for actual code contribution, ideas, testing.
- Official TelegramQt repository
- Official Telepathy-Morse repository
- My personal TelegramQt repository (WIP, unstable, some force-push probability)
- My personal Telepathy-Morse repository (WIP, unstable, force-push warning again)
- My TelepathyQt fork with experimental features (Regular rebase on upstream master, so unavoidable force push)
- TelepathyQt code generator (can be useful for the TelepathyQt services developers)
I’m so sorry for the slow development; it’s a consequence of my “always busy” reality. The project is not abandoned and will not be abandoned. I hope you’ll see an update in the next few months. Thank you for reading.
On Monday, 30/3, the foss-gbg group will meet and hack on 3D printers. The invitation follows below – tickets are free.
Welcome to the foss-gbg hack evening!

We meet at 17:00 on 30/3 and learn about 3D printers.

We will be visited by Göran Frykberg, who runs a 3D hub in Mölndal. He will talk about printer technologies and materials. He will also show some everyday products and explain why 3D printers are here to stay.

Around eight o'clock we move on and socialize over a beer.

Pelagicore provides the venue and offers light snacks during the event.

Free tickets are available on Eventbrite. The number of seats is limited, so first come, first served!
Göran Frykberg, Johan Thelin och Mikael Söderberg
Cutelyst, the Qt/C++ web framework, just got a new release: 0.7.0 brings many improvements, bug fixes and features.
Most notable among the changes is a shiny new Chained dispatcher that finally got implemented; with it, the Cutelyst Tutorial got finished, along with a new command line tool to bootstrap new projects.
* Request::bodyData() can return a QJsonDocument if the content-type is application/json
* Chained dispatcher
* Fixes to QStringLiteral and QLatin1String usage
* More debug information
* Some API changes to match Catalyst’s
* Some rework of the API handling Plugins (not the best thing yet)
* Moved to GitLab (due to Gitorious being acquired by it)
* “cutelyst” command line tool
For the next release I’ll try to make QML integration a reality, and try to improve the Plugins API, which is right now the part that needs more love; I would love to hear some advice.
Download it here.
The KDE Gardening team has at last finished its Love Project on KRecipes with the 2.1 release, which can be found here: http://download.kde.org/stable/krecipes/2.1.0/src/krecipes-2.1.0.tar.xz.mirrorlist
After 1723 posts and several years of use, I've switched my blog from the proprietary, Google-owned-and-hosted Blogger to a self-hosted instance of the free software Ghost blogging application. If you are reading this in one of the blog aggregators I'm carried on, you may have seen some old postings showing up as a result; apologies if this happens.
I've been watching the Ghost project for a while, waiting for it to reach the point where I could reasonably use it for my needs. It got there recently, and since I also had the opportunity to move to a new server, I took the plunge.
Here's what I really like about it, and why I went with it:
- It is free software. Obviously this is #1 in the list
- It is actively and openly developed, moving at a good clip
- I was able to import my Blogger blogs into it quite easily
- It is focused on minimalism
- It runs as a separate application
While the first three may be obvious in terms of why they are important, I'd like to take a moment to talk about the last two.
It may sound like stating the obvious, but: I want a blogging system for my blog. Not a web site builder. Not a content management system. A blog application. As a counterexample, Wordpress has long exited the land of the simple blogging system and become a website-building suite. There is nothing wrong with that, but it means it diverges from my desire for a clean, minimal and elegant system for blogging. I've run a few Wordpress sites over the last few years, so I'm not just speaking hypothetically but from first-hand experience.
With Ghost, you write your blogs in a very plain and simple full-screen, two-pane user interface using Markdown. No word-processor-wannabe toolbars, no switching between keyboard and mouse, no menus everywhere. Just a text editor, a live preview next to it, and all the editing and formatting you could want in a blog. Tags live in a bar below, configuration in a slide-in drawer on the right, and there is a simple button to save and publish.
Yet in this simple system it still manages to have multiple author support, tag and author specific RSS feeds, static pages, theming ...
Since it runs as its own process, I can have systemd manage it as a separate service listening on a local socket with haproxy forwarding traffic to it. Upgrading the blog becomes a simple matter of setting up a parallel install, testing it, and when I'm happy with it, changing the haproxy routing and restarting haproxy. Zero downtime, even if things go wrong. If a security problem or some other issue arises, I can shut down just the blog, and none of the other things I have running on the server are affected. I can even move it to another machine if desired, without interruption.
While setting it up, along with other "must have" tools for me such as Kanboard, it reminded me of some aspects of the Kolab architecture, specifically how it is built around a swarm of services that take care of different aspects of the task of "collaboration suite" in concert with each other.
Finally, I'd like to give a shout-out to the author of the Willsong II theme. I purchased the theme after trying his no-cost Willsong 1 theme; it looks nice and professional, and I both wanted to support his efforts and get some of the nice features of Willsong II. It also happens to use the KDE Oxygen font by default, which I thought was pretty spiffy. Unfortunately I ran into some small problems with the theme after installing it. I wrote the author and he responded almost immediately and quickly identified and fixed a bug in the theme. He even customized it a bit for me, even though I didn't ask for that. So: great customer support, and he's got a few other themes with a shared common core that he's working on. If you're looking for a good Ghost theme I recommend giving his efforts a look.
Last week I merged in a few important changes for the upcoming KWin 5.3 release. The rootless Xwayland support is integrated, which means we are a huge step closer to managing Wayland clients. Rendering, input and the cursor are already using the Wayland code paths and will be shared with proper Wayland clients. I have already started working on code for that and have it working quite nicely, but decided to delay the integration until Plasma 5.4.
This means that kwin_wayland starts a Wayland server, but except for Xwayland there is no application which can properly connect to it yet. Nevertheless, it’s an important step, and it also allows us to test the code in a better way.
In addition, I worked on better support for nested compositors. So far one had to start Weston in order to test kwin_wayland. This is of course not optimal, as it makes the development setup more difficult than it has to be. So last week I looked into developing a new backend which can render to an X11 window. This is comparable to Weston, which can be run nested on X11. The backend is relatively straightforward: it can render to the created window using either the OpenGL or the QPainter compositor, accepts input events and delegates them, and passes the cursor from Wayland windows to the X11 window. The tricky part is that we cannot use any of our X11-specific libraries to create the window: we don’t use the xcb QPA, so no support from Qt, and we cannot use KWindowSystem, as it only allows one xcb_connection and that one is already needed for the Xwayland window manager. So I had to do things myself, and as I consider this just a development feature, it’s probably the worst X11 client in existence. (Long term, KWindowSystem should be changed to support multiple connections; that would be very useful for unit tests.)
The new backend triggered a small refactoring which makes it easier to implement further backends, like e.g. a plain framebuffer or a drm/kms backend. As a small reminder: there’s an open GSoC idea for implementing the drm/kms backend, and the application period closes soon.
Anyway, with the nested KWin it’s now possible to create a kwin_wayland instance on the X Server using kwin_x11, and on that kwin_wayland one can create yet another kwin_wayland, so that we have a KWinception:
To start such a nested compositor use:
kwin_wayland --windowed --xwayland
Please watch the debug output for:
X-Server started on display :1
This tells you the ID of the created X Server (normally the next free ID) and allows you to start an application on the nested KWin:
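For example (kwrite here is just an arbitrary X11 application; any client will do):

DISPLAY=:1 kwrite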
KWin_Wayland picks the windowing system depending on which environment variables are defined. For DISPLAY it uses X11; for WAYLAND_DISPLAY it uses Wayland. If both are defined, Wayland is preferred. If one wants to specify it explicitly, one can use the command line arguments --x11-display or --wayland-display. E.g.
kwin_wayland --windowed --xwayland --x11-display=:0
For X11 there are also the options --width and --height to specify an initial size (the default is currently 1024×768). On Wayland these options are currently not available; the window always opens fullscreen.
Please remember that this is pretty new code and the support is still incomplete. Features might be missing or broken. At the current state this is expected, and because of that we do not yet accept bug reports. If you find a bug which you think should be fixed, please open an editor and fix it.
– Desktop actions
– Dolphin magic
– Actions and language
– Network connection edit
– Trash auto cleanup
– Undo widget removal (great for the whoops moments)
Tagged #kubuntu, of course.
Another year, another opportunity to annoy Nuno Pinheiro with some terrible “engineer art” that I produced many moons ago (see header image). This, my friends in KDE, is why you should be going to Akademy and why you should talk there. Akademy: The Social Heartbeat of KDE Akademy, in short, is the annual conference of the KDE community. If you are… Read more →
Para una revisión de Kubuntu 15.10 Beta en español (For a review on Kubuntu 15.10 Beta in Spanish) …
This is an interesting transitional period in the Qt world for desktop applications. We are in the phase where QML is becoming better and better for use in a desktop context, even for full-fledged applications.
We noticed that there are some bits, derived from the many years of experience in Plasma, that can be very useful for every application developer out there. They fall pretty much into these categories:
- where to install and how to access QML files.
- integrating KDE pieces, such as the translation system
- how to write an application that is 100% QML and how to distribute it
- how to write an application that is a mix between C++ and QML
- how to write a KCM for systemsettings that is purely based on QML (and doesn’t even link to QtWidgets)
Years ago, with the need to have plasmoids implemented in scripted languages, we also needed a simple way to distribute those plasmoids. They are, in the end, just a bunch of files that can be any kind of data: the actual source code of the scripts, or any needed data assets such as graphics and sounds.
This brought the class Plasma::Package into libplasma; it is the way to both install/uninstall a plasmoid and access any data file or script file from plasmoids.
The problem then posed itself almost identically for things like scripting support in any generic Qt application, or any pack of add-ons, such as graphics or sound themes.
The same goes for applications based upon QML, which end up being composed of two parts: a purely C++ part with the central logic, and the QML files together with any assets that may be needed.
One interesting feature is also that the same files (QML, assets or whatnot) can have versions specific to a particular device. For instance, an application may share 99% of its code between the normal desktop version and one optimized for touchscreens, intended for tablets; just the few affected QML files would be duplicated between the desktop and tablet versions.
Since Frameworks 5.6, there is a new tier 2 framework: KPackage. It offers what Plasma::Package offered in libplasma, but with fewer dependencies, and it is usable by any Qt application that can use a tier 2 framework.
KDeclarative is the KF5 center for all things related to extending QML.
The framework provides several QML import plugins, such as imports specific to a particular KDE Framework.
It also offers two C++ libraries: libkdeclarative and libquickaddons.
libkdeclarative focuses on KDE related functionalities of the QML engine:
- KDeclarative: installs things into the QML engine, such as a KIO-based network access manager and the i18n() translation functions.
- QmlObject: a class similar to QQuickView, in which you can just set the URL of a QML file or a KPackage instance, and it loads it, instantiates the QQmlEngine, etc. The difference with QQuickView is that it’s not graphics-based and is not a QWindow; you’ll just have the instantiated QObject *. Useful if your project is not graphical, or if you already have a view and just want to instantiate a new thing to reparent into the view you have.
The other one is libquickaddons that focuses on utilities related to QtQuick, the actual graphical components.
- ManagedTextureNode: a QSGSimpleTextureNode that manages its own texture, making it simpler to implement your own QQuickItem.
- ImageTexturesCache: helps to manage textures by creating images and reference-counting them; use it if your QQuickItem starts from a QImage.
- KQuickAddons::ConfigModule: the base for KControl modules based on QML, without dependencies on QtWidgets; more on that later :).
qmlpackagelauncher is a tiny command line tool provided by KDeclarative: it’s used to launch an application that is written in QML only (no main application executable).
The application’s QML is intended to be distributed in a KPackage structure, with the QML engine initialized with KDeclarative, therefore having things like the i18n() functions available.
The QmlObject class of KDeclarative can now load files from a KPackage, either by setting an existing KPackage instance on it or by just setting the KPackage plugin name. This makes it very simple to load a QML file from a plasmoid-like KPackage structure installed on the disk, such as in your main():

KDeclarative::QmlObject *qmlObj = new KDeclarative::QmlObject;
qmlObj->loadPackage("org.kde.example");
For a full application it is recommended to use the ApplicationWindow component from QtQuickControls: it is its own QWindow, so it's not even necessary to create a QQuickView or QQuickWindow, and the really needed C++ part really is just those two lines.

QML based KControlModules
An important part of the ongoing redesign of the Plasma Desktop by the VDG also passes through writing (and rewriting) modules for Systemsettings or KInfoCenter in QML, to make it easy to beautify those frankly often outdated UIs.
One important thing for a mass-migration like that: ConfigModule has the same API as KCModule, the old base class for kcontrol modules, but is a pure QObject, making a future QML-only Systemsettings version possible.
Start with your old KCModule subclass, keep all the logic in it, but start to scrape off all the QWidget-based UI bits and give the class a nice property-based, QML-friendly API to read and set the values that will eventually go into the config files.
The actual loading from and saving to the config file is done just as in KCModule, by reimplementing ConfigModule::load() and ConfigModule::save().
The QML-based UI part is provided by a KPackage with the same name as the component name of the KAboutData of the ConfigModule instance. The QML code can access the ConfigModule instance as the "kcm" global object, just as "plasmoid" is accessible from within the QML code of plasmoids, as well as via the ConfigModule QML attached property.
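To make the porting recipe above concrete, here is a minimal sketch of a ported module. The class name, property, and member names are all hypothetical; only the KQuickAddons::ConfigModule base class and its load()/save() virtuals come from the text above:

```cpp
#include <KQuickAddons/ConfigModule>

// Hypothetical example module: every name below except the base class is made up.
class ExampleModule : public KQuickAddons::ConfigModule
{
    Q_OBJECT
    // The QML UI reads and writes this property instead of touching any QWidgets;
    // in the KPackage's QML it is reachable as kcm.featureEnabled.
    Q_PROPERTY(bool featureEnabled READ featureEnabled WRITE setFeatureEnabled NOTIFY featureEnabledChanged)

public:
    using KQuickAddons::ConfigModule::ConfigModule;

    bool featureEnabled() const { return m_featureEnabled; }
    void setFeatureEnabled(bool enabled)
    {
        if (m_featureEnabled == enabled) {
            return;
        }
        m_featureEnabled = enabled;
        emit featureEnabledChanged();
    }

    // Load and save the config file, just as with KCModule.
    void load() override { /* read m_featureEnabled from the config file */ }
    void save() override { /* write m_featureEnabled to the config file */ }

Q_SIGNALS:
    void featureEnabledChanged();

private:
    bool m_featureEnabled = false;
};
```

The matching QML in the KPackage can then bind to kcm.featureEnabled directly, keeping all logic in C++ and all presentation in QML.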
Consequences
Unfortunately, the mentioned and many other flaws in our thinking have consequences for decision making in our society, especially when there's money to be made. The lobby of weapon manufacturers is rather stronger than that of companies creating anti-slip mats for showers; and for car manufacturers, safety is merely a factor increasing the cost of cars, so there's little incentive for them to hammer on that issue either. The combination of our innate inability to judge the likelihood of these and other things to harm us and the financial pressure on politicians results in massive over-spending on what is in essence irrelevant or even dangerous and harmful to our society. The NSA is one example, the stupidity around net neutrality is another, and the war on drugs is a third rather prominent one. And now Ebola, of course - a disease so unlikely to kill you, you're probably more likely to be killed by a falling piano.
I think it is pretty clear, as I mentioned above, that politics and business happily abuse our lack of rationality. But probably more often, 'the system' causes issues by itself, as the insanely huge political divide in the US shows. It pays off for the media to put extreme people in front of their audience - and today, we have a country where you can't discuss politics at the office because people see the world so vastly differently that only conflict can come out of a conversation. Think of the biases I discussed earlier: these world views aren't likely to get fixed easily, either.
Never attribute to malice that which is adequately explained by stupidity.
I don't think anybody set out to create this divide - but it is with us now.
Now indeed, the media are part of a system working against us. They get rewarded for creating an impression of problems; and they are often uninformed and biased themselves. As John Oliver pointed out, we don't put a believer in regular human abductions by aliens in front of a camera to debate a scientist, attempting to give both sides of the debate an equal chance! We pity the few who still don't get that this, and many other issues, are settled.
Yet this is what often happens in areas where science has long come to a conclusion. Not just the moon landing but also vaccinations, global warming, evolution and a great many more things. Take the recent "Snowden wants to come home" media frenzy!
I don't think any of that is intentional. It's the system rewarding the wrong things. We are part of that 'system': we prefer news that supports our view point; and we prefer new and exciting things - a balanced point of view is boring.
Quality decision making gets harder and harder.
Dealing
One way of dealing with disagreement has been to essentially declare all facts 'up for discussion'. It all depends on your point of view, proponents of this idea say. But reality isn't as malleable as relativists make it out to be. You can choose to leave your house through the front window on the 3rd floor, but gravity's a bitch. It's nice that some want to value everybody's opinion, but the universe imposes limits to that.
We have to realize that the world is real. People can be right or wrong about it and the choices we make matter!
As a society, we need to find new ways to make decisions in a healthy way. We've done good things - we eliminated polio and smallpox, diseases that were around for many thousands of years, and nobody has had them in a long, long time. River blindness is hopefully next, and others will follow. We also drove half the world's animals to near extinction and are abusing this planet to the point where it will become a much more hostile place in a century or two unless we change something. You can guess I'm not much into libertarianism - it is clear that we can and do impact the world, and going it all alone does not solve the tragedy-of-the-commons style issues we have. There's a problem - and the inherent complexity of the world is certainly part of it, as is our lack of rationality.
How do we deal? I used to be an optimist - when I discovered the Internet, I thought it would democratize knowledge (it has) and news (not so much). Social media, sites like Digg where people vote on what the 'best news' is, it seemed a new and improved reality. No more single points of failure. No journalists who can be bought or oppressed. And then there were open source communities, with their flat structure of decision making, ideals of equality and meritocracy. Democracy would thrive!
Reality was harsh. The Internet has allowed us to hide in our corners with like minded people. It has lots of good stuff (if you're not into economy or net neutrality, this is a good read on both) but the Internet didn't kill conspiracy theories, it fuelled them. And open source works for some, but has its own issues of inequality (and that is just one problem).
Perhaps technology can help - Google has apparently found ways to find out what's true and what isn't. I'm not so sure. I wonder what it would do to sites like this proving that even with mere facts you can create conspiracy nonsense.
Methods to the rescue
I think the solution has to be found in a system or a process in the way the scientific method works. Wikipedia describes it as follows:
The scientific method is a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge.
I prefer to call it a process, rather than a 'body of techniques'. The key is that, left to common sense, humanity decides that life emerges from lifeless matter until, more than 2000 years later, Louis Pasteur shows it really, really doesn't (except for this). The scientific method thus aims to take human decision making out of the equation, or at least rigorously deal with the biases that cloud our judgment. Lots of books have been written on the subject of the philosophy of science - I got my portion by way of Chalmers, worth reading.
However it works, the key is that while science relies on people and thus makes mistakes, it has a process for dealing with these mistakes, correcting them over time. Confidence is gained over long periods, and the result is that we have been largely refining the knowledge gained since the scientific method became widely used, rather than rewriting the world as we know it over and over again. Yes, Newton's theories of physics still stand - quantum mechanics and Einsteinian physics merely refine them, providing better results in areas Newton can't reach. Uncertainty exists in science, but only at the 'edges', where new knowledge is created. While many facts of evolution are debated, since the evolutionary synthesis we've settled on a core which is as solid as Newton's ideas about gravity; climate models might be imperfect today, but much of what we discovered does not have to be debated over again and again.
Method for decision making
We have methods, systems, processes for decision making, too. Democracy is one, the trias politica part of it. But it has flaws and needs some refinement, ideally in the opposite direction of Citizens United. I don't think we can make a Philosopher King system work, so whatever we come up with has to be a bureaucracy, evolved from today's system. I think decentralization is part of the solution (mayors should rule the world?), but we live together on this planet so there have to be over-arching structures, too.
I know there is research being done on the topic. And we've already come up with some strategies, like the devil's advocate.
What exactly the solution should look like - don't ask me. I'm a psychologist, I can merely tell you not to trust people and their gut instincts. If this feels like an anti-climax, well, it should. We will have to come up with solutions together - not one blogger alone!
Soon?
But we should hurry.
I believe, with self-described plutocrat Nick Hanauer, that the pitchforks are coming. Perhaps the militarization of police and governments (the NSA in particular) disrupting online security are attempts of governments to prepare for social unrest.
What I'm certain about is that humanity can't continue the way it is functioning now. If social inequality doesn't put a stop to it, the depletion of natural resources or religious fundamentalists will. The Dutch would say: the wall will turn the ship.
Let's see how hard we'll hit it.
A freshly-baked release of digiKam Recipes is ready for your reading pleasure. The new version features the Export and Share Photos with digiKam recipe which offers a comprehensive overview of digiKam's sharing and exporting capabilities.
The new Extract and Examine Metadata from digiKam's Database recipe explains how to pull and analyze photographic metadata stored in the digikam4.db database. Finally, the Rescale Photos with the Liquid Rescale Tool recipe explains how to use the Liquid Rescale tool for intelligent rescaling. As always, the new release includes minor updates, fixes and tweaks.
The Call for Papers deadline for Akademy 2015 is just 10 days away. So submit a talk now: you know you have cool stuff to share, so do a small write-up and tell the world about the awesome new stuff you're working on.
And of course don't forget to register: as always it's free, but it lets us know how many of you nice people are going to come over ;)
Ah and we also have the badges available, thanks to Alba Carro for the nice pictures :)
The call for papers for Akademy 2015 closes on the 31st of March, which is scarcely over a week away.
If you want to talk at Akademy it is important to submit your application on time.
We have a large number of short and lightning talks available again this year, which is a fantastic opportunity to give everyone a brief overview of what's been happening in your project over the past year. I would like to see every active project presenting something.
Don't leave it too late and miss out.
Instructions on how to submit can be found at https://akademy.kde.org/2015/cfp.