Planet KDE - http://planetKDE.org/

Planet KDE on Telegram

Sun, 2016/03/27 - 10:48pm

Planet KDE on Telegram

Hey there guys, you can follow Planet KDE as a Telegram channel:
telegram.me/planetkde
…or type “@planetkde” into the global search field in your Telegram client.

(I have set up a bot to mirror the RSS feed to a Telegram channel.)

You can also see KDE announcements at @kdenews.

that’s all, ciao!

Workshop de Software Livre 2016 – call for papers and tools

Sun, 2016/03/27 - 8:34pm

WSL 2016 banner

The call for papers and the call for tools for the WSL – Workshop de Software Livre (Workshop on Free Software), the academic conference held together with FISL – Fórum Internacional de Software Livre (International Free Software Forum), are open!

WSL publishes scientific papers on several topics of interest to free and open source software communities, such as social dynamics, management, development processes, motivations of contributor communities, adoption and case studies, legal and economic aspects, social and historical studies, and more.

This edition of WSL has a specific call for tools, for publishing papers that describe software. This specific call ensures that the software described has been peer-reviewed, is consistent with the principles of FLOSS, and that its source code will be preserved and accessible for a long period of time.

All accepted and presented papers will be published in the WSL open-access repository. This year we are working hard to provide an ISSN and DOIs for the publications.

The deadline is April 10. Papers can be submitted in Portuguese, English or Spanish.

See the official WSL page to get more information.

Wiki, what’s going on?

Sun, 2016/03/27 - 4:17pm

It’s been two weeks since we got back from the sprint at CERN: we came back with a lot of work done but much more still to do, and that’s why I’ve decided to post something about our post-sprint activity so far.

These weeks have been full of important work for us, and since I’m part of the editor team let me say a few words about what we are working on.

We are focusing very much on organizational aspects such as the introduction of new users, our internal structure, solving some bugs related to how math is rendered in some browsers, and the conversion from TeX (or any other format) to MediaWiki. In my view our strength is the fact that we are a young – and extremely willing – community: all these points are very important to us, since we understand that the future of the community depends on them, and that’s why we are trying to do our best to take care of these aspects and not leave anything to chance.

One of the things that I really appreciate is the attention we are paying to new users: we really care about them! We thought about creating the role of “tutors” (i.e. more experienced users) to help newcomers, because we want everybody to be properly introduced into the community and to feel free to ask about any doubt. Moreover, we’ve also decided to make the editing experience even more user-friendly than it is by adding buttons and interactive tools to the personal user page and the editing environment. In short: new users are really important to us and we care about them, and that’s why this point has been so fundamental in these weeks.

We are working, we are growing and the best is yet to come: #operation1000 is coming! 😀

Bye,

Matteo

The article Wiki, what’s going on? appeared first on Blogs from WikiToLearn.

Free Krita tutorials out every week on GDquest

Sun, 2016/03/27 - 2:09pm

Game Art Quest volume 1 banner

(Guest post by Nathan Lovato)

Game Art Quest, the 2D game art training series with Krita, is finally coming out publicly! Every week, two new videos will be available on YouTube: one on Tuesdays, and one on Thursdays. This first volume of the project won’t cost you a penny. You can thank the Kickstarter backers for that: they are the ones who voted for a free release.

The initial goal was to produce a one-hour long introduction to Krita. But right now, it’s getting both bigger and better than planned! I’m working on extra course material and assignments to make your learning experience as insightful as possible.

Note that this training is designed with intermediate digital art enthusiasts in mind: students, dedicated hobbyists, or young professionals. It might be a bit fast for you if you are a beginner, although it will allow you to learn a lot of techniques in little time.

Subscribe to GDquest on YouTube to get the videos as soon as they come out: http://youtube.com/c/gdquest

Here are the first two tutorials:

Easy access to Kdenlive builds on *ubuntu

Sat, 2016/03/26 - 10:04pm

Hurrah, Launchpad now handles Git, and can generate daily builds directly from this clone!
So Ubuntu users now have 3 repositories to get the latest Kdenlive:

As all these require Qt5/KF5, builds are activated for Wily (for some time) and Xenial (the LTS to be released soon)... Live in the present ;)

This work is now owned by a team on Launchpad; you are welcome to join if you want to co-maintain the packages. I deleted my own unmaintained PPA, and invite users to switch to one of the above.

If daily builds are available on other distributions, maintainers are welcome to advertise their work on our wiki, and we will be glad to relay the info here!

Happy testing and editing!

Connect to your server in your LAN via your WAN URL: an OpenWRT solution.

Sat, 2016/03/26 - 6:59pm
So, I run my own ownCloud. Figures, right?

Can't reach the server from the LAN

Of course, I sync files on my desktop between my laptop and phone. The desktop client is set up with the IP address of the server in my living room. But my phone and laptop, configured to connect to my public DynDNS URL (so they work when I'm traveling), can't connect from the home network. Triple-uncool. I like the photos from my phone to be auto-uploaded when I connect to wifi at home; and more importantly, my laptop should sync when I get home from travel!

Danimo blamed my router - a Cisco (Linksys) E4200. That was (once upon a time) an expensive, high-end router. Sadly, having been abandoned by its manufacturer, it has become an expensive, high-end liability. I can't even log into the administration interface; browsers tell me that the connection is insecure. There are more issues, like the slow WLAN-LAN transfer speeds I experienced, and I'm not even talking about security here. Linus once eloquently expressed his feelings towards NVIDIA; I now feel that resentment towards Cisco.

OpenWRT to the rescue

I learned my lesson. No router that can't run an open source firmware will get into my house. While I don't feel any need whatsoever to fiddle with things that do their job, Linksys screwed up here: they left me on broken software long before I had any need for new hardware.

After some digging, I learned that TP-Link has been (mostly inadvertently) a decent citizen for OpenWRT fans. So even if they abandoned their routers like Linksys/Cisco did, there would be a future. I bought a TP-Link Archer C7. Affordable, and it can run OpenWRT.

After setting it up initially, things worked. For a day. After that, no amount of fiddling could make it work again. Magic. Today I gave up on the original firmware and installed OpenWRT. It was easy - as easy as upgrading to a new TP-Link firmware: download the OpenWRT firmware, go to the upgrade interface, select it, hit start. A while later you can visit the web interface. Which is a tad more complicated, but not much - and noticeably more capable. It didn't take me any longer than on the original firmware to set up my wifi and guest networks.

How to make it work

But it didn't solve the problem. I had to resort to a web search and found a neat trick, which I'm happy to share (assuming 192.168.1.11 is your server on your LAN):
  • Log into your router over ssh
  • Add to your /etc/dnsmasq.conf file the following: address=/example.com/192.168.1.11
  • Add to your /etc/hosts file: 192.168.1.11 example.com
A few minutes later, things will work.

Essentially, the DNS server in OpenWRT will hand out your local server's address to local clients... It thus breaks when you use a different DNS server than the one provided by the router via DHCP.

I'd be happy to hear about other and/or better solutions. Heck, this might only work for a day, or might be horrible, or maybe I changed something else which made it work. What do I know...

Weekly Update: Integrate Cantor with LabPlot

Sat, 2016/03/26 - 4:19am
This is a weekly update on my GSoC 2015 project, Integrate Cantor with LabPlot. As I mentioned in my last post, I would be starting my work with the integration of Cantor's UI inside LabPlot. I would like to inform my fellow developers that I have integrated the UI successfully. I present the screenshots hereafter.



For now I have created a Maxima session; in the upcoming week I will work on the other backends, implement the properties section, and add the different actions needed for managing Cantor's session. I hope to integrate Cantor's worksheet completely by the midterm evaluation.

GSOC 2015

Sat, 2016/03/26 - 4:19am
I got accepted for the project Integration of Cantor into LabPlot in this year's Google Summer of Code (GSoC), under mentor Alexander Semke, with the KDE organization. Looking forward to ..





The first phase of GSoC 2015, the community bonding period, has ended. I have tried my best to interact with my mentor and community members, but I was mostly occupied with my university examinations during most of that time.

Looking forward to the coding period, I have started working on the integration of Cantor's UI into LabPlot. I will push all my commits to the integrate-cantor branch[1] of LabPlot.

Looking forward to learning, coding and developing this summer!

[1] https://projects.kde.org/projects/kdereview/labplot/repository/show?rev=integrate-cantor

GSoC 2015 (Mid Term Evaluation)

Sat, 2016/03/26 - 4:19am
The midterm evaluation week has almost come to an end, and the midterm evaluation deadline is today. This post describes what I have achieved so far in my project "Integration of Cantor with LabPlot" and what I plan to do next.

Below are the screenshots of LabPlot.

As you can see in the above screenshots, Cantor's session is integrated. The variable manager, Print, Print Preview and all other relevant actions from Cantor have also been implemented in LabPlot. With all of these implemented, I have successfully achieved my midterm evaluation target. I have also been working on improving the code written so far and incorporating my mentor's suggestions.
I will now move on to extracting variables from Cantor's session so that we can use them to create new plots inside LabPlot. After that I will work on saving Cantor's data along with LabPlot's data, so that the user can save and load both worksheets.
I have learned a lot during this journey so far. I learned how discussion plays an important role in the development of a piece of software, and I learned about some of the best practices that should be followed and their importance in real-life code.
That is not all! My upcoming weeks will see more coding, and hence I am prepared to learn more during that time.

Final evaluation

Sat, 2016/03/26 - 4:19am
As we all know, the final evaluation of our GSoC projects is next week. I have completed my project and would like to show how the integration between Cantor and LabPlot works.
For this demonstration I will be using Python3 and numpy.

Let's first create a Python3 session inside LabPlot.


I will be referring to the script from here.
So let's execute the script and transfer every variable into lists, since LabPlot for now supports only the list and tuple data structures. The screenshot below shows the final script.


Every variable that is either a list or a tuple is converted to a column inside LabPlot; these columns are child objects of the main CAS session. They can then be used to generate plots as shown below:


Finally, we choose the columns/variables we want to use and plot the graph. The following graph is generated if we plot the "t vs s1" and "t vs s2" columns from the above data:


I have created a parser for every backend, which is used to convert the variables into columns, but testing of the Sage and R backends is still left. I will be testing those two parsers extensively during this week.

Below are some more screenshots showing zoomed-in and zoomed-out plots.




Moreover, the user can now save all the data of the session as well as the plots inside a single LabPlot .lml file, which can be loaded again the next time the user wants it.
I hope that with the integration of the two applications, users will have a richer experience while plotting and using a CAS.

FOSSASIA 2016

Sat, 2016/03/26 - 4:18am
Thank you, FOSSASIA, for inviting me to share my experience with developers from around the globe. It was an exciting place to meet and make new friends, share our experiences and learn from others' experiences. I would also like to thank Google and KDE for helping me financially and sending me to this year's FOSSASIA. The event was very well planned and executed. I hope the participants found all the talks and workshops useful and will put them into practice someday.
All the speakers had something inspirational for the audience in their talks, and the audience seemed enthusiastic too. I was amazed to see small kids interested in software development. I helped Jigyasa Grover and Mohammad Umair run an Android session for school kids; helping small kids get started with Android development was what excited me the most.

It was amazing to meet Stephanie and Catherine in person. They shared their GSoC experience and how such a big event has been managed and successfully organised continuously for the past 12 years.

I also met Mike McQuaid and had a healthy chat about Homebrew and GitHub (where he currently works).

It was a great experience; I hope to keep in touch with all the new developers I met :)

Kerning feature for Malayalam fonts

Fri, 2016/03/25 - 8:28pm

At SMC, we’ve been continuously working on improving the fonts for Malayalam – updating to the newer OpenType standard (mlm2), adding new glyphs, supporting new Unicode code points, fixing shaping issues, reducing complexity and the compiled font size, involving new contributors, etc.

Recently, scratching my own itch, I decided it was high time to fix the annoyance that the combination of the Virama (U+0D4D ് ) with quote marks (‘ ’ “ ” etc.) used to overlap into an ugly amalgam in all our fonts. The issue is quite prominent when you professionally typeset a book or article in Malayalam using XeTeX or SILE. FontForge’s tools made it easy to write OpenType lookup rules for horizontal pair kerning to allow more space between the Virama ( ് ) and quote marks. You can see the before and after effect of the change with the Rachana font in the screenshot.

Rachana kerning: before and after

Other fonts like AnjaliOldLipi, Meera and Chilanka also got this feature, and those will be available with the new release in the pipeline. I have plans to expand this further to cover the post-base forms of വ ( ്വ ) and യ ( ്യ ) and the abundant stacked glyphs that Malayalam has.


Tagged: fonts, smc

Kubuntu 16.04 LTS Beta 2

Fri, 2016/03/25 - 12:16am

Kubuntu 16.04 Beta 2

We are preparing Kubuntu Xenial Xerus (16.04 LTS) for distribution on April 21, 2016. With this Beta 2 pre-release, you can see what we are trying out in preparation for our next stable version. We would love to get some testing by our users.

Plasma 5, the next generation of KDE’s desktop, has been rewritten to make it smoother to use while retaining the familiar setup. The fifth set of updates to Plasma 5 is the default in this version of Kubuntu. Plasma 5.6 should be available in backports soon after the release of the LTS.

Kubuntu comes with KDE Applications 15.12 containing all your favorite apps from KDE, including Dolphin. Even more applications have been ported to KDE Frameworks 5, but those which aren’t should fit in seamlessly. Non-KDE applications include LibreOffice 5.1 and Firefox 45.

Please see the Release Notes for more details, where to download, and known problems. We welcome help to fix those final issues; please join the Kubuntu-Devel mailing list[1], or just hop into #kubuntu-devel on freenode to connect with us.

1. Kubuntu-devel mail list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

Ways to Help Krita: Work on Feature Requests

Thu, 2016/03/24 - 8:06pm

“OpenBlackMischief PhotoPaintStormTool Kaikai has a frobdinger tool! Krita will never amount to a row of beans unless Krita gets a frobdinger tool, too!”

The cool thing about open source is that you can add features yourself, and even if you cannot code, you can talk directly with the developers about the features you need in your workflow. Try that with closed-source proprietary software! But often the communication goes awry, leaving both users with bright ideas and developers with itchy coding fingers unsatisfied.

This post is all about how to work, first together with other artists, then with developers, to create good feature requests: feature requests that are good enough that they can end up being implemented.

For us as developers it’s sometimes really difficult to read feature requests, and we have a really big to-do list (600+ items at the time of writing, excluding our own dreams and desires). So a well-written feature proposal is very helpful for us and will lodge the idea better in our consciousness. Conversely, a demand for a frobdinger tool just because another application has it, so Krita must have it too, is not likely to get far.

Writing proposals is a bit of an art form in itself, and pretty difficult to do right! Asking for a copy of a feature in another application is almost always wrong because it doesn’t tell us the most important thing:

What we primarily need to know is HOW you intend to use the feature. This is the most important part. All Krita features are carefully considered in terms of the workflow they affect, and we will not start working on any feature unless we know why it is useful and how exactly it will be used. Even better, once we know how it’s used, we as developers can start thinking about what else we can do to make the workflow even more fluid!

Good examples of this approach can be found in the pop-up palette using tags, the layer docker redesign of 3.0, the onion skin docker, the time-line dockers and the assistants.

Feature requests should start on the forum, so other artists can chime in. What we want is for a consensus about workflow and use-cases to emerge, something our UX people can then try to grok and create a design for. Once the design emerges, we’ll try an implementation, and that needs testing.

For your forum post about the feature you have in mind, check this list:

  • It is worth investigating first whether Krita already has similar functionality that might be extended to solve the problem. We in fact kind of expect that you have used Krita for a while before making feature requests. Check the manual first!
  • If your English is not very good or you have difficulty finding the right words, make pictures. If you need a drawing program, I heard Krita is pretty good.
  • In fact, mock-ups are super useful! And why wouldn’t you make them? Krita is a drawing program made for artists, and a lot of us developers are artists ourselves. Furthermore, this gets past that nasty problem called ‘communication problems’. (Note: If you are trying to post images from photobucket, pasteboard or imgur, it is best to do so with [thumb=][/thumb]. The forum is pretty strict about image size, but thumb gets around this)
  • Focus on the workflow. You need to prepare a certain illustration, comic, matte painting, you would be (much) more productive if you could just do — whatever. Tell us about your problem and be open to suggestions about alternatives. A feature request should be an exploration of possibilities, not a final demand!
  • The longer your request, the more formatting is appreciated. Some of us are pretty good at reading long, incomprehensible texts, but not all of us. Keep to the ABC of clarity, brevity, accuracy. If you format and organize your request we’ll read it much faster and will be able to spend more time giving feedback on the actual content. This also helps other users understand you and give detailed feedback! The final proposal can even be a multi-page PDF.
  • We prefer it if you read and reply to other people’s requests rather than starting from scratch. For animation we’ve had requests for importing frames, instancing frames and adding audio support from tons of different people, sometimes even in the same thread. We’d rather you reply to someone else’s post (you can even reply to old posts) than list it amongst other newer requests, as that makes it very difficult to tell the requests apart, and it turns us into bookkeepers when we could have been programming.

Keep in mind that the Krita development team is insanely overloaded. We’re not a big company, we’re a small group of mostly volunteers who are spending way too much of our spare time on Krita already. You want time from us: it’s your job to make life as easy as possible for us!

So we come to: Things That Will Not Help.

There’s certain things that people do to make their feature request sound important but are, in fact, really unhelpful and even somewhat rude:

  • “Application X has this, so Krita must have it, too”. See above. Extra malus points for using the words “industry standard”, doubly so if it refers to an Adobe file format.
    We honestly don’t care if application X has feature Y, especially as long as you do not specify how it’s used.
    Now, instead of thinking ‘what can we do to make the best solution for this problem’, it gets replaced with ‘oh god, now I have to find a copy of application X, and then test it for a whole night to figure out every single feature… I have no time for this’.
    We do realize that for many people it’s hard to think in terms of workflow instead of “I used to use this in ImagePainterDoublePlusPro with the humdinger tool, so I need a humdinger tool in Krita” — but it’s your responsibility when you are thinking about a feature request to go beyond that level and make a good case: we cannot play guessing games!
  • “Professionals in the industry use this”. Which professionals? What industry? We cater to digital illustrators, matte painters, comic book artists, texture artists, animators… These guys don’t share an industry. This one is peculiar because it is often applied to features that professionals never actually use. There might be hundreds of tutorials for a certain feature, and it still isn’t actually used in people’s daily work.
  • “People need this.” For the exact same reason as above. Why do they need it, and who are these ‘people’? And what is it, exactly, what they need?
  • “Krita will never be taken seriously if it doesn’t have a glingangiation filter.” Weeell, Krita is quite a serious thing, used by hundreds of thousands of people, so whenever this sentence shows up in a feature request, we feel it might be a bit of emotional blackmail: it tries to get us upset enough to work on it. Think about how that must feel.
  • “This should be easy to implement.” Well, the code is open and we have excellent build guides, so why doesn’t the feature request come with a patch then? The issue with this is that it is very likely not actually all that easy. Telling us how to implement a feature based on a guess about Krita’s architecture, instead of telling us the problem the feature is meant to solve, makes life really hard!
    A good example of this is the idea that because Krita has an OpenGL-accelerated canvas, it is easy to have the filters be done on the GPU. It isn’t: the GPU-accelerated canvas is currently pretty one-way, and filters would be a two-way process. Getting that two-way process right is very difficult, and it is also the difference between GPU filters being faster than regular filters or them being unusable. And that problem is only the tip of the iceberg.

Some other things to keep in mind:

  • It is actually possible to get the features you need into Krita outside of the Kickstarter sprints by funding them directly via the Krita Foundation; you can mail the official email address linked on krita.org for that.
  • It’s also actually possible to start hacking on Krita and make patches. You don’t need permission or anything!
  • Sometimes developers have already had the feature in question on their radar for a very long time. Their thinking on the topic might already be quite advanced, and then they might say things like ‘we first need to get this done’, or write an incomprehensible technical paragraph. This is a developer being in deep thought while they write. You can just ask for clarification if the feedback contains too much technobabble…
  • Did we mention we’re overloaded already? It can easily be a year or two, three before we can get down to a feature. But that’s sort of fine, because the process from idea to design should take months to a year as well!

To summarize: a good feature request:

  • starts with the need to streamline a certain workflow, not with the need for a copy of a feature in another application
  • has been discussed on the forums with other artists
  • is illustrated with mock-ups and examples
  • gets discussed with UX people
  • and is finally prepared as a proposal
  • and then it’s time to find time to implement it!
  • and then you need to test the result

(Adapted from Wolthera’s forum post on this topic).

Plasma Weather widget: add your favorite weather data provider

Thu, 2016/03/24 - 12:25pm

So Plasma 5.6 has seen the revival of the Weather widget that is part of the KDE Plasma Add-ons module (revival as in: ported from the Plasma4 API to the Plasma5 API).
(It even got the interesting honour of being mentioned as the selling point in the headline of a Plasma 5.6 release news article by a bigger German IT news site, oh well :) )

For 5.6, this revival concentrated on restoring the widget to its old state, without any bugs. But that’s not where things are to end now…

And you, yes, you, are invited and even called upon to help with improving the widget, especially by connecting it to more weather data providers, including your favourite one.

For a start, let’s list the current plans for the Weather widget when looking at the next Plasma release, 5.7:

  • Overhaul of look (IN PROGRESS)
  • Breeze-style weather state icons (IN PROGRESS)
  • temperature also displayed in the compact widget variant, e.g. on the panel (TODO)
  • support for more weather data providers (YOUR? TODO)
  • privacy control which weather data providers are queried on finding weather stations (YOUR? TODO)
Overhaul of widget look

The KDE meta sprint at CERN (of the groups WikiToLearn, Plasma, VDG and techbase wiki cleanup) at the beginning of this March, which I mainly attended for the techbase wiki cleanup work, was of course also a great occasion to work face-to-face with Plasma developers and VDG members on issues around the Weather widget, and so we did. Marco helped me learn more about Plasma5 technologies, which resulted in some small bugs in the widget getting fixed still in time for the Plasma 5.6 release.
Ken showed me the drafts for the look & design and the config of the Weather widget that he had prepared before the sprint, which we then discussed. Some proposals for the config could already be applied in time for the Plasma 5.6 release (those that did not need changed UI texts, so as not to mess up the translators’ work). The rest of it, especially the new look & layout of the widget, is soon going to be transferred from our minds, the notes and the photos of the sketches on the whiteboard at CERN into real code.

Breeze-style weather state icons

Ken also did a Breeze-style set of the currently used weather state icons. While touching the icons, a few icon naming issues are going to be handled as well (e.g. resolving inconsistent naming patterns or deviations from the weather icon names actually defined in the XDG icon naming spec; see Table 12, Standard Status Icons). This should be done soon as well.

Temperature display in compact widget variant

One thing that has also been missing from the old version of the Weather widget is the display of the current temperature in the compact widget variant, where currently only the current weather condition is shown (e.g. when on the panel). Some weather data providers (like wetter.com) do not provide that detail, so there is nothing to display, but others do, and it is often pretty nice to know (a clear sky can happen at any temperature, so it’s good information about whether, and which, jacket to put on before stepping outside). First code is drafted.

Now, finally, the things where you can improve matters for yourself and others:

Supporting more weather data providers

The Weather widget does not talk to any weather data providers directly. Instead it talks to a weather dataengine (currently part of the Plasma Workspace component), to query for any weather stations matching the entered location search string when configuring the widget, and to subscribe to the data feed for a given weather station from a given weather data provider.
That weather dataengine itself also does not talk directly to any weather data providers. Instead it relies on an extendable set of sub-dataengines, one per weather data provider.

The idea here is to have the one weather dataengine act as an abstraction layer (heh, after all this is KDE software ;) ) which maps all weather data services onto a generic one, so any UI, like the Weather widget, only needs to speak one language to whoever delivers the data.
Which works more or less, because of course there are quite a few different weather data specifications out there, what else did we expect :P And the interpretations of those specs possibly also vary, as usual.

((You might think: “Well, screw that over, there is only one user of the weather dataengine, so integrate it directly into the Weather widget!”
While that might be true right now, it does not need to stay this way. There are ideas like showing the weather forecast along with the days in Plasma calendar widgets. Or having a wallpaper updater reflect the current weather by showing matching images (yes, not everyone has the joy of working close to a window, enjoy it if you do). And also alternative weather widgets with another UI; remember the WeatherStation widget in kdeplasma-addons (still waiting for someone to port it), which focussed on the details of the current weather. These kinds of usages might prefer a normalized model of weather data as well, instead of needing custom code for each and every provider again. Actually, long-term such a model would be interesting outside of Plasma spheres too, like for calendaring apps or e.g. a plugin for Marble-based apps showing weather states over a map.))

While I only took over a kind of maintainership in the last few weeks, and so did not design the current system myself, I still think it’s a good approach, keeping in mind reusable UIs and relative independence from any given weather data provider. So for now I do not plan to dump it, simply for lack of a more promising alternative.

So, given you are still reading this and thus showing me and yourself your interest :) let’s have a closer look:

The sub-dataengines for the different weather data providers have been named “ion”s. On the code level they are subclasses of a class IonInterface, which itself is a subclass of Plasma::DataEngine.
See here for the header defining IonInterface: https://quickgit.kde.org/?p=plasma-workspace.git&a=blob&f=dataengines%2Fweather%2Fions%2Fion.h

This header and the respective lib libweather_ion are public and should be installed with the devel packages of Plasma Workspace. This means you should be able to develop your custom ion as a 3rd-party developer without needing to check out the plasma-workspace repository and develop it in there.
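
To give a rough idea of the shape of such a plugin, here is a small, untested sketch of a minimal ion. It is written from a quick look at the existing ions, so treat the exact virtuals (updateIonSource(), reset()), the constructor signature and the omitted plugin export macro/metadata as assumptions to verify against the installed ion.h rather than as the definitive API:

    // Illustrative sketch only; double-check method names against ion.h.
    #include <QString>
    #include <QVariant>
    #include "ion.h" // installed by the Plasma Workspace devel package

    class MyProviderIon : public IonInterface
    {
        Q_OBJECT
    public:
        MyProviderIon(QObject *parent, const QVariantList &args)
            : IonInterface(parent, args) {}

        // Called by the weather dataengine when a consumer (e.g. the Weather
        // widget) asks for a source such as "myprovider|weather|Some City".
        bool updateIonSource(const QString &source) override
        {
            // A real ion would fire an asynchronous HTTP request to the
            // provider here and call setData() once the reply is parsed.
            setData(source, QStringLiteral("Current Conditions"), QStringLiteral("Sunny"));
            setData(source, QStringLiteral("Temperature"), 21.5);
            return true;
        }

        void reset() override
        {
            // Drop any cached provider data so sources get fetched again.
        }
    };

A real ion additionally needs to be registered as a dataengine plugin and to handle the station search (“validate”) type of source strings; the existing ions linked below are the reference for both.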

Plasma 5.6 itself installs three such ions:

  • wetter.com: adapter to service of the private company running wetter.com
  • envcan: adapter to service of Environment Canada, by Government of Canada
  • noaa: adapter to service of USA’s National Oceanic and Atmospheric Administration

Find their sources here: https://quickgit.kde.org/?p=plasma-workspace.git&a=tree&f=dataengines%2Fweather%2Fions

In that source folder you will also spot a bbcukmet ion, for the BBC weather service. While it has been ported to build and install at least with Plasma5, the BBC service API as used by the code seems to have been either changed or even removed. So that ion is currently disabled from the build (uncomment the #add_subdirectory(bbcukmet) to re-add it to the build). Investigations welcome.

Another old ion, which already got removed during the revival, was a more fun one: there used to be a Debian “weather” service (random related blog post), which reported the status of Debian-based distros as weather reports based on the number of working packages, and this ion connected to it. But the service died some years ago, so the ion was just dead code (you can find the unported code in the “Plasma 5.5” and earlier versions).

Talking about funny weather reports: why not write an ion for the CI system Jenkins, e.g. for build.kde.org, which, while perhaps not able to give forecasts, at least reports the current build state, with builds mapped to stations. After all, the short report for a build uses the weather metaphor, see https://build.kde.org/ :)

Back to more serious weather data provider ions: as you surely know or can guess, there are many more public & private weather data providers than the 3 currently supported. And they may not only have better coverage of your local area, but might also provide more data or better suited data.

Please also see the proposal for using “weather data from the open data initiatives” in a comment on the first blog post about the Weather widget.

It would be great to have a larger selection of weather data providers supported in Plasma 5.7. So while having your ion as a 3rd-party plugin somewhere else is fine, consider maintaining it inside one of the Plasma repositories, either together with the existing ions in the plasma-workspace repo or as part of the addons in the kdeplasma-addons repo. This should ensure more users and also more co-developers.

Do a good check of the licenses for using the data services of the weather data providers. Especially public ones should be fine given their purpose, but if in doubt after reading the details, ask the providers.

Privacy control

One issue in the old and current Weather widget code is that when searching for a weather station matching the user’s wishes, as expressed by the location search string, all currently installed ions are queried. Which of course is a problem from a privacy point of view. And it will get worse the more ions there are.

So there needs to be a way to limit the set of ions that get queried. Given that dataengines by design should be usable by all kinds of dataengine consumers, a Weather widget-only solution might only be a short jump here. There are very possibly other Plasma modules talking to remote services as well, like geolocation services. And ideally one would be able to control system-wide (so even beyond the Plasma scope) which remote services are allowed and which are not.

Until such a global solution exists, a Weather widget-only solution is surely better than nothing. Still, it needs to be designed UI-wise, so that’s a job to be picked up by someone :)

Getting started with your own ion

So while I am currently impeded from hacking these days, among other things by waiting for new development-capable hardware to be delivered (disadvantage of small towns: you need to use an online shop for special hardware desires, and there the latency unit is days, not minutes, which is especially bad when the wrong stuff is delivered and then holidays also get into the game; looking on the bright side of life, my old hardware only broke right after the CERN sprint, not before :) )…
… do not hesitate to look into things already. I would have liked to provide a KAppTemplate package with an ion sample in this post, so you could experiment right away. But perhaps you are experienced enough to get a new ion working by looking at the existing ones. If not, there should hopefully be a template in a few weeks, so watch out.

And do not hesitate to ask for support on #plasma on IRC or on the Plasma mailing list. I received lots of help there with the Weather widget porting, so you should too when trying to write an ion :)


Cutelyst 0.11.0 released!

Wed, 2016/03/23 - 7:19pm

Cutelyst, the Qt web framework, just got a new release. I was planning to do this a while back, but you know how it is: we always want to put in a few more changes, and the time to do that is limited.

I was also interested in seeing if the improvements in Qt 5.6 would result in better benchmark results, but they didn't; the hello world test app is probably too simple for the QString improvements to be noticeable. A real-world application using Grantlee views, SQL and more user data might show some difference. Still, compared to 0.10.0, Cutelyst benchmarks the same even though there were some small improvements.

The most important changes of this release were:

  • View::Email allows for sending emails using data provided on the stash, and is able to chain the rendering of the email to another Cutelyst::View, so for example you can have your email template in Grantlee format, which gets rendered and sent via email (requires simple-mail-qt, which is a fork of SmtpClient-for-Qt with a sane API; View::Email hides the simple-mail-qt API)
  • Utils::Sql provides a set of functions to be used with the QSql classes. The most important ones serialize a QSqlQuery into a QVariantList of QVariantMaps (or hashes), allowing the query data to be accessed in a View, and there is the preparedSqlQuery() method, which comes with a macro CPreparedSqlQuery: a lambda that keeps your prepared statement in a static QSqlQuery, avoiding the need for QSqlQuery pointers to keep the prepared queries around (prepared queries are a boost for performance); see the sketch after this list
  • A macro CActionFor, which resolves an action name into a static Action * object inside a lambda, removing the need to keep resolving a name to an Action
  • Unit tests: at the moment only very limited testing is done, but the Header class now has nearly 100% coverage
  • The upload parser got a fix to properly find the multipart boundary, spotted because a Qt client sends boundaries a bit differently from Chrome and Firefox; this also resulted in the replacement of QRegularExpression for matching the boundary part of the header with a simple search that is 5 times faster
  • Require Qt 5.5 (this allows the removal of the workaround for QJson::fromHash and allows the use of Q_ENUM and qCInfo)
  • Fixed crashes, and a memory leak in Stats when enabled
  • Improved usage of QStringLiteral and QLatin1String with clazy help
  • Added CMake support for setting the plugins install directory
  • Added more ‘const’ and ‘auto’
  • Removed the uwsgi –cutelyst-reload option, which never worked and can be replaced by –touch-reload and –lazy
  • Improvements and fixes to the cutelyst command line tool
  • Many small bugs fixed
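
As an aside, and explicitly not the Cutelyst API itself, here is a plain-Qt sketch of the pattern that CPreparedSqlQuery is described as wrapping: a lambda initializes a static QSqlQuery exactly once, so the prepared statement is reused on every later call instead of being re-prepared. Function and table names below are made up for illustration:

    // Plain Qt sketch of the "prepare once, reuse forever" pattern.
    #include <QSqlDatabase>
    #include <QSqlQuery>
    #include <QVariant>
    #include <QDebug>

    static QSqlQuery &userByIdQuery()
    {
        // The static local is initialized only on the first call; afterwards
        // the same prepared statement is handed back.
        static QSqlQuery query = [] {
            QSqlQuery q(QSqlDatabase::database());
            q.prepare(QStringLiteral("SELECT name, email FROM users WHERE id = :id"));
            return q;
        }();
        return query;
    }

    void printUser(int id)
    {
        QSqlQuery &q = userByIdQuery(); // already prepared after the first use
        q.bindValue(QStringLiteral(":id"), id);
        if (q.exec() && q.next())
            qDebug() << q.value(0).toString() << q.value(1).toString();
    }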

The Cutelyst website is powered by Cutelyst and CMlyst, which is a CMS. At the moment, porting CMlyst from QSettings to SQLite is on my TODO list, but only once I figure out why, out of the blue, I get a "database locked" error even when no other query is running (and yes, I tried query.finish()). Once I figure that out I'll make a new CMlyst release.

Download here.

Have fun!


KDE neon Developer Edition Installable Image FAQ

Wed, 2016/03/23 - 10:46am

The tech preview of the KDE neon developer edition installable image came out yesterday.  It’s an 800MB download you can dd onto a USB disk and run live or install.

files.kde.org

Why are there no apps?

KDE neon doesn’t have builds of applications; we’re doing Qt, Frameworks and Plasma for now to keep it manageable while we get it up and running.

No Konsole?

No apps, as I say; xterm is included.

How do I install software?

Appstream issues mean Muon isn’t working well; you can use apt in an xterm.

It has “stable” in the name, does that mean it’s stable?

Not necessarily; it’s built from Git stable branches, not released software.

When will the user edition be available?

No timelines; whenever we can get it done.

Does this include Qt 5.6?

It comes with Qt 5.5.  Backporting Qt will be done when possible; it’s not our priority and there’s a bug with 5.6 currently anyway.

Why no Xenial?

Neon only uses a stable base; we’ll move to Xenial when it’s stable.

How do I install this?

It’s the developer edition; if you don’t know how to install it, then it’s probably not for you.

Is this or is this not a distribution? You are a leader we must be told!

I lead nothing. You are all individuals, you must all think for yourselves.


Marble Vector OSM Update

Tue, 2016/03/22 - 8:49pm

Recently I found some time again for Marble development, and today Sanjiban and I made some nice progress on vector rendering. This is most easily explained with pictures, so let’s look at an example rendering. Best viewed at full size:

Marble vector OSM rendering

Things that are new — to Marble at least — are the rendering of house numbers and the styling of roads with restricted access. House numbers and names are shown at the centroid of the building, and also on building sides if there are entrances to the house. Roads which have no public access are now drawn in pale red to indicate that they might not be traversable.

It’s also interesting to compare our OSM vector rendering to the standard OSM image layer (openstreetmap-carto, aka Mapnik). For an easy comparison, open both images in browser tabs (middle mouse button), set them to full size (click on the image) and toggle between the tabs (Ctrl+Tab, Ctrl+Shift+Tab or Ctrl+Number).

openstreetmap-carto (Mapnik) rendering

Marble’s vector OSM map is designed to look familiar to OpenStreetMap users and therefore shares elements like the general color scheme and icons. We do not want to replicate it though, and diverge in several places: buildings show a nice 3D effect, icons are larger and the colors are more vivid. There’s still some polishing to do on our side, but things are coming together nicely now. Thanks to our software architecture and Qt, all vector rendering improvements are directly available in our Android app Marble Maps as well!

If you’re interested in working on OSM vector rendering, please join the Marble project on KDE Phabricator. We will also be mentoring in Google Summer of Code. If you are a student, please check our GSoC ideas and get in touch with us!

 

Editors: it’s our time.

Tue, 2016/03/22 - 8:00pm

Hi there,

As announced in my last post, we planned a web meeting. It was held yesterday (Monday, March 21st) and it was the first step towards a better editors’ organization. Step 0 was done at the sprint, and consisted of creating an internal structure for editors which allows us to divide the work and stay organized.

Yesterday we debated a lot (almost two hours) about the roles an editor could have and what those roles have to do. It was a pleasure talking with the guys; many of them were new faces for me (I had never seen them before, I had only talked with them through Telegram or mail) and this was great.

Anyway, the meeting was not a failure: in almost two hours we defined all the duties of the users and made some important decisions to help new users join the project (such as instituting the figure of the “tutor”, a more experienced user who helps newcomers become familiar with the site and the layouts), but the central topic of the hangout was dividing the communication channels between the roles.

Today, the core team of WikiToLearn has too many Telegram groups (I have 13 groups, some people have 16 or even more, like 22 or 27), which makes our social life nonexistent. Riccardo tried to turn off Telegram for just one day, and the following day he found over 1500 messages from over 48 chats. This is just crazy, and we obviously can’t let this condemn other people’s lives too. So we decided to start from the bottom: the lower roles of the hierarchy, the editors and the reviewers, just need to follow the Flow pages on the site (which will be implemented in the following days); they can create a private, personal group on Telegram, but they don’t need to subscribe to the mailing lists or join the other groups.
From the moderators upwards, mailing lists become essential: they are a great tool for debating delicate topics without losing any contribution. Obviously, moderators may have to join a larger number of groups.
At the top there are the edit admins, whose role is to coordinate the editing work on the site. I just don’t know how many groups an edit admin will have, but I am sure they will be fewer than the groups I already have. A friend suggested that my sleeping problems could be connected to these groups, but I really can’t get out. I love the cafè. It is like a son to me. And everyone would give his life for his son.

Save me. Please.

 


Implementing OpenPGP and S/MIME Cryptography in Trojita

Tue, 2016/03/22 - 6:02pm

Are you interested in cryptography, either as a user or as a developer? Read on -- this blogpost talks about some of the UI choices we made, as well as about the technical challenges of working with the existing crypto libraries.

The next version of Trojitá, a fast e-mail client, will support working with encrypted and signed messages. Thanks to Stephan Platz for implementing this during the Google Summer of Code project. If you are impatient, just install the trojita-nightly package and check it out today.

Here's how a signed message looks in a typical scenario:

A random OpenPGP-signed e-mail

Some other e-mail clients show a yellow semi-warning icon when showing a message with an unknown or unrecognized key. In my opinion, that isn't a great design choice. If I as an attacker wanted to get rid of the warning, I could just as well send a faked e-mail without any signature at all. This message is signed by something, so we should probably not make this situation appear less secure than if the e-mail was not signed at all.

(Careful readers might start thinking about maintaining a persistent key association database based on the observed traffic patterns. We are aware of the upstream initiative within the GnuPG project, especially TOFU, the Trust On First Use trust model. It is pretty fresh code, not available in major distributions yet, but it's definitely something to watch and evaluate in the future.)

Key management, assigning trust etc. is something which is outside the scope of an e-mail client like Trojitá. We might add some buttons for key retrieval and for launching a key management application of your choice, such as Kleopatra, but we are definitely not in the business of "real" key management, cross-signatures, defining trust, etc. What we do instead is work with your system's configuration and show the results based on whether GnuPG thinks that you trust this signature. That's when we are happy to show a nice green padlock to you:

Mail with a trusted signature

We also make a bunch of sanity checks when it comes to signatures. For example, it is important to verify that the sender of the e-mail which you are reading has an e-mail address which matches the identity of the key holder -- in other words, is the person who sent the e-mail the same person as the one who made the signature?

If not, it would be possible for your co-worker (who you already trust) to write an e-mail message to you with a faked From header pretending to be your boss. The body of a message is signed by your colleague with his valid key, so if you forget to check the e-mail addresses, you are screwed -- and that's why Trojitá handles this for you:

Something fishy is going on!

In some environments, S/MIME signatures using traditional X.509 certificates are more common than the OpenPGP (aka PGP, aka GPG). Trojitá supports them all just as easily. Here is what happens when we are curious and decide to drill down to details about the certificate chain:

All the glory details about an X.509 trust chain

Encrypted messages are of course supported, too:

An encrypted message

We had to start somewhere, so right now Trojitá supports only read-only operations such as signature verification and decryption of messages. It is not yet possible to sign and encrypt new messages; that's something which will be implemented in the near future (and patches are certainly welcome).

Technical details

Originally, we were planning to use the QCA2 library because it provides a stand-alone Qt wrapper over a pluggable set of cryptography backends. The API interface was very convenient for a Qt application such as Trojitá, with native support for Qt's signals/slots and asynchronous operation implemented in a background thread. However, it turned out that its support for GnuPG, a free-software implementation of the OpenPGP protocol, leaves much to be desired. It does not really support the concept of PGP's Web of Trust, and therefore it doesn't report back how trustworthy the sender is. This means that there wouldn't be any green padlock with QCA. The library was also really slow during certain operations -- including retrieval of a single key from a keystore. It just isn't acceptable to wait 16 seconds when verifying a signature, so we had to go looking for something else.

Compared to the QCA, the GpgME++ library lives on a lower level. Its Qt integration is limited to working with QByteArray classes as buffers for gpgme's operation. There is some support for integrating with Qt's event loop, but we were warned not to use it because it's apparently deprecated code which will be removed soon.

The gpgme library supports some level of asynchronous operation, but it is a bit limited. Ultimately, someone has to do the work and consume the CPU cycles for all the crypto operations and/or at least the communication with the GPG Agent in the background. These operations can take a substantial amount of time, so we cannot do that in the GUI thread (unless we wanted to reuse that discouraged event loop integration). We could use the asynchronous operations along with a call to gpgme_wait in a single background thread, but that would require maintaining our own dedicated crypto thread and coming up with a way to dispatch the results of each operation to the original requester. That is certainly doable, but in the end it was a bit more straightforward to look into C++11's toolset and reuse the std::async infrastructure for launching background tasks, along with a std::future for synchronization. You can take a look at the resulting code in src/Cryptography/GpgMe++.cpp. Who can dislike lines like task.wait_for(std::chrono::duration_values::zero()) == std::future_status::timeout? :)
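
For readers who have not used that corner of C++11 yet, here is a tiny, self-contained sketch of the idea; the verifySignature() function is a made-up stand-in for the real GpgME++ work, and this is not Trojitá's actual code. A task is launched with std::async, and the caller polls the std::future without blocking, which is essentially what the quoted wait_for line does:

    // Stand-alone illustration of the std::async + std::future polling pattern.
    #include <chrono>
    #include <future>
    #include <iostream>
    #include <string>
    #include <thread>

    std::string verifySignature(const std::string &mimePart)
    {
        // Stand-in for the expensive crypto work done in the background.
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
        return "signature checked over " + std::to_string(mimePart.size()) + " bytes";
    }

    int main()
    {
        // Launch the work on a separate thread right away.
        auto task = std::async(std::launch::async, verifySignature,
                               std::string("fake MIME part"));

        // The GUI side can poll periodically (e.g. from a timer) instead of
        // blocking its event loop until the result is ready.
        while (task.wait_for(std::chrono::milliseconds(0)) == std::future_status::timeout) {
            std::cout << "still verifying...\n";
            std::this_thread::sleep_for(std::chrono::milliseconds(50));
        }

        std::cout << "result: " << task.get() << '\n';
        return 0;
    }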

Finally, let me provide credit where credit is due. Stephan Platz worked on this feature during his GSoC term, and he implemented the core infrastructure around which the whole feature is built. That was the crucial point and his initial design has survived into the current implementation despite the fact that the crypto backend has changed and a lot of code was refactored.

Another big thank you goes to the GnuPG and GpgME developers who provide a nice library which works not just with OpenPGP, but also with the traditional X.509 (S/MIME) certificates. The same has to be said about the developers behind the GpgME++ library which is a C++ wrapper around GpgME with roots in the KDEPIM software stack, and also something which will one day probably move to GpgME proper. The KDE ties are still visible, and Andre Heinecke was kind enough to review our implementation for obvious screwups in how we use it. Thanks!
