To use Ekos for alignment, you first have to put your telescope in its starting home position, which simply means it should be leveled and pointed at the North Celestial Pole (NCP) if you are in the northern hemisphere. My mount (HEQ5) has a polar finder scope, so pointing it at the NCP was quite trivial: I simply adjusted the mount's altitude and azimuth knobs until Polaris sat within the small circle marked in the polar finder scope.
Next, I fired up Ekos with two INDI drivers: EQMod and QSI CCD. It would have been possible to use the Synscan driver instead, but it is very limited compared to EQMod. Once you use EQMod, you never look back.
Then I asked EQMod to track a nearby star. This is where the magic of Ekos comes in. You don't have to do anything yourself! No more peering into the eyepiece or at CCD images to align your scope. You just hit "Solve" and it figures out where in the sky the telescope is really pointing. Once it finds a solution, you can ask it either to sync the telescope coordinates to the solution coordinates, or to sync and then slew back to the target we were tracking earlier. I set it to "Slew to Target". I had to stop the first iteration of the solver because it was taking too long, but after adjusting some options (as you can see in the video), the solver took only 8 seconds to find a solution.
Each time the solver finds a solution and syncs, an alignment point is added to the EQMod alignment model, which is N-Star by default. The more alignment points you have, the more accurate your slews become. I then asked EQMod to track another nearby star and repeated the same process; this time the solver took only 2 seconds. Finally, to show that the alignment really works, I slewed to M31, took an image, and it was dead center!
All these exciting Ekos features are coming in the next KDE 4.12 release. I have to give a shout-out to Jean-Luc Levaire, the INDI EQMod driver developer, and Dustin Lang from astrometry.net for all their hard and beautiful work!
Check out this YouTube video showing the whole process!
A quick update: the Cantor backends for Python 2 and Scilab have now been merged into the master branch. I will keep polishing them until the stable release in KDE 4.12. You can follow the current state of development by compiling and testing Cantor from the master branch.
In related news, the KDE Edu sprint in A Coruña, Spain, has begun and runs through October 30th. Unfortunately I cannot participate this time, but I expect to attend the next meeting (maybe at Akademy 2014). =)
Keep up the good work, edu-gearheads!
I’m working on different ideas for the activity switcher for Plasma 2. Unfortunately, I haven’t had much inspiration for that. The only thing I had real inspiration for was the calendar.
I really dislike our current calendar widget. It is confusing, and the layout is overcrowded and asymmetric (in a bad way :) ). It often confuses me when it shows events – I always think everything is today.
So I made this. It is just a mock-up, no real code yet. Opinions?
I love languages. All of them. I have my favorites (English, in particular), but I love languages as such. And for that reason, I think they deserve more respect than they currently get.
I am not a proponent of keeping a language “pure”, i.e. free from the influence of other languages. I find it rather ridiculous when e.g. Germans try to come up with German words for technical stuff which was invented in an English-speaking country (or invented in a non-English-speaking country but given an English name by its inventors). They rarely find a term which isn’t long, cumbersome and silly-sounding. I embrace the fact that languages get more and more mixed. I often insert English words when speaking German, even if a good German word exists for them, just because I like the English word more.
However, because I love languages, I feel that if words from one language are imported into another one, they should be treated with respect. And for me, that means knowing enough about the language they come from in order to spell, inflect, pronounce and use them correctly.
And this is why I die a little inside each time a foreign word is spelled, inflected, pronounced or used incorrectly. Let me give examples for each type of error.

Spelling
Most people know the words “extroverted” or “extrovert” and have a basic understanding of what they mean. Probably many people also know – or would guess – that they originated from Latin. What apparently most people do not know is that the prefix “extro” does not exist in Latin. The prefix for “outside” is “extra”, and therefore the correct spelling is “extraverted”. However, in languages like English or German, the Latin origin was lost and “extraverted” was aligned with its antonym “introverted”, changing it to “extroverted”. Only the scientific name for the corresponding personality dimension is still “extraversion/introversion”. Interestingly, when we learned in psychology that it should be spelled “extraverted”, some of my fellow students over-corrected and started saying “intraverted” as well, which is wrong, because the prefix for “inside” is indeed “intro”. Since “extrovert” has been part of English as well as German for so long, I don’t blame people for not knowing it’s incorrect, but I still think it’s sad that the origin of the word seems to have been completely lost.

Inflexion
Wrong inflexion (usually spelled “inflection”, which, in itself, is another case of a misspelled Latin word) is something that happens regularly with words imported from Italian. An example is “pizzas”. In Italian, the plural of words ending in “a” ends with “e”, not “as”, so the correct plural form of “pizza” is “pizze”. Even worse is “paparazzi”, which is the plural form of “paparazzo”, but – regularly in German and I think sometimes in English as well (see for example Lady GaGa’s song) – people use “paparazzi” as a singular, with “paparazzis” as the plural form.

Pronunciation
Here, I will use examples from English words imported into German, because that’s what I hear most often. My three favorite examples here are “review”, “maintenance” and “PayPal”. Since I am a scientist, I hear the word “review” very often, and since my research is in the area of online fraud, I hear “PayPal” quite often, too. In my previous job at an IT company, I heard “maintenance” quite often as well. So what are people doing wrong here?
“Review” is pronounced by an estimated 80-90% of Germans – even academics – as if it were spelled “ravview”. I have no idea how this came to be, since I don’t hear people pronounce e.g. “remake” like “rammake”, but for some reason this error became so widespread that people don’t even seem to notice when I try to lead by example and pronounce it correctly. People probably just think I’m pronouncing it wrong but don’t bother telling me.
“Maintenance” is – at least in Germany – often pronounced “maintainance”. Okay, in people’s defense, this word is tricky. Why is the noun which corresponds to the verb “to maintain” “maintenance”? What happened to the “ai”? Still, I know no language and no word where “e” is pronounced like “ai”, so people should notice that something is wrong. Interesting fact: I just learned from Wiktionary that actually, “to maintain” is wrong. It originates from Old French “maintenir”, so the “ai” shouldn’t be there!
Last but not least, “PayPal” is – in this case by what feels like 99% of Germans – pronounced like “PayPaul”. Again, I have no idea whatsoever how that happened, but – seriously – it often feels like the only Germans around me who pronounce it correctly are the ones I’ve specifically told how to pronounce it! And in this case as well, people don’t even seem to notice when I pronounce it correctly. Only very few people have ever even asked “Isn’t it pronounced PayPaul?”, which was my opportunity to finally correct them.

Use
My favorite – or rather most dreaded – example of a word which is regularly misused by professionals is “methodology”. Analogous to “biology” or “endocrinology”, “methodology” is the study of methods. Wikipedia defines it as “the systematic, theoretical analysis of the methods applied to a field of study, or the theoretical analysis of the body of methods and principles associated with a branch of knowledge”. However, apparently some smart-ass decided at some point that “method” just didn’t sound “sciency” enough and henceforth should be replaced with “methodology” on every occasion. And for some reason, others jumped on the bandwagon, and now we are at the point where just about everyone – old or young, experienced or straight from college – uses “methodology” where in >90% of cases “method” or “methods” would do perfectly fine. Think about it: people use the name of a scientific field for the thing that field studies. It is as if “psychology” were used as a substitute for “mind” or “soul” and people started saying “What’s on your psychology?”, or the movie were called “Dangerous Psychologies”. Or if people said “I’m living a wonderful biology!”. That would sound crazy, wouldn’t it? Yet, people don’t think it’s weird to write things like “We applied the methodology of clustering to group the items”. Oh no you didn’t! You didn’t apply the study of the method, you applied the method!

Conclusion
I have never learned Latin, Italian or Ancient Greek, but when I regularly use a word which originates from one of those languages, I usually look it up to find out how to spell, inflect, pronounce and use it correctly, because I think I owe that to the original language. I am not infallible either, of course. Up until a few minutes ago, I was convinced that “dementi” (a word not known in English, but used regularly in German) was the plural of an Italian word, like paparazzi, because of the “i” at the end. However, now that I’ve looked it up, I found that it’s actually of French origin and a singular form there as well. So, apologies to anyone I corrected when they said “Dementis”.
What about you, dear reader? Do you think I’m just a nitpicker who obsesses about things which aren’t really important? Or do you agree that people should care more about applying loanwords correctly, or do you maybe even die a little inside when you read or hear mistakes such as the ones I described? Do you have other examples that drive you crazy – or maybe just annoy you a little, or even amuse you? Please feel free to post them in the comments!
To the planet KDE readers: I know this is not exactly KDE-related, but I feel that respecting the origins of a language is something that a Free Software / Free culture community should do as well, like we respect the original authors of code or other creative works we re-use.
Kate has this nifty little plugin called “XML Completion.” This plugin loads a Meta DTD file and uses this information for context sensitive completion. To use it, you first have to load it in the settings dialog, and then assign a Meta DTD through the XML menu:
In our example, we work on a Kate XML highlighting definition file and therefore loaded the file “language.dtd.xml” which is shipped with Kate. Having assigned a Meta DTD file, we now have these nice code hints:
Kate ships with several Meta DTD files, such as HTML4 (strict, loose), XHTML 1.0 transitional, the KConfigXT DTD, KPartsGUI and XSLT. While this is really cool, you may wonder about arbitrary DTDs you are using. Unfortunately, Kate only supports Meta DTD, so what now?

Installing dtdparse
Luckily, the tool dtdparse (on SourceForge) converts a DTD to a Meta DTD, so we first need to install it. Since openSUSE does not provide a package (what about other distros?), I downloaded SGML-DTDParse-2.00.zip, extracted it, and ran (see the README file):

perl Makefile.PL
Make sure there are no missing Perl dependencies. I, for instance, had to install perl-Text-DelimMatch and perl-XML-DOM:

sudo zypper install perl-Text-DelimMatch perl-XML-DOM
Then continue with the build and install process (the result of make test should be PASS):

make
make test
sudo make install
Now dtdparse is successfully installed on the system, so we are finally ready to convert DTDs.

Converting DTD to Meta DTD with dtdparse
Having installed dtdparse, it is as easy as calling

dtdparse file.dtd > file.dtd.xml
to convert a DTD to a Meta DTD. The conversion should work out of the box. If you want, you can edit the generated .xml file; for instance, the “title” attribute is always set to “?untitled?”. While this is not used by the XML Completion plugin (yet?), it’s still nicer to have it properly fixed.

Contributing Meta DTDs to Kate
Whenever you have a DTD that is of use to other users as well, please send the generated Meta DTD to firstname.lastname@example.org (our mailing list). Further, it would be really cool if someone added support for converting DTDs to Meta DTD on the fly, so that the Kate XML Completion plugin would just work for plain DTDs as well. Any takers?

Call to Distribution Packagers
Please consider including dtdparse by default, as it seems to be a very useful tool. Are there alternatives for converting DTD to Meta DTD?
I’d like to start a new series of blog posts focused on Qt on Android.
The first article covers how it began, how it works, the current status, what to expect from 5.2, and my plans for 5.3. In the next article I’ll focus on the development setup for Android.
Let’s get started:
How did it begin?
In June 2009 I joined ROUTE 66 as a senior Linux developer. My first task was to port the navigation engine to Android. Back then Google hadn’t yet officially released any NDK, so I had to create my own from the Android sources.
Shortly after, I had a working engine running on Android. I began to love Android, but something was missing, something I cared a lot about. That something was Qt, my favorite framework. That was what was missing! And I told myself I had to do something about it.
In October 2009 Nokia (yep, Qt was owned by Nokia back then, what days …) announced the Lighthouse project. The Lighthouse project was created to allow developers to easily port Qt to (almost) any platform.
In late December 2009 (I think it was after Christmas) I had enough free time to begin the port. I chose to use the Lighthouse project even though it was a very young research project; as far as I know, my Android port was the first to use Lighthouse. After just a month (in January 2010), I saw the first graphics rendered by Qt on my phone. The feeling was fantastic!
A few months later, after Qt was in good shape, I began work on a Qt Creator plugin and on Ministro. The Qt Creator plugin enables the user to manage, develop, deploy, run & debug Qt applications on Android devices/emulators very easily.
Even though almost everything was in place, not many people wanted to use it, because they had to compile everything manually. So I decided to do something about it.
So, in 2011, a few weeks after Nokia announced their big strategy shift, I announced the first usable SDK for Android. This is how the Necessitas project began, and after it became a huge success, I decided to join KDE and continue the project under the KDE umbrella. Why KDE? Well, because we share the same goal: to keep Qt powerful and free for everyone. Also, I could use their fantastic infrastructure to distribute the Qt libs.
The very first SDK release was Linux-only. Soon Ray Donnelly contacted me and ported the SDK to Windows and Mac. If you are using Necessitas (or the Qt 5 Android SDK) on any of these platforms, he is the guy you have to thank!
Because the Qt Creator plugin was an immense success, I upstreamed the plugin in 2012, when Qt was still owned by Nokia!
In November 2012, I contributed the Android port to the Qt Project for Qt 5 integration. Here I’d like to clarify something: ONLY Qt 5 is developed under the Qt Project umbrella; the Qt 4 port is still maintained by KDE as the Necessitas project.
Let’s see how Qt on Android broadly works. I’m not going to go too deep into the details, but deep enough for you to get an idea of how it works.
As I wrote previously, the port is based on the Lighthouse project. Lighthouse (now re-branded as QPA) is the platform abstraction layer for Qt. Basically, this layer sits between Qt and the platform, making porting pretty easy. Because Android applications are written in Java, and it is impossible to connect the Qt event loop to Android’s event loop, I had to move the Qt main event loop to a separate thread. If you want to extend your application, keep in mind that the Qt event loop and the Java UI run on different threads. Even after Google added NativeActivity, it was impossible to use, mostly because it doesn’t expose all the features Qt needs.
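Because the two event loops cannot be merged, communication between them boils down to one side posting events into a queue that the other side drains. The following is a minimal, hypothetical sketch of that pattern in plain C++; the class and names here are illustrative only and are not real Qt or Android APIs:

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <thread>

// Minimal sketch (not the real Qt implementation) of cross-thread event
// posting: one thread (e.g. the Android UI thread) posts events, another
// thread (e.g. the Qt main loop thread) drains and delivers them.
class EventQueue {
public:
    // Called from the producing thread; an empty function acts as a stop marker.
    void post(std::function<void()> ev) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push_back(std::move(ev));
        }
        cv_.notify_one();
    }

    // Called on the consuming thread; delivers events until the stop marker.
    void run() {
        for (;;) {
            std::function<void()> ev;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return !q_.empty(); });
                ev = std::move(q_.front());
                q_.pop_front();
            }
            if (!ev) return;  // stop marker reached
            ev();             // deliver the event on this thread
        }
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::function<void()>> q_;
};
```

In this sketch, the Java/UI side would call post() for touch or state-change events, while the Qt-side thread sits in run(), which is the essence of why the two worlds can coexist without sharing an event loop.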
A Qt for Android application consists of two big parts:
- The native part, containing one or more .so files, which are actually your Qt application. If you choose to bundle the Qt libraries, it will contain these libs as well.
- The Android-specific part. This part contains the following sub-parts:
- The Android manifest. This is the entry point of your application: Android uses this file to decide which Application/Activity to start; it contains the application permissions, declares the minimum Android API level the application needs, etc.
- Two Java classes which load the dependencies and your application. They are also a part of the Java bridge between Qt QPA and the Android world.
- Ministro service .aidl files. These are two interfaces used to communicate with the Ministro service, which is one of the deployment solutions (I will write more about that later).
- Other resources, e.g. assets, strings, images, etc.
All these parts are bundled into a single package which represents your final application. Now let's see how those parts work together.
When your application starts, Android uses the manifest file to run an activity. This activity (a custom one) is a small part of the magic in the Java world of your application. To be able to update the Qt libs without breaking existing applications, the Java part is split in two: a very small part which ships together with your application, and another part (a Java library, a .jar file) which contains all the logic for the QPA plugin.
The first Java part, the one shipped with your application, is responsible for finding the missing libs (Qt and Java) and loading them. It also forwards all events (touch, application state changes, screen changes, etc.) to the second Java part.
The second Java part is responsible for communicating with Qt. It contains all the logic needed by the Android QPA plugin, e.g. the creation and management of the drawing surface, virtual keyboard handling, and so on.
So, the (first) Java part finds and loads all the missing libs and your application, and it forwards all the events to the second part. But how does your “main” method get called? Well, the QPA plugin does that. No, I’m not insane! And yes, the QPA plugin is loaded before your application starts (actually, it is loaded even before your application is loaded).
Ok, let me try to explain why I had to come up with such a crazy design.
My dream was to find a way for developers to simply compile their existing Qt applications for Android, so I had to find a way to call the “main” method (I didn’t want to force you to create another main entry method for your application, similar to the “WinMain” thing).
The problem is that in order to call a native method from Java, someone has to register that method first; otherwise you can’t call it. This is where the QPA plugin comes into the picture. After the QPA plugin is loaded, it registers a few native functions which are used by the Java part to make calls into the native world. After the Java part finishes loading all the libs and your application, it simply calls the “startQtApplication” native method which was registered earlier by the QPA plugin. This method searches for the “main” method symbol, creates another thread, and runs “main” from within that thread. It must create another thread because calling “main” blocks the caller until it exits, and we must keep the Android UI thread free to allow the Android event loop to run.
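The threading part of the startQtApplication flow described above can be sketched in plain C++. This is a deliberately simplified, hypothetical illustration: the real plugin locates the "main" symbol via the dynamic linker and is invoked through JNI, neither of which appears here, and app_main is a stand-in name:

```cpp
#include <atomic>
#include <thread>

// Hypothetical stand-in for the application's main() entry point; in the
// real QPA plugin this symbol is looked up in the application's .so file.
static std::atomic<bool> mainRan{false};

static int app_main()
{
    // The Qt event loop (QCoreApplication::exec()) would run here and
    // block this thread until the application quits.
    mainRan = true;
    return 0;
}

static std::thread qtMainThread;

// Sketch of what the native startQtApplication() does: main() must run on
// its own thread, because running it on the caller (the Android UI thread)
// would block Android's event loop until main() returned.
static void startQtApplication()
{
    qtMainThread = std::thread(app_main);
}
```

The key design point is visible here: startQtApplication() returns to its (Java) caller immediately, leaving the Android UI thread free while the Qt event loop spins on its own thread.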
In a later article, I will cover how to use JNI to do calls from/to Java to/from native (C/C++) world.
Finally, let's have a look at the current status of Qt on Android, what you'll get in Qt 5.2, and the plans for Qt 5.3.
Qt Essentials status:
- System semaphores and shared memory are missing; shared memory is on my TODO list.
- Video and audio work, but camera support is missing. Qt 5.2 brings camera support; ATM no other plans.
- SSL support is missing. Qt 5.2 brings SSL support; ATM no other plans.
- Qt Quick Controls: the Android native style is missing. Qt 5.2 brings the Android native style; ATM no other plans.
- Only SQLite is provided by the Qt Project SDK.
- Qt WebKit & Qt WebKitWidgets, Qt WebEngine: there is hope for Qt WebEngine.
- Qt GUI, QML, Quick, Quick Layouts, Test.
Qt Add-Ons status:
- Qt Android Extras: additional functionality for development on Android; Android services/binder support is on my TODO list.
- Several further modules are on my TODO list.
- Missing; Android uses the Binder IPC instead.
- Missing, but as I said, Qt will have something similar for Android.
- Commonly used sensors work. Qt 5.2 adds more sensors; ATM no other plans.
- Printing is missing; there is no native print support on Android.
- Limited to one top-level widget; a QGLWidget can't be mixed with other QWidget(s). In Qt 5.2 there is hope for one more top-level widget, and a single QGLWidget can be mixed with other QWidgets.
- Any volunteer(s)?
- Qt Concurrent, Declarative, GraphicalEffects, ImageFormats, Script, ScriptTools, SVG, XML, XMLPatterns.
Thank you for your time.
See you next time, when we'll discuss how to set up the development environment for Android.
Aaron wrote a number of blog posts about Bodega, our open market for digital content. Today I wanted to write a bit about the technology behind it. Plus, my prose is unbeatable – it works out, a lot, and is ripped like Stallone during his "definitely not on steroids, just natural, injectable supplements" years – so it should be appreciated more widely.
If you've ever used a content distribution platform like Google Play or Apple's App Store, I'm sure you noticed the terrible way in which they handle content discovery, rankings, communication with publishers, and the vetting process. To some extent Bodega solves all of those, and every day it's closer to the vision that we had when we started it. And we weren't even high that day. Well, I wasn't; Aaron was high on life. And crack. Mostly crack. I apologize, crack isn't funny. Unless you're a plumber, in which case it's hilarious.
Node.js was a great choice for Bodega. Testing became trivial (we're using Mocha). Between our unit tests and jshint support, I can live with the dynamic nature of the language.
What makes node.js great is npm and its extensive collection of packages. That tooling means you can focus on just the code you need to write to finish your project, not the code you need to start it. That's why I picked it.
If I didn't have a day job I'd pick Go or Haskell, or just do everything by hand in C++. I have to admit that I like Go a lot (even though the system-level graphics hacker in me shakes his head at a language without native vector support), but it's hard to argue with the fact that node.js allows us to move very quickly. So Bodega is written in node.js and uses PostgreSQL to store all of its data, and that combination works amazingly well.
It's a Free Software content distribution platform that actually works and it's just the start. In my obviously biased opinion it does a lot of really interesting things that make it a very important project, but that's a different topic.
The other day I wrote about Bodega, an open market for digital content. It was impossible to get everything into one blog entry, however, as Bodega engages with people in three different ways: as consumers, as creators and as distributors. In that previous entry, I focused on what Bodega is to you when you are interacting with it as a consumer. Today we'll be exploring Bodega from the perspective of a content creator.
Warehouses and Stores
It knows to ask you for the platforms it supports, a category (e.g. 'Education' or 'Communication'), a content rating (Everyone, Adult, etc.) as well as screenshots and icons. Great! But what if you are publishing a book?
- Partner Manager: Partner information, members and role assignment
- Store Manager: Creating and managing stores
- Content Creator: Creating and managing assets
- Validator: Content approval and curation
- Account Manager: Managing the financial aspects of the partner (e.g. bank accounts)
Following the action

The manager application also provides access to a variety of statistics so you can follow the success of your assets: how many downloads, how many purchases, how much you earned. Using the nice built-in graphs you can compare multiple assets and control the time frame you are looking at.
Each asset you upload also gets a tag in the discussions forum and it becomes available for rating by other Bodega participants. We'll be adding to these tools over time as we feel this full communication cycle is really important to both creators and consumers alike.
Ok, but how does this all come together?
Kubuntu Türkiye is the new Kubuntu website for Turkish speakers. Thanks to Volkan for setting it up.
The FLOSS ecosystem consists of a large number of independent projects, each developing its own software. In the end this software should be used by a user, and we have "distributions" to provide the software to them. As the name implies, it's about distributing the software. A distribution takes software from a large number of independent projects and integrates it to provide a working set of applications. A huge and impressive task, and that is the main task of a distribution: they are software integrators.
Various software products depend on each other, and there is always the chance of conflicts. The distributions need to ensure that all of this works. The best metaphor for how this works is a river: it flows from the spring to the sea, taking on the water of many other rivers. The code flows from one project to the next until, at the end, it reaches the user.
Although that looks rather linear, it is not. In truth, each independent project has many upstream and many downstream projects. It's an n:m relationship at every position of the stack. For KWin, for example, it looks something like this:
For each project it's important to remember that it is just one of the many, many downstreams of its upstream projects. It is not the most important piece of software around, just one of many. This is important for a working relationship. It also influences how code should flow: code should flow up the stream. Nobody is helped if each downstream of a given project fixes the same bug in its own code base; it should be fixed in the project upstream of them. To put it simply: I should not work around a bug in Qt, but fix it in Qt directly.
Nevertheless, not all code should be upstreamed. If the code is needed for the downstream integration task, it needs to be kept downstream. For example, openSUSE used to have a small geeko in the Oxygen window decoration. This should not be upstreamed, because it would cause other downstreams (e.g. Kubuntu) to remove it again. Or it would give other downstreams the idea to have their branding feature integrated, too. So we as the upstream would have to accept the Debian logo, the Kubuntu logo, the Gentoo logo – you get the idea. The basic idea is that an upstream project should not try to integrate with its downstreams, as it's the downstream's task to do the integration. This is true for any step of the graph, not just the last one. Circular dependencies are not a good idea.
There are exceptions to the circular-dependency rule, but they are very rare. An example is the relationship between the KDE and Qt projects, which are each other's upstream and downstream. But the KDE integration inside Qt is done through plugins, allowing this integration code to be kept downstream.
Problems can also arise if a project starts to become its own upstream, replacing some of its upstreams. It might want to see its new code exposed to more projects, but other projects on the same level might not pick it up, as they think it's specific to this one project. It might also harm the relationship with other upstream projects: they might think that this downstream is no longer interested, and fear that their other downstreams might start to replace them, too. In the end, this is not in the interest of the user.
An example of how this can look when it goes wrong could be observed this year with Cinnamon and Cinnarch. Cinnamon is too closely tied to the Linux Mint project; in fact it started as part of Mint and depended on the GNOME version shipped with Mint. This made it impossible for Cinnarch to provide the latest of both Arch and Cinnamon. It resulted in Cinnarch dropping support for Cinnamon, but also in Cinnamon trying to become more independent from Linux Mint. Whether this step came in time will be seen in the future.
So in summary: downstreams integrate their upstreams and not the other way around.
QMake users: public service announcement. If you use CONFIG+=ordered, please stop right now. If you don't, I'll hunt you down. I promise to god I will.
There is simply no reason to use this, ever. There are two reasons this might be in your project file:
- you have no idea what you are doing, and you copied it from somewhere else
- you have a target that needs to be built after another target, and you don't know any better
If it's the second, what you actually want is explicit dependencies:

TEMPLATE = subdirs
SUBDIRS = src plugins tests docs
plugins.depends = src
tests.depends = src plugins
And then you'll have docs built whenever the build tool feels like it, and the rest built when their dependencies are built.
If there are nested subdirectories involved, you need an extra level of indirection in your project file, but it's still not rocket science:
TEMPLATE = subdirs
src_lib.subdir = src/lib
src_lib.target = sub-src-lib
src_plugins.subdir = src/plugins
src_plugins.target = sub-plugins
src_plugins.depends = sub-src-lib
SUBDIRS = src_lib src_plugins
For those of you wondering why I sound frustrated about this: I've fixed so many instances of this by now that it's just getting old, frankly. And I still keep running into more. That's countless minutes of wasted build time, all because of laziness boiling down to a single line. Please fix it.
- It runs purely on your own server, with no communication with centralized services like Google, so your data is always protected against surveillance.
- We didn't introduce any new server requirements. Just take ownCloud, put it into your web server's document root, and you have your own collaborative editing server. This is far easier to install and run than, for example, Etherpad.
- All the documents are based on ODT files that live in your ownCloud. This means you can sync your documents to your desktop and open them with LibreOffice or Calligra in parallel. Or you can access them via WebDAV if you want. You also get all the other ownCloud features like versioning, encryption, undelete and so on. This is quite unique, I think.
- All the code is completely free software. The PHP and JS components are released under the AGPL license. This is different from most other solutions: some claim to be open source but use a Creative Commons license for their code, which is not free software.
HDD cleanup continues apace. I did like the wisecrack about semantic storage, although it's not quite correct, since these are drives removed from otherwise decommissioned machines, or tarballs rolled from my university student account before the departmental Solaris server was decommissioned. It's more like moving boxes never before opened (I know there are several that I moved house with 8 years ago that are still awaiting an opening moment). In the meantime I've discovered a bunch of academic papers I had forgotten I had ever written, a bunch of OCaml I wrote that I no longer understand, as well as several versions of my bachelor's thesis written in the winter of 1998.
I gave a lightning talk at LinuxCon about KDE and Kubuntu, highlighting some of the mega rollouts with Kubuntu, such as the one which is about to take over Taipei.
Dan Shearer wins my award for nicest guy at the conference by introducing me to Cloudsoft, who are interested in providing some cloud power to Kubuntu, and by generally making sure I've enough energy (I just came out of eye surgery so am feeling fragile). Here he is giving a talk on zero-knowledge proofs.
I just stumbled across a really good tutorial on YouTube that teaches C++ from the very beginning (in German). The tutorial is split into lots of small episodes, each about 10 to 15 minutes long. The quality of these tutorials is very good in terms of video, voice and also content. The subject is mostly “pure” C++, with a bit of Qt later on, so it covers neither C nor many additional libraries. Still, if you want to understand the details, you might want to give it a try.
PS: The tutorials use Kate Part, just look at the code folding bar on the left. So the author is definitely doing something right :-p
Linus is visiting Edinburgh today for a Q&A session at LinuxCon.
Asked where he'd like to see Linux in 5 years' time, he said he started Linux because he wanted to use it on his desktop, but he's been disappointed by how that hasn't taken off. He wishes desktop communities would stop arguing over competing technologies and would work together. How true. (And I continue to feel smug about being part of a community which hasn't started a new project to replace any desktop or desktop technology.)
Here is the second part of Freestyle Fiction's first episode, both in the original French and in English translation.
Again, these pages were entirely made from scratch using Krita, on Linux (Manjaro 64bit, custom KDE Plasma Desktop installation).
On a related topic, next month I’ll be in Toulouse for the “Capitole du Libre” and “Akademy-Fr” events, during the weekend of 23/24 November. The “Capitole du Libre” is a big yearly event in Toulouse about Free Software and Free Culture, both for the general public and for specialists. And once again the French KDE community is organizing the Akademy-Fr event at the same place.
On Saturday, I’ll give a short talk about Krita at the Capitole du Libre. Then the next day at Akademy-Fr I’ll run a workshop on Krita drawing basics, using the new Konqui design as the subject.
(check the links above for the schedule)
I have implemented two new ways to filter the code completion popup in kate: filtering the list using an abbreviation, and filtering the list using text not occurring at the word's beginning. This can probably best be demonstrated by lots of pictures:
You can match completion items by their abbreviation. This works for both camel case and underscore notation.
You can also match entries by words they merely contain but do not start with. This matching is only allowed at word borders (capitals or underscores). This feature makes it far easier to find that damn class which has an unexpected name prefix, or the m_foo variable you thought was called foo.
The abbreviation expansion engine also allows you to type parts of the words from the abbreviation, making your search more specific in a convenient way. This feature is not specific to kdev-python, it works in all kate-based apps. It is available in kate's master branch, and will be available in KDE SC >= 4.12.
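As an illustration of how such matching can work (a naive sketch, not kate's actual implementation), split the word into sub-words at capitals and underscores, then search for a way to consume the typed text as prefixes of those sub-words:

```python
import re

def matches_abbreviation(word: str, typed: str) -> bool:
    """Sketch of abbreviation matching (not kate's actual code):
    `typed` matches if it can be split into prefixes of `word`'s
    sub-words, where sub-words begin at capitals or underscores."""
    parts = [p.lower() for p in re.split(r"_|(?=[A-Z])", word) if p]
    return _match(parts, typed.lower())

def _match(parts, typed):
    if not typed:
        return True   # everything typed was consumed
    if not parts:
        return False  # typed text left over but no sub-words remain
    head, rest = parts[0], parts[1:]
    # consume 1..len(head) leading chars of the current sub-word ...
    for i in range(1, len(head) + 1):
        if typed.startswith(head[:i]) and _match(rest, typed[i:]):
            return True
    # ... or skip this sub-word entirely, which also enables matching
    # words the item merely contains but does not start with
    return _match(rest, typed)
```

The backtracking search is exponential in the worst case, which hints at why corner cases of this kind of matching can get slow.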
If you have more suggestions or cases which are not handled well, I'm happy to discuss this further. Have fun hacking!
P.S. If anyone can come up with an efficient algorithm for doing what is depicted in the last image, I'd be interested. The current one is quite slow for some corner cases.
When describing what Bodega is to people for the first time, I usually start with the following phrase: "It is an open market for digital content." We usually then spend the next good while unpacking the meaning in those eight small words. In this blog entry, I'm going to focus on what that phrase means from the perspective of those experiencing it from the consumer side.
Freeing ourselves from old notions
According to Wikipedia, a market is "one of the many varieties of systems, institutions, procedures, social relations and infrastructures whereby parties engage in exchange." This is precisely how we use the term "market" in relation to Bodega. It is a digital place where people can come together to engage in exchange. Whether that includes money or not is really up to those involved.
One of our (not-so-)hidden agendas with Bodega is to reclaim concepts that ought to be integral to that special, and I'd say sacred, set of interactions that occur when people come together to share and trade. It is not just about money and salesmanship; it's also about interaction and human flexibility.
A focus on people
To let people track more than simply the number of times their things have been downloaded or (if there is a price on the item) purchased, Bodega has a few tricks up its sleeve. One is ratings. Most people are familiar with the common, and utterly useless, 5-star rating system seen in most online stores. This xkcd comic sums it up perfectly:
Instead of a single meaningless "star review", each type of content in Bodega has a unique set of attributes which you can use to rate an item in the store. Applications, for instance, feature their own set of rating attributes.
Games have attributes related to gameplay, wallpapers to artistic qualities, books to writing style and storyline .. you get the idea. This lets you provide useful feedback to the publisher, and allows the person behind each item to better understand how people perceive their work.
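As a sketch of how such per-type rating attributes could be modelled (the attribute names below are my invention, not Bodega's actual schema):

```python
# Hypothetical per-type rating axes; names invented for illustration.
RATING_ATTRIBUTES = {
    "application": ["usability", "stability", "performance"],
    "game":        ["gameplay", "graphics", "difficulty"],
    "wallpaper":   ["composition", "colors"],
    "book":        ["writing style", "storyline"],
}

def rate(content_type, scores):
    """Validate a rating against its content type's attribute set
    and normalize it to cover every axis (missing ones stay None)."""
    expected = set(RATING_ATTRIBUTES[content_type])
    if set(scores) - expected:
        raise ValueError("unknown attribute for %s" % content_type)
    return {attr: scores.get(attr) for attr in expected}
```

The point is simply that a rating becomes a vector over type-specific axes rather than a single star count.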
You also do not have to leave a comment to leave a rating, or leave a rating to make a comment. Coupling the two is a baffling misfeature we noticed in many other content systems out there. Of course, discussion is important, as it communicates things simple ratings never can. Being Free software people, our first thought was to look around and see what good discussion software already existed. We'd rather re-use than re-invent, after all. What we found was Discourse. The Discourse team uses the phrase "Civilized Discourse Construction Kit" to describe their forum software, and it is a thoroughly modern affair .. not to mention quite beautiful.
When you create a Bodega account, you can also use that same login to visit the Discourse forums. Immediate feedback and glancing at recent postings can also be done right from within the client software without having to go to a web browser. (That final bit of integration is currently sitting in a branch awaiting final merging for the next update.)
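Such a shared login can be built on Discourse's documented single-sign-on hook, in which the site signs a base64-encoded user payload with a shared secret. Whether Bodega wires it up exactly this way is my assumption, but the mechanism itself looks like this:

```python
import base64, hashlib, hmac
from urllib.parse import urlencode

def sign_sso_payload(params: dict, secret: bytes):
    """Build a Discourse-style SSO payload: the user record is
    URL-encoded, base64-encoded, then signed with HMAC-SHA256.
    Discourse verifies the signature with the same shared secret."""
    payload = base64.b64encode(urlencode(params).encode())
    signature = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return payload.decode(), signature
```

The receiving Discourse instance recomputes the HMAC over the payload and rejects the login if the signatures differ, so users never need a second password.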
These forums are not the usual limited "leave a message" style affairs; they support fully threaded conversations with all the features we expect these days: private messages, being able to tag individuals in your comments, notifications of follow-ups and more. This is not about simply giving people a way to leave a 140-character message, but about actually engaging with each other.
Eventually we will provide integration with defect tracking and feature requests as well, making it a full life-cycle system.
Making a living
In the usual case, Bodega's points are its currency of exchange: you buy points and then spend them on content. Used in other contexts, however, points might be handed out to students enrolled in a school, used as an incentive program in a company or .. just not used at all.
We will also be bringing tipping, pay-what-you-want and subscription systems in future updates. I'm even toying with providing a built-in crowd-funding feature. Best of all, because it is Free software, you can participate in helping define the mechanisms of trade.
.. oh, and you can download things and stuff