Subscribe to Planet KDE feed
Planet KDE - http://planetKDE.org/

QtLocation 5.9

Mon, 2017/09/04 - 8:24pm

With Qt 5.9, the QtLocation module has received a substantial amount of new features, as briefly summarized in the release post. The goal of this post is to provide additional details about what’s now possible with QtLocation 5.9.

Rotating and tilting maps

One long-standing limitation of the built-in tile renderer was the inability to draw rotated or tilted maps and map items. We finally removed this limitation, and in 5.9 maps, as well as the map items added to them, can be rotated and tilted. This also applies to MapQuickItems, which can be tilted by setting the appropriate zoomLevel value on them. With the support for tilting and rotation come two new dedicated gestures in the gesture area of the map: two-finger rotation, and two-finger vertical drag for tilting.

It is now possible to rotate and tilt maps. Items on it will be transformed accordingly. In this figure a MapQuickItem embedding a QtMultimedia Video element is used to overlay a video of the shore.


In addition to this, a new fieldOfView property has been added to the Map, to control the camera field of view when the map is tilted. Note that this property, like the tilt and zoomLevel properties, has lower and upper bounds. In some cases these bounds will prevent changing the property (e.g., a plug-in using a third-party renderer that does not allow changing the field of view will report a lower bound for the field of view equal to the upper bound).

Tile overzooming

Another long-standing limitation that got us many complaints over the last years was the inability to use cached data to approximate tile content that has not yet been provisioned.
Lower zoom level tiles are now used to approximate higher zoom level tiles until these are fully provisioned. This finally prevents, where possible, the appearance of empty tiles revealing the (usually gray) background.

The difference in loading the map tiles, 5.8 (top) vs 5.9 (bottom)


Improved third party map renderer support and the MapboxGL plugin

Prior to 5.9, the only way to display a map using QtLocation was to cut it into raster tiles and feed those tiles to the built-in tiled map renderer. Some have created plug-ins that embedded a third-party map renderer, but in the end they all had to perform the rasterization-to-tiles step in order to get the map on the screen.

QtLocation 5.9 removes this roadblock, allowing custom QSG nodes to be drawn in a custom QGeoMap implementation, so that it is finally possible to plug a third-party renderer directly into the QtQuick scene graph.
This can be done in different ways, the most common being rendering the map off-screen into a texture and then rendering that texture in the QtQuick scene graph, or using a QSGRenderNode to issue graphics commands directly.
The second approach is more efficient, but also more complex to set up, with more corner cases to handle.

How this performs can be seen by trying out the new Mapbox GL plugin, which, in cooperation with Mapbox, has been included in Qt for most of the supported platforms. This plug-in renders Mapbox Vector Tiles using the mapbox-gl-native renderer, supporting both online and offline map sources. It also allows selecting either of the two rendering approaches mentioned above (although it should be noted that the QSGRenderNode approach is experimental and may introduce problems).

The MapboxGL plugin in action


Handing map items rendering over to the plug-in

In addition to handing the map rendering over to the plug-in, it is also possible for a QGeoMap implementation to define which types of map items it can render by itself. With QtLocation 5.9, if a plug-in instantiates a QGeoMap that reports the ability to draw MapPolylines, these items are handed over to the plug-in, and the default rendering path for items of that type is disabled. They will behave just like before, but their visual appearance will depend entirely on how the plug-in renders them. The MapboxGL plugin can, for example, render native MapRectangles, MapCircles, MapPolylines and MapPolygons. They will be anti-aliased and their style can be customized using Mapbox style specifications.
On the downside, the rendering of these items depends entirely on the plug-in. On a MapboxGL map, for example, they do not support borders, so setting border properties on the items will have no effect. They also do not currently support visibility and opacity for the items, although this is something that may get fixed very soon.


A MapPolyline rendered on a map using the MapboxGL plugin. The polyline has been styled using MapParameters

Exposing plug-in-specific mapping features

Enabling the integration of third-party mapping engines opens up the opportunity to use features that are specific to one engine or another. For example, the mapbox-gl-native library offers a very flexible API for changing the style of map elements at runtime.
To allow plugins to expose engine-specific features, we have introduced a new QML type, MapParameter. Elements of type MapParameter are essentially duck-typed, as properties have to be defined by the programmer according to what the documentation of the plug-in requires. In the case of the Mapbox GL plugin, a map parameter to control the visibility of a Mapbox style layer named “road-label-small” would look like this:

MapParameter {
    type: "layout"
    property var layer: "road-label-small"
    property var visibility: "none"
}

Note that properties in map parameters are dynamic. They can be changed at runtime and the effects will be immediate.

Improved stacking of multiple Maps

Combining multiple map layers has been challenging with the previous Qt releases. That’s because different Map elements would have different zoom level ranges, and these ranges would also be enforced. In addition, map copyright notices could only be turned on or off, with the result that they would overlap on screen.
In 5.9 the Map element allows the zoomLevel property to go beyond the minimum or maximum supported zoom levels, if it is set programmatically (that is, not through mouse or gesture interaction) and the plug-in in use supports overzooming.
This way, the zoomLevel properties of all the overlay maps can be bound to the value of a base map “layer”.

In addition, it is now possible to manually add elements of type MapCopyrightNotice on top of the maps, and source their content from the map elements. In this way they can be freely arranged, and also styled using custom CSS code.

Finally, one problem that was previously common when changing maps at runtime was that map items added to a map would be lost when that map was replaced with another map sourced from a different plug-in.
For this reason, we include a new plug-in called itemsoverlay, whose only purpose is to provide a completely empty, transparent map that costs almost nothing to render. The intended use is to have one Map element using this plug-in on top of the map stack, on which to keep the map items, so that the layers underneath can be freely removed.

Other improvements

A new method, fitViewportToVisibleMapItems, has been added to the Map element, to only consider visible items when fitting the viewport. A new QML type, MapItemGroup, has been introduced to combine multiple map items together, for example in a separate QML file. Note that it is currently not possible to use MapItemGroups in combination with MapItemViews.

The post QtLocation 5.9 appeared first on Qt Blog.

NGRX Store and State Management 3

Mon, 2017/09/04 - 3:48pm

In previous articles I described the basic Store and Reducer layout of the pattern.

Reducers are the only way state is modified, and the modifications should be pure, synchronous functions. Anything asynchronous should be handled elsewhere. Another desired side effect may be the chaining of actions, where one action sets in motion a chain of changes and asynchronous calls.

The NGRX solution is Effects. These are essentially a subscription to the Action observable. Every action can be looked at, some side effect applied, and a new action mapped into the stream to be dispatched.

Here are the actions.

export const ARTICLES_LOAD = 'Load Articles';
export const ARTICLES = 'Article list';

export class ArticlesLoadAction implements Action {
   type = ARTICLES_LOAD;
}

export class ArticlesAction implements Action {
   type = ARTICLES;
   constructor(public payload: Article[]) {}
}

export type Actions
   = ArticlesLoadAction
   | ArticlesAction;


The ARTICLES type action was handled in the articlesreducer, inserting the array of articles into the state. But the ARTICLES_LOAD action was not. It is a command action, and here is where Effects do their magic.

@Injectable()
export class ArticlesEffects {
   constructor(private actions$: Actions,
               private http: Http) {}

   @Effect() loadarticles$ = this.actions$
      .ofType(ARTICLES_LOAD)
      .mergeMap(() => this.http.get(api)
         .map(response => response.json())
         .map(articlearray => new ArticlesAction(articlearray))
         .catch(err => Observable.of(new ErrorAction(err))));
}

Effects are an injectable service. Actions, the action observable stream, is injected in the constructor, along with any other services required.
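
For completeness, the effects class also needs to be registered so that NGRX subscribes it to the action stream. Here is a minimal sketch of that registration, assuming @ngrx/effects v4 and the reducers and ArticlesEffects defined in this series (imports of those two symbols omitted):

import { NgModule } from '@angular/core';
import { StoreModule } from '@ngrx/store';
import { EffectsModule } from '@ngrx/effects';

@NgModule({
   imports: [
      // State initialization, as described in the first post.
      StoreModule.forRoot(reducers),
      // Register ArticlesEffects so its observable chains start listening for actions.
      EffectsModule.forRoot([ArticlesEffects])
   ]
})
export class AppModule {}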

The effect is an observable chain, starting with ofType, which is similar to .filter; we are only interested in ARTICLES_LOAD. Multiple actions can be listed:

.ofType(ARTICLES_LOAD, ARTICLES_RELOAD)

mergeMap calls a function that returns an observable, merging the results back into the actions$ stream. The response is mapped to an action with the array of articles as payload. The reducer is listening for this action, and the array will be inserted into the state.

Note that the .catch is chained off the http.get observable, not off actions$. In v2 of NGRX, an Effect would complete if the .catch was chained off the actions$ observable. As well, if you .map the values off the http.get observable, you can return a type that the actions$ stream expects.

If you want to dispatch multiple actions, replace the .map with
.mergeMap(data => [new Action1(data), new Action2(data)])

and each action will be merged into the stream for dispatch.

In some situations you may not want to dispatch a new action. 

@Effect({dispatch: false}) someEffect$ = ...
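
For example, a hypothetical logging effect that observes every action but never dispatches anything back into the stream could look like this (the name and the logging are purely illustrative):

@Effect({dispatch: false})
logactions$ = this.actions$
   // .do runs a side effect without mapping to a new action
   .do(action => console.log('dispatched', action.type));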

The possibilities are almost endless, and this is where much of the work of getting and posting data will occur.


CMake Module Libraries

Mon, 2017/09/04 - 1:30pm

The KDE Extra CMake Modules (docs in CMake-style) are a collection of CMake code that do three things: add useful features to CMake, provide Find modules for dependencies for KDE code, and provide tooling for KDE code. There is slow osmosis between KDE ECM and the official CMake modules shipped with each CMake release.

Some days of the week I work on the ARPA2 project, which is a software stack devoted to security and privacy handling. There’s a bunch of TLS wrangling, and Kerberos hounding (not done by me) and LDAP bit-banging. There’s a whole stack of software, starting with an allocation-free DER-handling library, going up to a TLS handling daemon with a desktop UI intended to allow you to quickly and efficiently select privacy settings and online identities (e.g. X509 certificate identities). Most of the software stack now uses CMake, and most of it uses roughly the same modules. So I’ve been thinking of setting up another “Extra CMake Modules”, but this time for the ARPA2 project.

Much like KDE ECM, there are two things I would want to get out of a CMake module collection:

  • Curated Find modules. Much of the ARPA2 stack works with Berkeley DB, OpenLDAP, GPerf, GnuTLS, Kerberos .. and there seems to be a huge selection of crappy CMake modules for finding those dependencies, and not much that lives up to the quality and consistency standards of KDE ECM. I’ve started collecting these modules, or writing my own, and will try to live up to those same standards.
  • Useful features like version-extraction from git, consistent packaging and config-file generation (and pkg-config files, too). These features spill over into tooling easily: based on consistent source structures across the software stack, it’s possible to do automatic packaging and uninstalls as well. This reduces the complexity of the CMakeLists in each sub-project / software product from ARPA2.

So my goals are pretty clear, and KDE ECM serves as a big inspiration, but I’m left with an existential question: why aren’t there more of this kind of curated library? Especially curated Find modules are really valuable so that CMake-based projects can easily and consistently find components from outside the CMake ecosystem. I can’t possibly be the only one who needs to find Berkeley DB on both Linux and FreeBSD. I can understand core CMake not shipping Find modules for everything. I can understand Berkeley DB itself not shipping CMake files so that find_package() can use config mode. What surprises me though is that no one has sat down and said “here’s the bestest, most portable, shiniest FindBDB.cmake you can get, and everyone should be using FindBDB from the DatabaseCMakeModules package (which, incidentally, also comes with Find modules for a dozen other databases and …)”.

In a way, I’m wondering where the “CMake extragear” is, something close to CMake, but not part of the official distribution.

So, with that much ado, here’s a link to the ARPA2 CMake Modules github repository; it’s in an early stage of development and still collecting modules from the ARPA2 stack, but I do hope to reach a point where find_package(ARPA2CM) becomes as but-of-course as find_package(ECM).

Interview with Miri

Mon, 2017/09/04 - 8:37am

Could you tell us something about yourself?

Hi there, I’m Miri. I’ve been drawing ever since I could pick up a pencil and switched to digital in the past 3 or so years, currently enrolled in college to grind out credits and hopefully become an animator.

Do you paint professionally, as a hobby artist, or both?

At this moment in time I’m just a hobbyist who does volunteer work for the Smite Community Magazine, but I’d love to be a professional someday.

What genre(s) do you work in?

Mainly fantasy and mythology gods and gaming fanart, I really love drawing characters.

Whose work inspires you most — who are your role models as an artist?

Oh man, I really love Araki Hirohiko’s figure style, the way he draws poses is just fascinating and what really kickstarted my interest in drawing people. Jaguddada on DeviantArt is absolutely hands down my favorite digital painter, the way he handles colors and how his digital paint strokes look like oil on canvas just fascinates me, I’d love to learn color theory and his painting skills one day. Baby steps!

How and when did you get to try digital painting for the first time?

I think it was in 2014 with Paint Tool Sai, I had interest in making fanart of some band I was into, it was rough and blocky and simple, but really fun.

What makes you choose digital over traditional painting?

The ability to just erase mistakes like they never even happened, and being able to redline things without it smudging, just having so many ways to fix mistakes like they weren’t even there to begin with, and the non-messy cleanup when you’re finished drawing. No pencils littered everywhere or shavings, no drying out markers to replace, etc.

How did you find out about Krita?

I made the switch to Linux in early 2016, finding out that Sai and CS6 wouldn’t be available unless I used WINE to emulate them, I just started looking for a free software that fit my taste. GIMP was okay, but it had a few quirks I couldn’t really iron out, and it was a bit simple for me. I think I found out about Krita through an art thread on some imageboard for taking requests, tried it out, it ran like a smoother SAI and I haven’t looked back.

What was your first impression?

I was like “Wow, this is a lot to take in and learn.” I’m still trying to figure out everything! There’s so many buttons I’m still unsure on what they do, I am just a simpleton!

What do you love about Krita?

I really love how it’s Linux friendly, and how it’s like a great replacement for Sai, like, I can actually make cool things in it without too much effort, and then there’s even levels further that I haven’t even explored yet that will probably even improve my stuff even more in the future. Also the opacity slider right above layers? Found that the first time I opened it, and it’s been my best friend ever since, took me a while on other programs to figure out what that bar did.

What do you think needs improvement in Krita? Is there anything that really annoys you?

There was one bug in the past when I used my left handed script that it would reset the pen size and opacity and brush type once you flipped it over from using the eraser, and it was my burden. Also the unexpected crashes. But the bug got fixed so I finally got to update my Krita again and that was a very good day. Also, not sure if it exists and I just haven’t found it yet, but Sai had a clipping tool that made it so you could clip layers to other layers so if you colored out of one of the layers it wouldn’t bleed to the rest of the drawing, that’s something I’ve googled but haven’t found the answer to yet, but it would
be a godsend to find.

What sets Krita apart from the other tools that you use?

Free Software and Linux compatible. Cannot stress it enough. Linux support is a dealmaker and the fact I don’t have to pay out of pocket for a program that works just as good if not even better than the paid ones is really awesome.

If you had to pick one favourite of all your work done in Krita so far, what would it be, and why?

Every new piece becomes my favorite for a certain time, until I start pointing out flaws in it, and then go back to an older piece. I’m really into my Ra + Thoth piece, though, just because of the details and shading coming out so nice, not to mention a background I was really pleased with.

What techniques and brushes did you use in it?

Oh god it’s been so long, I believe I used the paint settings marker brush with its slight opacity for shading, one of the square brushes under blending for lighting, and the pencil tool from Deevad’s set for outlining. I’m really obsessed with the paint set of default pens for the basics.

Where can people see more of your work?

Deviantart: Pluuck
Twitter: Hikkikomiri
(Or if you like MOBAs, check out the latest SMITE community gaming magazine!)

Anything else you’d like to share?

Honestly, never give up. Accept critique but don’t let it eat you away
to make you fear picking up a pen again. What may be something you find
a struggle, may be even harder for someone else. Be unique, be creative,
and always keep improving.

Rust Qt Binding Generator

Mon, 2017/09/04 - 12:00am

This blog post is the announcement of Rust Qt Binding Generator. The project is under review for inclusion in KDE. You can get the source code here.

Scroll to bottom for the screenshots.

Rust Qt Binding Generator (Logo by Alessandro Longo)


This code generator gets you started quickly to use Rust code from Qt and QML. In other words, it helps to create a Qt based GUI on top of Rust code.

Qt is a mature cross-platform graphical user interface library. Rust is a new programming language with strong compile time checks and a modern syntax.

Getting started

There are two template projects that help you to get started quickly: one for Qt Widgets and one for Qt Quick. Just copy one of these folders as a new project and start coding.

Here is a small schematic of how files made by the generator are related:

Qt Widgets (main.cpp) / Qt Quick (main.qml)   ⟵ UI code, written by hand

src/Binding.h
src/Binding.cpp
rust/src/interface.rs   ⟵ generated from binding.json

rust/src/implementation.rs   ⟵ Rust code, written by hand

To combine Qt and Rust, write an interface in a JSON file. From that, the generator creates Qt code and Rust code. The Qt code can be used directly. The Rust code has two files: interface and implementation. The interface can be used directly.

{ "cppFile": "src/Binding.cpp", "rust": { "dir": "rust", "interfaceModule": "interface", "implementationModule": "implementation" }, "objects": { "Greeting": { "type": "Object", "properties": { "message": { "type": "QString", "write": true } } } } }

This file describes a binding with one object, Greeting. Greeting has one property: message. It is a writable property.

The Rust Qt Binding Generator will create binding source code from this description:

rust_qt_binding_generator binding.json

This will create four files:

  • src/Binding.h

  • src/Binding.cpp

  • rust/src/interface.rs

  • rust/src/implementation.rs

Only implementation.rs should be changed. The other files are the binding. implementation.rs is initially created with a simple implementation that is shown here with some comments.

use interface::*;

/// A Greeting
pub struct Greeting {
    /// Emit signals to the Qt code.
    emit: GreetingEmitter,
    /// The message of the person.
    message: String,
}

/// Implementation of the binding
/// GreetingTrait is defined in interface.rs
impl GreetingTrait for Greeting {
    /// Create a new greeting with default data.
    fn new(emit: GreetingEmitter) -> Greeting {
        Greeting {
            emit: emit,
            message: "Hello World!".to_string(),
        }
    }
    /// The emitter can emit signals to the Qt code.
    fn emit(&self) -> &GreetingEmitter {
        &self.emit
    }
    /// Get the message of the Greeting
    fn get_message(&self) -> &str {
        &self.message
    }
    /// Set the message of the Greeting
    fn set_message(&mut self, value: String) {
        self.message = value;
        self.emit.message_changed();
    }
}

The building blocks of Qt and QML projects are QObject and the Model/View classes. rust_qt_binding_generator reads a JSON file to generate QObject or QAbstractItemModel classes that call into the generated Rust files. For each type from the JSON file, a Rust trait is generated that should be implemented.

This way, Rust code can be called from Qt and QML projects.

Qt Widgets with Rust

This C++ code uses the Rust code written above.

#include "Binding.h" #include <QDebug> int main() { Greeting greeting; qDebug() << greeting.message(); return 0; }
Qt Quick with Rust

This Qt Quick (QML) code uses the Rust code written above.

Rectangle {
    Greeting {
        id: rust
    }
    Text {
        text: rust.message
    }
}

Demo application

The project comes with a demo application that shows a Qt user interface based on Rust. It uses all of the features of Object, List and Tree. Reading the demo code is a good way to get started.

Qt Widgets UI with Rust logic

Qt Quick Controls UI with Rust logic

Qt Quick Controls 2 UI with Rust logic

Docker development environment

To get started quickly, the project comes with a Dockerfile. You can start a docker session with the required dependencies with ./docker/docker-bash-session.sh.

More information

Last week in Kube

Sun, 2017/09/03 - 6:46pm

“Kube is a modern communication and collaboration client built with QtQuick on top of a high performance, low resource usage core. It provides online and offline access to all your mail, contacts, calendars, notes, todo’s and more. With a strong focus on usability, the team works with designers and UX experts from the ground up, to build a product that is not only visually appealing but also a joy to use.”

For more info, head over to: kube.kde.org

  • Improved connected status tracking in IMAP resource. The IMAP resource now correctly goes into offline status if an operation times out.
  • Introduced a ConnectionLost error when a connection times out that Kube can display.
  • Fixed a couple of threading issues in sink. One could lead to a deadlock on shutdown of the synchronizer process which resulted in synchronizer processes not dying.
  • Fixed account status monitoring when creating a new account. Previously the account status would not include newly created resources, which broke account status monitoring when creating a new account without restarting the application afterwards.
  • The scrollbars are now properly hidden if the content is smaller than the container (nothing to scroll).
  • Set the color of the colorbar indicating the signature state according to that state.
  • Added a tooltip to the signature state bar providing some basic info. This is a stub until we have a proper UI element for that.

Kube Commits, Sink Commits

Previous updates


NGRX Store and State Management 2

Sun, 2017/09/03 - 6:05pm

In the first post I described what application state is and how to use it in your components.

Here we will get into modifying the state.

If you remember, our application State looked like this.

export interface Article {
   author: string;
   title: string;
}

export interface ArticleState {

   articlelist: Article[];
}

export interface State {
    articles: ArticleState;
    auth: AuthState;
    preferences: PreferencesState;

}

And we defined the reducers.


export function articlesreducer(state: ArticleState = InitialArticleState, action: Action) {}


On startup of the application we want the State to be usable as is in components, otherwise we would need to jump through hoops to check the existence of properties etc. On initialization the reducers are called with an undefined state parameter, so we can pass it an initialization value, here called InitialArticleState.

export const InitialArticleState: ArticleState = {
   articlelist: []

};

The ngFor in our template can handle an empty array without error.

State is modified by dispatching an action. How you define your actions is how your application logic will be executed. Victor Savkin wrote a blog post describing the types of messages you will use in your application. As your application grows having an established pattern for naming your actions makes it much easier to use.

So what actions do we need for our articles? We need to get them from somewhere. So we need a command action. And the articles need to be inserted into the store by a document type action.

An Action looks like this.

export interface Action {
  type: string;
}

To define them.

export const ARTICLES_LOAD = 'Load Articles';
export const ARTICLES = 'Article list';

export class ArticlesLoadAction implements Action {
   type = ARTICLES_LOAD;
}

export class ArticlesAction implements Action {
   type = ARTICLES;
   constructor(public payload: Article[]) {}
}

export type Actions
   = ArticlesLoadAction
   | ArticlesAction;

The Articles reducer function would respond to these actions.

export function articlesreducer(state: ArticleState = InitialArticleState, action: fromArticles.Actions) {
   switch (action.type) {
      case ARTICLES:
         return Object.assign({}, state, {
            articlelist: action.payload
         });
      default: return state;
   }
}

The ARTICLES action is handled by assigning the array of articles to the articlelist property. Each reducer is called with every action, so return the state in the default case or the state will disappear into undefined. I don't handle ARTICLES_LOAD here because the state isn't modified by that action. A loading indicator could watch for load actions if desired.

How to dispatch an action? 

constructor(private store: Store<State>) {
    this.store.dispatch( new ArticlesLoadAction());
}

An important point. State should be immutable; don't modify the state slice, return a new instance. If you are working with items in an array, don't push or pop, return a new instance of the array with the changes.
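
As an illustrative sketch of what that means for the articles slice (addArticle is a hypothetical helper, not part of these posts):

// Add one article immutably: copy the state object and copy the
// array with the new item appended, instead of pushing onto it.
function addArticle(state: ArticleState, article: Article): ArticleState {
   return Object.assign({}, state, {
      articlelist: [...state.articlelist, article]
   });
}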

Reducers are very easy to test: pass one a state object and an action, then see what it spits out.
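
Here is a minimal sketch of such a test, assuming the reducer, actions and initial state defined above and the Jasmine setup the Angular CLI provides:

describe('articlesreducer', () => {
   it('inserts the payload into articlelist on ARTICLES', () => {
      const articles: Article[] = [{ author: 'Jane Doe', title: 'Hello NGRX' }];
      const state = articlesreducer(InitialArticleState, new ArticlesAction(articles));
      expect(state.articlelist).toEqual(articles);
   });

   it('returns the state unchanged for unknown action types', () => {
      const state = articlesreducer(InitialArticleState, { type: 'Some Other Action' } as any);
      expect(state).toBe(InitialArticleState);
   });
});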

How do we get the articles?
And how do we break up our state/reducers into modules?

NGRX Store and State Management

Sun, 2017/09/03 - 4:46pm

https://gitter.im/ngrx/platform

This is a great chat site for getting assistance. The angular/angular chat is active as well.

I have seen a couple of times experienced developers finding NGRX difficult to grasp. That was my experience as well. So I'll try to explain what it is about.

The best way to start is to lay out the problem that it solves. The Redux pattern implemented in NGRX is a way to maintain application state.

What is application state? All the data that the application requires to build and render the view over time.

Starting from the view, or Component level in Angular, here is how the data is presented by NGRX.

@Component({
   selector: 'article-view',
   template: `<div *ngFor="let a of articlelist$ | async">
                  {{a.author}}{{a.title}}
              </div>`  
})
export class ArticleListView {

   articlelist$ = this.store.select(getArticles);

   constructor(private store: Store<State>) {}

}

First you inject the store using the constructor.
Then you assign a value with a selector, in this example, getArticles.

Then, in the template, articlelist$, an observable, is piped through async, returning an array that is passed to ngFor, which loops through each item and displays the author and title.

State is a list of articles each with a title and author. Let's define that.

export interface Article {
   author: string;
   title: string;
}

export interface ArticleState {

   articlelist: Article[];
}

Then define the application state.

export interface State {
   articles: ArticleState;
}

I think you may hit the first objection to this pattern right here. All you have is a list of objects, why the excessive boilerplate? Believe me it will get worse. In fact for such a simple application, NGRX doesn't really make sense. But fill out this example into a full content display application with users, multiple data sources, complex content structures, multiple developers. Then the boilerplate, the very clear definition of everything in a way that is easy to read becomes very important. Embrace the Boilerplate!

Let's add a second state slice, which will help illustrate the pattern.

export interface State {
    articles: ArticleState;
    auth: AuthState;
    preferences: PreferencesState;
}

So we add authentication and authorization, and a way to track preferences. I won't define them here for brevity's sake, but it will help you understand the Store structure when it comes to initialization.

A basic rule of this pattern is that the state is changed in one way only, with a reducer function. This is a function that looks like this:

export function reducer(state, action) {}

It gets passed state and an action. Let's define the reducer functions for the three state slices.

export function articlesreducer(state: ArticleState = InitialArticleState, action: Action) {}

export function authreducer(state: AuthState = InitialAuthState, action: Action) {}


export function preferencesreducer(state: PreferencesState = InitialPreferenceState, action: Action) {}


Here they are with types. As you can see, the articlesreducer doesn't get State as a parameter, but State.articles. How does that work? This is how you define your reducers.

export const reducers: ActionReducerMap<State> = {
   articles: articlesreducer,
   auth: authreducer,
   preferences: preferencesreducer
};

Then to initialize the state, in your app.module.ts file

imports: [
   StoreModule.forRoot(reducers),
]

The reducers object's keys and the State interface's properties match. When the reducers are called, the auth reducer is called with State.auth.

In the reducer function, if state is undefined it is assigned the value of the default. On initialization all reducers are called with an undefined state, initializing the state on startup.

What about that selector used in the component? Selectors are composed of a series of selectors that extract a specific property from the State.

export const getArticleState = (state: State) => state.articles;
export const getArticleList = (state: ArticleState) => state.articlelist;
export const getArticles = createSelector(getArticleState, getArticleList);

More boilerplate. Why not something like this?

export const getArticles = (state: State) => state.articles.articlelist;

Because then, every time any change happened anywhere in State, the subscription in your component would fire.

This is one of the two one-way data flows in the NGRX pattern. But it already shows promise. A change in the state values will show up in the view. The data acquisition and handling is outside of the component. The Store structure is extensible: adding another state slice breaks nothing, making adding a feature to your app or component almost trivial.

How do you modify State?
How do you break down the State into pieces that are modular?
How do you deal with api calls to get data from elsewhere?

GSoC- Final month analysis

Sun, 2017/09/03 - 4:21pm


So, the final month of GSoC just wrapped up, and in this post I will be talking about the last month, the implementation of tutorial mode for the Digital Electricity activity.

I started off the month by separating the two modes of the activity, with the help of an additional config option in the bar.


Whenever the selected value is changed, the current mode is saved in dataToSave["modes"] using the saveData signal. This ensures that the user can start off with the mode in which they left off, as the current mode is set to the value stored in the dataToSave["modes"] variable whenever the activity is loaded.

Next, I moved on to create a dataset which essentially contains the values to be provided to the activity for every level, with the goal of removing redundancy in the code. It mainly consists of:

  • a list of all the components present in the activity
  • data required for each of the tutorial levels

Creating and importing data from the dataset went really smoothly; using it, pre-loaded components were placed for the tutorial levels. The pre-loaded components (the electrical components and the wires) were made immune to deletion by disabling their MouseArea component.


By the next week, I started implementing levels of various types while maintaining a consistent difficulty curve. Regarding the levels, instead of having the difficulty increase gradually, I thought of varying the difficulty curve, so that once the user has solved a hard puzzle, they are rewarded with a relatively easy one.


Constant increase in difficulty


Rewarding the user with an easy level for a difficult one

By this time, the biggest task that remained on the checklist was comparing the user’s answers and checking the correctness. We came up with a number of approaches to solve the problem and ended up with the following one, with the goal of keeping a good balance between readability, optimisation and low redundancy:

  • The levels are divided into a few similar parts:
    • lightTheBulb: The levels with the goal to light the given bulb
    • equation1Variable: The levels which ask the user to solve a puzzle based on an equation with 1 variable
    • equation2Variables: Solving a similar equation using 2 variables
    • equation3Variables: Solving a similar equation using 3 variables
    • others: Special levels, which need to be dealt with separately

Implementing the first type was very easy: we just needed to check the values of the bulb when the “OK” button was pressed and display the results accordingly.

For the levels requiring the player to solve a specific equation, the general algorithm was:

- Store the current state of the switches
- Loop through all the possible input scenarios
- For each input combination, check the answer via the result() function
- If the expected answer and the current answer are not the same, restore the previous state and return
- Display the correct answer and move on to the next level

The result() function is present in the dataset for each level of the “equationXVariables” type; it takes the inputs as arguments and returns the calculated result specific to that level. As an example:

result: function(A, B, C) { return A | (B & C) }
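
As a rough sketch of the exhaustive check for a three-variable level (the real activity is written in QML/JavaScript and toggles the actual switches; the userCircuit function here is only a hypothetical stand-in for reading the circuit's output):

// Compare the dataset's expected result() function against the player's circuit
// for every combination of the three binary inputs.
type Result3 = (a: number, b: number, c: number) => number;

function answerMatches(expected: Result3, userCircuit: Result3): boolean {
   for (let a = 0; a <= 1; a++) {
      for (let b = 0; b <= 1; b++) {
         for (let c = 0; c <= 1; c++) {
            if (expected(a, b, c) !== userCircuit(a, b, c)) {
               return false;   // one mismatch is enough to reject the answer
            }
         }
      }
   }
   return true;               // all eight input combinations agree
}

// Example: the result function shown above, as used for one level.
const expected: Result3 = (A, B, C) => A | (B & C);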

The final task in the todo list was to come up with a way to access the toolbar on devices with small screen sizes, on which the icons were too small to be targeted correctly, as shown below:


The solution we came up with was to replace the four icons with a single “Tools” button which, when clicked, brings up a menu of toolbar options. The goal of this implementation was to keep a balance between ease of accessibility and the number of inputs required to access a specific tool. The final result was something like this:


On further testing, we thought that it would be nice to have an option of zooming in and out of the playArea, in order to allow the player to create and test bigger circuits. The zooming was implemented by multiplying the width, height and position of each component by the currentZoom value. For moving around the area, we take the player's input and move the components relative to it, providing the following result:


What is remaining?

Besides finding and fixing bugs, the only major task that remains ahead for the activity is to ensure traversal along the playArea on touch screen devices as well, which currently is only supported via the arrow keys. The idea as of now is to implement swiping along the playArea as the source of input. I will post an update once the implementation starts rolling out.

This concludes the overview of the final month of GSoC. It was a fun ride, and I will be back with another post covering the whole GSoC experience in general in the coming week.

Relevant links

Kubuntu Artful beta 1 milestone released today

Thu, 2017/08/31 - 9:32pm

I'm happy to announce that the Kubuntu Artful Beta 1 milestone was released today, having passed all the mandatory testing thanks to lots of testers! Thanks so much to each of you.

If possible, we'll also be participating in Beta 2 with the next round of KDE bug-fix releases for the last testing milestone before release of Kubuntu 17.10 on 19 October 2017.

Release notes: https://wiki.ubuntu.com/ArtfulAardvark/Beta1/Kubuntu

If you would like to help us seed the torrents, go to http://torrent.ubuntu.com:6969/. To quickly find all the betas, press Ctrl+F and type “beta”.

Join us in freenode IRC: #kubuntu-devel with praise, help, or bug reports.

KTorrent 5.1

Thu, 2017/08/31 - 5:02pm

As an acting release manager I would like to announce KTorrent 5.1.

https://download.kde.org/stable/ktorrent/5.1/ktorrent-5.1.0.tar.xz.mirrorlist
https://download.kde.org/stable/ktorrent/5.1/ktorrent-5.1.0.tar.xz.sig.mirrorlist
https://download.kde.org/stable/ktorrent/5.1/libktorrent-2.1.tar.xz.mirrorlist
https://download.kde.org/stable/ktorrent/5.1/libktorrent-2.1.tar.xz.sig.mirrorlist

KF5 port is now more complete than in KTorrent 5.0:
Multimedia, search, scanfolder, ipfilter, stats, scripting, syndication (rss) plugins
are now ported to Qt5. The only missing bits are webinterface plugin and plasmoid.

Also, thanks to Luigi Toscano, who took over the KTorrent 5.1 RC release
after my laptop screen broke.

Note that libktorrent crashes if qca is built with botan support and botan is built
with gmp support. Make sure at least one of these is not enabled. In fact, botan 2
already has gmp support completely removed, but most distributions come with botan 1.

Also, libktorrent apparently requires Qt 5.7 even though CMakeLists.txt only requires 5.2.
There is a patch to lower the Qt requirement in the 2.1 branch
https://phabricator.kde.org/R472:bcb17b62ff492a7bc7d65c59a5b0a3513199c65d if you need it,
although right now KTorrent requires Qt 5.7 anyway.

SDDM v0.15.0

Thu, 2017/08/31 - 2:21pm

SDDM is a Qt based Display Manager used by multiple desktops, but most importantly (certainly for the PlanetKDE crowd), KDE.

After a year of seemingly little activity, I’ve released SDDM v0.15.0.
It is mostly a bugfix release with important changes, but nothing to get particularly excited about.

For full release notes please see: https://github.com/sddm/sddm/wiki/0.15.0-Release-Announcement

Now this is out, I shall be merging a huge queue of pending larger changes; hopefully we shall see 0.16 in only a few months.

KDE: New release for Libkvkontakte!

Thu, 2017/08/31 - 10:31am

Libkvkontakte version 5.0.0 has been released today and can be downloaded from
https://download.kde.org/stable/libkvkontakte/5.0.0/src/
GPG:
Scarlett Clark (Lappy2.0 Debian Packaging)
7C35 920F 1CE2 899E 8EA9 AAD0 2E7C 0367 B9BF A089

The release allows distribution packagers to enable the new features in the latest digiKam release.
Enjoy!

Latte bug fix release v0.7.1

Thu, 2017/08/31 - 8:21am

Latte Dock v0.7.1 has been released, containing many important fixes and improvements; you can find more details at the end of the article.

KDE Project

One more surprise.. :) The Latte Dev Team decided that it would be best for the project to officially become a KDE project and become part of the fantastic community and infrastructure that KDE provides. After this release, all the infrastructure will be moved under the KDE umbrella: source code, bug reporting/requests, translations, forum, etc. That means that all translations will be part of the KDE processes, and in the future users will also benefit from much easier installations and maybe new exciting features for v0.8. In the back of my head is that we might be able to use the store.kde.org infrastructure for layouts, so the user can upload/download Latte layouts online. For now our main focus will be to publish the next bug fix release entirely from the KDE infrastructure.


Go get v0.7.1 from GitHub!





Fixes/Improvements

  • added “New” button in Layouts manager
  • “Close” window from context menu was moved to the end
  • provide always valid task geometries; fixes minimize/unminimize effect issues
  • improve scroll wheel behavior, it is only used to show and activate windows and not minimizing them
  • fix issue with Firefox 55 that was blocking the dock from showing
  • improve combination of previews and highlight effect (the user can now highlight windows from their previews)
  • provide a previewsDelay setting which can be used by advanced users to lower the delay between showing previews or highlighting windows. Be careful, very low values don’t provide correct previews; 150ms is by default the lowest value that is taken into account. The value must be added in the Latte plasmoid general settings in any layout file
  • show correct icon when a single window is removed
  • allow for 1px substitutions for applet sizes when in advanced mode and the user has disabled the automatic shrinking… This way, for example, you can have a Latte panel with a size of 29px.
  • the behavior to show the background only for maximized windows now respects the applets’ shadow settings… concerning visibility, color, size etc…
  • fix a crash when changing layouts from settings combobox

Akademy 2017

Wed, 2017/08/30 - 2:31pm

This year I attended my first ever conference, Akademy, the annual world summit of KDE.

akademy2017-groupphoto.jpg
(You can find people by name here)

I presented a talk on Ruqola [link to the video] and was amazed to see the reaction and support I (and Ruqola) received.

I have been working on Ruqola since February this year and later as my Google Summer Of Code 2017 project. You can read my final status report here.

Akademy started on quite an adventurous note for me. My luggage was delayed at Madrid Airport (and later at my final destination to Almeria). On my way to Civitas (the place where most of us were staying), I met a group of people going to Civitas as well. So we got lost together and found the destination together.

At the warm up party the same evening, I met a lot of people and spent most of my time with Timothee Giet and Jure Repinc (such cute and funny people).

Next morning I got a call from the airport that they’ve got my luggage (yayyy :D)

When I reached Universidad de Almeria, I was stunned to see the view.

The day started with some amazing talks including Tales of Unicorns and Cake (by Robert Kaye, the Keynote speaker), Plasma (by Sebastian Kügler), Developing for our users (by Aleix Pol Gonzalez), Clazy (by Albert Astals Cid), Kirigami (by Marco Martin) and many more.


Aditya Mehra‘s talk on the Mycroft plasmoid was the highlight of the day for me. The topic, after all, was intrinsically interesting — issuing voice commands to an AI assistant on your desktop that appeals to everybody.

After listening to such amazing talks during the day, it was my turn by the evening. You can have a look at my talk here.

Before my talk I was sitting beside David Faure (not knowing each other). After I came back to my seat, David had already built Ruqola on his laptop and was using it (Yes, wow). He gave me feedback on a lot of things and set up a bug list which I could refer to for improvements.

Then Tomaz Canabrava helped with porting Ruqola from qmake to CMake.

Then Marco Martin, maintainer of Kirigami, helped set up Kirigami for Ruqola’s UI.

Gerry Boland tried his best to fix my Ubuntu (and I really really appreciate his effort) but couldn’t get a handle on the weird things going on in my system. In the end, I rebooted with KDE Neon User Edition with the help of Jonathan Riddell.

(You see how many people helped me out.. such is the entire KDE community ❤ )

The second day had equally awesome talks including The KDE Community and its Ecosystem (by Antonio Larrosa Jimenez, the keynote speaker), Input Methods in Plasma 5 (by Eike Hein), KDE neon Docker Images (by Jonathan Riddell).

Also, Baltasar Ortega (from KDE Blog) took my interview regarding  GSoC and Ruqola.

Then we had a beach party thrown by Jonathan Riddell!


Then from the next day on started the workshops and BoFs. I attended Anu Mittal’s workshop on Qt Quick Controls 2, followed by Paul Brown’s (this guy is next level) workshop on increasing your audience’s appreciation for your project, which highlighted how we should present our product to the users.

I also met Lukas Hetzenecker, the only other current GSoC student present there and who is also the organizer of next year’s Akademy!

It was a pleasure to meet Valorie (she’s lovely!), Aleix Pol, Baltasar, Anu Mittal, Boudhayan Gupta, Arnav Dhamija, John Samuel, Gabriele Ponzo, Sebastian Kügler, Dominik Haumann, Thomas Pfeiffer, Frederik Gladhorn and so many more people whom I don’t remember by name but by face.

It was my first Akademy, first conference, first GSoC and it couldn’t have been any better! Though I (and a lot of other people) missed Riccardo Iaconelli at this year’s Akademy, I hope to meet him in person soon. A big hug to the entire Akademy team for organizing everything so nicely. And another big hug to all the people I met in Almeria!

 



KDE: Libmediawiki has been released!

Wed, 2017/08/30 - 12:04pm

Libmediawiki version 5.37.0 has been released today and can be downloaded from
https://download.kde.org/stable/libmediawiki/
GPG:
Scarlett Clark (Lappy2.0 Debian Packaging)
7C35 920F 1CE2 899E 8EA9 AAD0 2E7C 0367 B9BF A089

The release allows distribution packagers to enable the new features in the latest digiKam release.
Enjoy!

Design choices ahead

Wed, 2017/08/30 - 1:45am

As many of you may know by now, we are currently doing a code refactoring which will be taking a step forward in making our software more suitable for professional use. In the process, we are facing some critical design choices, and want to hear the opinion of the editors of the community.

Currently, a clip inserted in the timeline in Kdenlive can be one of three things: video only, audio only, or both audio and video. While this approach gives flexibility to the user, it is quite non-standard amongst video editing software, and may cause trouble if we try to implement some more advanced features like an audio mixer. The alternative, implemented in other software, is to avoid hybrid clips altogether, and only allow video-only and audio-only clips. Of course, in such a situation, inserting a clip from the bin to the timeline would actually create two clips on the timeline: one for the audio and one for the video.

We will be asking at forums, websites, subreddits, etc… for people to voice their opinion for one of the approaches, or outline advantages/drawbacks that we may not have thought of. We are looking forward to hearing from you!

Great Web Browsing Coming Back to KDE with Falkon, New Packaging Formats Coming to KDE with Snap

Tue, 2017/08/29 - 3:01pm

Today is a good day filled with possibility and potential. The browser formerly known as QupZilla has gained a better name, Falkon, and a better home, KDE. This brings quality web browsing back to native KDE software for the first time in some years. It’s a pleasingly slick experience using QtWebEngine and integrating with all the parts of Plasma you’d expect.

At the same time, we at KDE neon are moving to a new packaging format, Snaps, a container format which can be used on many Linux distros. Falkon is now built by the KDE neon CI and is in the edge channel of the Snap archive.

You’ll need to install the KDE Frameworks snap first if you haven’t already; it comes pre-installed on recent builds of KDE neon. Then install Falkon from edge.

snap install kde-frameworks-5
snap install --edge falkon

Remember this works on any distro with Snap support, which is most of them.

Let us know how you get on.

 

 

GSoC - Third month analysis

Tue, 2017/08/29 - 7:00am

As you know I was working on musical activities and wrote about note names in my previous blog post. The gtk+ version of GCompris had the following musical activities:

  1. Note names
  2. Piano composition
  3. Play piano
  4. Play rhythm

My aim was to work on note names and piano composition, as those are the most basic activities kids need to learn. A child should learn note names first to have a good understanding of note positions and the naming convention. Then the piano composition activity should be played to gain knowledge of musical notation and the musical staff; then comes the play piano activity, which explains how the piano keyboard can play music as written on the musical staff; and then rhythms are learnt on the basis of what they see and hear in play rhythm. In the last two weeks I worked on completing Note names, which you can test in my note names branch.

I also worked on piano composition, where I did the following things:

  1. Added the components, including playing of music on the staff, changing the note length to quarter, whole, half or eighth note, and changing the clef to bass or treble.
  2. I also added the initial load button for loading melodies and worked on the save button for saving the melodies. The saving of notes is still in progress; we will save the melodies to ~/.local/share/GCompris/piano_composition in the form of .json files which the user can load and delete.
  3. Improved the vertical layout of piano composition.

In the last month I also worked on polishing the overall house animation for oware and also added the score animation. Though the score animation is yet to be polished to make it more user friendly for kids. Overall this was a musical month for me where I enjoyed working on musical activities (listening to notes and melodies like Twinkle Twinkle little star refreshed my childhood memories). There was also an emergency in my area and curfew was imposed for 72 hours and we had no internet access during that period :( But now everything is back on track. The 3 official months of our working comes to an end but well that’s the start of a journey in itself of helping new contributors to be a part of our family and help them in the best way we can. I hope that our experience and reviews can help them produce clean and factored code :) At the end it should not be just workable code but clean and maintainable to make it easy for future contributors to read and make changes :) I would work on the remaining things and features and polishing my activities in the coming time and would cover the whole GSoC period in my next blog :)
