Stefan Westerfeld recently posted a detailed
summary of the KDE 2.2 Multimedia Meeting, held on IRC on March 6.
The meeting focused on aRts, the linchpin of the KDE 2 multimedia architecture.
Topics covered include threading the aRts server; improved error handling
when distributed objects fail; increasing aRts user-friendliness (e.g.,
useful error messages); compatibility with GNOME (e.g., providing a
C/CSL interface); improving KMedia2 (noatun, etc.) by removing gaps between
files, allowing for streamed input and embedding video inside noatun;
making aRts effect GUIs (e.g., Synth_FREEVERB) toolkit-independent;
and adding a mixer to aRts (aRts does not currently work with KMix).
For the full report, read below.
The KDE 2.2 Multimedia Meeting: IRC discussion summary
written by Stefan Westerfeld
This summary provides an overview of the things that we talked about
during the KDE2.2 multimedia meeting which took place on IRC (2001-03-06).
If you want to read more than the summary, the full IRC logfile is available.
Participants (in no particular order) are listed at the bottom.
Somehow, it could have been called the
aRts development meeting.
Maybe that was a result of talking about the aRts TODO list "first"
(because it was the only document which was available before the meeting),
and then getting to nothing else. Maybe it was a result of me moderating ;).
And finally, maybe it was because aRts is a very central component of the
KDE multimedia development, and thus working on KDE multimedia almost
always implies somehow getting in touch with aRts.
Anyway, the following will try to give an idea of what we talked about, and
try to reconstruct the tasks that were assigned.
- aRts Core/SoundServer
- Error handling
- SoundServer user-friendliness
- CSL/C API
- Gapless playing
- PlayObjects for custom data streams
- X11 Embedding
- GUIs for aRts objects
- Mixer issues
Threading: mpeglib is threaded and the decoders are threaded; this item is
about making aRts thread-safe enough to develop threaded PlayObjects and the like.
It's mostly done anyway, so it will be in KDE2.2 (libmcop-mt).
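As a sketch of what "threadsafe enough" means in practice (the names below are illustrative, not the actual libmcop-mt API): calls into a shared dispatcher from a decoder thread must be serialized, which a single lock can provide.

```cpp
#include <functional>
#include <mutex>

// Hypothetical sketch: a threaded PlayObject (e.g. an mpeglib decoder
// thread) must not enter the dispatcher concurrently with the main
// thread, so every call is funneled through one lock.
class Dispatcher {
public:
    void invoke(const std::function<void()> &op) {
        std::lock_guard<std::mutex> guard(lock);  // one call at a time
        op();
        ++invocations;
    }
    int invocations = 0;
private:
    std::mutex lock;
};
```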
We discussed error handling in MCOP apps. The problem is that currently, when
artsd dies, apps segfault unpredictably. This is usually because, once artsd
has died, remote operations that return object references return null pointers.
Calling outstack(), for example, will return StereoEffectStack::null(), and
calling a method on that will segfault. It should be possible with some coding
to trigger a callback once proxies (soundserver) become invalid, so that
applications can be terminated more or less gracefully before they get into
a situation where they can crash.
Tobi volunteered to take care of this.
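A minimal sketch of the callback idea, with invented names (this is not the actual MCOP API): applications register a handler that fires when the connection watcher detects that the server is gone, instead of later dereferencing a null proxy.

```cpp
#include <functional>
#include <vector>

// Hypothetical sketch: a watcher that lets applications react to the
// death of artsd via a registered callback, so they can shut down
// gracefully instead of segfaulting on a null object reference.
class ServerWatch {
public:
    void onServerGone(std::function<void()> cb) { callbacks.push_back(cb); }
    // Called by the transport layer when it detects that artsd has died.
    void notifyServerGone() {
        alive = false;
        for (auto &cb : callbacks) cb();
    }
    bool serverAlive() const { return alive; }
private:
    bool alive = true;
    std::vector<std::function<void()>> callbacks;
};
```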
Jeff Tranter volunteered to take care of the following two TODO items:
- add KMessageBox-style output of error messages (often, people
don't know why artsd is not running or won't start)
- add -S option (to change the autosuspend timeout)
Note: Actually, it seems he has already completed this. ;)
Providing a compatible way to do sound based on aRts in KDE/Gnome would be
nice. It seems that having a pure C implementation for the CLIENT is a good
idea for Gnome. CSL intends to provide that. This doesn't mean that CSL is
replacing the aRts C API (it will be kept for compat reasons but might end
up being a thin CSL wrapper one day).
I am working on CSL a bit, and maybe it will be more ready/popular soon.
When playing more than one file with noatun there is a gap of 0.5 seconds
between the current and the next file. We came to the conclusion that noatun
is causing this by polling when a file is done. Luckily for noatun, there
are change notifications in aRts, so that it should be possible to find out
when a file is done using these.
That way, no gap should occur. Noatun will need to implement an MCOP object
(and be a server) to do this.
Charles Samuels volunteered to care about this.
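To illustrate the notification-driven approach (the class names below are invented for illustration, not the real KMedia2/noatun interfaces): the playlist subscribes to a "finished" notification and starts the next file immediately, rather than polling for the end of the track.

```cpp
#include <functional>
#include <queue>
#include <string>

// Hypothetical sketch: the player emits a "finished" notification (in
// aRts this would be a change notification from artsd), and the playlist
// starts the next file at once, so no gap occurs between tracks.
class Player {
public:
    void setFinishedHandler(std::function<void()> h) { onFinished = h; }
    void play(const std::string &file) { current = file; }
    void simulateEndOfFile() { if (onFinished) onFinished(); }
    std::string current;
private:
    std::function<void()> onFinished;
};

class Playlist {
public:
    Playlist(Player &p, std::queue<std::string> files)
        : player(p), pending(files) {
        player.setFinishedHandler([this] { next(); });
        next();  // start the first file immediately
    }
    void next() {
        if (pending.empty()) return;
        player.play(pending.front());
        pending.pop();
    }
private:
    Player &player;
    std::queue<std::string> pending;
};
```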
Arts::PlayObject's interface for specifying what to play is a
bool loadMedia(string filename)
method. So PlayObjects always play local files, and nothing else. That's
suboptimal, because you can't for instance get an mp3 from a web server
(via KIO) and play it. The ideal way would be to use KIO, in a KDE process
(for instance noatun), and stream the data to the playobject, like
[ noatun ]                            [ artsd ]
KIO class ------- [ stream ] ---------> PlayObject
That way, the PlayObject could play anything that KIO can read (and that's
a lot) without artsd linking KIO itself.
There were some suggestions for doing the actual streaming, like
- RTP (Internet standard, documented in RFC 1889; the related RTSP is RFC 2326)
- call a method and pass the data (PlayObject->playData(sequence rawData))
- http://www.arts-project.org/doc/mcop-doc/async-streams.html (asynchronous MCOP streams)
Probably using the MCOP streams will offer the best integration/performance.
They are also used by artscat and artsdsp (and the C API), so they are quite
well tested. The only problem will be seeking in a totally asynchronous
delivery model.
It's a bit undecided who wants to do this. Neil Stevens and/or Charles Samuels
might be the developers who are most interested in getting this done.
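A rough sketch of the push model, with hypothetical names (not the real PlayObject IDL): the client writes raw data chunks, as KIO delivers them, into the object's input stream, so the PlayObject never needs to open a file itself.

```cpp
#include <cstddef>
#include <vector>

using Bytes = std::vector<unsigned char>;

// Hypothetical sketch: instead of loadMedia(filename), the client pushes
// raw chunks (as KIO delivers them) into the PlayObject. writeData()
// corresponds to writing into an MCOP asynchronous stream.
class StreamingPlayObject {
public:
    void writeData(const Bytes &chunk) {
        buffer.insert(buffer.end(), chunk.begin(), chunk.end());
    }
    void endOfStream() { eof = true; }
    std::size_t bytesBuffered() const { return buffer.size(); }
    bool finished() const { return eof; }
private:
    Bytes buffer;
    bool eof = false;
};
```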
Currently, noatun will open an extra window when playing videos. The optimal
solution would be that noatun embeds the video image.
Neil Stevens volunteered to take care of this.
There was a bit of talk about moving kaboodle to kdemultimedia. Kaboodle is
an alternative media player which is like noatun based on the KMedia2
architecture. It differs from noatun in two ways: it's small (as lightweight
as possible), and it can embed itself into konqueror.
Nobody was against that, except that making it the default is probably not such a
good idea (as noatun is by definition more powerful).
It might also be nice to support reading HTML-pages with embedded wav files
and things like that.
Note: Charles told me noatun should be competitive in speed with kaboodle
after the recent changes (i.e. kdeinit support).
Often, aRts objects (like the freeverb effect) will need a GUI. The GUI should
be sharable between artscontrol and noatun (and other possible aRts aware
apps). An important design restriction to meet is that it should be possible
to keep effect GUIs toolkit-independent.
So the idea for KDE2.2 might be having a "factory interface", which provides
a way to obtain a GUI for a given effect (like Arts::Synth_FREEVERB). This could:
- autogenerate a GUI out of hints
- get the GUI implemented in terms of Arts::Widget, Arts::Poti and such
- get the completely implemented GUI
For the first two cases, this will be toolkit-independent.
Arts::Widget, Arts::Panel, Arts::Poti and so on are interfaces.
There is a "Requires=kde_gui" line in the implementation (.mcopclass file),
so the Requires/Provides mechanism of MCOP will be able to select a fitting
one for each application (i.e. Gtk version for Gtk apps). The GUI will be
built inside the client application and thus be able to interact with the
application toolkit in a natural way.
Of course, the whole GUI issue will not only be useful for effects, but also
for PlayObjects (configure interpolation quality of the mikmod .xm engine),
Instruments, and all other kinds of objects.
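A sketch of the factory idea in C++ (the types and names are invented; the real mechanism would go through MCOP's Requires/Provides selection): GUI builders are registered per effect and per toolkit tag, and each client asks for one matching its own toolkit.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <string>

// Hypothetical stand-in for a toolkit-specific effect GUI.
struct EffectGui {
    std::string effect;
    std::string toolkit;
};

// Hypothetical sketch of the "factory interface": a registry maps an
// effect name plus a toolkit tag (the Requires/Provides idea from the
// .mcopclass files) to a function that builds the matching GUI, so a
// Gtk app gets a Gtk widget and a KDE app gets a KDE widget.
class GuiFactory {
public:
    using Builder = std::function<std::unique_ptr<EffectGui>()>;
    void registerGui(const std::string &effect, const std::string &toolkit,
                     Builder b) {
        builders[effect + "/" + toolkit] = b;
    }
    std::unique_ptr<EffectGui> create(const std::string &effect,
                                      const std::string &toolkit) const {
        auto it = builders.find(effect + "/" + toolkit);
        if (it == builders.end()) return nullptr;
        return it->second();
    }
private:
    std::map<std::string, Builder> builders;
};
```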
I hope to get that done in time for KDE2.2.
There was quite some discussion about mixers. The facts until now are:
- kmix is about the hardware mixer only;
- aRts doesn't know anything about hardware mixers at all;
- artscontrol controls only what artsd does, which is, among other things,
controlling the output effect stack, midi instruments and so on; and
- it might be nice to have aRts simulate a multichannel software mixer with per-channel effects;
- Stefan Schimanski is working on giving kmix a pluggable architecture that
will work on the base of the Qt object model and dynamic loading, so it
might be possible to easily plug aRts into kmix. A "one mixer for everything"
approach might be more useful from a user-point-of-view than the current split.
- Still, the general problem might be that aRts itself has no real
"abstraction" for mixers in the IDL. There is the audiomanager thing that
assumes the following:
- there are apps that dynamically come and go
- there are destinations (in the form of busses) where audio might get played or recorded
- you can assign apps to destinations
On the other hand, one possible kmix model for apps would be dynamically
appearing channels per app, i.e. a new quake channel would pop up once you
start quake (whereas the current aRts idea would be more like: there are
eight static audio outputs, audio01 .. audio08, and once you start quake,
you can assign it to one of these channels).
It might be nice to add an IDL based abstraction for "mixers", which will
work the same for hardware and software mixers. This would provide things
like network-transparent (and language-independent) mixer control. Maybe
it would also make a good addition to the "AudioManager" style mixer
interface we have now.
This abstraction would also make it possible to have different clients
access the mixer abstraction in a unified way. I.e., a Gtk frontend to
the mixer would use the same software/hardware mixer as the KDE frontend.
Turning the volume in the KDE frontend would make the sliders move in the
Gtk frontend as well.
There are midi-related tasks in artscontrol which really don't belong in a
mixer. But somehow unifying mixing would be "user-friendlier" than what
we currently have.
aRts can implement the mixer in software (and that's little code), and it
might be possible to add that to artscontrol. Assignment of programs to
channels is trivial. You can have equalizers (Synth_STD_EQUALIZER),
per-channel effects and so on. It will work for all applications. It should
be implementable without too much work (less than 1000 LOC).
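To make the "little code" claim concrete, here is a hypothetical sketch of such a software mixer (illustrative names, not an aRts interface): dynamically appearing per-application channels, each with its own gain, summed into one output, which matches the "a new quake channel pops up" model described above.

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a minimal software mixer: one channel per
// running application, each with its own volume, mixed into one output.
class SoftMixer {
public:
    void addChannel(const std::string &app) { gains[app] = 1.0f; }
    void removeChannel(const std::string &app) { gains.erase(app); }
    void setVolume(const std::string &app, float g) { gains[app] = g; }
    // Mix one block: scale each app's samples by its gain and sum them.
    std::vector<float> mix(
        const std::map<std::string, std::vector<float>> &in) const {
        std::vector<float> out;
        for (const auto &kv : in) {
            auto g = gains.find(kv.first);
            if (g == gains.end()) continue;  // app has no channel assigned
            if (out.size() < kv.second.size())
                out.resize(kv.second.size(), 0.0f);
            for (std::size_t i = 0; i < kv.second.size(); ++i)
                out[i] += g->second * kv.second[i];
        }
        return out;
    }
private:
    std::map<std::string, float> gains;
};
```

Per-channel effects (like Synth_STD_EQUALIZER) would slot in naturally as a processing step before the sum.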
Whatever, this is a bit of a research topic ;), and different things
need to be done.
Stefan Schimanski is working on the kmix plugin architecture.
Stefan Gehn wanted to have a look at the GUI side of building a software mixer.
I can help with getting the necessary backend work done for that.
From the aRts point of view, the soundserver architecture and the core MCOP
technology seem solid, so what we do here is mostly polishing.
Then, there are services based on the MCOP/IDL model. The most important
service here is KMedia2, which allows playing arbitrary media. The current
IDL interfaces are good for many simple tasks, but more needs to be added
for more complex tasks.
Finally, there are areas which are not yet standardized in any way by the
IDL interfaces. We talked about GUI and mixer, but there are others where
no work has been done yet (like hard disk recording or video filtering),
and others where more work needs to be done (like midi).
And of course, that's just one side of viewing things. Existing apps, such as
kmix, kmid(i) and games, often do not take full advantage of what aRts can
provide, or have goals/directions in which they develop as well. Sometimes,
integration (as for libkmid or a timidity PlayObject) might be a good idea,
sometimes, adding a bit of "interoperability code" with aRts might be the
right way to go.
As mentioned at the beginning, this meeting looked mostly at aRts.
Tobi (missing your realname)
(... there was a lot of joining and leaving on the channel, so maybe
I forgot somebody ...)