Matthias Ettrich's talk on Universal Components at LinuxForum (also given at LinuxTag) is finally online. You can view the slides
as you watch in RealVideo format (part
2) or listen in Ogg
format. Matthias covers everything from the basic concepts of
components to CORBA, COM, Universal Objects, and UCOM.
Particularly notable is the coverage of Universal
Objects, a concept that apparently originated with a Borland/Kylix developer and
promises future component interoperability between the likes of
Qt, KDE, Kylix, Phoenix Basic, Python, GTK+ and what not.
Intrigued, I thought I'd take a closer look at QCom. Unfortunately, qcom.h
itself seems pretty evil (it may help if you come from COM-land), so
it is advisable to also consult the write-up, which is fairly nice and concise. Personally, I find it somewhat surprising that one would have to deal with seemingly low-level details such as UUIDs and the like; one wonders what other options are available.
Finally, another useful thing you may learn from this presentation
is whether and when to pronounce Qt as "queue-tea" or "cute". (Hint: it depends on random intricate QuanTum fluctuations in ice.)
grep is not darn handy. It is only useful for line-oriented text files. As soon as you have to handle structured data, it fails completely. For XML data, for example, sgrep is much more useful. For searching in a database, the respective database APIs are more useful.
So your example is flawed, and so is your argumentation.
For a better example for extensibility of GUI programs, compare Konqueror or Nautilus with classical Unix programs like netscape, the CDE file manager or similar stuff. Then come back and repeat your myth about loose coupling.
For more structured data use awk.
Get a grip, awk is line oriented as well.
It takes ungodly coercion to make awk take most kinds of structured data (as the XML example).
Ok, I will byte: are you unfamiliar with the concept of structured data, or are you unfamiliar with awk?
Awk is record oriented, not line oriented. Furthermore, awk deals with tabular data, not just lines of text. And structured data can be encoded very easily and conveniently in tabular data, and awk is very efficient at processing it.
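To make the record-oriented point concrete, here is a minimal sketch (the data is invented for the example): with awk's built-in RS and FS variables, the parser reads blank-line-separated, multi-line records rather than single lines.

```shell
# Hypothetical demo data: two multi-line records separated by blank lines.
# RS="" puts awk in paragraph mode (records end at blank lines); FS="\n"
# makes each line a field, so $1 is the first line of each RECORD,
# not of each input line.
printf 'name Alice\nage 30\n\nname Bob\nage 25\n' |
awk 'BEGIN { RS=""; FS="\n" } { print NR ": " $1 }'
# prints:
#   1: name Alice
#   2: name Bob
```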
But your response is indicative of a general disease in the industry: people don't learn how to use their general-purpose tools anymore. Instead, we get an awful profusion of special-purpose tools and overly complex standards.
"Normally, records are separated by newline characters. You can control how records
are separated by assigning values to the built-in variable RS."
This is exactly the same as being line-oriented.
It can be called record-oriented, sure. But it can not handle nested data structures very naturally. And the given example was XML, which is nothing but nested structures.
Ok, anything can be encoded to be easily handled by AWK? Yes.
But XML is already encoded. If you need to encode it so AWK can handle it, then it is not AWK that can handle XML, is it?
"But it can not handle nested data structures very naturally."
Of course it can, in the same way systems like Prolog or SQL can represent structured data easily and conveniently.
"But XML is already encoded. If you need to encode it so AWK can handle it, then it is not AWK that can handle XML, is it?"
I did not claim that AWK could handle XML easily, I claimed that it could handle structured data easily (not to be misunderstood: AWK has its limitations as a language, but its tabular parser is fine). Nothing really can handle XML easily because XML is not a particularly convenient representation of structured data; what XML has going for it is that it makes the structure blatantly obvious to even the most uneducated user--XML, after all, has its roots in document formats.
As I said before, the analogy between AWK and XML is a good one because it is indicative of what's wrong with a lot of software being designed in industry these days: people build unwieldy special purpose solutions (like XML) because they simply don't understand how to use more general purpose tools effectively.
"This is exactly the same as being line-oriented."
Lines are lines and arbitrary records are arbitrary records. The two are different. The built-in awk parser is record-oriented and assumes a tabular data model. Using that data model, you can easily represent nested data structures in a variety of ways.
Lines are lines and arbitrary records are arbitrary records.
They are, however, functionally identical in AWK's case.
All AWK does is use a record separator. Turning the record separator into newlines is a one-line command (use sed or AWK, at will ;-), and it will convert any file AWK can parse into a "a record is a line" thingy.
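That one-line conversion can be sketched like this (the ';' separator is chosen arbitrarily for the example): setting RS turns each record into a newline-terminated line for any line-oriented tool downstream.

```shell
# Sketch: rewrite an arbitrary record separator (here ';') into newlines.
# print appends the default output record separator, a newline, so
# every ';'-delimited record comes out as its own line.
printf 'one;two;three' | awk 'BEGIN { RS=";" } { print }'
# prints:
#   one
#   two
#   three
```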
I did not claim that AWK could handle XML easily, I claimed that it could handle structured data easily
Ok, just read the thread and see where things came from, then ;-)
David O'Connell started babbling about how handy grep is because of its data driven design, and Bernd Gehrmann mentioned that grep sucks when you have to, for example, parse XML, since it is line-oriented.
Then David said that for more structured data (XML being the only one mentioned), AWK should be used.
I simply replied that AWK is line-oriented (ok, I admit it ain't, I still say it is pretty much the same thing, functionally), and that it takes ungodly coercion to make it handle the XML example.
So, you see, AWK was only brought up as a way to handle XML-like data; you only later started talking about how AWK can handle structured data, if it is encoded in a way that AWK can handle (doesn't that ring a tautological bell? ;-)
So, yes, you didn't say AWK could handle XML. David O'Connell did. And I replied to him about that. And that is what you replied to.
I'd say, let's give it a rest. I am pretty convinced David O'Connell is just a troll, and honestly, I don't give a damn about AWK.
David O'Connell wasn't babbling about grep being data driven. It was me, Bud Millwood, trying to spur a discussion on GUI design. And I was not trolling, I was stating something that I think is very important to the future design of KDE. My post was extremely terse, but if you'll read the reply by Chris Kohlhepp, you'll get a broader picture of what I was driving at.
I would like to elaborate on my colleague's posting "COM inherently flawed". Also, if you have not already done so, please read David O'Connell's reply to "go away troll" at
He has some valuable insights.
I would like to look at how COM came into being, why Microsoft chose this path, and why many of us were disillusioned by it. But before I do, I would like to discuss three basic systems analysis concepts.
Firstly: orthogonality. Loosely defined, orthogonality within a system refers to the ability to arbitrarily combine system components regardless of their nature and design. Consider the architecture of Lego blocks. Each block has an indentation on the bottom and a corresponding knob on the top. In this fashion I can fasten a Lego propeller to the top of a Lego head and make - a propeller head. The peculiar thing here is that the propeller was designed to work with a Lego aircraft, not the top of a Lego head. Unix operates much in the same fashion and owes this to the universal nature of its design, in which processes interact with one another through an interface that exchanges text-based parameters through simple interprocess mechanisms such as standard-in and standard-out. Example: a TCP data stream may be redirected to a script by the inetd super daemon and then fed to an X-based GUI component by said script. The inetd super daemon need not have any knowledge of the specifics of the data, the inner workings of the script, or the GUI.
Secondly: Loose Coupling. Loose coupling refers to the degree to which a component is independent of the characteristics of another component. Our Lego head only interacts with the attached propeller through its little knob. It is not allowed to manipulate the specifics of the propeller. For example, if our little Lego man - or woman - was alive he/she would not be allowed to change the pitch of the propeller blades as the knob/indentation paradigm does not provide for this. While this is a limitation on the use of the propeller, our Lego head won't be impacted by any design changes to the propeller blades.
Thirdly: Data Driven Design. In a data driven design, the nature of the data dominates the architecture of a system, i.e. how components of the system interact and what components are conceived in the first place is a function of the data. UNIX has traditionally favored a straightforward data oriented design in which processes exchange textual data via pipes, sockets, and other interprocess mechanisms. The nature of this interface paradigm is both universal and non-specific. To briefly refer to our first example, this design is responsible for the fact that the developer of Inetd need not be aware of the script and ultimately the GUI with which it is to interact. This paradigm stands in stark contrast to object oriented methodologies in which objects or entities manipulate specific states and attributes of other entities in the system. Such objects gain the advantage of being able to exploit the specific characteristics of the entities with which they interact. Example: If our Lego man/woman could manipulate a "ChangeBladePitch(Degrees Clockwise)" API on the propeller, he/she might be able to make more efficient use of said propeller, but the interaction would have ceased to be universal and non-specific. If ever a new propeller is fitted to the same Lego head, a propeller which uses radians to set the blade pitch instead of degrees, then the interaction between head and propeller is broken. Thus once the interface to the propeller has been published, it must never be altered. The very use of representation specific data ( here angles in degrees ) across subsystem boundaries has rendered the interaction non-universal.
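A minimal sketch of the data-driven paradigm described above (the data and the stages are invented for illustration): each stage reads and writes plain text on standard-in/standard-out, so no stage needs any knowledge of the others.

```shell
# Hypothetical pipeline: a producer of "name count" lines, a generic
# sorter, and a consumer that totals the counts. sort(1) knows nothing
# about the producer, and awk knows nothing about either of them;
# the only contract is whitespace-separated text records.
printf 'carol 3\nalice 1\nbob 2\n' |
sort -k2 -n |
awk '{ total += $2 } END { print total }'
# prints: 6
```

Any of the three stages could be swapped for an entirely different program tomorrow, provided it honors the same textual convention - which is the loose coupling the posting describes.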
The decision when to use which methodology largely hinges on one factor: change and the ability to predict change across subsystem boundaries. The designers of "Awk" didn't know that the output of their program might years later be piped into a 256 color KDE window. They therefore had to design to a universal interface. Analogously, the engineer designing the propeller and fitting blades and drive shafts must concern himself with the specifics of each component within the context and scope of the subsystem in question.
The fundamental design of Windows and Windows NT in particular is essentially object oriented. Applications manipulate the state of the operating system and the computer through a set of API calls to OS components. This is in stark contrast to Unix, where the line between operating system components and application components is sometimes very blurred. Is "Awk" an application or part of the OS? The line between application components and OS components is drawn very clearly under Windows. Until the introduction of COM, the basic issue of application interaction simply was not addressed at all. If you wrote a network application and I wrote a printer tool, the mode in which our applications might possibly interact and perhaps enhance the overall functionality of the operating system was - NOT. We both invariably had to turn to the API furnished by Microsoft to communicate at all. Further, because the interaction with the operating system is highly specific and non-universal, he who controls the operating system API layer ultimately controls the application layer also.
The design inherently does not scale well. As this became apparent to the software industry, a "patch" was needed. I call it a patch because I regard COM as a poor fix to a bad design - purposeful, but bad nonetheless.
COM purported to give applications and components the ability to interact universally, but its very object-oriented design and highly representation-specific (say, binary) interface ensured that universal interaction never happened. In fact, the only application suite to achieve a high level of integration between individual components and the operating system is Microsoft Office. Microsoft sold the software industry the illusion of universal component interaction at the application level while ultimately retaining control over the application level by controlling the operating system API layer. Surprised?
Of course, today's applications require complexity beyond the line-based standard-output redirection that we have concerned ourselves with so far in our simple "Awk" example. A comprehensive component framework must address issues such as application embedding, concurrency, synchronization, etc. Nonetheless, the design goals of universal data representation, loose coupling, and orthogonality are imperative if the framework is to be scalable.
Some years ago I worked on a project which utilized a component framework designed for VME backplane systems which housed multiple processor cards, each card running its own flavor of UNIX. The component framework in question was both scalable as well as platform independent, provided for synchronous and asynchronous message propagation, and achieved this seamlessly across process boundaries as well as across different hosts on the same VME backplane. At the time, I lacked the systems analysis background to fully appreciate the depth of the design the framework had to offer.
The framework in question is called SigMA (Signaal Modular Architecture) and is fielded on large-scale national defense systems. It is listed in Jane's Defense Glossary.
Unfortunately public links to the framework are limited.
I think that your comparison is a bit fuzzy.
Take the ChangeBladePitch(Degrees Clockwise) example: you add additional control in one case, and you say that it adds additional coupling, well duh!
Modifying Degrees to Radian breaks the coupling and what's your point ?
Let's say you use the Unix POV with the same capabilities, otherwise the comparison is absurd.
So you have one program A which outputs a number in Degrees in ASCII, and another program B which parses the number from A's standard output.
You change A to output Radians instead of Degrees: congratulations, you silently broke the application, because even if B continues to work, the result is wrong.
So you have to update B so that the result is correct again.
I fail to see what your point is, practically speaking..
In fact I have some difficulty comparing Unix-style pipes and Windows-style components: for me, Unix-style pipes are useful mainly for text processing/scripting, whereas Windows-style components are aimed at, for example, embedding a spreadsheet inside a word processor (I know COM is more than OLE).
I don't really see how you could do the "embedding" style with pipes, redirection, etc.
But maybe I don't have enough imagination :-). I heard that Plan 9 pushes the traditional Unix concept even further: the filesystem can now "contain" windows, much like /proc contains processes.
Does someone more knowledgeable about Plan 9 know if they have something "similar" to COM?
>Does someone more knowledgeable about Plan 9 know if they have something "similar" to COM?
No way. The Plan 9 way of exposing an interface is to just export a filesystem. They try to make every system service look like a filesystem with files in a tree hierarchy. Rather elegant, I must admit.
> well duh!
Glad to see you can articulate your thoughts!
> You add additional control in one case, and
> you say that it adds additional coupling,
> Modifying Degrees to Radian breaks the
> coupling and what's your point ?
The point is not in the details of this example.
Please look at it in the context of the systems analysis principles illustrated. I'm not arguing against binary representation in general. I am contrasting the use of specific interfaces with the use of universal interfaces, and where to use which. The more universal the interface, the less efficient it is at manipulating the object in question, but the more readily it will interact with other objects, even those that were not designed for it.
The more specific the interface, the more efficient it is at manipulating particular aspects of the object in question. Specific interfaces are necessary for the operation of any system, but when you cannot predict with whom you will interact - as is invariably the case when embedding a component "x" in application "y" - you will want to make your interface as universal as absolutely possible. The fewer specifics you make use of in your interface, the more universally usable it will be.
But let's dive into the details of the example anyway, since we got off onto that tangent.
> So you have one program A which outputs
> a number in Degrees in ASCII, and another program
> B which parses the number from A's standard
> output. You change A to output Radians
> instead of Degrees: congratulations, you silently
> broke the application
I like your style. So cynical :-)
> I fail to see what your point is,
But at least you admit to missing the point. Let's keep this professional, shall we?
Ok, yes, you are right about the application being broken by changing the parameters from Degrees to Radians, regardless of whether you encode the values in ASCII or binary. The use of Degrees or Radians makes the interface specific; a binary representation would merely render it even more specific. For example, if you represent the value in binary on an Intel platform and send the value over a socket to a machine with different byte ordering, or alternatively from a 32-bit platform to a 16-bit platform, then the interaction is broken even if both ends speak Degrees. Of course you can do the necessary translations (network byte ordering, etc.) provided you know whom you talk to on the other end - if you don't, you can't. The other practical problem with most binary interfaces, be it a simple LIB or a COM object, is that they are not readily extensible. For example, you would need to publish an altogether new COM interface if you wanted to add a Radians-based function to an existing Degrees-based function on your object. This impacts every user of your object. Text-based interfaces tend to be more readily scalable.
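The extensibility point can be sketched with a made-up text interface: a record that later grows a Radians field does not break a consumer written against the original Degrees-only format, because the consumer only parses the keyword it names and ignores the rest.

```shell
# Hypothetical text record: originally just "pitch degrees 90"; a new
# "radians" field was added later. The old consumer below still works,
# since it scans for the "degrees" keyword and ignores unknown fields -
# unlike a frozen binary layout, where an added field shifts every offset.
printf 'pitch degrees 90 radians 1.5708\n' |
awk '{ for (i = 1; i < NF; i++) if ($i == "degrees") print $(i + 1) }'
# prints: 90
```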
I'm not a KDE developer, but I am planning a Qt app, because it's nice to program and I want it cross-platform. Some of you people are missing the point when you say "why should Qt do this when KDE already does?" Maybe KDE does, but that doesn't help me a whole lot.
Wouldn't it be nice if a lot of Windows developers started doing Qt? Then when people compare OSes, they would discover that outside of Microsoft apps, they can run the same software on whatever platform they choose. Don't you think that would help break Microsoft's monopoly on the desktop?
(Incidentally, I'm planning to GPL the Linux version, and on other platforms work out the closest thing to GPL that I can.)
I am going to port an Internet browser to a StrongARM Assabet board.
My cross-compile tool is BlueCat.
I have cross-compiled some Qt/Embedded applications
and downloaded them to the Assabet board.
On the touch panel, the windows and buttons are shown correctly.
But when I touch the panel with the pen, there is no response at all.
The cursor does not move at all.
But a Microwindows application works well.
Do you have any such experience?
I would appreciate it if you could provide some information.
I haven't used COM myself, only talked at length with someone who has, but...
I have the impression that a COM component on Windows isn't easy to code in C++. However, its saving grace is that a COM component is easy to use from VB. The hard work of a C++ coder thus makes life easy for VB coders, encouraging code reuse. Can anybody confirm this?
If so, doesn't this mean QCOM is not right for KDE ON ITS OWN? I.e., there needs to be something in the role of VB (Python? KScript?) which takes advantage of the QCOM components and is easy to use.
This is mostly correct. Only "mostly" because there is a template library (ATL) which makes it much easier to write COM components in C++, so complexity is not that much of an issue.
COM really shines when you are using (as opposed to writing) components and get to just click on the method you want. This can be done in C++ now too, I believe (someone using the latest VC++ please fill us in).
To return to the topic at hand, QCOM *can* be the right thing for KDE on its own if the tools (KDevelop?) are updated to take advantage of the QCOM components. This is true for C++, and it's also true of the script languages. (Q)COM creates a new abstraction of a "component", and one of the ways to leverage this is to make the tools aware of this new abstraction.
Hope this helps.
Actually, it is really easy if you stick to the basics of COM (like DirectX does). In essence, COM is just a C++ object that exports virtual methods. (In Visual C++, that is actually the implementation: a C++ object and its vtable.) I think all the arguments going on about COM being flawed are crap. The stuff that was built around COM (DCOM, OLE, etc.) may be crap, but the internals are pretty elegant. Mozilla uses something like COM (XPCOM) for a large part of its architecture, and it works pretty nicely.