Core KDE developer George Staikos recently hosted a meeting of the security developers from the leading web browsers. The aim was to come up with future plans to combat the security risks posed by phishing, ageing encryption ciphers and inconsistent SSL certificate practice. Read on for George's report of the plans that will become part of KDE 4's Konqueror and future versions of other web browsers.
In the past few years the Internet has seen a rapid growth in phishing attacks.
There have been many attempts to mitigate these types of attack, but they
rarely get at the root of the problem: fundamental flaws in Internet
architecture and browser technology. Throughout this year I had the fortunate
opportunity to participate in discussions with members of the Internet Explorer,
Mozilla/Firefox, and Opera development teams with the goal of understanding and
addressing some of these issues in a co-operative manner.
Our initial and primary focus is, and continues to be, addressing issues in PKI
as implemented in our web browsers. This involves finding a way to make the
information presented to the user more meaningful, easier to recognise, easier
to understand, and perhaps most importantly, finding a way to make a distinction
for high-impact sites (banks, payment services, auction sites, etc) while
retaining the accessibility of SSL and identity for smaller organisations.
In Toronto on Thursday November 17, on behalf of KDE and sponsored by my company Staikos Computing Services, I hosted a meeting of some of these developers. We
shared the work we had done in recent months and discussed our approaches and
strengths and weaknesses. It was a great experience, and the response seems to
be that we all left feeling confident in our direction moving forward.
There was strong support for the ideas proposed and I think we'll see many of
them released in production browsers in the near future. I think we were
pleasantly surprised to see elements of our own designs in each other's
software, and it goes to show how powerful our co-operation can be.
The first topic and the easiest to agree upon is the weakening state of current
crypto standards. With the availability of bot nets and massively distributed
computing, current encryption standards are showing their age. Prompted by
Opera, we are moving towards the removal of SSLv2 from our browsers. IE will
disable SSLv2 in version 7 and it has been completely removed in the KDE 4
source tree already.
KDE will furthermore look to remove 40- and 56-bit ciphers, and we will
continually work toward preferring and enforcing stronger ciphers as testing
shows that site compatibility is not adversely affected. In addition, we will
encourage CAs to move toward 2048-bit or stronger keys for all new roots.
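As a rough sketch of what "preferring and enforcing stronger ciphers" can look like in code, here is how a client might configure such a policy using Python's standard ssl module. This is illustrative only, not KDE's actual SSL code, and the cipher string is an assumption about what counts as "weak":

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Build a client context that refuses legacy protocols and weak ciphers."""
    ctx = ssl.create_default_context()
    # SSLv2/SSLv3 are unavailable in modern builds; additionally refuse TLS < 1.2.
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    # Exclude export-grade, anonymous, and other weak cipher families.
    ctx.set_ciphers("HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")
    return ctx

ctx = make_strict_context()
```

The key point is that the policy lives in one place, so it can be tightened over time as compatibility testing allows.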
These stronger cryptography rules help to protect users from malicious cracking
attempts. From a non-technical perspective, we will aim to promote, encourage,
and eventually enforce much stricter procedures for certificate signing
authorities. Presently all CAs are considered equal in the user agent
interface, irrespective of their credentials and practices. That is to say,
they all simply get a padlock display when their issued certificate is
validated. We believe that with a definition of a new "strongly verified"
certificate with a special OID to distinguish it, we can give users a more
prominent indicator of authentic high-profile sites, in contrast to the
phishing sites that are becoming so prevalent today. This would be implemented
with a significant and prominent user-interface indicator in addition to the
present padlock. No existing certificates would see changes in the browser.
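The "strongly verified" distinction described above could, in principle, reduce to a policy-OID check during certificate verification. A minimal sketch follows; the OID value and function names are invented for illustration, since the real OIDs would be defined by the CAs and browser vendors:

```python
# Hypothetical OID marking a "strongly verified" certificate (invented value).
HIGH_ASSURANCE_OIDS = {"1.3.6.1.4.1.55555.1.1"}

def ui_level(cert_policy_oids, chain_valid):
    """Map a certificate's policy OIDs and chain status to a UI treatment."""
    if not chain_valid:
        return "red"          # verification failed
    if HIGH_ASSURANCE_OIDS & set(cert_policy_oids):
        return "green"        # strongly verified: prominent indicator
    return "padlock-only"     # ordinary validated certificate: no change
```

Note the last branch: existing certificates keep exactly the behaviour they have today.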
To explain what this will look like, I need to take a step back and explain the
history of the Konqueror security UI. It was initially modelled after Netscape
4, displaying a closed golden padlock in the toolbar when an SSL session was
initiated and the certificate verification process passed. The toolbar is an
awful place for this, but consistency is extremely important, and during the
original development phase of KDE 2.0, this was the only easy way to implement
what we needed. Eventually we added a mechanism to add icons to the status bar
and made the status bar a permanent fixture in browser windows, preventing
malicious sites from spoofing the browser chrome and making the security icon
more obvious to the user. In the past year a padlock and yellow highlight were
added to the location bar as an additional indication. This was primarily
based on Firefox and Opera.
I was initially resistant to the idea of using colour to indicate security -
especially the colour yellow! However, the ideas we discussed have since been implemented by Microsoft in the IE7 address bar, and when I saw it in action I was sold. I think we should implement Konqueror the same way for KDE 4. It involves the following steps:
- The location toolbar becomes a permanent UI fixture along with the status bar
- The padlock moves into the location combo-box permanently, which is the only
place it appears, and the location bar stays white by default
- When verification on a site fails, the location bar is filled in red
- When a high-assurance certificate is verified, the location bar is filled in
green, the organisation name is displayed beside the padlock, and it rotates
displaying the name of the CA
I am afraid that the missing yellow will confuse our users, but at the same time
I think it was misguided to add the yellow when it was added, and I think this
is the price we must pay. Hopefully users will be able to adjust quickly, and
KDE4 is the right time to do it. The existence of the padlock and extended
identity information makes it safe even for those who have difficulty adjusting.
One more key item that Microsoft is implementing is their anti-phishing
plug-in. I hope that Microsoft will be open with this system and allow us
to write our own Konqueror plug-in, allowing our users to contribute to their
database and take advantage of it. I think this is in everyone's best interest.
Microsoft says that they are not evangelising the anti-phishing service to other
clients at this time but they are "working with the community on the issue
through many avenues and groups, such as the Anti-Phishing Working Group and
Digital PhishNet". They didn't rule out the potential to open up their client
technology in the future. They suggested that others interested in offering
similar technologies could take their own approaches and work with the same
industry data providers that they use.
I'm very optimistic about the future of co-operation among browser developers
and I hope this recent work signals a new trend of good relations. Together we
can really create some amazing new technology and make it possible to solve some
of the major problems we face today.
Little surprised to see IE on board with a meeting like this, but it is good to see. For better or worse, most people still use it. This way, those who migrate over will see things they are used to (colored address bar) and those who don't migrate will have that added security.
Congrats on bringing all those browser folks together!
Actually, judging from the IE Blog on MSDN (http://blogs.msdn.com/ie/), they're getting quite into browser development these days. If they manage to get their browser straight, we'll be developing against real standards in some 4-5 years, once it gets adopted by normal users.
nice initiative on the SSL front.
about the phishing prevention i am not so sure. we should come up with a better solution to the identity verification problem and if that requires better technology _that_ should be supported.
e.g. banks could sign our keys, and when going to a site a small notification would tell me: this is your bank! if that's missing, then i know it's not my bank...
another problem is the location field: half (if not more) of the people surfing the net, don't have a clue what is going on in there. now how do we make that more simple?
anyway, trying to blacklist or whitelist stuff is really not a smart way to do this. also, this problem is not new and exists in the same way in the real world: credit fraud, fake notes, weird guys ringing your door bell trying to sell stuff etc.
the reason why phishing (on the internet) is so attractive are:
- easy to do
- difficult to track
- clueless users
AFAICT the proposed solution does not solve any of these problems... the way i see it, it gets even worse, because "now we can feel safe again"!
... and i don't feel confident about sending each and every URL i visit to some server for authorization (even if it is only SSL).
in conclusion i would claim that the clueless user is the biggest problem here and i tend to agree that technology should be simple enough for the clueless user also... but he has to do his part as well. so let's educate those guys...
This was in a comment in the msdn blog concerning MS's Anti-Phishing plug-in.
"another problem is the location field: half (if not more) of the people surfing the net, don't have a clue what is going on in there. now how do we make that more simple?"
I am glad you mentioned this. Raw URLs would not have gotten a prominent display in the GUI had the WWW been developed with the general public in mind. URLs were just the kind of notation that nuclear physicists would find user friendly! ;-)
After 15 years, URLs have become established, and the editable address field is something users expect. No browser vendor would dare change it drastically in one go. But I think drastic change is called for.
A URL consists of 4-5 logical items:
1) The scheme/protocol, e.g. http:// (mostly optional, http being default)
2) The domain name
3) The port number (optional)
4) The path/filename
5) The query strings (optional)
The protocol could be replaced by an icon. The domain and the port number could be considered one unit. The path and the query strings are somewhat dependent on each other; commonly the query strings are an optional appendage to the path.
Displaying all these as one continuous, unformatted text string is not a good example of usability.
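The five items above map directly onto what a standard URL parser returns; for instance, with Python's urllib (the example URL is of course made up):

```python
from urllib.parse import urlsplit

url = "http://example.com:8080/path/page?item=1&lang=en"
parts = urlsplit(url)

# Each logical item from the list above comes back as a separate field:
print(parts.scheme)    # "http"            (1: scheme/protocol)
print(parts.hostname)  # "example.com"     (2: domain name)
print(parts.port)      # 8080              (3: port number)
print(parts.path)      # "/path/page"      (4: path/filename)
print(parts.query)     # "item=1&lang=en"  (5: query strings)
```

So a structured display would not require inventing a new notation, only rendering fields that parsers already expose.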
Actually this sounds like a really good idea. We could even hide the query completely, so that it is all displayed more clearly. Then have a button linked to a popup with all the queries editable.
I am liking this idea the more I think about it.
If the address field was read-only, changing the display to something more readable and structured would be trivial.
However, it is editable. The users expect to be able to paste raw URLs into it, and to edit those URLs freely. Because of this, any change is likely to upset a lot of users.
If the address field accepts a full URL with query strings as input, yet hides the query strings when links are accessed by clicking them, then it breaks the "principle of least surprise". How could one make it clearer to the user what is going on? Press Tab to switch from raw to formatted URL? (too power-user-ish, IMHO)
Remember Gopher? It died around 1995, but in the early 90s it was more popular than the WWW, and growing faster. One major limitation was that it didn't have URLs. Every site gave directions in the following form:
go to the UofMN main gopher (which all clients had programmed in) -> all the servers in the world -> North America -> state -> company main page -> link -> link -> text file. (Gopher did not support graphical pages like the web did.)
URLs may not be perfect, but that was one large advantage of the web - you could give someone a (complex) URL and they would get right to their site.
Maintaining the UofMN main gopher site cost so much money that they started to charge for use of gopher, and killed the whole thing off.
I disagree completely!
I paste about 100 URLs per day, and splitting them into 5 fields would be annoying.
How often do you keep everything fixed but the protocol?
...but the port?
...but the path-and-file?
Sometimes you may change only the query string, but who does this by hand?
If it is built as an optional feature (split URL vs. classical URL), and the user is free to choose: well, I can agree.
People would learn more easily how to read a URL, and might switch to the more usable form (classical) later on.
I am not sure whether _editing_ URLs in a 5-part form would be preferable. But I think it is worth considering.
As for copying and pasting: there are context menus for page addresses and link URLs. I think a raw/formatted toggle is called for, to not disrupt the workflow of those who rely on raw, editable URLs in the address field. Changing it over night with no way back is way too disruptive.
bluGill: Sure, we shall forever be able to pass URLs around in one piece. Just like we can pass around files of HTML edited in Notepad. They just won't appear verbatim in modern clients, like HTML is not shown like a bunch of tags in monospaced font to most users. Old-timers will complain during the transition, because they think the URL widgets are more convoluted than plain raw URLs.
It's time to expect a richer representation of URLs than we are used to today! For example, a URL widget could contain the anchor text. Of course, the anchor text is not always helpful, like the WAI-defying "here" or "next". But would it hurt to make such accessibility failures stand out more?
Yeah, you're right, you do need a way to toggle the "formatted URL" on and off, and you also need a quick way to copy a link. But both of those features can be implemented alongside the formatted features.
I guess one of the reasons why I really like this idea is because in many cases the URL ends up having so many different query strings that they become completely unreadable and quite useless at a glance. Being able to quickly see the site name and path gives an indication of the sites structure. Which I think can be useful for navigation.
Keep in mind that some CMSes "cheat" with their URLs to make them look like static content URLs.
The rewritten URL is shorter and cleaner. And if the CMS is clever enough, it can provide all the proper headers (ETag, Last-Modified, Content-Length...) of a proper static document (brace yourselves for some weird bugs...).
This can be used to make the URLs more opaque, too. Instead of a lot of semi-readable parameters, you get one or two really long numbers, or random strings. You don't know how much of the URL is the path to the web application and where the input parameters to the web app start. Systematic digging for articles or vulnerabilities becomes harder that way.
Hence, the path and the query string are not separate entities.
I've suggested before and will suggest again that syntax highlighting be added to the address bar. It's not too hard to have some highlighting of key portions of the text (protocol, separator marks, query entries, whatever) and doesn't completely break proper usage of the location bar (pasting paths and hand-typing addresses). Also, it might make it easier for new users to learn what that stuff actually means.
Of course, I suggested that first over a year ago, so I expect nobody is gonna rush to implement it now.
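The highlighting idea could be sketched as a tokenizer that splits a raw URL into (text, role) pairs for the renderer, without changing the underlying editable string. The function name and role labels are hypothetical:

```python
from urllib.parse import urlsplit

def highlight_tokens(url):
    """Split a raw URL into (text, role) pairs a location bar could colour."""
    p = urlsplit(url)
    tokens = [(p.scheme + "://", "protocol"), (p.netloc, "host")]
    if p.path:
        tokens.append((p.path, "path"))
    if p.query:
        tokens.append(("?" + p.query, "query"))
    return tokens
```

Because concatenating the token texts reproduces the original string, pasting and hand-editing keep working exactly as before; only the rendering changes.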
The original posting was about user interfaces to maintain security. In such a context, "usability" gets a different meaning. The usual meaning is along the lines of "whatever makes the user happy, empowered, productive, faster, and is easy to learn".
Usable security features have another primary goal: Keep the user out of harm's way. Making sure that the program follows the user's _reflected_ intent. That often means deliberately slowing down the user, so that there is indeed time to reflect. It also means telling the user about perils that the user may know little or nothing about, yet avoid crying wolf too often.
In this context, the address bar has a flawed design. It is quite evident that it fails to keep the majority of users out of harm's way. The address bar does not ensure that the browser follows the user's real intent. All the successful phishing scams serve as proof of that.
The web is about serious stuff now. People buy expensive stuff and manage bank accounts with it. Yet they are totally oblivious of the underlying architecture. Which is OK! You don't need to know the building structure of a house to use it safely. Opening and closing doors is not supposed to have fatal side effects. Web browsers and web applications need to be the same way, so users can trust their gut feeling without getting burned again and again.
Konqueror the web browser and Konqueror the file manager should be separated, although the file manager could still use KHTML. This would be best: less difficult, faster development, ...
I totally agree with you. Konqueror is bloated because of its "file manager" aspect.
Make an external browser using KHTML and focus on it to improve add-on development, plugins, ... and let Konqueror use a KPart to display HTML (like now).
If you guys actually knew anything, you would be dangerous.
When you become real-life coders and know what you are talking about...well, that's never going to happen, so let's all have a laugh.
While technically Joe's outburst is more learnèd than those he was responding to, the fact that there is a *perception* to the contrary is enough to perhaps consider...
Of course I know what I am talking about. My name is on one of the Konqueror add-on changelog files. Rellinks is a small add-on, but it was sufficient for me to stop working on Konqueror add-ons.
And what I can say is that a Konqueror add-on developer needs to deal with the "file manager" aspect of Konqueror.
And I am not a GNOME troll. I have used KDE since KDE 1.0. Loving something doesn't mean you mustn't criticise it :)
Konqueror can open PDF, but kpdf is the PDF application.
Konqueror can open images, but kview is the image application.
I only want KDE to be able to open web pages (like now), but I prefer an external KDE browser focused on the web and on flexibility for plugins, add-ons, ... See how many add-ons Konqueror has and how many Firefox has. I hope that one day it will be possible to write add-ons for Konqueror with kjsembed.
I don't know why my name was changed to "Keep them", but the previous message was mine.
Fewer Konqueror profiles, and instead two applications :o)
what rellinks really needs is a means to say "i'm for this type of data or this type of URL/browsing only" and thereby have a way to filter what plugins are chosen / selected by default.
splitting konqueror into two pieces along the lines of "file" vs "web" won't actually solve this. it will address certain specific cases (e.g. rellinks, actually) but leave the problem for many other plugins.
I'm a developer too. The Linux philosophy is a program for each need, and each program does its work very well.
Sorry for my English.
Unix philosophy is to have small tools that do one thing well, and that can work together to do new things as well.
Think of using ls | more in a shell.
Konqueror is just another kind of shell, and the KParts and KIO slaves are the equivalent of the shell commands. I actually see little point in KPDF as a standalone program - what can you do with it that you can't with its KPart embedded in Konqueror?
You can actually combine the same commands in a different way, too.
On the other hand, Akregator uses the khtml part to give a different functionality, and I'm increasingly relying on it to browse the net, to the point that I'd like it to grow some more "regular" browser functionality (the Location bar, for example).
You are a guest, but... here we are talking about splitting Konqueror.
Why does kpdf exist if Konqueror can display PDFs with kpdf's KPart? Because it is good.
The same goes for the browser and Konqueror: why not have a browser outside of Konqueror, like kpdf and the other applications?
Again, sorry for my English.
Are you Gnome trolls perhaps?
The above improvements sound good. But can I make a further suggestion?
Like many people, I have a few sites that I want complete assurance about, such as my personal banking sites. I don't want to simply trust a third-party CA to vet them, even if it is capable of providing high assurance. As well as concerns about the business model for that CA, it will still sign a very large number of web-site certificates. If any of those web sites were compromised, or the CA was tricked into signing a certificate, it opens an opportunity for the browser to say "highly trusted" when it isn't - and it may even be a different web site if DNS could be compromised. And I expect it would take a long time, if it is possible at all, to persuade all sites to get signed by one of the "blessed" CAs.
I much prefer the model used by the Petnames extension of Firefox (http://www.waterken.com/user/PetnameTool/), which allows me to register the server digital certificate thumbprint, and to give the site a nick-name ("My bank"). If the certificate changes in any way, I'll get warned and can do the appropriate checks. Effectively I'm managing my own white-list of a handful of sites, so don't need to trust someone else's whitelist of tens of thousands; or even worse a blacklist of far more.
This can co-exist with the proposals above; for example by allowing the user to store their trust relationship which then displays (say) a blue address bar. Other sites will go through the green / red / white display.
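The Petnames approach boils down to a tiny, user-managed whitelist keyed by certificate fingerprint. A minimal sketch, in which the byte strings stand in for real DER-encoded certificates and the function names are invented:

```python
import hashlib

petnames = {}  # fingerprint -> user-chosen nickname

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 thumbprint of a (stand-in) DER-encoded certificate."""
    return hashlib.sha256(cert_der).hexdigest()

def register(cert_der: bytes, name: str):
    petnames[fingerprint(cert_der)] = name

def lookup(cert_der: bytes):
    # Returns the petname only if this exact certificate was seen before;
    # an unknown certificate gets no reassuring label at all.
    return petnames.get(fingerprint(cert_der))

register(b"--bank-cert-der--", "My bank")
print(lookup(b"--bank-cert-der--"))   # "My bank"
print(lookup(b"--lookalike-cert--"))  # None
```

The security property is exactly as described: a lookalike site presents a different certificate, so its fingerprint misses the whitelist and the familiar nickname never appears.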
Certificates protect against hijacked DNS. It's a two-way authentication.
Yes, they do, but this Petnames business can protect against something *else*. Say I set a Petname for amazon.com to be 'Amazon Bookstore'. Now some tricky email directs me to amafon.com, and I'm blind, so I don't see the s/z/f/. The Petname 'Amazon Bookstore' will *not* come up, making it obvious I'm not at Amazon.
Note of course that there will be a nice yellow lock icon anyway, because Amafon.com has bought a security certificate certifying that they are Amafon.com.
Please, please, please, please, please don't remove support for weaker ciphers, just have them disabled by default. I often make use of weaker ciphers in my work, and if you remove support for them you will force me to use another browser. If you disable them, I will then be able to re-enable the support when I need it.
i'm sorry, fizz, but as you might have read, most (if not all) browsers will remove support for weaker ciphers soon, so you'll have to just use an older version, or upgrade your sites.
but why do you do it?
the reason they are being disabled is because it is a bad idea...
I can see that most of the replies to my posting are along a similar line to this. I do however have a very specialised requirement for this feature, and they are not my systems that I will be accessing. I am a security specialist and have been recommending for some time now that all my clients remove support for weaker ciphers from their servers. Occasionally it isn't possible to use stronger ciphers.
Why not do what MS have said they will do with IE and simply disable support for the weaker ciphers, but not remove it? Then when there is a requirement for weaker ciphers (for whatever reason) IE can still be used. If KDE removes support for weaker ciphers, it will be a reason not to use KDE. Please just disable support by default.
I have to speak up in favor of keeping weaker ciphers available but disabled. Otherwise, I'd have to keep an older version (probably a statically linked firefox) kicking around because my ISP is the only broadband provider available, "only supports Microsoft products" (blames your client if it complains), and would probably just provide instructions to re-enable weak ciphers when IE 7 rolled around.
Can you please tell me how to disable the weaker ciphers on IE 6.0?
The security concern and the cause is more important than those who continue to use terrible security practices. Supporting everything and everyone is only good if everything and everyone is safe and that isn't the case. This is a smart decision.
You can turn off encryption altogether if it comes to that.
Between being completely open and providing a false sense of security, I'd go with being completely open. At least (some) users will have a clue not to provide sensitive information on those work websites (even if on an Intranet).
George, thank you for all the effort and work that you are putting into this. It will definitely help the world of browsers to move toward more security, and without somebody tackling the problem as you did, we could have seen many different and incompatible approaches. You are doing a great job!
Yes, very nice job George. Great to see co-operation from the majority of browser vendors on such an important matter.
Now I know why they are improving the IE, they found a bunch of Indians doing the job....
This is not /.; be nice.
Who is racist? AFAIR Microsoft had Indians do the not-so-appealing job of writing .NET bindings for various Windows libraries. And western (and especially US) software firms experiment with outsourcing tasks that they presume are routine - easily managed from a distance.
I'm not being racist...
it was just funny to see that it was all Indians doing it. Calling it IndianExplorer instead of Internet Explorer.
It just gave a new meaning to the word: outsourcing...
Maybe the sentence wasn't really nice, but I'm no Englishman...
please remember that there are color blind people out there.
Conveying information only by means of color is not so good from an accessibility point of view. But maybe using that in addition to distinct icons would be great.
I don't know, but is the (U.S.) traffic light a world standard? If so, it might be used as an indicator where the colored light would change in intensity and size for the visually impaired.
Red and Green are EXACTLY the same color to me.  I go when the traffic light turns BLUE. There is a green tint, but blue is a better description of the color. Modern traffic lights have the blue tint (which most people do not notice) because nearly all color blind people can tell blue from red. (There are about 100 different forms of color blindness, so I don't want to make a statement that is too strong)
There is a size and intensity difference to lights, but it is not something I actually notice. I've known people to stop at a strange light because all they knew was that one of the lights was lit, but had no clue which one. (When the lights were in a horizontal row, not vertical like the lights where they live)
I guess I have answered the objection though. When the color red is shown, it should be pure red, with no blue part at all. When Green is shown it should have a significant blue part, almost blue-green. If size and intensity can be varied, so much the better.
This is important to me, because there is a lot of color blindness in my family. (Enough that my sisters are color blind, which is nearly unheard of)
Actually, I'm not that color blind; I can normally tell red and green apart. Not always, though. I do, however, know people who are so color blind that they cannot tell the two apart.