This is an interesting take on applying technology to solve a technology problem. This article on ITWorld’s site covers a panel discussion about the risks created by encrypting data.

“Risks caused by encrypting?”, you say, “I thought that was supposed to make things better!” The article points out that encrypting all your data could be a risky idea. If someone is able to compromise your keys, all of your data is now effectively held hostage while you work out whether, and how, to pay them.

“Organizations experienced with encryption are standing back and saying this is potentially a nightmare. It is potentially bringing your business to a grinding halt.”

It just goes to show that there’s no single silver bullet, and you have to weigh the risks vs. the payoffs for everything related to security.
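The key-hostage risk is easy to demonstrate. Here’s a toy Python sketch, using a throwaway XOR stream that is deliberately not real cryptography, of the core point: ciphertext is exactly as recoverable as the key that made it, and no more.

```python
import os

def xor_stream(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" -- NOT real crypto, just an illustration that
    # ciphertext is useless without the exact key material.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

secret = b"quarterly financials"
key = os.urandom(32)                    # imagine this is the only copy
ciphertext = xor_stream(secret, key)

# With the key, recovery is trivial:
assert xor_stream(ciphertext, key) == secret

# If an attacker destroys or withholds the key, the same operation with
# any other key yields garbage -- the data is effectively gone.
wrong_key = os.urandom(32)
assert xor_stream(ciphertext, wrong_key) != secret
```

Swap in a real cipher and the lesson is the same: whoever controls the keys controls the data.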



This post over on 0x000000 talks about the newly released Firefox being vulnerable to a security compromise, as reported by Slashdot in this article. The NoScript plugin (which you should be running, incidentally) helps with this problem, apparently.

There’s some disagreement between Ronald and the Mozilla developers as to whether this is in fact a problem, and if so, whether it’s a big one. So far, the discussions have not come to a conclusion, but it’s another example of assessing risks before deciding whether to fix them, which I just wrote about. It’s funny how these things happen in groups.



Slashdot is running this article on a flaw found in OpenBSD’s implementation of its pseudorandom number generator (PRNG). This generator is used by a number of network services on OpenBSD, and from there it has found its way into a number of other *NIX implementations, including Darwin/Mac OS X, FreeBSD, and NetBSD. Most of the implementations other than OpenBSD have committed to fixing the bug, although Apple isn’t committing to a timeline. Knowing what “random” number is going to come up next permits some exotic exploits that allow an attacker to compromise security. The question at hand isn’t whether the fault exists; it’s how important it is.
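The danger of a predictable PRNG is easy to illustrate. This isn’t the OpenBSD code; it’s a minimal Python sketch, with a made-up seed, showing that an attacker who recovers a generator’s internal state can predict every value it will ever emit:

```python
import random

# If an attacker can recover (or guess) the generator's internal state,
# every "random" value it produces from then on is known in advance.
server = random.Random(1234)                 # hypothetical recovered seed
session_tokens = [server.getrandbits(64) for _ in range(3)]

attacker = random.Random(1234)               # same state => same stream
predicted = [attacker.getrandbits(64) for _ in range(3)]

assert session_tokens == predicted           # every "secret" token predicted
```

This is exactly why security-sensitive code needs a cryptographically strong generator rather than a merely statistical one.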

OpenBSD’s maintainers have decided that the bug is academic, and doesn’t represent a real enough threat to fix. This puts the debate squarely into interesting territory for me. I’m a pragmatist, and I think that the development activities you perform (and this includes fixing a bug) have to be related to their real value. I classify a bug in the category of “risk”, which means I use a two-part formula to determine how important it is to address.

Although many professionals don’t realize it, risks (and this includes bugs) have two dimensions: severity and probability. Probability is frequently given short shrift, with all the focus going to its more glamorous relative, severity. Priority, however, is a function of both dimensions.
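That two-part formula can be sketched in a few lines. The 0-to-1 scales and the bug names here are invented for illustration; the point is that a severe bug nobody can realistically hit can rank below a moderate one that bites every day.

```python
# Priority as the product of severity and probability, both on a 0..1
# scale. Bug names and numbers are illustrative, not real assessments.
def priority(severity: float, probability: float) -> float:
    return severity * probability

bugs = {
    "exotic PRNG state-recovery":  priority(0.9, 0.05),  # severe, unlikely
    "SQL injection in login form": priority(0.7, 0.80),  # bad, very likely
    "typo in an error message":    priority(0.1, 0.90),  # trivial, constant
}

# Rank by combined priority, highest first:
for name, score in sorted(bugs.items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {name}")
```

On these made-up numbers the SQL injection outranks the PRNG flaw, which is precisely the kind of argument the OpenBSD maintainers are making.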

For an example of this principle in action, we need look no further than the case at hand: the PRNG in OpenBSD. The maintainers argue that no matter how severe the consequences of an exploit, the chances that the flaw will or can be exploited are low enough that the overall priority is low. That’s a questionable assumption for an open source project like OpenBSD, given the amount of attention this particular bug is getting.

Developers, and their subspecies crackers, are a somewhat arrogant lot (and I include myself in that bucket, before anyone gets upset). If you tell us that we can’t do something, that usually is just blood in the water. Knowing that this exploit exists, even if we have to work really hard to get to it, well, we’ll try to figure out a way anyway. This adds to the probability, simply because there are so many of us out there. Additionally, the knowledge of this weakness is easily obtained, which also lowers the barrier to entry, again increasing the probability.

There’s always the possibility that someone else will fix the problem and contribute it to the OpenBSD community, since this is open source, but if the maintainers refuse to integrate a contributed fix, that could result in a fracture of the code base, and that isn’t good for anyone.

Only time will tell whether the OpenBSD maintainers are correct in their assessment of the probability, and thus of the risk, involved in this PRNG bug. If proof-of-concept or actual exploits appear in the wild, the maintainers may have little choice but to integrate a fix, or to watch people move to other BSD variants that don’t share the flaw.



The irony is thick here. CSO is reporting that antivirus vendor AvSoft’s website has been compromised, and is serving up malware. This can’t look good to the neighbors…

The download section of AvSoft’s S-cop Web site hosts the malicious code, according to Roger Thompson, chief research officer with security vendor AVG. “They let one of their pages get hit by an iFrame injection,” he said. “It shows that anyone can be a victim. … It’s hard to protect Web servers properly.”



Dark Reading has an article up about the expected industry upswing in adoption of identity management solutions. The article points out that Sarbanes-Oxley and other regulatory concerns are driving this adoption. Centralized identity management means a single directory of users and their roles, created and managed in one place. Applications consult the central directory for this information, rather than having users created individually within each system. This isn’t a new concept: tools like OpenLDAP, Microsoft Active Directory and IBM Tivoli Identity Manager have been around for a while now, but the approach hasn’t really gotten a lot of traction.

I wrote before about security as a chore, and discussed the historical tendency of some organizations to view security as an annoyance and “somebody else’s problem”: I’ve got enough to do with the work already on my plate.

This is despite the fact that centralized identity management makes a lot of sense for enterprises. It allows security to be managed by people whose job it is to manage security. That stands a much better chance of success than taking someone whose primary job is something else and adding security on top. As experience has shown, in the latter kind of environment your permissions tend to grow over time, as you move from job to job and department to department, because they’re never removed.
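The centralized model can be sketched in a few lines. This is an illustrative toy, not any particular product’s API, and the user and permission names are made up: one directory owns the user-to-permission mapping, every application consults it, and a single removal revokes access everywhere at once.

```python
# A toy central directory: one authoritative user -> permissions mapping
# that every application queries. Names and permissions are invented.
directory = {
    "alice": {"payroll:read", "payroll:write"},
    "bob":   {"payroll:read"},
}

def has_permission(user: str, permission: str) -> bool:
    # Every app asks the same question of the same directory; no app
    # keeps its own private copy of who may do what.
    return permission in directory.get(user, set())

assert has_permission("alice", "payroll:write")
assert not has_permission("bob", "payroll:write")

# Alice changes departments: one removal, and every application sees
# the revocation immediately -- no stale permissions left behind.
directory["alice"].discard("payroll:write")
assert not has_permission("alice", "payroll:write")
```

Contrast that with per-application user tables, where the same revocation has to be repeated in every system, and in practice usually isn’t.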

So why hasn’t centralized management gained more ground? I think there are a couple of reasons. The first is (or at least can be argued as) a valid business decision. Given the choice between adding a new feature that will give your software a competitive edge, doing away with some annoyance you’ve been fighting for what seems like forever, and integrating with a centralized identity management solution, what are you going to choose? Unless there’s a good business driver behind the identity management integration, it’s going to lose.

The second reason is a bit harder to justify: it’s the natural tendency for people to want control of their environments. If I have an application that my department uses, I may not look favorably on my users’ permissions living in some system that I don’t control. “It gets in the way of doing my job most efficiently,” I may say, but to be honest, I just don’t like someone else being in control of that part of my world. This is a short-sighted view, and smacks of needless competitiveness, but it’s a common one in many business environments. Managers typically don’t get to be managers without some degree of competitive drive.

I think it’s a good thing that regulatory pressures are adding weight to the centralized management side of the scale. On the whole, it’s a more manageable and thus more secure solution. You’ve given the management of security information to a group whose main job is to manage such information. That beats letting it continue to be handled by groups whose primary responsibilities lie elsewhere. Those groups aim to do their work as efficiently as possible, a goal that security actively hampers, because after all, it’s most efficient to give someone all the permissions and never go back to manage them.



Last week, I posted about Yahoo’s CAPTCHA being cracked with a 30% success rate.

This week, Computerworld is reporting that Microsoft’s CAPTCHA, used for “proving” that users of their Live Mail service are people, has been cracked as well.

On average, the bot returns the correct response 30% to 35% of the time and successfully creates an account, Hubbard claimed.

Sounds suspiciously like the success rate reported last week. Probably the same group in Russia. Looks like this one is in the wild now.



Last week, I wrote about the possibility of having your smartphone searched when you’re pulled over for a traffic violation. Even more concerning, the Washington Post has this article up about searches of laptops and other electronic devices by federal agents in airports.

The lawsuit was inspired by two dozen cases, 15 of which involved searches of cellphones, laptops, MP3 players and other electronics.

The article cites examples where travelers were asked to surrender their logins and passwords, provide access to their email, and divulge other potentially sensitive information. One woman had her laptop taken after she surrendered the login and password, and it has never been returned:

“I was assured that my laptop would be given back to me in 10 or 15 days,” said Udy, who continues to fly into and out of the United States. She said the federal agent copied her log-on and password, and asked her to show him a recent document and how she gains access to Microsoft Word. She was asked to pull up her e-mail but could not because of lack of Internet access. With ACTE’s help, she pressed for relief. More than a year later, Udy has received neither her laptop nor an explanation.

All this, without a warrant. If this doesn’t qualify as unreasonable search and seizure, I’m really at a loss as to what does.

As a result of these actions, some corporations have issued instructions that employees clear their hard drives of sensitive information before traveling overseas. You may wish to no longer travel with that laptop or smartphone.

Update: Computerworld has a follow-up on “5 Things You Need to Know About Laptop Searches at U.S. Borders”.



OpenID Gains Supporters

February 7, 2008

IBM, Microsoft, Verisign, Google and Yahoo! have joined the OpenID board, as reported by CSO. OpenID allows a single registry of authentication credentials (login and password) to be used at all participating web sites.

Single-registry systems have been around in corporate intranet environments for a while (Microsoft Active Directory, IBM Tivoli Identity Manager, OpenLDAP, etc.). They’re a nice tool for a centralized organization to manage user credentials.

The hazards of widespread adoption of such a system are twofold, I believe. First, a single set of credentials allows you to log in to a variety of sites; if I can compromise your password, I gain access to all of them. This may be no worse than today, if you already use the same login and password everywhere, but it does make it harder for you to use different logins and passwords, should you so desire.

Second, and perhaps more subtly: if I compromise your password, can I register at new OpenID-enabled sites that you don’t even know about? This needs more looking into…
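That single-point-of-failure hazard can be sketched in a few lines of Python. The identity, password, and site names are all made up; the point is that when several relying sites delegate authentication to one provider, one stolen credential opens every door.

```python
# Illustrative toy: several relying sites all delegate authentication to
# one identity provider; none keeps its own password database.
provider_passwords = {"alice@example.org": "hunter2"}   # made-up credential

relying_sites = ["shop.example", "forum.example", "bank.example"]

def login(site: str, identity: str, password: str) -> bool:
    # Every site asks the same provider the same question.
    return provider_passwords.get(identity) == password

stolen = "hunter2"
compromised = [s for s in relying_sites
               if login(s, "alice@example.org", stolen)]
assert compromised == relying_sites     # one credential, every site
```

With per-site passwords, a single theft compromises one account; with a shared identity provider, the blast radius is every participating site at once.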



Dark Reading is covering the Computer Forensics show in Washington, DC, and has this article on a presentation by Peter Tippett, the guy who invented what would become Norton Antivirus, now an executive at Verizon and chief scientist at ICSA Labs. Tippett’s point is that security departments need to be smarter about what they focus their time and effort on:

“You can’t always improve the security of something by doing it better,” Tippett said. “If we made seatbelts out of titanium instead of nylon, they’d be a lot stronger. But there’s no evidence to suggest that they’d really help improve passenger safety.”

Hallelujah, brother! I’m a pragmatist, and I believe that you have to carefully evaluate your level of security, because 100% secure is probably too expensive. There’s a definite point of diminishing returns for security, and it’s different for every application you’re going to build.

Now, I haven’t seen the complete text of the presentation. Tippett cites a number of things that he thinks we should not be doing:

For example, today’s security industry focuses way too much time on vulnerability research, testing, and patching, Tippett suggested. “Only 3 percent of the vulnerabilities that are discovered are ever exploited,” he said. “Yet there is huge amount of attention given to vulnerability disclosure, patch management, and so forth.”

It’s rather short on things that he thinks we should be doing, unfortunately, citing only a single example. As a result, the article comes off as a bit of a “geez, we need to do this better” without any concrete recommendations for how to go about improving. “How” is, unfortunately, exactly what most folks need…



The Unified Modeling Language (UML) is primarily used for describing the design of software systems, although it can be used for other purposes as well, such as business process modeling. This is the second in a series of posts covering the fundamentals of the UML; it covers UML behavioral diagrams. The previous post covered requirements diagrams, and a subsequent post will cover UML structural diagrams. Behavioral diagrams describe the dynamic behavior of a system: the interactions it performs in the course of doing work.
