Apr 30

IMAGINATION is a new candidate for replacing CAPTCHA, the recently fallen test for determining whether a computer or a person is on the other end of a connection. You’re probably familiar with CAPTCHA as that weird image composed of letters and numbers that you’re asked to read and type into a box in order to do some operation on the web.

CAPTCHA was cracked some months ago (as I’ve previously mentioned) and one by one, the various implementations have fallen prey to the bots sending you spam.

The IMAGINATION program (click to try it out!) asks you to do two things: recognize images among a tiled set, clicking on the center of any one you choose, and then annotate an image with the correct description from a list of captions.

It will be interesting to see if this stands up to the hackers now that CAPTCHA is all but dead.

[ via Slashdot ]

Apr 30

Recently, the Kraken botnet has come into focus as the world’s largest, with an estimated 165,000 to 600,000 zombie computers. Each of these computers is probably sending you spam right now, and many have probably probed your computer to see if it can be compromised as well. Who knows, maybe your computer is already one of them.

Researchers at TippingPoint set out to determine the size of the network, which they did by building a server of their own and waiting for zombies to connect to it for instructions. They eventually managed to attract 25,000 in a week’s time. Here’s where things get interesting.

Most botnets include a feature that lets the controller upgrade the zombie computers with a new version, so the researchers could have used their new-found power for good, directing these machines to remove the infection or render it benign. Due to liability concerns, though, TippingPoint, the good guys, decided they could not remove the infection.

In a comment attached to Amini’s initial blog post, Endler put it plainly. “Cleansing the systems would probably help 99% of the infected user base,” he said. “It’s just the 1% of corner cases that scares me from a corporate liability standpoint.”

I sympathize with TippingPoint, but it’s a sad commentary on the world when the good guys are afraid, out of liability concerns, to do something that’s clearly right. While accessing a computer without the owner’s consent is illegal in the US, shouldn’t a Good Samaritan law apply in cases like this?

[ via Computerworld ]

Apr 16

Back in the days of yore, security professionals used to be interested in things called covert channels. These are ways of communicating information into or out of a secured environment. Admittedly, most people interested in this also dealt with information that had access restrictions on it, with names like “Top Secret” and “Special Access Required”, and prison sentences attached to disclosing it. Today, there are new covert channels that are far more of a concern. Read more
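For a flavor of the classic idea, here’s a minimal sketch of a timing-based covert channel, one of the textbook examples: the secret is carried in the delays between otherwise innocent-looking messages. The names and constants are mine, purely for illustration.

```typescript
// Illustrative sketch only: a timing-based covert channel encodes each bit
// of a secret as the delay between otherwise innocent-looking "heartbeat"
// messages. A long gap means 1, a short gap means 0. The send() callback is
// hypothetical -- it stands in for any permitted outbound traffic.

const SHORT_MS = 100; // delay representing a 0 bit
const LONG_MS = 400;  // delay representing a 1 bit

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function leakBits(bits: number[], send: (msg: string) => void): Promise<void> {
  for (const bit of bits) {
    send("heartbeat");                           // looks like normal traffic
    await sleep(bit === 1 ? LONG_MS : SHORT_MS); // the information is in the timing
  }
  send("heartbeat"); // final marker so the last gap can be measured
}

// A receiver that can only observe arrival times (not contents) recovers the
// bits by thresholding the gaps between heartbeats.
function decodeGaps(gapsMs: number[]): number[] {
  return gapsMs.map((gap) => (gap > (SHORT_MS + LONG_MS) / 2 ? 1 : 0));
}
```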

Apr 14

In recent years, the web has seen a proliferation of Web 2.0, or Rich Client, applications such as Google Docs. These browser-based applications have a dynamic user interface built with a technology called AJAX, for Asynchronous JavaScript and XML. The user experience for these applications can rival traditional fat client applications, while at the same time not needing to be installed and upgraded, since they’re downloaded to the browser as needed.

One of the great promises of these AJAX technologies is increased application performance. In reality, that isn’t always the case: if an application makes lots of small requests, its performance can be significantly worse. Read more
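To make the chattiness concern concrete, here’s a rough sketch of the difference between issuing many small requests and batching them into one round trip. The endpoint URLs are made up; only the number of HTTP round trips matters.

```typescript
// Minimal sketch of the chattiness problem. The /api/items endpoints are
// hypothetical; the point is that each round trip pays the full network
// latency, so many small requests add up quickly.

// Naive approach: one request per item, issued serially. With 50 items and
// ~100 ms of latency each, that's roughly 5 seconds spent waiting.
async function fetchItemsOneByOne(ids: number[]): Promise<unknown[]> {
  const results: unknown[] = [];
  for (const id of ids) {
    const resp = await fetch(`/api/items/${id}`); // hypothetical endpoint
    results.push(await resp.json());
  }
  return results;
}

// Batched approach: one request carries all the ids, so the latency cost is
// paid once no matter how many items are requested.
async function fetchItemsBatched(ids: number[]): Promise<unknown[]> {
  const resp = await fetch(`/api/items?ids=${ids.join(",")}`); // hypothetical endpoint
  return resp.json();
}
```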

Apr 9

I’ve been thinking a lot lately about the challenges of really doing SOA for an enterprise, and some of these thoughts were crystallized by the first of a two-part paper at InfoQ called “Beyond SOA”. It’s a bit of a challenging read, at least for me. They’ve got some good points: that you have to identify the business as a system of interrelating units (subsystems), and that you have to design around real-world events and real-world entities. The author also believes that even the business stakeholders have an incomplete view of the problem domain, and is probably right:

 “A fundamental difference exists between an enterprise operator and an enterprise designer. To illustrate, consider the two most important people in successful operation of an airplane. One is the airplane designer and the other is the airplane pilot. The designer creates an airplane that ordinary pilots can fly successfully. Is not the usual manager more a pilot than a designer? A manager runs an organization, just as a pilot runs an airplane. Success of a pilot depends on an aircraft designer who created a successful airplane. On the other hand, who designed the corporation that a manager runs? Almost never has anyone intentionally and thoughtfully designed an organization to achieve planned growth and stability. Education, in present management schools, trains operators of corporations. There is almost no attention to designing corporations.”

And I’m with him so far. Designing services for the enterprise requires an uncommonly broad view: not only how the business functions, but how the business as a system should function. But from there, the author makes some surprising suggestions:

We propose a fundamental change to how we architect and design information systems, in a way that will not require business stakeholders as the primary input. We recommend utilizing a framework centered on business entity lifecycle and event-model as the primary sources of input into the architecture. Business scenarios are used only as a post-architecture fine tuning exercise, instead of the main driver.

This framework-first approach accounts for the dynamic of the business from the beginning, and is not an afterthought. It implies that the designed enterprise application is dynamically adaptive, instead of statically adaptive as it is implemented by today’s methodologies. This cuts the cost of developing and maintaining software by orders of magnitude and cut into the over 70% of IT spend that is directly linked to changing and maintaining existing systems.

and, most incredibly to me:

A framework-first approach capable of integrating business changes into IT implementations and providing a clear translation between business and technology could raise the success rate to close to a 100 percent. Architecting complex systems could take days instead of months or years.

I’ve dealt with producing enterprise-grade systems for years. I believe that treating an enterprise as a set of interrelated subsystems is a worthy idea, one that is particularly challenging because of the dynamic nature of businesses and their changing environments. When was the last time your mailing subsystem changed without you looking at the code, after all? I think, however, that it’s cavalier to suggest that a change to this style of development, one that’s incompletely described, can result in success rates of 100% while cutting IT spend by 70%. If you can do that, then I believe you have a license to print money. Until you can prove it, I’m a skeptic.
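For what it’s worth, here is a rough sketch of what “business entity lifecycle plus event model” might look like in code. The Order entity, its states, and the events are my own invented example, not anything prescribed by the paper.

```typescript
// Rough illustration only: a business entity's lifecycle captured as a table
// of allowed state transitions driven by events. Business scenarios would
// then be layered on top of this model rather than driving the design.

type OrderState = "created" | "approved" | "shipped" | "cancelled";

interface OrderEvent {
  type: "approve" | "ship" | "cancel";
  orderId: string;
  occurredAt: Date;
}

// Which event is allowed in which state, and where it leads.
const transitions: Record<OrderState, Partial<Record<OrderEvent["type"], OrderState>>> = {
  created:   { approve: "approved", cancel: "cancelled" },
  approved:  { ship: "shipped", cancel: "cancelled" },
  shipped:   {},
  cancelled: {},
};

function applyEvent(state: OrderState, event: OrderEvent): OrderState {
  const next = transitions[state][event.type];
  if (!next) {
    throw new Error(`Event ${event.type} is not valid in state ${state}`);
  }
  return next;
}
```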

Apr 9

For quite a while now, we’ve thought that cyclomatic complexity was a pretty straightforward thing: less was better. Some recent research, covered over at InfoQ, has shown that maybe it’s not as straightforward as that.

For those who aren’t familiar, cyclomatic complexity numbers (CCNs, herein) are really pretty straightforward: they’re a measure of how many paths there are through a particular function. You start off with a natural score of one (because there’s one entry and one exit, therefore one path) and you add one for each loop or branch you encounter. They’re not a perfect measure of paths, but they do a pretty good job. There’s a close cousin which considers the nesting of control structures as well, but CCN seems to be more widely used.
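As an illustration of how the count works, here’s a small made-up function annotated with where each point of complexity comes from.

```typescript
// Made-up example function, annotated with where its cyclomatic complexity
// comes from: start at 1, then add 1 for each loop or branch.

function classifyOrders(orders: { total: number; rush: boolean }[]): string[] {
  const labels: string[] = [];          // base score: 1
  for (const order of orders) {         // +1 (loop)
    if (order.total > 1000) {           // +1 (branch)
      labels.push("large");
    } else if (order.rush) {            // +1 (branch)
      labels.push("rush");
    } else {
      labels.push("normal");
    }
  }
  return labels;                        // total CCN: 4
}
```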

For quite a while, we’ve assumed that there’s a linear relation between CCN and a number of undesirable characteristics in code: fragility, bug rate, and lack of cohesion, to name a few. I’m sure it probably causes sciatica as well. It’s reasonable to assume that a lower CCN is better. But here’s where it goes a little strange. According to the research:

The results show that the files having a CC value of 11 had the lowest probability of being fault-prone (28%). Files with a CC value of 38 had a probability of 50% of being fault-prone. Files containing CC values of 74 and up were determined to have a 98% plus probability of being fault-prone.

So while you generally don’t want to go out and create functions with a CCN of 400 (I’ve seen this, and no, you don’t want to know…), it appears that scores somewhere in the low teens or less are not so bad.

The research isn’t without its critics. The scores are apparently aggregated for classes rather than the methods in them, so the results may be misleading. But it’s an intriguing result nonetheless.

The message here? If you’re using CCN as an indicator of where to look for problems (and you should be), you may want to recalibrate your radar for unnecessary complexity a little higher.

Apr 9

So I’ve been gone for a bit. It was kind of inevitable, as I was distracted by real-world events from blogging for a bit. I’m back now, so look out!
