Typing@klopix

This thread is devoted to training my blind ten-finger (touch) typing.
 

Feb 11 2008
XP — Four variables

Today I've tried twenty programs that train keyboard skills. The most suitable ones for me are Verseq, TyperShark, and Touch Typing. But nothing brings me as much pleasure and satisfaction as typing real texts does. So today I'm gonna try Extreme Programming Explained — the Four Variables chapter.

Chapter 4. Four variables

We will control four variables in our projects — cost, time, quality and scope. Of these, scope provides us the most valuable form of control.

Here is a model of software development from the perspective of a system of control variables. In this model, there are four variables in software development: Cost, Time, Quality, Scope.

The way the software development game is played in this model is that external forces (customers, managers) get to pick the values of any three of the variables. The development team gets to pick the resultant value of the fourth variable.

Some managers and customers believe they can pick the value of all four variables. «You are going to get all these requirements done by the first of next month with exactly this team. And quality is job one here, so it will be up to our usual standards». When this happens, quality always goes out the window (this is generally up to the usual standards, though), since nobody does good work under too much stress. Also likely to go out of control is time. You get crappy software late.

The solution is to make the four variables visible. If everyone — programmers, customers, and managers — can see all four variables, they can consciously choose which variables to control. If they don't like the result implied for the fourth variable, they can change the inputs, or they can pick a different three variables to control.

... Time — «nine women cannot make a baby in one month». Or «Eighteen women still can't make a baby in one month». ...

Focus on Scope

Lots of people know about cost, quality, and time as control variables, but don't acknowledge the fourth. For software development, scope is the most important variable to be aware of. Neither the programmers nor the business people have more than a vague idea about what is valuable about the software under development. One of the most powerful decisions in project management is eliminating scope. If you actively manage scope, you can provide managers and customers with control over cost, quality and time.

One of the great things about scope is that it is a variable that varies a lot. For decades, programmers have been whining: «The customers can't tell us what they want. When we give them what they say they want, they don't like it». This is an absolute truth of software development. The requirements are never clear at first. Customers can never tell you exactly what they want.

The development of a piece of software changes its own requirements. As soon as the customers see the first release, they learn what they want in the second release... or what they really wanted in the first. And it's valuable learning, because it couldn't have possibly taken place based on speculation. It is learning that can only come from experience. But customers can't get there alone. They need people who can program, not as guides, but as companions.

What if we see the «softness» of requirements as an opportunity, not a problem? Then we can choose to see scope as the easiest of the four variables to control. Because it is so soft, we can shape it — a little this way, a little that way. If time gets tight toward a release date, there is always something that can be deferred to the next release. By not trying to do too much, we preserve our ability to produce the required quality on time.

If we created a discipline of development based on this model, we would fix the date, quality and cost of a piece of software. We would look at the scope implied by the first three variables. Then, as development progressed, we would continually adjust the scope to match conditions as we found them.

This would have to be a process that tolerated change easily, because the project would change direction often. You wouldn't want to spend a lot on software that turned out not to be used. You wouldn't want to build a road you never drove on because you took another turn. Also, you would have to have a process that kept the cost of changes reasonable for the life of the system.

If you dropped important functionality at the end of every release cycle, the customer would soon get upset. To avoid this, XP uses two strategies:

Pretty nice this time — two misprints and one omission. I feel strongly that the most valuable and important thing in typing is keeping the rhythm.

Feb 05 2008
 

For the last couple of days I've been trying a typing trainer program. It is based on feedback from the user: first it gives some templates, then it watches where I make mistakes, and then it generates key combinations with those keys. So it always ends up being kind of nonsensical gibberish. And now I'm gonna try a good text. Again it is from the book Extreme Programming Explained.

Chapter 3. Economics of software

By adding up the cash flows in and out of the project, we can simply analyze what makes a software project valuable. By taking into account the effect of interest rates, we can calculate the net present value of the cash flows. We can further refine our analysis by multiplying the discounted cash flows by the probability that the project will survive to pay or earn those cash flows.
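The passage does not show the arithmetic, so here is a minimal sketch (mine, not the book's) of that risk-adjusted net present value idea in Python, with made-up cash flows, a made-up 5% interest rate, and made-up survival probabilities:

def risk_adjusted_npv(cash_flows, rate, survival):
    """Discount each period's net cash flow and weight it by the
    probability that the project is still alive to see that period."""
    return sum(cf * p / (1.0 + rate) ** t
               for t, (cf, p) in enumerate(zip(cash_flows, survival)))

# Hypothetical project: spend 100 now, then earn 60 a year for three years,
# with a 90% chance of surviving each successive year.
flows = [-100.0, 60.0, 60.0, 60.0]
survival = [1.0, 0.9, 0.81, 0.729]
print(round(risk_adjusted_npv(flows, 0.05, survival), 2))   # about 33.29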

With these three factors (cash flows in and out, interest rates, project mortality) we can create a strategy for maximizing the economic value of the project. We can do this by

Options

There is another way of looking at the economics of a software project — as a series of options. Software project management can be looked at as having four kinds of options:

Calculating the worth of options is two parts art, five parts mathematics, and one part good old-fashioned Kentucky windage. There are five factors involved:

Of these, the worth of options is generally dominated by the last factor, the uncertainty. From this we can make a concrete prediction. Suppose we create a project management strategy that maximizes the value of the project analyzed as options by providing:

The greater the uncertainty, the more valuable the strategy will become. This is true whether the uncertainty comes from technical risk, a changing business environment, or rapidly evolving requirements. (This provides a theoretical answer to the question, «When should I use XP?» Use XP when requirements are vague or changing.)

Example

Suppose you're programming merrily along and you see that you could add a feature that would cost you $10. You figure the return on this feature (its net present value) is somewhere around $15. So the net present value of adding this feature is $5.

Suppose you knew in your heart that it wasn't clear at all how much this new feature would be worth — it was just your guess, not something you really knew was worth $15 to the customer. In fact, you figure that its value to the customer could vary as much as 100% from your estimate. Suppose further (see Chapter 5, Cost of Change, page 21) that it would still cost you about $10 to add that feature one year from now.

What would be the value of the strategy of just waiting, of not implementing the feature now? Well, at the usual interest rates of about 5%, the options theory calculator cranks out a value of $7.87.

The option of waiting is worth more than the value (NPV = $5) of investing now to add the feature. Why? With that much uncertainty, the feature certainly might be much more valuable to the customer, in which case you're no worse off waiting than you would have been by implementing it now. Or it could be worth zilch — in which case you've saved the trouble of a worthless exercise.
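The book does not show how the $7.87 was computed, but the number matches what you get if you treat deferring the feature as holding a one-year call option (spot value $15, strike $10, 100% volatility, 5% interest) and price it with the standard Black-Scholes formula. A sketch under that assumption, in Python:

from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_value(spot, strike, rate, sigma, years):
    """Black-Scholes value of a European call option."""
    d1 = (log(spot / strike) + (rate + 0.5 * sigma ** 2) * years) / (sigma * sqrt(years))
    d2 = d1 - sigma * sqrt(years)
    return spot * norm_cdf(d1) - strike * exp(-rate * years) * norm_cdf(d2)

# Feature worth ~$15, costs $10 to build, value uncertain by 100%,
# decision deferred for one year at 5% interest.
print(round(call_value(15.0, 10.0, 0.05, 1.0, 1.0), 2))   # 7.87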

In the jargon of trading, options «eliminate downside risk».

Jan 31 2008
 

Today I'm sure I wanna type about image processing software. This piece is from the Applied C++ book, which is devoted to building software that runs fast and uses a smaller amount of memory. They also write plenty about how to test the code. So, buckle up and let's go.

7.2 Performance tuning

Writing efficient code is something of an art. Efficiency is not about rewriting your application in assembly code or anything that drastic. Efficiency is about writing software that meets whatever design criteria you set for it. If a design specification states that an application should «run as fast as possible», you need to rewrite the specification. It is far better to say that particular operations need to execute in a specific amount of time (on a specific platform). For many applications, especially image processing ones, performance is a very important issue. However, it is surprising how many design documents do not address performance in a concrete way.

Let's look at it another way. If you want to train to become a sprinter, you know that you need to run very fast for a relatively short period of time. If you were to write a list of goals for yourself, would you include a statement that says you should «run as fast as possible»? Of course you wouldn't. You would probably say that you need to be able to run a certain distance in a certain amount of time. And based on this goal, you can set up a training program to meet it.

Writing software is no different. You need to have plausible goals and then you can design a plan to reach them. Your goals can be difficult, but not impossible, to achieve. Sometimes a goal may seem impossible, but further investigation reveals it is actually possible, though extremely challenging. Having a goal that is well defined is absolutely essential for getting the performance you need from your application.

7.2.1 General guidelines

It is not always easy to decide when you need to worry about performance. Our recommendation is to assume that a piece of software does not have any special performance criteria unless you know this statement to be false. Avoid writing a highly optimized piece of software to solve a timing problem that may not even be a problem. It is better to design a reasonable solution first, and then discover that it must run faster. This iterative approach to design helps you reduce development time and wasted effort. It is true that you can have cases where your product does not meet the expectations of its testers on the first iteration, but this is what an iterative design approach is all about. The product design specification needs to be as clear as possible regarding the desired functionality and performance of the application.

For example, let us look at how a customer interacts with a graphical user interface (GUI). We see that the overhead of the framework and the way you write your code often has little effect on performance. The customer communicates with the software by making various requests in the form of mouse or other pointer manipulation or keyboard input. For complicated interfaces, these requests occur at no more than one request per second. The steps that the software takes to process such a request can be listed as a series of events: 1) Receive the event. 2) Invoke the event handler responsible for the event. 3) Process the event. 4) Update the user interface.

If the customer generates events at the rate of one request per second, then this sequence of events, including updating the user interface, must happen in no more than half that time. Where did this number, 0.5 seconds, come from? It is simply a guess based upon our perception of how customers operate such systems.
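To make that half-second budget concrete, here is a small sketch of timing the receive/dispatch/process/update sequence against it. This is purely illustrative Python; the Event class, the handler table, and the update_user_interface stub are invented, not taken from any GUI framework:

import time
from dataclasses import dataclass

BUDGET_S = 0.5   # the half-second response budget guessed at above

@dataclass
class Event:
    kind: str

def update_user_interface():
    pass   # stand-in for redrawing whatever the event changed

def handle_request(event, handlers):
    """Receive an event, invoke its handler, process it, update the UI,
    and check the whole round trip against the response-time budget."""
    start = time.perf_counter()
    handlers[event.kind](event)    # steps 2 and 3: dispatch and process
    update_user_interface()        # step 4: refresh the display
    elapsed = time.perf_counter() - start
    if elapsed > BUDGET_S:
        print("warning: %s took %.3f s, over budget" % (event.kind, elapsed))

handlers = {"save": lambda e: time.sleep(0.01)}   # toy handler for one event kind
handle_request(Event("save"), handlers)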

Now, without worrying about specific operating systems or GUI implementations, we can make some assumptions about how long it takes to handle each of the above steps. Receiving and invoking an event handler is a fast operation, even when written in a general purpose C++ framework. This step comprises a number of table lookups, as well as one or more virtual function calls to locate the owner of an event. It certainly does not consume a lot of time. Processing the event is strictly an application-specific task, as is updating the user interface. The amount of overhead that can be tolerated in this example is fairly large. The customer will have a very hard time distinguishing between one millisecond and 50 milliseconds.

As a contrasting example, let's look at a real-time system that has hard performance requirements. As opposed to the previous example, where there is a person waiting to see results updated on the screen, these results are needed in a certain period of time, or else the information is useless. The framework's overhead, as well as how the processing is written, is important. However, it is not so obvious how much overhead is acceptable.

We have found that dealing with percentages makes it easier to gauge how much overhead can be tolerated. If your processing function does not spend 98 to 99 percent of its time doing actual work, you should examine your design more closely. For example, for very fast processing cycles, say five milliseconds, the framework overhead should be kept to less than 100 microseconds. For slower real-time systems that require about 50 milliseconds to execute, overhead should be less than one millisecond. The design of each of these systems will be very different.

To measure overhead for an image processing system, or other system that performs a repeated calculation on a number of samples, it is customary to compute the overhead on a per row or per pixel basis. Let us assume that we must perform one or more image processing steps on a 512x512 pixel image. If we want to keep the total overhead to one millisecond, the per-row overhead can be no more than two microseconds. Two microseconds is quite a bit of time for modern processors, and this permits numerous pointer manipulations or iterator updates. We are not so lucky if we have a goal of only two-tenths of a microsecond per row. In this case, you should consider optimizing your code from the onset of the design.
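The per-row figure is just the total overhead budget divided by the number of rows; a quick Python check of the numbers quoted above:

rows = 512
total_overhead_s = 1e-3                  # one millisecond of total overhead
print(total_overhead_s / rows * 1e6)     # about 1.95 microseconds per row

# and the percentage rule of thumb from the previous paragraph:
cycle_s = 5e-3                           # a five millisecond processing cycle
print(cycle_s * 0.02 * 1e6)              # 2% overhead budget = 100 microseconds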

If you find this calculation too simplistic for your taste, you can write some simple prototypes to measure the actual performance on your platform and decide if any code optimization is needed. Since many image processing functions are easy to write, you can get a sense for how much time the operation takes. Our unit test framework is a convenient starting point, since the framework computes and reports the execution time of each unit test function. To get started, you would need to write at least two functions. The first function would contain the complete operation you want to test. The second function would contain just the overhead components. The execution time of the first test function tells us how long the image processing takes, while the second tells us if the overhead itself is significant. If our unit test framework were more complicated than it is, we would also need a third function to measure the overhead of the unit test framework itself. But, since the framework is so simple, its overhead is negligible.
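The unit test framework the authors refer to is their own C++ code, which I don't have here. As a language-neutral illustration of the same measurement idea (time the complete operation and the overhead-only skeleton separately, then compare), here is a small Python sketch with invented stand-in functions:

import time

def average_time(fn, repeats=100):
    """Run fn() repeatedly and return the average wall-clock time per call."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - start) / repeats

def full_operation():
    # stand-in for the real per-row image processing work plus its overhead
    total = 0
    for _row in range(512):
        total += sum(range(512))
    return total

def overhead_only():
    # the same loop structure with the per-row work removed
    for _row in range(512):
        pass

print("full operation: %.6f s" % average_time(full_operation))
print("overhead only:  %.6f s" % average_time(overhead_only))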

OK, just one note to self: press the keys harder.

Jan 29 2008
The first one

I don't even know what's better: using a computer typing-trainer program, or just typing some texts. I definitely have problems with particular letters and even more difficulties with punctuation marks. And it is not quite clear how to train them. Just do a couple of thousand repetitions? Or type regular text where a lot of punctuation marks appear? Don't know!

OK, for the first time I can try to type some text without watching what I'm typing here, and then I'll see what kinds of misprints I produce. But there is a simple task to solve before I start my training: how to mark out the text I'm gonna produce? Well, let's try the following.

Practical File System Design: 1.2 Design Goals

Before any work could begin on the file system, we had to define what our goals were and what features we wanted to support. Some features were not optional, such as the database that the OFS supported. Other features, such as journaling (for added file system integrity and quick boot times), were extremely attractive because they offered several benefits at a presumably small cost. Still other features, such as 64-bit file sizes, were required for the target audiences of the BeOS.

The primary feature that a new Be File System had to support was the database concept of the old Be File System. The OFS supported a notion of records containing named fields. Records existed in the database for every file in the underlying file system as well. Records could also exist purely in the database. The database had a query interface that could find records matching various criteria about their fields. The OFS also supported live queries — persistent queries that would receive updates as new records entered or left the set of matching records. All these features were mandatory.
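As a toy illustration of what records with named fields, one-shot queries, and live queries look like (this is my own sketch in Python, not the OFS or BFS API), consider:

class RecordStore:
    """A tiny in-memory stand-in for a database of records with named fields."""

    def __init__(self):
        self.records = []        # each record is just a dict of field -> value
        self.live_queries = []   # (predicate, callback) pairs

    def add(self, **fields):
        self.records.append(fields)
        # notify any live query whose matching set just gained a record
        for predicate, callback in self.live_queries:
            if predicate(fields):
                callback("entered", fields)

    def query(self, predicate):
        """One-shot query: return all records matching the predicate."""
        return [r for r in self.records if predicate(r)]

    def live_query(self, predicate, callback):
        """Persistent query: the callback fires as records enter the matching set."""
        self.live_queries.append((predicate, callback))

store = RecordStore()
store.live_query(lambda r: r.get("size", 0) > 1000,
                 lambda change, r: print(change, r["name"]))
store.add(name="song.wav", size=5000)                 # triggers the live query
print(store.query(lambda r: r["name"].endswith(".wav")))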

There were several motivating factors that prompted us to include journaling in BFS. First, journaled file systems do not need a consistency check at boot time. As we will explain later, by their very nature, journaled file systems are always consistent. This has several implications: boot time is very fast because the entire disk does not need checking, and it avoids any problems with forcing potentially naive users to run a file system consistency check program. Next, since the file system needed to support sophisticated indexing data structures for the database functionality, journaling made the task of recovery from failures much simpler. The small development cost to implement journaling sealed our decision to support it.

Our decision to support 64-bit volume and file sizes was simple. The target audiences of the BeOS are people who manipulate large audio, video, and still-image files. It is not uncommon for these files to grow to several gigabytes in size (a mere 2 minutes of uncompressed CCIR-601 video is greater than 2^32 bytes). Further, with disk sizes regularly in the multigigabyte range today, it is unreasonable to expect users to have to create multiple partitions on a 9 GB drive because of file system limits. All these factors pointed to the need for a 64-bit-capable file system.

In addition to the above design goals, we had the long-standing goals of making the system as multithreaded and as efficient as possible, which meant fine-grained locking everywhere and paying close attention to the overhead introduced by the file system. Memory usage was also a big concern. We did not have the luxury of assuming large amounts of memory for buffers because the primary development system for BFS was a BeBox with 8 MB of memory.

It was really cool and a lot of fun :). Now I can see there are many types of mistakes: replacements, omissions, misprints. The most frequent mistake is replacing 'h' with 'k'. I also make a lot of mistakes typing digits, dashes, and equals signs.

Now let me do another one.

Extreme Programming — Chapter 2 — A Development Episode

But first a little peek ahead to where we are going. This chapter is the story of the heartbeat of XP — the development episode. This is where a programmer implements an engineering task (the smallest unit of scheduling) and integrates it with the rest of the system.

I look at my stack of task cards. The top one says «Export Quarter-to-date Withholding.» At this morning's stand-up meeting, I remember you said you had finished the quarter-to-date calculation. I ask if you (my hypothetical teammate) have time to help with the export. «Sure»,— you say. The rule is, if you're asked for help you have to say «yes». We have just become pair programming partners.

We spend a couple of minutes discussing the work you did yesterday. You talk about the bins you added, what the tests are like, maybe a little about how you noticed yesterday that pair programming worked better when you moved the monitor back a foot.

You ask: «What are the test cases for this task?»

I say: «When we run the export station, the values in the export record should match the values in the bins».

«Which fields have to be populated?»,— you ask.

«I don't know. Let's ask Eddie».

We go look at the structure of some of the existing export test cases. We find one that is almost what we need. By abstracting a superclass, we can implement our test case easily. We do the refactoring. We run the existing tests. They all run.

We notice that several other export test cases could take advantage of the superclass we just created. We want to see some results on the task, so we just write down «Retrofit AbstractExportTest» on our to-do card.

Now we write the test case. Since we just made the test case superclass, writing the new test case is easy. We are done in a few minutes. About halfway through, I say: «I can even see how we will implement this. We can ...»

«Let's get the test case finished first»,— you interrupt. While we're writing the test case, ideas for three variations come to mind. You write them on the to-do card.

We finish the test case and run it. It fails. Naturally. We haven't implemented anything yet. «Wait a minute»,— you say. «Yesterday, Ralph and I were working on a calculator in the morning. We wrote five test cases that we thought would break. All but one of them ran first thing».

We bring up a debugger on the test case. We look at the objects we have to compute with.

I write the code. (Or you do, whoever has the clearest idea.) While we are implementing, we notice a couple more test cases we should write. We put them on the to-do card. The test case runs.

We go to the next test case, and the next. I implement them. You notice that the code could be made simpler. You try to explain to me how to simplify. I get frustrated trying to listen to you and implement at the same time, so I push the keyboard over to you. You refactor the code. You run the test cases. They pass. You implement the next couple of test cases.

After a while, we look at the to-do card and the only item on it is restructuring the other test cases. Things have gone smoothly, so we go ahead and restructure them, making sure they run when we finish.

Now the to-do list is empty. We notice that the integration machine is free. We load the latest release. Then we load our changes. Then we run all the test cases, our new ones and all the tests everyone else has ever written. One fails. «That's strange. It's been almost a month since I've had a test case break during integration»,— you say. No problem. We debug the test case and fix the code. Then we run the whole suite again. This time it passes. We release our code.

That's the whole XP development cycle. Notice that 1) Pairs of programmers program together. 2) Development is driven by tests. You test first, then code. Until all the tests run, you aren't done. When all the tests run, and you can't think of any more tests that would break, you are done adding functionality. 3) Pairs don't just make test cases run. They also evolve the design of the system. Changes aren't restricted to any particular area. Pairs add value to the analysis, design, implementation, and testing of the system. They add that value wherever the system needs it. 4) Integration immediately follows development, including integration testing.

This time it went way better. And I did it differently: I looked at the editor only when I felt I was making a mistake. Though that was rare, it made a real breakthrough.

Ivan Yurlagin