Wednesday, October 6, 2010

Who Do We Trust?

In the old days of the internet (10-15 years ago) there used to be about 10-20 trusted root CAs installed on an operating system. On my Windows XP machine at work I have hundreds.

For those of you who don't know what I'm talking about, here's a basic intro:

Root Certificate Authorities

When you connect to certain websites you will see a "secure lock" icon appear in your browser, at an address that starts with https.
What this is supposed to mean is that the connection is "secure".
This security is provided by a number of protocols; most importantly, each website has a certificate that says "company X owns this address".
Now the problem is that you can't just take someone at their word - you need some trusted third party to verify that this person is who they claim to be.

This is a similar idea to a personal ID: your government serves as a trusted authority that issues a certificate (the ID) verifying that someone is who they say they are.
In the world of the internet, this authority is called a Certificate Authority (CA).

The difference between the government and the internet in this case is that we implicitly assume that the government is a trusted issuer of IDs.
In the world of the internet, there is no single accepted, trusted authority that we can count on to produce these IDs. Several commercial organizations took on this role and have come to be accepted as "trustworthy".
This makes them what is known as a Trusted Root Certificate Authority.
What this means is that certain organizations were accepted as trusted and are allowed to certify the identity of others.
The root CA is responsible for the validity of the certificates it provides and holds the power to revoke them if they are misused or stolen.
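
If you want to see that list of trusted roots for yourself, here is a minimal sketch in Java (the class name ListRootCAs is just an illustrative choice). It uses the standard TrustManagerFactory API to read the JVM's default trust store, which is typically seeded from the operating system's or the JDK's bundled root list:

```java
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import javax.net.ssl.TrustManager;
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;

public class ListRootCAs {
    public static void main(String[] args) throws Exception {
        // Passing null tells the factory to use the JVM's default trust store
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init((KeyStore) null);

        for (TrustManager tm : tmf.getTrustManagers()) {
            if (tm instanceof X509TrustManager) {
                X509Certificate[] roots = ((X509TrustManager) tm).getAcceptedIssuers();
                System.out.println(roots.length + " trusted root certificates installed:");
                for (X509Certificate root : roots) {
                    System.out.println("  " + root.getSubjectX500Principal().getName());
                }
            }
        }
    }
}
```

Run it and count the lines - the number is usually well past the 10-20 of the old days.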

The problem is that once a certificate authority is accepted as root, it holds tremendous potential for abuse.
So when you find yourself with hundreds of them installed on your machine, something is very wrong.
What could a rogue root certificate authority do with its power?

First of all, a rogue root CA is in a sense unstoppable - once a certificate is accepted as trusted in your machine's local certificate store, there is no "higher authority" that can revoke it.

If someone gains control of a root certificate authority they can use it to fake the identity of anyone.
This opens up all "secure" traffic to an undetectable man-in-the-middle attack.

Man In The Middle

A man-in-the-middle attack works something like this:
Say that you and I talk to each other on the phone.
We both assume that you are listening to me when I say something, and that when you speak I am hearing you talk.
Now let's assume we've never met and neither of us knows what the other one sounds like.
I call you and I assume you answer, but in fact some other person is on the other end, and he is on a separate call with you.
They repeat most of what I am saying to you, and most of what you are saying to me.
But since they are in the middle of the line, they can change what is being said.
Neither one of us is even aware that something is wrong...

This is the essence of a man-in-the-middle attack. On the internet, the equivalent of you and me knowing what the other sounds like is the certificates given to us by the root certificate authority.

What could someone do if they were able to impersonate a root CA?
They could monitor any secure channel you have - your email, your Facebook account, basically anything you log into. They could also act on your behalf, and you would not even suspect something is wrong; no software would alert you, no antivirus.

DON'T PANIC

My intention is not to alarm anyone about someone reading your emails because of a compromised root CA, but to point out that the more root CAs are installed by default on our operating systems, the greater the chance that one of them is compromised.
And my computer has root CAs installed from certificate authorities around the world, many of which I do not consider the least bit trustworthy.

Saturday, May 22, 2010

Introduction To Programming - What is Code?



In the most technical sense, computer code (or source code) is a series of instructions that can be translated, through a process that varies from trivial to complicated*, into machine "language".

While technically correct, the previous definition is pretty boring. It is also very far from what actual code is like. It certainly doesn't help us understand what it's for and why it works the way it works - which I think are the more interesting questions.

Before I try to talk more about what real world code is like and some more interesting technical aspects, I would like to say that there are many different programming languages and there are many things that can be considered computer code, from the markup language called HTML that describes the page you are reading to the Assembly language that is used to write processor instructions.

It's hard to cover the huge variety of what can be considered "code" in a single, broad definition so instead of doing that I will try to explain how code is used in some of the common programming languages and the why's, how's and what's of this code.

Why do we need it?

Computer code is a tool we use to describe ideas, concepts and processes in a kind of functional way that can be converted into machine language.

We need this tool because there is a dividing line between what can be easily and efficiently done in hardware - the physical components of a computer, primarily in the processor - and what can be efficiently done in software. Code begins pretty much where the hardware leaves off.

Processors are good at things like adding one number to another number, moving bits from one place to another, multiplying, subtracting and going through these instructions in sequence. They are also good at performing logical decisions like "if this number is a zero move to the next instruction, if it's not zero jump 5 instructions".

Machine language consists of these basic instructions. There is a language called Assembly which exists directly "above" machine code; it allows giving instructions in a way that translates almost immediately into these instructions. It is rarely used for complicated things and is considered one of the toughest and most respected specializations in computer programming. Its proximity to the actual hardware gives the programmer the most control of any language over what goes on in the processor, and it allows for some of the most efficient code that can be written.

This control comes at a cost - expressing complex ideas is very hard, writing even the simplest programs requires a lot of thought and training and even with training reading Assembly is a tedious job. A good knowledge of Assembly is one of the essential tools for hackers.

Assembly is rarely used in software development. Languages like Java, C and C++ are far more popular, as they allow describing more complicated structures and expressing complicated ideas in simpler ways than Assembly.

Two lines of code in Java, like these, counting to 100 and printing each number:

for (int i=0; i<100; i++)
    System.out.println("i = "+i);

will translate into hundreds of instructions in Assembly.

When programmers speak of computer code, they rarely talk about Assembly.
The thing is, using other more "high level" languages like Java or C++ requires some kind of translation into machine code, as this is the only thing that can run on the physical hardware. This is why a process called "compilation" was introduced, which translates higher level languages into machine code. This abstraction allows us to create far more complicated software in exchange for giving up control of the specific instructions given to the hardware.

Higher and more specialized languages exist that are even further away from the processor.
The common thing to all programming languages is that they exchange some level of control over what really happens on the machine for clarity and ease of use.

The building blocks

We discussed the why, so now let's talk about what code is made out of.

The goal of any higher level programming language is to provide the syntax and semantics to express complicated ideas in a coherent, manageable and extendable way.

There are some concepts that are shared between most languages, these are:
variables, conditionals, loops, procedures (or methods/functions) and mathematical operations. Another important common concept is the Comment. Slightly less common are classes.

To better understand what code is let's look at the building blocks.

Variables - variables are a way of giving a name to some of the computer's memory in a way we can easily use in the code.
For example, we can define:

int myVariable = 100;

Now we can refer to "myVariable" in other places in the code; we can change its value, add to it, subtract from it or compare it to other values.

Conditionals - conditionals are a way of asking for one thing or another to happen, depending on some logical condition.
For example this rule:


if (myVariable<100)
    System.out.println("My variable is less than 100");
else
    System.out.println("My variable is equal to or larger than 100");

States that the line "My variable is less than 100" should be printed if the value within "myVariable" is less than 100, and the line "My variable is equal to or larger than 100" otherwise.

Loops - loops are a tool we use when we want to perform repeated operations. One of the useful properties of computers is that they can do certain things many times and very fast. Loops are a way to facilitate this property. A loop generally consists of one or more instructions we want to perform and some condition that will make the loop stop.
For example:

for (int i=0; i<100; i++)
    System.out.println("i = "+i);

This code says something like this:
"For as long as the variable "i" - which starts out at zero - is smaller than 100, perform the next instruction and then increase "i"."
Put another way - do the next line 100 times.
That next line says: write to the screen the text "i = " followed by the value of i.
The output would look like:
i = 0
i = 1
...(and so on, up to i = 99)

Procedures/Functions/Methods - Procedures are one of the mechanisms that allow us to create new tools and break apart complicated logic into smaller, reusable and more manageable parts.

It is an essential mechanism when we want to collaborate with other programmers and logically separate different parts of our program.

One example of a function is the "System.out.println("some text");" that I was using in other examples.
In Java, this call hides all the complicated logic required to tell the computer to print out the text behind a very simple, easy to use line of code.

Because someone else did the work of writing this method for us, we don't need to worry about how exactly it was done or how to do it - we only need to know what it's supposed to do and how to use it, which is far simpler than writing it in the first place.
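
We can also write such methods ourselves. Here is a small sketch of defining and using a procedure of our own; square is just an invented example:

```java
public class Procedures {
    // A small procedure of our own. Like System.out.println, it hides
    // the details of its work behind a simple, reusable name.
    static int square(int number) {
        return number * number;
    }

    public static void main(String[] args) {
        // The caller doesn't need to know how square() works inside,
        // only what it is supposed to do and how to use it.
        System.out.println("The square of 7 is " + square(7)); // prints "The square of 7 is 49"
    }
}
```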


The Comment - a comment is just what it sounds like: commentary about the code that was written.
Code files are no different than other text files - that is, they are simply text. The compiler takes our text and converts it into machine code. By marking certain lines as comments we can tell the compiler to ignore these lines. This allows us to communicate with the reader and provide more insight into the code.

Comments are a vital component of programming languages and are one of the few tools that are dedicated to communicating our intentions to other programmers.

A comment in code looks very much like other text, but it contains a marker that tells the compiler to ignore it. In Java, everything after "//" on a line is a comment:
//This is a comment, it can contain any text we want and many special characters and we can use it
//To help other programmers understand the code we are writing
//For example
//The following code computes 2 * 2 and sets the result in myVariable
//Assign the value 2 to myVariable
int myVariable = 2;
//Multiply myVariable by itself, and set it to myVariable
myVariable = myVariable*myVariable;

Classes - these are more complicated elements and harder to explain without more technical background. In general terms, classes are a way of creating useful metaphors and using them in code. They are the foundation of what is called "Object Oriented Programming", which has been the ruling paradigm in software development for many years.
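
To get a taste of the idea, here is a minimal sketch of a class; BankAccount is an invented example, not part of Java itself:

```java
// A class is a metaphor we can define and then use in code.
public class BankAccount {
    private int balance; // every account object keeps its own balance

    void deposit(int amount) {
        balance = balance + amount;
    }

    int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        BankAccount account = new BankAccount(); // create an "object" from the class
        account.deposit(100);
        account.deposit(50);
        System.out.println("Balance: " + account.getBalance()); // prints "Balance: 150"
    }
}
```

The rest of the code can now talk about "accounts" and "deposits" instead of raw numbers in memory - that is the metaphor at work.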


What is code?

Computer code and its components serve a dual function:
The first is to help us describe our ideas in terms that allow us and other programmers to read and understand the ideas behind the code - the purpose of what we want to achieve with it.
The second is that the code must translate into machine code that can perform the function for which it was written.

This duality is important to understand - all working code essentially complies with the second requirement: it will run and perform its function. The first and more important requirement - to provide a coherent picture of what the writer of the code intended - is an often overlooked** and vitally important role of code. This requirement to be clear and coherent is central to the job of a programmer. It is the hardest to achieve and is rarely taught.

The compiler imposes certain rules. For example, it requires that lines be terminated with a semicolon, and that a parenthesis that was opened must also be closed. It also requires the use of specific "keywords" to express specific ideas. A conditional must always take a certain form (as demonstrated above). Many compilers are "case sensitive", which means that MyVariable is a different variable than myVariable simply because one uses a different capitalization on the "m".

Despite all of these constraints, the compiler doesn't particularly care about the "shape" of things or about the names of things. That is - it doesn't care what names you use, as long as you follow the rules of syntax.
In this sense a compiler is like a strangely strict teacher that will accept any answer as long as it is spelled and punctuated correctly.
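
The case-sensitivity rule mentioned above can be seen in a tiny sketch (CaseSensitivity is just an illustrative name):

```java
public class CaseSensitivity {
    static int sum() {
        // To a case-sensitive compiler these are two unrelated variables,
        // differing only in the capitalization of the "m".
        int myVariable = 1;
        int MyVariable = 2;
        return myVariable + MyVariable;
    }

    public static void main(String[] args) {
        System.out.println(sum()); // prints "3"
    }
}
```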

The ability to name things, comment on them and use more coherent and reusable structures in code exists so that programmers can impart meaning to the code that would not exist otherwise. This meaning exists only in the minds of the programmers who read the code - the computer does not "understand" the intentions of the programmer. It does not even "see" the original code as written by the programmer.

The vast majority of the lines of code written, and much of the effort in writing them, is dedicated to conveying an idea. Of course, it's important that they actually perform the function for which they were written, but it's at least as important, and often far more important, that they do so in a way that describes ideas simply, coherently and, on some rare occasions, beautifully.


* - The complexity of the translation depends primarily on the language, compiler and hardware involved.

** - When I was in high school my teacher would often complain that my code was completely incoherent. I would argue that it works. It was only when, in my second year of high school, I tried to re-read some code I had written the previous year that I really understood why writing well formatted, well commented and coherent code is so vitally important. Writing incoherent code means that to use it or change it you must first spend a long time understanding it.

Monday, May 17, 2010

Introduction to programming - What is programming?

I've been wanting to write something like this for a long time.
My goal is to try and explain computer programming and more general things about computers in a way that I hope will be interesting even to those who are not particularly interested in learning programming.

So, let's get started.

What is computer programming?

Well, generally speaking it's about making computers do whatever you want them to.
This is not to say that I or any other programmer CAN make them do anything we want, actually we work under pretty limited constraints. It is by using these rules, constraints and limitations and manipulating them that the richness of software you see around you can be achieved.

In a way, practically any interaction with a computer system is about understanding, testing and manipulating its capabilities and its bounds. Every interaction with a computer system carries within it the essence of programming.

The basic job of a programmer is to use their understanding of the internal workings of computers and provide a more natural and intuitive interface so that other people can take their programs and make some use of them.

It's easier to use an example, so let's talk about MS Paint. I am sure pretty much everyone has seen or used MS Paint, or something that resembles it, at some point.
Let's try and take a look at the basics of MS Paint through the eyes of a developer.
We'll be working under an operating system (e.g. Windows).

To a programmer an operating system is an environment that provides many many services.
In the same way that someone who wants to draw simple drawings can open Paint, choose a color and click their mouse, programmers receive tools and services from the operating system.
We don't have to worry about controlling the movement of the mouse cursor, controlling the various complicated mechanisms required to draw something on the screen, managing the memory, hard disk or any one of the many components needed to make computers work.

So what do we have to do?
Well, the operating system, along with our programming environment, already covers many of the things we want to do. They will provide us with a window to work in and buttons to click; they will let us know when the user clicks the mouse inside our window, where they clicked, and which button they clicked.
We also get something that can be thought of as a canvas that we can draw things on. To draw something we change the color of a pixel (a pixel is the smallest possible dot we can draw on the screen) to whatever color we want.

A simple recipe for a program that lets you draw with the mouse might look something like this:
1. Draw a window 200x200 pixels (there is literally something called a Window or a Form, and you can set its size)
2. When the mouse is clicked, change the pixel in the same position as the mouse to the color black. (You can "ask" the window to let you know whenever someone clicks on it, and where it happened - then you can tell the window to change the color of the pixel at that point.)
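
The second step of that recipe can be sketched without any actual window, using Java's BufferedImage as the "canvas". The click() method here is an invented stand-in for the mouse-click notification a real window would give us:

```java
import java.awt.image.BufferedImage;

public class TinyPaint {
    // Step 1 of the recipe, minus the window: a 200x200 "canvas" of pixels.
    static BufferedImage canvas =
        new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);

    // Step 2: when told a click happened at (x, y), change that pixel to black.
    // In a real program the window would call this for us on every mouse click.
    static void click(int x, int y) {
        canvas.setRGB(x, y, 0x000000); // 0x000000 is black in RGB
    }

    public static void main(String[] args) {
        // Start with an all-white canvas
        for (int x = 0; x < 200; x++)
            for (int y = 0; y < 200; y++)
                canvas.setRGB(x, y, 0xFFFFFF);

        click(10, 10); // pretend the user clicked at (10, 10)
        System.out.println("Pixel at (10,10) is now black: "
            + ((canvas.getRGB(10, 10) & 0xFFFFFF) == 0x000000)); // prints "true"
    }
}
```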

Doesn't look all that complicated, right? That's mostly because it really isn't all that complicated. It's not so far from opening Paint and clicking the mouse.
Of course, I am oversimplifying certain aspects of what needs to be done - and there are languages and operating systems where doing what I just described is far more complicated.

But that's like saying it's easier to dig a hole with a shovel than it is with your bare hands - true, but not very interesting :)


In its essence, computer programming is about creating tools for other people. I doubt any of the people who wrote MS Paint ever really used it; personally, I rarely use any of the software I write (other than during development). What we did in our little programming thought exercise was to take an idea - drawing pictures with your mouse - use our imagination (albeit limited) and the tools at our disposal, and create something for someone else to draw with.

Now that we know how to manipulate pixels by changing their color, we don't strictly need something like a mouse to draw things on the screen; but by finding a simple interface that other people can easily use, we created a simple tool.


So what is computer programming? For me it is about taking our ideas, knowledge and understanding of what computers are and how they work, and using it to create tools for (mostly) others to use.
Most professional software developers writing code today do exactly this - they spend their days thinking, writing tools and solving problems so that others can become more productive.

I'd be happy to get your thoughts/questions/comments.

Friday, April 23, 2010

No Gaming For Old Men

I know this might sound like the complaints of a grumpy old* gamer (because they are :) ) but I feel like I haven't played a truly new game in ages.

I grew up in the 80's and 90's, when "electronic" games were constantly reinvented and evolving; there was always something new coming out. Some better, some worse, but never boring.
Today gaming is a multi-gazillion dollar industry, and despite this, very little has happened in the past decade of gaming, especially when we compare it to the 90's and the 80's. I think more major genres have died than evolved in recent years.

I don't know why. Perhaps the developers are too far from the hardware; it's very easy to keep making first person shooters when you have all these great engines out there. But when id did Wolfenstein and then reinvented the genre with Doom, they worked with the bits and bytes - they even invented a special number for Quake 3. There is no doubt we tend to be limited by the tools we have; sometimes having no tools means more freedom of thought and more wild experimentation. Perhaps it's the very high cost of entering the game - there are some very powerful players in the market, and perhaps it's easier to make a well defined game in a well defined genre than to try something new.

I think most of all what bothers me is that I feel there has been little or no evolution in any of my favorite genres, and in most cases there haven't even been any serious attempts to change anything.
As I have some more detailed complaining to do, I thought I would direct my complaints at the individual genres.


Real Time Strategy

Many would disagree, but frankly I feel that there hasn't been anything new in RTS's since the genre was truly defined by Westwood with Dune 2 and, to a lesser extent, Blizzard with Warcraft.
What's even more depressing is that since Starcraft - which was nothing innovative, but simply a masterpiece of balance - there have been no (interesting/worthwhile) developments in the genre. And from what I am reading, Starcraft 2 doesn't seem to be bringing anything new to the table either. I think the only game to try and bring something new to the table was the bizarre Perimeter by Codemasters. Sure, it was innovative for the genre - but it really didn't work.


Role Playing Games

I love role playing games; I have loved them since I first played Eye of the Beholder. Not only that, but I love most sub-genres of RPGs. My problem with this genre is that I feel it has been losing its way recently. Western RPGs have been dying since around Gothic 2 (which, together with Gothic 1, is at the top of my list). Bethesda created a beautiful game in Oblivion but was too lazy to bother with any gameplay. Why they leave those endless piles of crap in the game world I will never understand. I also don't see why they can't be bothered to balance their game before releasing it - if someone at Bethesda happens to read this, try to understand: auto-leveling means you never advance, and advancing is the cornerstone of role playing gameplay. And no - your pathetic attempt at "capped" auto-leveling in Fallout 3 was not "good" (but at least there was *some* point to leveling). A bigger problem is possibly that others began to follow - Borderlands seems to use a similar misguided system, and while it was fun to play, it wasn't fun to power-play.
And last but certainly not least is Piranha Bytes, makers of the excellent but unjustifiably relatively-unknown Gothic and Gothic 2 (and, inexcusably, also Gothic 3 and Risen). Gothic (1 & 2 - I am not counting the disappointing 3) was a wonderful series. It focused on a very simple and very clear damage system, along with interesting battles that truly required varied tactics for each creature, a class system that created truly separate gameplay (to the extent of changing some plot elements), and a perfectly balanced world where you really wanted to slay every possible enemy and gain every bit of experience you could. The system was so rewarding that you always tried to get every tiny bit of advantage while still keeping the gameplay challenging.
So Gothic 3 was a disappointment - even if we disregard the crappy and crash-prone game engine, the gameplay which made the Gothic series was gone. It seems to have been "simplified" for the unknown masses, and it lost the balance and challenge that made the previous games so great. Piranha Bytes moved on to create Risen, which is a pale, unbalanced shadow of Gothic with very little plot and far too easy gameplay.
Finally, there's the KOTOR series, which was excellent and a fine example of the genre; I would love to play another title in that series.


Simulation

My favorite genre in simulation is racing, and I must say I have no complaints in this area - a good simulation game aspires to, well, simulate reality. Titles like Forza Motorsport, Dirt and the first serious Need for Speed - Shift - are excellent, and the genre is constantly becoming more and more realistic, with an increasingly accurate level of simulation. Sure, I could hope for some faster loading on the Xbox 360 (in some titles loading time is proportional to racing time), but other than that - love it, keep up the good work.


First Person Shooters

Well, I could say that first person shooters haven't really changed since the days of Wolfenstein and Doom, and in some respects that's true, but in most respects they are climbing my genre-preference ladder.
Games like Far Cry (excellent gameplay and excellent AI) and Call of Duty - Modern Warfare (1 & 2) bring a truly cinematic experience to FPS gameplay. In a sense, FPS's and simulations have a lot to do with technology; the level of detail and quality of graphics are the first, and for many the most critical, factors in FPS's. Call of Duty demonstrated that a great plot is just as important. All in all, the genre is doing what it has always done - keep pushing the limits of the hardware to show more realistic ways of simulating shooting people with a wide variety of weapons.


Quests (aka** Adventure games)

Quests are dying, and it's a real shame. Actually, the genre is all but dead. Telltale Games are the only ones who seem to be keeping the genre alive these days, and the success of their latest Monkey Island shows that there are enough of us around to appreciate the genre (I also recommend Strong Bad's Cool Game For Attractive People; if you read this far, this game is definitely for you). As one of the most successful genres of the 90's, it's surprising that almost no one is still around. LucasArts moved on to other pastures and Sierra basically died... very sad. Oh, and I just remembered there's also the excellent Machinarium by Amanita Design.


Turn based Strategy/Management

Nothing much to say here; it's hard to talk about change or advancement in this genre. Graphics are slowly improving over time, and the notable titles (say, Civilization) are changing and tweaking the gameplay, but somehow nothing exciting has happened there in years. Some new and interesting title would be good. I would also settle for... actually, I would be really excited about bringing back, say, Master of Orion or Ascendancy. You know, thinking about it, I can't think of any worthwhile recent turn based 4X game besides the Civilization series, and even that one has taken a turn for the worse with its latest "Civ Revolution" (although Civ 5 should be coming out soon, so at least they didn't abandon the fans).


Casual*** Gaming

I have a love-hate relationship with casual gaming. On the one hand, this is one area of gaming where there is still a lot of innovation: there are many new kinds of addictive puzzle games, castle defense, tower defense and, notably, World of Goo, along with other less-defined categories of casual gaming. On the other hand, you have all those social games on Facebook and other platforms. It's not that I mind what people play; it's that I feel they are lowering the bar that game creators have worked so hard to raise all these years.


MMORPGs

Finally, the last genre on my list - MMORPGs - my nemesis.
As I mentioned before, I love RPGs, and because of this I have never played an MMORPG and never will.
So why have I never played an MMORPG? The simple answer is addiction.
When I started playing Gothic 2 for the first time, I played it for about a week straight.
By "a week straight" I mean that I barely ate and I slept when I could no longer concentrate on the game. I did almost nothing else. The reason I allowed myself to live with such a level of addiction untreated is that it's short-lived. Once every few years there's a title good enough to warrant this level of addiction, but the experience is bound to end after a very short time. Even the most search-every-corner-twice-and-then-go-back-to-make-sure gameplay of the most complex RPG will still end after 40-100 hours. MMORPGs never end - that's the whole point of them. I can't allow something to completely take over my life, so I can't even try them.
Why are they my nemesis? Because a lot of excellent titles in my favorite genre are going MMO, which leaves fewer and fewer non-MMO titles for me to play...
Also, I am pretty sure MMOs are the first phase in the evil plan of the machines to enslave us by making us live in a virtual world.

* - well, old in gamer terms
** - they are mostly known as Adventure games, but in Israel we call them "Quests" (or "questim" in Hebrish)
*** - how can anyone call spending 10 hours a day on FarmVille "casual"?

Tuesday, March 23, 2010

Strange New Spam

I just got a new comment on one of my articles (Estimating The Cost of Game Piracy) saying:

"Well your article helped me altogether much in my college assignment. Hats off to you dispatch, will look audacious in the direction of more related articles promptly as its anecdote of my favourite subject-matter to read."

Every other day I get a message from some bot selling one of the spammers' greatest hits.
Naturally, when I saw another "New Message from Anonymous" I immediately suspected spam.
Reading the comment, I wasn't sure - it starts out reasonably coherent - but "Hats off to you dispatch" really caught my eye, as did "its anecdote of my favourite...".

The text reads as very much generated - I don't know if it's some kind of placeholder message for spammers, or if some CS student somewhere is running an experiment with comment generation. (It's an excellent way to do a low-cost Turing test: most blog comments are moderated one way or another, so if you managed to post a message and it wasn't deleted, you could reasonably estimate that the writer passed for human.)

So I ran a Google search for the strange expression "Hats off to you dispatch" - currently it returns 3 results - all of them look generated, and all are from the past week or so.

One of them is identical to the comment above; the others are:

"Approvingly your article helped me truly much in my college assignment. Hats off to you dispatch, choice look audacious in the direction of more related articles in a jiffy as its anecdote of my choice subject-matter to read."

"Correctly your article helped me terribly much in my college assignment. Hats off to you dispatch, wish look ahead in behalf of more cognate articles without delay as its united of my choice topic to read."


Looking at the comments, there are two major connecting elements:
"your article helped me" and "college assignment".

Running this through Google returns about 80 hits at the moment; from my surface scan, the comments are from the past week and each and every one of them seems to have been generated in the same way.

I wonder who the source of this was - and if it is indeed some experiment, and the experimenter happens to stumble over this post, let me know :)


Update:
I scanned a few more of those comments (Google now lists about 230 hits).
They seem to be related to "Debt Relief" - some of the commenters are named Debt Relief, and where they can, they link. So it looks like it's just another piece of spam flying around.
I guess they didn't make sure the blogs allow links before posting..


Sunday, February 28, 2010

Estimating The Cost of Game Piracy

The Motivation

A few years ago we had an idea for how to create a fairly hard-to-crack DRM.
Coming from a security background (among other things), we knew what most people in the field know - all defenses can be breached. It is simply a matter of resources.
In the case of game copy protection, we knew that hackers and crackers invest considerable resources in breaking copy protection. A good[1] copy protection can buy whoever is using it a few months at most. We figured that if we could make a sufficiently flexible and complex copy protection, we could buy about 6 months for a given game before the protection was broken.
To see if this was a worthwhile cause, we wanted to know the real value of a good copy protection. We also wanted to know if the 6 months we believed we could get would be good enough[2].

I believe that the value of a good copy protection is the number of additional copies of a given game that would be sold if the game could not be copied, minus the number of copies that would have been sold had those resources gone into improving the game (or into marketing - in other words, the alternative cost).

It is very hard to say how many copies of a game could be sold if some additional funds are put into any aspect of making a game. It is also very hard to tell how much is lost due to the additional inconvenience introduced by some of the more intrusive protections[3].

On the other hand, it might be possible to find out how many games could be sold if there were an unhackable copy protection.

Looking For Numbers

In my quest for numbers I ran a search to see if there was any research that could reasonably estimate how many copies of a given game could have been sold had it been protected.
Unfortunately, none was available. What I found was plenty of articles citing the BSA and IIPA for pirated-software numbers and estimates of the income lost to those pirated copies.
This might be considered good evidence, but after taking a more in-depth look at the numbers and methodology, I decided I don't really trust the results (I might go into that in a separate article - but I am certainly not the first to question those surveys).

I also found this very comprehensive article about game piracy in general. I recommend it to anyone who wants to get a good understanding of the different aspects of game piracy.

So the question remained - how do you measure the actual loss due to piracy?
If you want to compare things, finding something that is reasonably similar is a good start.
Like The Economist's Big Mac Index - you want a product that is as close as possible in quality, make, reputation, etc. if you want to make a good comparison.
I think I found such a product for piracy-loss research - cross-platform titles.

A Different Approach

There are two[4] seventh-generation consoles on the market - Microsoft's Xbox 360 and Sony's PlayStation 3.
These consoles are arguably[5] the same in terms of interface, level of graphics, and the type, quality, and number of titles released for them.
The Xbox 360 was hacked many years ago (I think sometime in 2006).
The PS3, on the other hand, has not yet been hacked. Sony has made considerable efforts to secure the PS3, and so far - 4 years and counting - it has held.
My argument is that since the Xbox 360 has been hacked and the PS3 has not, the difference in sales of various titles should reflect the loss of revenue due to piracy.


In recent years more and more titles are being released for both consoles. These games constitute a comparable (identical?) product. Most cross-platform titles look the same on both consoles, are distributed by the same publishing houses, enjoy the same level of publicity, and have the same kind of gameplay, story, and whatever other aspect you may choose to consider. They are also - for the most part (and for all games considered in this research) - released at the same time.

A good comparison should consider alternative or competing products. This aspect is much harder to account for. I cannot reasonably claim that it doesn't make any difference, or that I can accurately measure or correct for it. I hope I can persuade you that it is of reduced importance for two reasons:
1. If you look at the current (at the time of writing) top-sellers lists of both the Xbox 360 and the PS3, you'll find that 5 of the top 10 sellers on the PS3 list are cross-platform titles (3 of them are the top sellers), and 6 of the top 10 sellers for the Xbox 360 are also cross-platform.
2. To further reduce the impact of console-specific titles on the overall numbers, I tried to include as many games as I could find in the research.

Still, I acknowledge that competing exclusive titles have an impact on the overall results. But I think the numbers themselves demonstrate that this impact is not likely to be too significant.

Correcting For Console Sales Difference

The final significant difference between the consoles is the number of consoles sold.
Currently Wikipedia (numbers for Dec 2009) cites 33.5 million PS3 consoles sold vs. 39 million Xbox 360 consoles sold. For the sake of transparency I am showing both the adjusted and unadjusted numbers.
It should be noted that without correcting for the number of consoles sold, the total number of copies sold across all the titles I looked at is significantly higher on the Xbox 360.

Cumulative game sales for 56 leading titles by console (unadjusted):
Without correcting for the number of consoles sold, there is a predictably higher number of copies sold on the Xbox 360.


Cumulative game sales for 56 leading titles by console, adjusted by the number of consoles sold:
As you can see - once we adjust for the number of consoles sold, the cumulative number of copies sold is almost identical. The green line at the bottom shows the difference between the consoles.
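The adjustment above is just a per-console attach-rate normalization. Here is a minimal sketch of it - the console counts are the Wikipedia/Dec-2009 figures quoted above, but the per-title sales numbers are made up for illustration and are not the actual data behind the graphs:

```python
# Install bases quoted above (Wikipedia, Dec 2009).
PS3_INSTALL_BASE = 33.5e6
X360_INSTALL_BASE = 39.0e6

def adjusted_ratio(ps3_sales, x360_sales):
    """PS3/Xbox 360 sales ratio after normalizing each side
    by its console install base (an attach-rate comparison)."""
    ps3_rate = ps3_sales / PS3_INSTALL_BASE
    x360_rate = x360_sales / X360_INSTALL_BASE
    return ps3_rate / x360_rate

# Hypothetical cross-platform title: 1.0M PS3 copies, 1.25M Xbox 360 copies.
ratio = adjusted_ratio(1.0e6, 1.25e6)
print(round(ratio, 3))  # -> 0.931
```

On these hypothetical numbers the adjusted PS3-to-Xbox 360 ratio comes out around 0.93, i.e. roughly a 7% gap - the same order of magnitude as the 5-10% figure the post arrives at.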

Estimating The Cost Of Piracy

The following graph shows the ratio of adjusted PS3 sales to Xbox 360 sales.
I believe this ratio can, at least in part, be explained by piracy.


According to these results the estimated loss due to piracy is around 5-10%.
However, it should be noted that this holds only if (and it's a huge IF) all of the difference in sales of these titles between consoles is due solely to piracy.
There are many other factors that may affect game sales.
If we assume that game piracy incurs a significant loss and is by far the most significant factor affecting the sales of these titles between consoles, then this number may be accurate.

Personally, I do not believe this to be the case. I believe there are so many different possible factors at play that they must also contribute (positively or negatively) to the difference in game sales between consoles.
In other words - I do not believe there is enough of a difference in the number of copies sold to reasonably claim that piracy incurs a measurable loss on game sales.




[1] - Good at being a copy protection. The most successful copy protections tend to be pretty evil in what they actually do.
[2] - We were wrong, by the way. It seems that games sell mostly during the 2 months before Christmas, which is probably why many releases are targeted at that window.
[3] - Sony BMG's case is perhaps the most famously intrusive and destructive of those.
[4] - Well, Nintendo would have you believe that there are 3, but you can't really compare the level of graphics or the gameplay on the Wii to the other consoles. The Xbox 360 and PS3 are consoles for gamers; the Wii is the casual gamer's console. You also can't reasonably claim that the titles released for the Xbox 360 and PS3 are the same as the ones released on the Wii.
[5] - This is the subject of many arguments. For the sake of full disclosure, I own an Xbox 360. I no longer claim it has better graphics than the PS3 - only that both consoles are practically the same.


This is the list of titles whose sales numbers I used:
Army of Two
Assassin's Creed
Bad Company
Borderlands
Bound in Blood
Brutal Legend
Burnout Paradise
Carbon
Dark Sector
Dead Space
Devil May Cry 4
DiRT 2
Fallout 3
Far Cry 2
FIFA Soccer 09
FIFA Soccer 10
Fight Night Round 3
Fight Night Round 4
Grand Theft Auto IV
GRID
Guerrilla
Guitar Hero III
Hell's Highway
Los Angeles
Madden NFL 09
Madden NFL 10
Mirror's Edge
Modern Warfare 2
Modern Warfare
Mortal Kombat vs DC Universe
NBA 2K9
Need for Speed Carbon
PES 2009
Prince of Persia
Project Origin
ProStreet
Prototype
Pure
Saints Row 2
Sid Meier's Civilization Revolution
Sonic Unleashed
SoulCalibur IV
Street Fighter IV
Tales of Vesperia
The Force Unleashed
The Simpsons Game
Tiger Woods PGA Tour 09
Top Spin 3
Undercover
Wet
World at War
World in Flames
World Tour
WWE SmackDown vs Raw 2009
WWE SmackDown vs. Raw 2010

Wednesday, February 17, 2010

Aspect Based INotifyPropertyChanged Implementation

There are quite a few articles floating around about how to implement an aspect-oriented version of the INotifyPropertyChanged interface.
My version is based on an article by Tamir Khason posted on CodeProject.

The original version had some problems - placing the attribute on both a parent and a child class caused an exception to be thrown, and because the interface was implemented at compile time by PostSharp*, it could not be used without an unsafe cast.

My version requires that the class implement the interface directly, adds an interface that, besides reporting which property changed, also provides the pre-change value (useful when you want to support undo, for example), and checks with the implementing class to see if anyone is registered (skipping all the other checks if no one is listening). We then check whether the value actually changed and only fire the notification on a real change - this might not be the required behavior in all cases, so the check can easily be removed.

So, performance is probably better, and usage is pretty straightforward - implement INotifyPropertyChanged and IFirePropertyChanged and mark your class with the NotifyPropertyChanged attribute.
If this is implemented on a base class, all children will also fire change notifications for all their properties. I am adding a sample base class implementation as well.
Any class that extends this base class gets free change notification for all its properties.

Example Base Class:


using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using AutoPropertyChangedWiring;
using System.ComponentModel;

namespace AutoPropertyChanged
{
    /// <summary>
    /// Base class for any class that wishes to get INotifyPropertyChanged for "free".
    /// Any class that extends this base class gets PropertyChanged events auto-implemented for all its properties.
    /// </summary>
    [NotifyPropertyChanged]
    public abstract class PropertyChangeNotifier : INotifyPropertyChanged, IFirePropertyChanged
    {
        public event PropertyChangedEventHandler PropertyChanged;

        public void FirePropertyChanged(string propertyName, object oldValue)
        {
            // PropertyChangedEventArgs does not carry the old value; it is
            // available here for subclasses that need it (e.g. for undo).
            if (PropertyChanged != null)
                PropertyChanged.Invoke(this, new PropertyChangedEventArgs(propertyName));
        }

        public bool HasPropertyListeners
        {
            get { return PropertyChanged != null; }
        }
    }
}



Aspect Implementation and base class:



using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.Text;
using PostSharp.Extensibility;
using PostSharp.Laos;

namespace AutoPropertyChangedWiring
{
    public interface IFirePropertyChanged
    {
        void FirePropertyChanged(String propertyName, Object oldValue);
        Boolean HasPropertyListeners { get; }
    }

    [Serializable, AttributeUsage(AttributeTargets.Assembly | AttributeTargets.Class, AllowMultiple = false, Inherited = true),
    MulticastAttributeUsage(MulticastTargets.Class, AllowMultiple = false, Inheritance = MulticastInheritance.Strict, AllowExternalAssemblies = true)]
    public sealed class NotifyPropertyChangedAttribute : CompoundAspect
    {
        public int AspectPriority { get; set; }

        public override void ProvideAspects(object element, LaosReflectionAspectCollection collection)
        {
            // Attach the aspect to the setter of every public, declared instance property.
            Type targetType = (Type)element;
            foreach (var info in targetType.GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly).Where(pi => pi.GetSetMethod() != null))
            {
                collection.AddAspect(info.GetSetMethod(), new NotifyPropertyChangedAspect(info.Name) { AspectPriority = AspectPriority });
            }
        }
    }

    [Serializable]
    internal sealed class NotifyPropertyChangedAspect : OnMethodBoundaryAspect
    {
        private readonly string _propertyName;
        private object oldValue;
        private Boolean fireEvent;

        public NotifyPropertyChangedAspect(string propertyName)
        {
            if (string.IsNullOrEmpty(propertyName)) throw new ArgumentNullException("propertyName");
            _propertyName = propertyName;
        }

        public override void OnEntry(MethodExecutionEventArgs eventArgs)
        {
            var instance = eventArgs.Instance as IFirePropertyChanged;
            fireEvent = true;
            // Check if anyone is listening.
            if (!instance.HasPropertyListeners)
            {
                // No one is listening - no point in moving on.
                fireEvent = false;
                return; // No need for the other checks.
            }

            var targetType = eventArgs.Instance.GetType();
            var propertyInfo = targetType.GetProperty(_propertyName);
            if (propertyInfo == null) throw new AccessViolationException();
            oldValue = propertyInfo.GetValue(eventArgs.Instance, null);
            var newValue = eventArgs.GetReadOnlyArgumentArray()[0];

            if (IsEqual(oldValue, newValue)) // Nothing changed, nothing to do.
                eventArgs.FlowBehavior = FlowBehavior.Return;
        }

        protected static Boolean IsEqual(Object left, Object right)
        {
            // Both null == both are equal.
            if (left == null && right == null)
                return true;

            // At least one is not null; if it's left, use it for the Equals call.
            if (left != null)
                return left.Equals(right);
            else // Left is null but both are not - so right is not null and they are not equal.
                return false;
        }

        public override void OnSuccess(MethodExecutionEventArgs eventArgs)
        {
            if (!fireEvent)
                return; // No need to fire the event.
            var instance = eventArgs.Instance as IFirePropertyChanged;
            instance.FirePropertyChanged(_propertyName, oldValue);
        }
    }
}

* - This probably goes without saying - using aspects requires PostSharp (v1.5) to be installed.

Monday, February 15, 2010

Simple Technologies For Better Car Safety

I've been thinking about car accidents recently, or rather, about ways to prevent them.
I believe that at some point in the coming decades, cars - or whatever form of personalized transport we use - will be fully computerized. Ultimately I do not think that regular public transportation will ever replace personalized transportation, but I do think we might have some form of automated point-to-point mass transportation system.

There are many different technologies currently being developed in the area of driving assistance, and some systems have even been implemented. Usually the way this works is that advancements are first implemented in luxury cars, and some of these technologies eventually become more widespread.
Some of these advancements involve various driving assistance systems, systems that "see" the road and can provide warnings and braking assistance in case the driver gets too close.

These technologies have a long way to go before they can be truly effective at preventing car accidents in common road cars. But I think it is possible to implement some simpler technologies that may help us get closer to this future, and help reduce car accidents in all cars.

Enhanced Brake Warning:
The first idea is to implement a slightly more sophisticated brake light system.
Current brake lights are pretty much boolean - on or off.
I think it shouldn't be too complicated to have levels - say, 3 levels (maybe even three lights) indicating a "slow" brake, a "medium" brake, and a "hard" brake.
Modern cars have computers that control and monitor many of the car and engine functions, so it shouldn't be too hard to implement a system that controls the brake lights based on the car's actual deceleration instead of the current boolean implementation.
Why would this help? The way things work at the moment, when you see a brake light you don't know how quickly and how hard you need to brake. Your brain has to watch the car ahead and estimate its deceleration before you can decide how hard to brake. I think this period of time can be reduced, helping to prevent or reduce the damage in many accidents.
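The control logic itself really is trivial - here is a sketch of mapping measured deceleration to a discrete light level. The thresholds are invented for illustration; they are not taken from any automotive standard:

```python
def brake_level(decel_ms2):
    """Map deceleration (m/s^2, positive = slowing down) to a brake-light level.
    Thresholds are arbitrary illustrative values."""
    if decel_ms2 <= 0.5:
        return "off"      # coasting / negligible braking
    elif decel_ms2 <= 2.5:
        return "slow"     # gentle braking
    elif decel_ms2 <= 5.0:
        return "medium"   # firm braking
    else:
        return "hard"     # emergency-style braking

print(brake_level(1.0), brake_level(4.0), brake_level(8.0))  # -> slow medium hard
```

The car's computer would feed this from its existing accelerometer or wheel-speed data, so the driver behind sees how hard the car ahead is actually decelerating rather than a single on/off signal.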

Inter-Car Signaling:
The second idea is a little more complicated, but still in the not-too-hard category.
There are various radar- and vision-based technologies used to measure the distance and velocity of nearby vehicles. These are pretty expensive at the moment and don't work well enough to depend on.
It should be relatively cheap to implement the following:
Take a standardized radio transmitter broadcasting a low-power signal at a known, well-defined (and fixed) frequency and power. It should then be relatively simple (and cheap) to roughly determine the distance between a receiver and a nearby source. The idea is to optimize such a system to detect the distance between the receiver - your car - and the transmitters of the cars ahead of you, behind you, and in your general area. To be effective and reduce interference, the signal should be relatively weak and fairly directional.
By placing the transmitters at the corners of the car and giving each transmitter a specific signature, it is possible to get a pretty good idea of the relative positions of adjacent cars.
If we also broadcast the car's speed using these transmitters, we can know very accurately if the car ahead of us is braking, or if a car might be coming toward us fast as we cross an intersection.

It should be pretty cheap to manufacture if standardized - and more importantly, if mandated by governments. This kind of device can easily be installed on existing cars.
The system could be used for anything from advance warnings of dangerous situations to actively engaging the brakes - this last option would probably only be applicable to computer-controlled cars.
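Because the transmit power and frequency would be fixed by the standard, a receiver can turn measured signal strength into a rough range estimate. Here is a minimal sketch under an idealized free-space (inverse-square) propagation assumption - real-world multipath and occlusion would make this far noisier. The 5.9 GHz figure is the band used by DSRC vehicle radios; the power levels are made up for illustration:

```python
import math

def estimate_distance_m(p_tx_dbm, p_rx_dbm, freq_hz):
    """Invert the free-space path loss formula:
    FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
    to recover distance d in meters from transmit/receive power (dBm)."""
    fspl_db = p_tx_dbm - p_rx_dbm
    exponent = (fspl_db - 20 * math.log10(freq_hz) + 147.55) / 20
    return 10 ** exponent

# Example: a 0 dBm transmitter at 5.9 GHz, received at -60 dBm.
d = estimate_distance_m(0.0, -60.0, 5.9e9)
print(round(d, 1))  # -> 4.0
```

Every extra 6 dB of path loss doubles the estimated distance, which is why a weak, standardized signal is enough for the short ranges (a few meters to tens of meters) this proposal cares about.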

I believe that the future of transportation lies in more advanced versions of computer-controlled transportation. I think computer-controlled transportation can be far more efficient at directing traffic, and could possibly make car accidents a thing of the past.