
Multiprocessing CodeSuite-MP

Until now there were two ways to run really big CodeSuite jobs. One was to simply run the job and wait for as long as it took; really large jobs can take as much as a week or two. The other was to run the job on CodeGrid, our framework that distributes the work over a grid of networked computers. CodeGrid shows an almost linear speedup for each computer on the grid, but it requires someone to maintain the computers and the network, and that can be a daunting job. Now there’s a third option: CodeSuite-MP allows you to run multiple jobs on a single multicore computer. We’re seeing a near-linear speedup with the number of cores, and there’s no special maintenance required. We’re even seeing a near-linear speedup using virtual cores. If you want to get a license for CodeSuite-MP, contact our sales department.
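To give a feel for why the speedup can be nearly linear, here is a minimal Python sketch of the general approach: independent units of work handed to one worker process per core. The compare_pair() helper and its dummy score are hypothetical stand-ins for whatever each unit of a real CodeSuite job does; this is not CodeSuite’s actual code.

    # Minimal sketch: spreading independent work units across CPU cores.
    from multiprocessing import Pool, cpu_count
    from itertools import product

    def compare_pair(pair):
        # Hypothetical placeholder for one unit of work, e.g. comparing two files.
        left, right = pair
        return (left, right, abs(hash(left) - hash(right)))  # dummy "score"

    def run_job(left_files, right_files):
        pairs = list(product(left_files, right_files))
        # One worker per core; speedup approaches the core count when the
        # work units are independent and roughly equal in size.
        with Pool(processes=cpu_count()) as pool:
            return pool.map(compare_pair, pairs)

    if __name__ == "__main__":
        results = run_job(["a.c", "b.c"], ["x.c", "y.c"])
        print(len(results), "pairs compared")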

The Report Generator (RPG)

The Report Generator (“RPG”) is a new program from SAFE that automatically generates draft expert reports and declarations for litigation. Reports have several generic sections such as an expert’s experience and descriptions of the technologies involved in the examination, which can be shared amongst reports. By automating the compilation of the generic information into a formatted and structured draft report, the expert can focus on performing the analysis and writing the case-specific arguments.

When using the RPG, an expert selects the type of case, type of report, types of technologies involved, types of tools used, and expert background profiles from a GUI. Then a Microsoft Word draft report is generated that includes all of the selected generic information intermixed with blank sections where case-specific information should be filled in manually.
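As a rough illustration of that workflow (the section names, snippet library, and plain-text output below are hypothetical, not RPG’s actual data model or Word formatting), here is a sketch that stitches reusable generic sections together with placeholders for the case-specific parts:

    # Illustrative sketch only: assemble a draft report from reusable
    # "generic" sections plus placeholders for case-specific text.
    GENERIC_SECTIONS = {
        "expert_background": "Dr. Example has 20 years of experience in ...",
        "tool_description": "CodeSuite is a tool for comparing source code ...",
    }

    REPORT_OUTLINE = [
        ("Expert Qualifications", "expert_background"),
        ("Tools Used", "tool_description"),
        ("Analysis", None),      # case-specific: left blank for the expert
        ("Conclusions", None),   # case-specific: left blank for the expert
    ]

    def build_draft(outline, library):
        parts = []
        for title, key in outline:
            parts.append(f"== {title} ==")
            parts.append(library[key] if key else "[TO BE COMPLETED BY EXPERT]")
        return "\n\n".join(parts)

    print(build_draft(REPORT_OUTLINE, GENERIC_SECTIONS))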

Currently, many experts either dig through their prior works to find specific descriptions or write them from scratch each time. Maintaining a library of generic report elements is a challenge, especially when multiple experts are involved. RPG acts as a version control system between multiple experts who can upload and download detailed descriptions of experts, technologies, and tools from a central server. The reports are generated according to specific formats, so an entire team of experts can easily produce reports that are consistently formatted with the most up-to-date descriptions.

RPG also keeps synchronized descriptions of CodeSuite, so it can include the most up-to-date descriptions and pricing of the tools without the expert having to search the SAFE website or the CodeSuite help files.

If you’re interested in trying out RPG, contact our Sales Department.

CodeCLOC for software transfer pricing cases

Last month we announced CodeMeasure, our new standalone tool for measuring software growth. This month we’re announcing the release of CodeSuite 4.0, which includes CodeCLOC for measuring how software evolves across versions of code. CodeCLOC uses the same algorithms that were implemented in CodeMeasure and that were developed for the landmark software transfer pricing case Symantec v. Commissioner of Internal Revenue.

You’re probably wondering what the difference is between CodeMeasure and CodeCLOC. CodeMeasure is a simple, inexpensive program for generating CLOC measurement statistics for multiple versions of a program. CodeCLOC, intended for litigation, compares only two versions of code but produces a detailed database of results that can be further filtered and analyzed using CodeSuite or your own custom tools. The results from CodeCLOC can be presented in court, and the CodeCLOC database can be given to the opposing party for verification.
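To make the distinction concrete, here is a deliberately simplified sketch of a changed-lines-of-code measurement between two versions of a file, counting lines added, deleted, and unchanged. CodeCLOC’s actual algorithms are more involved; this only shows the shape of the statistic being produced.

    # Simplified CLOC-style statistics between two versions of a file.
    import difflib

    def cloc_stats(old_lines, new_lines):
        added = deleted = unchanged = 0
        matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "equal":
                unchanged += i2 - i1
            elif op == "insert":
                added += j2 - j1
            elif op == "delete":
                deleted += i2 - i1
            else:  # "replace" counts as deletions plus additions
                deleted += i2 - i1
                added += j2 - j1
        return {"added": added, "deleted": deleted, "unchanged": unchanged}

    v1 = ["int main() {", "  return 0;", "}"]
    v2 = ["int main() {", "  puts(\"hi\");", "  return 0;", "}"]
    print(cloc_stats(v1, v2))  # {'added': 1, 'deleted': 0, 'unchanged': 3}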

CodeSuite 4.0 also has a few other nice features, including a revamped user interface. There’s a new function to generate statistics from any CodeSuite database, and the command-line interface has been enhanced for integration with other programs. CodeSuite 4.0 is available for download here and can be purchased on a term license or project basis. CodeCLOC is priced at $20 per megabyte. A one-year term license for CodeSuite is $100,000.

North Face v. South Butt

Jimmy Winkelmann, a freshman biomedical engineering student at the University of Missouri, decided to create his very own line of sportswear and called his company The South Butt (motto: Never Stop Relaxing). The North Face, a San Leandro, California-based outdoor products company, was not amused and smacked Winkelmann with a cease-and-desist letter that Winkelmann read and promptly ignored. Then came the trademark infringement lawsuit. South Butt’s reply, filed in court, is pretty funny. Among other things it defines the company name as “being the soft undercarriage of the non-mountain climbing human anatomy, commonly known and referred to in non-salacious form as, among others, rump, bootie, bottom, buttocks, posterior, rear, saddle thumper and butt.” In a similar vein it describes “Little Jimmy” himself as “a handsome cross between Mad Magazine’s Alfred E. Newman of ‘What me Worry’ fame, and Skippy the Punk from the Midwest.” If anyone knows who Skippy the Punk from the Midwest is, please let me know.

The North Face didn’t get the joke. Their lawyers scheduled a deposition of Winkelmann’s father, James Winkelmann Sr. That didn’t go too well. It turns out that Winkelmann Sr. was once a partner at the St. Louis brokerage firm of HFI Securities, where partner Don Weir Jr. pleaded guilty a year ago to charges that he stole more than $10 million from clients (Winkelmann Sr. was never implicated in any wrongdoing).

I suggest you download the reply and the deposition when you want to have a good laugh at the expense of the legal system. The reply is pretty sarcastic and it’s not clear to me who it’s supposed to appeal to (except readers like us, but not necessarily the judge). The deposition reads like a Marx Brothers skit and is every bit as funny. Litigation has never been so much fun.

DUPE: Depository of Universal Plagiarism Examples

In 2003 I created the CodeMatch program, which very quickly became a de facto standard in software IP litigation. I also created a test bench of purposely plagiarized code that could be used to independently and objectively compare the results produced by different plagiarism detection programs. Some in the academic community claimed that my tests were biased toward the algorithms used by CodeMatch, which, they said, explained why CodeMatch fared so well compared to the other programs. However, these same critics, despite my requests, never produced their own set of standard tests.

Although I believe that the standard tests I have used are not biased, it occurred to me that there could be a better way to eliminate even unintentional bias. The solution would be to take the source code for certain open source programs and announce a new open source project that would involve purposely plagiarizing that code. Programmers from around the world would be invited, perhaps in a competition, to change the source code while retaining its functionality. The original programs and the plagiarized versions submitted by others would be stored in a database known as the Depository of Universal Plagiarism Examples, or DUPE. Plagiarism detection programs would then be run on DUPE, and their results could be compared to determine which programs best detect copying. Important statistics about plagiarized code could also be gathered, and patterns identified, to improve the plagiarism detection programs.
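To show the kind of material DUPE would collect, here is a small, hypothetical example of purposeful plagiarism: the second function is a disguised copy of the first, with identifiers renamed and the loop restructured, yet it computes exactly the same result.

    # Hypothetical DUPE submission: a deliberately disguised copy.
    def sum_positive(values):
        total = 0
        for v in values:
            if v > 0:
                total += v
        return total

    def accumulate_gains(figures):      # renamed function and parameters
        running = 0
        idx = 0
        while idx < len(figures):       # for-loop rewritten as a while-loop
            if figures[idx] > 0:
                running = running + figures[idx]
            idx += 1
        return running

    data = [3, -1, 4, -1, 5]
    assert sum_positive(data) == accumulate_gains(data) == 12

A good detection program should flag these two functions as correlated even though no line of text matches exactly; a database full of such pairs is what would let the tools be scored objectively.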

SAFE Corporation has begun looking into creating this database, and we would like to work with partners in academia and industry. We believe there are several key issues that need to be resolved in creating DUPE:

  1. Choosing appropriate open source projects.
  2. Creating a minimum definition of software plagiarism.
  3. Creating the database.
  4. Determining policies including who can access it, how it will be used, and who will maintain it.
  5. Determining how to run the tests, how to generate the results, and how to distribute the results.

Please contact me if you’re interested in working on this important and groundbreaking project.

Who really invented the computer?

The digital computer is usually credited to two researchers at the University of Pennsylvania, J. Presper Eckert and John Mauchly. Funded by the United States Army, their ENIAC (“Electronic Numerical Integrator And Computer”) was designed to calculate tables for firing artillery shells accurately in World War II, but it was not completed until after the war, in 1946. Unlike earlier computers that were built for a fixed purpose, ENIAC could be reprogrammed to handle many different tasks. But were Eckert and Mauchly really the pioneers of today’s modern digital age?

Actually, no. The real inventors of the digital computer were physics professor John Atanasoff and his student Clifford Berry, who created the first digital computer in a laboratory at Iowa State College. The ABC (“Atanasoff-Berry Computer”) was built in 1939, yet by the time of ENIAC’s introduction to the world, the ABC had been forgotten. What had happened? World War II broke out, and Iowa State, as well as Atanasoff and Berry, simply didn’t realize the power of what they had created. Atanasoff was called up by the Navy to do physics research, eventually participating in the atomic bomb tests at Bikini Atoll.

When Atanasoff returned to Iowa State, he found that his invention was gone, removed to make room for other equipment. Because the ABC had been built piece by piece in the laboratory, it was too big to move out intact, so it had been dismantled. Iowa State had decided that a patent was too expensive and never filed one. John Atanasoff went on to gain recognition for a number of inventions in physics, but the ABC was mostly forgotten.

In the late 1960s and early 1970s there were a handful of companies that saw the great potential of the electronic computer. Sperry Rand Corporation, which had been formed through a series of mergers and acquisitions that included the Eckert–Mauchly Computer Corporation, held U.S. Patent 3,120,606 on the digital computer. In 1967 Sperry Rand sued Honeywell, Inc., and Honeywell reciprocated. Thus began one of the most important intellectual property cases in history.

During the research for this case, Honeywell found out about John Atanasoff and the ABC, which became pivotal information. The case was tried for seven months, after which Judge Earl R. Larson handed down a decision stating, among other things, that the Eckert-Mauchly patent was invalid.

Some people have disputed this finding, arguing that this was a “legal” finding or a “loophole” or that a lawyer or a judge simply couldn’t understand the complex engineering issues involved. Here’s my take on this.

  1. Both sides had a lot of time, and access to technical experts, to make the best case they could.
  2. So much was at stake that a huge amount of money was spent to bring out the truth. Both sides had very significant resources. If a case with this much at stake could not convince a judge after seven months, then there is little hope for any IP case.
  3. Evidence was found and witnesses verified that John Atanasoff had attended a conference in Philadelphia where he met John Mauchly and described his work. He then invited Mauchly out to Iowa where Mauchly spent several days examining Atanasoff’s computer and many late nights reading Atanasoff’s technical specifications. Letters were produced, signed by Mauchly, that thanked Atanasoff for his hospitality and for the tour of his amazing invention.
  4. Mauchly testified at the trial. He admitted that he had met Atanasoff and eventually admitted that he had examined the ABC and read its specification.
  5. Mauchly and Sperry Rand Corporation were challenged to produce a single piece of evidence that Mauchly or Eckert had written about or researched digital electronics before Mauchly’s meeting with Atanasoff. The best Mauchly could do was produce a circuit for a model railway flasher that he claimed was a binary counter—it counted from 0 to 1 and then back to 0.

In fact, it became clear that Mauchly and Eckert had tried to claim much more credit than they deserved and to deny credit to others. They actually had greatly improved on Atanasoff’s original design. Had Eckert and Mauchly been more humble, had they added Atanasoff’s name to their patent, had they patented their own improvements instead of the entire invention, they might have given Sperry Rand the most powerful IP in technology history. Instead the invention of the computer entered the public domain without restriction, and the rest is history…

For a good book on the subject, read The First Electronic Computer: The Atanasoff Story by Alice R. Burks and Arthur W. Burks.

Interesting software IP cases of 2009

Here is my list of the most interesting software IP cases of 2009, in chronological order:

What to look for in an expert?

I recently came across a study in the Journal of the American Academy of Psychiatry and the Law, out of the University of Alabama, entitled “Credibility in the Courtroom: How Likeable Should an Expert Witness Be?” To be honest, I’m not sure I understand their conclusion:

The likeability of the expert witnesses was found to be significantly related to the jurors’ perception of their trustworthiness, but not to their displays of confidence or knowledge or to the mock jurors’ sentencing decisions.

Reading the paper doesn’t make it a whole lot clearer for me, and I think their mock trial setup is a bit contrived, particularly since the jury consisted of psychology students, a demographic that you’d be unlikely to find on a real jury. Also there were only two expert witnesses for the comparison. To their credit, they discuss these potential shortcomings. I do think, however, that the paper points out something (that may have already been obvious)—there is more to being an expert witness than just being correct. Personality and presentation are strong factors.

On the other hand, I feel that this subjective aspect should be minimized. Experts need standards and measurable quantities whenever possible. Before I began developing the concept of source code correlation, the way software copyright infringement and trade secret theft cases were resolved was to have two experts give contrary opinions based on their years of experience. The judge or jury would tend to get lost in the technical details, a strategy purposely employed by some experts and attorneys, and a judgment would depend on which expert appeared more credible.

Instead, I decided to expand the field of software forensics, with the goal of bringing it as much credibility as DNA analysis, another very complex process that is well accepted in modern courts. I still believe that an expert’s credibility and likeability will always be factors in IP litigation, but the emergence of source code correlation and object code correlation provides standard measures that bring a great deal of objectivity to a lawsuit’s outcome.
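To illustrate what a repeatable, measurable quantity looks like, here is a deliberately simplified similarity score. It is not the source code correlation algorithm used in CodeSuite; it only shows how a numeric measure, computed the same way every time, can supplement dueling expert opinions.

    # Toy similarity measure for illustration only; NOT CodeSuite's algorithm.
    def normalized(lines):
        return {ln.strip() for ln in lines if ln.strip()}

    def statement_overlap(code_a, code_b):
        a, b = normalized(code_a), normalized(code_b)
        if not a or not b:
            return 0.0
        return len(a & b) / min(len(a), len(b))

    file1 = ["x = 1", "y = x + 2", "print(y)"]
    file2 = ["x = 1", "z = 9", "print(y)"]
    print(round(statement_overlap(file1, file2), 2))  # 0.67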

SAFE Corporation is looking for great ideas

There are a lot of unanswered questions about source code, and we want to work with you to figure them out. We realize that currently accepted algorithms for analyzing, comparing, and measuring source code leave a lot to be desired in many cases. There are also many techniques that have never been studied on large bodies of modern code. For example, measurement techniques developed in the 1970s were probably tested on assembly languages and older programming languages like BASIC, FORTRAN, and COBOL. Do they still hold for modern object-oriented languages like Java and C#?
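As one example of the kind of study we have in mind, the sketch below applies a crude, Halstead-style operator/operand count (a 1970s-era measure) to a small object-oriented snippet. The token classification is naive and purely illustrative; the research question is whether measures like this still behave sensibly on modern code.

    # Naive Halstead-style counts (distinct/total operators and operands).
    import io
    import keyword
    import tokenize

    def halstead_counts(source):
        operators, operands = [], []
        for tok in tokenize.generate_tokens(io.StringIO(source).readline):
            if tok.type == tokenize.OP or (
                tok.type == tokenize.NAME and keyword.iskeyword(tok.string)
            ):
                operators.append(tok.string)
            elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
                operands.append(tok.string)
        return {"n1": len(set(operators)), "n2": len(set(operands)),
                "N1": len(operators), "N2": len(operands)}

    snippet = (
        "class Point:\n"
        "    def norm(self):\n"
        "        return (self.x ** 2 + self.y ** 2) ** 0.5\n"
    )
    print(halstead_counts(snippet))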

If you have a research idea relating to code analysis, and you can use the SAFE tools, let us know. Email Larry Melling, VP of Sales and Marketing, with your ideas. If they pass our review process, you’ll get free licenses to our tools, free support, and help getting your results published. This could be the beginning of a beautiful friendship.

Software trade secrets

The precise language that legally defines a trade secret varies by jurisdiction, as do the particular types of information that are subject to trade secret protection. In the United States, different states have different trade secret laws. Most states have adopted the Uniform Trade Secrets Act, and those that haven’t have laws that differ only in subtle ways.

There are three factors that are common to all definitions; a trade secret always has these three specific characteristics:

  1. It is not generally known to the public.
  2. It confers some sort of economic benefit on its holder, where the benefit is due to the fact that it is not known to the public.
  3. The owner of the trade secret makes reasonable efforts to maintain its secrecy.

With regard to software trade secrets, algorithms that are known to the public usually cannot be trade secrets, though some jurisdictions require not only that the information be public but also that it be “readily ascertainable,” meaning easy to find. For example, a sorting algorithm found in a well-known textbook or in an application note on a high-traffic website is, or can be, known to the public and easily ascertained.

There must also be an economic benefit, so a sorting algorithm that can easily be replaced by a well-known sorting algorithm with comparable results is not a trade secret. Similarly, if your company develops a program, perhaps as a side project, but does not sell it or incorporate it into any products, then it’s not a trade secret.

If the owner of the source code allows programmers to share code, or does not put notices of confidentiality in the source code, or does not take reasonable steps to ensure that employees do not take the code home with them, then that source code cannot be a trade secret. This third point is a particularly important reason to take precautions so that your software does not go somewhere it shouldn’t. Make sure your employees, investors, and partners sign nondisclosure agreements (NDAs). Make sure you have written policies about how to handle source code. And make sure you treat all individuals and companies equally. You don’t want to be in court, defending a trade secret, and have to explain why one “trusted employee” or “trusted friend” was allowed to take source code home while others were not. That doesn’t look like “reasonable efforts to maintain secrecy.”
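As a purely illustrative example (sample wording, not legal advice), a confidentiality notice placed at the top of every proprietary source file might look something like this:

    # -------------------------------------------------------------------
    # Copyright (c) <Company Name>. All rights reserved.
    # CONFIDENTIAL AND PROPRIETARY. This source code contains trade
    # secrets of <Company Name>. Do not copy, distribute, or disclose
    # without prior written authorization.
    # -------------------------------------------------------------------

A notice like this does not by itself make the code a trade secret, but it is one piece of evidence that the owner made reasonable efforts to maintain secrecy.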