Friday, June 6, 2014

Interesting IT patents expiring in 2014

Most patents expire 20 years after the initial filing, leaving the disclosed technology available to everyone. Although most patents filed in 1994 are a bit outdated, some inventions still deserve our attention even after 20 years. Here is my list of the most interesting expiring patents.

1. Electronic document verification system and method

This patent covers digital signing systems that embed secure information in your document (e.g. using OLE). The embedded information can contain a digital signature and/or the signatory's digitized graphical signature or another graphic image. Here is an example (from the patent):

2. Probabilistic information retrieval networks

Bayesian inference is used to search and rank documents. Consider Bayes' theorem:

 p(di|rj) = p(rj|di) · p(di) / p(rj)

where p(di|rj) is the probability of selecting document di when a search query contains representation rj. "Representations" can be simple words or more complex structures like phrases, concepts, synonyms, proximities and thesaurus classes. If representations were simple words, there would be no problem: we could estimate p(rj) by the word frequency, which we can compute during document indexing. For more complex representations this is not possible: you cannot count every possible phrase, synonym and proximity during indexing because there are far too many combinations. Instead, the authors propose estimating p(rj) during the document search, when all rj of interest have been extracted from the search query. To do this, we compute rj frequencies on random samples of documents. The patent solves many other problems as well.
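A toy sketch of that sampling estimate (the class, corpus and numbers below are made up; a real system would sample documents from its index rather than from a list in memory):

```java
import java.util.*;

class RepresentationFrequency {
    // Estimate p(r_j) -- the probability that a random document contains the
    // representation -- by sampling documents at query time instead of
    // counting every possible phrase during indexing.
    static double estimate(List<String> corpus, String phrase,
                           int sampleSize, long seed) {
        Random rnd = new Random(seed);
        int hits = 0;
        for (int i = 0; i < sampleSize; i++) {
            String doc = corpus.get(rnd.nextInt(corpus.size()));
            if (doc.contains(phrase)) hits++;
        }
        return (double) hits / sampleSize;
    }

    public static void main(String[] args) {
        List<String> corpus = Arrays.asList(
            "probabilistic information retrieval",
            "bayesian networks for retrieval",
            "cooking pasta at home");
        // Roughly 2/3 of the corpus contains the phrase.
        System.out.println(estimate(corpus, "retrieval", 1000, 42));
    }
}
```

The estimate converges to the true frequency as the sample grows, which is the trade-off the patent exploits: a little per-query work instead of an impossible indexing pass.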

3. Automated, non-invasive iris recognition system and method

Remember the movie Minority Report with Tom Cruise? There were cameras everywhere identifying people by their eyes, so the poor guy had to replace his eyes to remain undetected in the streets. Well, this is it: a person identification system based on eye scanning.
The device features two cameras: a coarse-grained camera detects your head and eye positions, while a fine-grained camera focuses right on your eye with the help of a moving mirror.
The iris recognition algorithm itself is probably outdated, because a lot has happened in the field during the last 20 years. However, judging by the immense number of referencing patents, the patent is still monopolizing the industry.

4. System and method for calculating RAID 6 check codes

Although invented 20 years ago, RAID 6 dominates the enterprise sector today. Unlike RAID 5, which allows only one disk failure at a time, RAID 6 survives two disks failing at the same time. To understand how it does recovery, let's first recall how recovery works in RAID 5.
An N-disk RAID 5 array divides the data into N-1 blocks and adds an N-th parity block, which is the XOR of the data blocks. When a disk fails, its data is recovered by XOR-ing the other N-1 disks.
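The XOR parity scheme fits in a few lines; a minimal sketch (the class name and block contents are made up):

```java
class Raid5Parity {
    // Parity block: XOR of all data blocks.
    static byte[] parity(byte[][] data) {
        byte[] p = new byte[data[0].length];
        for (byte[] block : data)
            for (int i = 0; i < p.length; i++) p[i] ^= block[i];
        return p;
    }

    // Recover a lost data block by XOR-ing the parity with the surviving blocks:
    // since a ^ a = 0, everything except the lost block cancels out.
    static byte[] recover(byte[][] surviving, byte[] parity) {
        byte[] lost = parity.clone();
        for (byte[] block : surviving)
            for (int i = 0; i < lost.length; i++) lost[i] ^= block[i];
        return lost;
    }
}
```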
RAID 6 recovery is much more complicated. First, define the following operations on M-bit integers:
  1. Addition, a+b, which is a bitwise XOR; subtraction is the same operation;
  2. Multiplication, a*b, yielding an M-bit result; this is not ordinary multiplication but a trickier operation: carry-less multiplication of polynomials over the Galois field {0,1} (built from XOR and bitwise AND), reduced modulo a fixed irreducible polynomial;
  3. Inverse, a^(-1), yielding an M-bit result: the unique number such that a^(-1)*a = 1 under the multiplication defined above.
Together these operations form a finite field. Now, for an N-disk array, divide the data into N-2 blocks and add two check blocks P and Q such that:

 P = d0 + d1 + ... + d(N-3)
 Q = d0 + α·d1 + α²·d2 + ... + α^(N-3)·d(N-3)

for some α and data blocks di. Note that addition and multiplication here are as defined above. In particular, P is a XOR of the data blocks, just like in RAID 5. There are the following recovery cases:
1. Data disk j and the Q check disk fail; restore data disk j as in RAID 5:

 dj = P + Σ(i≠j) di

2. Data disk j and the P check disk fail; restore data disk j via the formula:

 dj = α^(-j) · (Q + Σ(i≠j) α^i·di)

3. Data disks i and j fail; solve the two-equation system

 di + dj = P + Σ(k≠i,j) dk
 α^i·di + α^j·dj = Q + Σ(k≠i,j) α^k·dk

for the two unknowns di and dj.

4. The P and Q check disks fail; recompute them from the data disks.
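To make the arithmetic concrete, here is a sketch of the double-failure case in GF(2^8) with α = 2 and the reduction polynomial 0x11D (a common choice in RAID-6 implementations, though the patent does not mandate it; the four one-byte "disks" are made up):

```java
class Raid6 {
    // Carry-less multiplication in GF(2^8), reduced modulo x^8+x^4+x^3+x^2+1 (0x11D).
    static int gfMul(int a, int b) {
        int r = 0;
        while (b != 0) {
            if ((b & 1) != 0) r ^= a;
            a <<= 1;
            if ((a & 0x100) != 0) a ^= 0x11D;
            b >>= 1;
        }
        return r;
    }

    // alpha^k for alpha = 2.
    static int gfPow(int k) {
        int r = 1;
        for (int i = 0; i < k; i++) r = gfMul(r, 2);
        return r;
    }

    // Multiplicative inverse by brute force (only 255 nonzero elements).
    static int gfInv(int a) {
        for (int x = 1; x < 256; x++) if (gfMul(a, x) == 1) return x;
        throw new IllegalArgumentException("no inverse of 0");
    }

    // P = d0 + d1 + ...;  Q = d0 + alpha*d1 + alpha^2*d2 + ...
    static int[] checks(int[] d) {
        int p = 0, q = 0;
        for (int k = 0; k < d.length; k++) {
            p ^= d[k];
            q ^= gfMul(gfPow(k), d[k]);
        }
        return new int[]{p, q};
    }

    // Case 3: data disks i and j failed. Fold the surviving disks into P and Q,
    // then solve  di + dj = pxy,  alpha^i*di + alpha^j*dj = qxy.
    static int[] recoverTwo(int[] d, int i, int j, int p, int q) {
        int pxy = p, qxy = q;
        for (int k = 0; k < d.length; k++) {
            if (k == i || k == j) continue;  // use surviving disks only
            pxy ^= d[k];
            qxy ^= gfMul(gfPow(k), d[k]);
        }
        int denom = gfPow(i) ^ gfPow(j);    // nonzero because i != j
        int dj = gfMul(qxy ^ gfMul(gfPow(i), pxy), gfInv(denom));
        int di = pxy ^ dj;
        return new int[]{di, dj};
    }
}
```

The brute-force inverse is fine for a demo; real implementations precompute log/antilog tables instead.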

5. Speech efficient coding method

How do you compress human speech? You could, of course, treat it as arbitrary sound: divide it into time frames, compute a spectrum for each frame and quantize that spectrum. This is how generic sound compression (e.g. MP3) works. But this is not enough if you want to send speech over a low-bandwidth channel like GSM. Mobile telephony employs special speech compression methods so that the radio channel can be shared among as many speakers as possible.
The human speech spectrum possesses a few properties that can be used to improve the compression rate compared to, say, an orchestra. First, there is the voice pitch frequency, which is the main tone. Then there is the vocal tract, which works like a linear filter transforming the voice. In the frequency domain, the output spectrum S can be modelled as a product of the filter's frequency response T and the original voice's spectrum E:

 S = T E

The original voice E is also called the 'excitation signal', because this is the input signal that excites the filter. For periodic voiced sounds (like vowels), E contains the pitch frequency fp and the related harmonics 2fp, 3fp, etc. The pitch frequency usually varies from 54 Hz (low male voice) to 400 Hz (high female voice). For unvoiced sounds (like /t/ or /s/), the excitation signal is white noise, and the spectrum E is a random mess. There are also sounds (like /z/) that appear periodic at low frequencies and noisy at high frequencies. You could say that /z/ is a voiced sound in some parts of the spectrum and unvoiced in others.
To summarize, we can divide the spectrum E into bands fp wide and classify each band as either voiced (V) or unvoiced (UV). 

In the figure to the right, 7A shows the output voice spectrum S, 7B is the filter's frequency response T, and 7C is the excitation spectrum E, which represents an ideal vowel consisting of only V bands. See more real-world spectra here.
After detecting the filter's coefficients, e.g. using LPC analysis, we can encode the whole spectrum using (1) the filter's coefficients, (2) the pitch frequency fp, (3) the bit vector containing the V/UV classification result for each band, and (4) the amplitude value of each band. This model is called the Multi-Band Excitation (MBE) vocoder, and it is in fact a more compact way to represent a speech spectrum than coding an amplitude for every frequency as in MP3.
The problem with MBE is that in the real world (1) the input sound is noisy and (2) the spectrum of a voiced sound is not strictly periodic. Both problems impact V/UV classification badly. This patent solves them with the following trick: let there be two frequencies, e.g. f1=500-700Hz and f2=3300Hz. If the bands below f1 are classified as V, then, under certain circumstances, the bands between f1 and f2 can be automatically classified as V. This both improves V/UV classification and reduces the bitrate.
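A toy sketch of per-band V/UV classification plus that propagation trick. Everything here is my own simplification, not the patent's algorithm: the "peakiness" test, the band layout and all thresholds are made-up assumptions.

```java
class VuvClassifier {
    // Split the magnitude spectrum |S(f)| (one entry per Hz) into bands fp wide;
    // call a band voiced when a single harmonic peak dominates the band's average.
    static boolean[] classify(double[] mag, int fp, double peakiness) {
        int bands = mag.length / fp;
        boolean[] voiced = new boolean[bands];
        for (int b = 0; b < bands; b++) {
            double max = 0, sum = 0;
            for (int f = b * fp; f < (b + 1) * fp; f++) {
                max = Math.max(max, mag[f]);
                sum += mag[f];
            }
            double mean = sum / fp;
            voiced[b] = mean > 0 && max / mean > peakiness;
        }
        return voiced;
    }

    // The patent's trick, much simplified: if every band below f1 came out
    // voiced, force the bands between f1 and f2 to voiced as well.
    static void propagate(boolean[] voiced, int fp, int f1, int f2) {
        for (int b = 0; b < f1 / fp; b++) if (!voiced[b]) return;
        for (int b = f1 / fp; b < Math.min(voiced.length, f2 / fp); b++)
            voiced[b] = true;
    }
}
```

On a synthetic spectrum with harmonic peaks only in the low bands, the propagation step flips the noisy mid bands to V, which is exactly the robustness (and bitrate) win the patent claims.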

6. Decoder for decoding ECC using Euclid's algorithm

Reed-Solomon error correcting codes are widely used in the industry: CD/DVD/Blu-Ray discs, barcodes, xDSL, data transmission in space, RAID 6 disk arrays. By introducing redundancy into the transmitted data, Reed-Solomon codes make it possible to recover the transmitted message even if some symbols of the codeword were distorted during transmission or the storage medium got corrupted.
Given an input message consisting of k m-bit symbols, the encoder outputs n = k + 2t symbols of the codeword, where n ≤ 2^m - 1. A Reed-Solomon code can recover up to t errors in the codeword; that is, if no more than t symbols of the codeword are changed (counting both message and control symbols), the original message can still be recovered.
Reed-Solomon (RS) codes work like this: let the codeword be represented as a polynomial over the Galois field of size 2^m:

 c(x) = c_(n-1)·x^(n-1) + ... + c_1·x + c_0

See here how one can construct a finite field of size 2^m. The first k coefficients are the same as the input message symbols, and the next 2t coefficients are chosen so that c(x) has roots α, α², ..., α^(2t), where α is a primitive element of GF(2^m).
Let r(x) be the received (possibly distorted) message. If r(x) is unchanged, all r(α^i) should be zero. Otherwise, r(x) = c(x) + e(x), where e(x) is the error polynomial, and r(α^i) = e(α^i) (because c(α^i) = 0 at the roots of c). The values Si = r(α^i) are also called syndromes; they are the values of the error polynomial e(x) at the roots α, α², ..., α^(2t).
The decoding problem is then to find the coefficients of the polynomial e(x) given the syndromes Si. The problem can be solved with Euclid's algorithm for finding the greatest common divisor of polynomials. The algorithm involves dividing polynomials in a loop, and its hardware implementation is tricky. This patent describes a circuit for decoding an RS code.
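The syndrome property is easy to check in code. Below is a toy systematic RS encoder plus syndrome computation over GF(2^8); it only illustrates the syndromes, not the patent's Euclid-based decoder circuit, and the reduction polynomial 0x11D, t = 2 and the message bytes are my own choices:

```java
class ReedSolomon {
    // GF(2^8) multiplication modulo x^8+x^4+x^3+x^2+1 (0x11D).
    static int gfMul(int a, int b) {
        int r = 0;
        while (b != 0) {
            if ((b & 1) != 0) r ^= a;
            a <<= 1;
            if ((a & 0x100) != 0) a ^= 0x11D;
            b >>= 1;
        }
        return r;
    }

    static int gfPow(int k) {                 // alpha^k for alpha = 2
        int r = 1;
        for (int i = 0; i < k; i++) r = gfMul(r, 2);
        return r;
    }

    // Evaluate a polynomial (highest-degree coefficient first) via Horner's rule.
    static int eval(int[] c, int x) {
        int r = 0;
        for (int coeff : c) r = gfMul(r, x) ^ coeff;
        return r;
    }

    // Generator g(x) = (x - alpha)(x - alpha^2)...(x - alpha^{2t}); note -a = a here.
    static int[] generator(int t) {
        int[] g = {1};
        for (int i = 1; i <= 2 * t; i++) {
            int[] ng = new int[g.length + 1];
            int root = gfPow(i);
            for (int j = 0; j < g.length; j++) {
                ng[j] ^= g[j];                   // x * g(x)
                ng[j + 1] ^= gfMul(root, g[j]);  // root * g(x)
            }
            g = ng;
        }
        return g;
    }

    // Systematic encoding: c(x) = m(x)*x^{2t} + (m(x)*x^{2t} mod g(x)),
    // which makes c(x) divisible by g(x), hence zero at all alpha^i.
    static int[] encode(int[] msg, int t) {
        int[] g = generator(t);
        int[] c = new int[msg.length + 2 * t];
        System.arraycopy(msg, 0, c, 0, msg.length);
        int[] rem = c.clone();
        for (int i = 0; i < msg.length; i++) {   // long division by monic g(x)
            int coef = rem[i];
            if (coef == 0) continue;
            for (int j = 0; j < g.length; j++) rem[i + j] ^= gfMul(coef, g[j]);
        }
        for (int i = 0; i < 2 * t; i++) c[msg.length + i] = rem[msg.length + i];
        return c;
    }

    // Syndromes S_i = r(alpha^i), i = 1..2t.
    static int[] syndromes(int[] r, int t) {
        int[] s = new int[2 * t];
        for (int i = 1; i <= 2 * t; i++) s[i - 1] = eval(r, gfPow(i));
        return s;
    }
}
```

A clean codeword yields all-zero syndromes; flipping any symbol makes them nonzero, and turning those nonzero values back into error positions is exactly the job of the patented Euclid circuit.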

7. Rapid thumbnail image reconstruction of DCT compressed image data

The idea behind this patent is pretty straightforward: use the DC coefficients of JPEG's DCT as the thumbnail pixel values, shrinking the image by a factor of 8 (recall that a DC coefficient is the average of an 8x8 pixel block, up to a constant factor). Funny that this is exactly what the open-source JPEG library does when you ask it to scale an image down by a factor of 8. We have been using this principle all these years without knowing it was patented. In fact, the library goes even further: it can rescale the image during the inverse DCT by ratios of N/8, where N is an integer from 1 to 16. And I doubt the library creators were aware of the patent.
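The idea in miniature (grayscale only, image sides assumed to be multiples of 8): since a block's DC coefficient is its average up to a constant factor, averaging each 8x8 block gives the same thumbnail without running any inverse transform.

```java
class DcThumbnail {
    // Shrink a grayscale image by 8x: each thumbnail pixel is the average of
    // one 8x8 block -- exactly the information the block's DC coefficient
    // carries, up to a constant scale factor.
    static int[][] shrink8(int[][] img) {
        int th = img.length / 8, tw = img[0].length / 8;
        int[][] thumb = new int[th][tw];
        for (int by = 0; by < th; by++)
            for (int bx = 0; bx < tw; bx++) {
                int sum = 0;
                for (int y = 0; y < 8; y++)
                    for (int x = 0; x < 8; x++)
                        sum += img[by * 8 + y][bx * 8 + x];
                thumb[by][bx] = sum / 64;
            }
        return thumb;
    }
}
```

A JPEG decoder gets this even cheaper: it reads the DC values straight out of the entropy-coded stream and skips the inverse DCT entirely.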

8. System for producing a personal ID card

This patent protects the idea of a pocket-sized card that includes both human-recognizable and machine-readable information, usually including the person's name and photo. While this may seem obvious today, it was not that obvious twenty years ago. Oddly, Id Technologies LLC, the company now owning the patent, does not seem to produce such cards; it specializes mainly in label printing.

9. Branch target and next instruction address calculation in a pipeline processor


10. Normalization method for floating point numbers

Both patents protect important building blocks that every modern processor consists of. Branch prediction has a huge effect on performance, while floating-point normalization is a part of every FPU. Now that these blocks no longer require a license, we can expect some liberalization of the CPU market in the near future.

11. Method and apparatus for compressing system read only memory in a computing system

The BIOS and the setup program in your PC are stored in ROM in compressed form, so that more code can fit into a small chip on your motherboard. This patent protects the idea of compressed ROM. The ROM itself contains two parts: the compressed data and an uncompressed decompression algorithm. When the computer starts, the decompression algorithm executes, decompresses the ROM data into conventional memory and then executes the decompressed code.
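The boot-time flow can be sketched with a toy run-length scheme standing in for whatever compressor an actual BIOS uses; the (count, value) byte format below is my own invention:

```java
class CompressedRom {
    // Toy ROM payload: pairs of bytes (count, value), each expanding into
    // `count` copies of `value`. A real ROM would hold this payload next to an
    // uncompressed stub that expands it into RAM and then jumps into the result.
    static byte[] decompress(byte[] rom) {
        java.io.ByteArrayOutputStream ram = new java.io.ByteArrayOutputStream();
        for (int i = 0; i + 1 < rom.length; i += 2)
            for (int n = 0; n < (rom[i] & 0xFF); n++)
                ram.write(rom[i + 1]);
        return ram.toByteArray();
    }
}
```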

12. Distributed fingerprints for information integrity verification

The patent proposes a document fingerprint algorithm in which the fingerprint is decomposed into several pieces distributed among several locations, so that an adversary can change only a few of those pieces. A majority of the pieces is required to verify the document. The algorithm is based on an error-correcting code.
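As a much-simplified stand-in for the patent's scheme, plain replication plus majority vote (a repetition code, the simplest error-correcting code) already shows the distributed-verification idea; the class, the use of hashCode as a fingerprint and the piece counts are all my own toy assumptions:

```java
class DistributedFingerprint {
    // Toy fingerprint; a real scheme would use a cryptographic hash.
    static int fingerprint(String doc) { return doc.hashCode(); }

    // Store one copy of the fingerprint at each of n locations.
    static int[] distribute(String doc, int n) {
        int[] pieces = new int[n];
        java.util.Arrays.fill(pieces, fingerprint(doc));
        return pieces;
    }

    // Verification succeeds while a strict majority of locations agree,
    // so an adversary controlling a minority of locations cannot forge it.
    static boolean verify(String doc, int[] pieces) {
        int matches = 0;
        for (int p : pieces) if (p == fingerprint(doc)) matches++;
        return matches * 2 > pieces.length;
    }
}
```

A real code (the patent's choice) tolerates the same fraction of corrupted pieces with far less total storage than naive replication.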

13. Write anywhere file-system layout

Owned by NetApp Inc, this patent describes a filesystem featuring fast fileset snapshots and transactions.

Monday, March 14, 2011

Another web interface for JMX

We are using JMX to monitor our web servers. If you have ever tried to employ JMX, you already know what a pain in the neck it is, so I will just skip that part. I tried jmanage, and it was cool, but it required a password to be entered at start time, it required an additional Java process to run on the web server, and it was pretty slow.

So I came up with the idea of a web servlet that would interface with JMX MBeans. The source can be found here, along with instructions. It requires the Spring Framework to run.

You can use this both as a user interface and as a REST API to your MBeans.

Tuesday, December 21, 2010

Google App Engine Hurdles

Google App Engine has become quite a buzz around the web. I gave it a try and want to share some experience here. My task was to run the Solr search system on GAE/Java. After two days of implementing index storage for the GAE datastore, I finally gave up.

There are two major considerations against using Google App Engine as a hosting for your applications:

1. The Datastore does not find records you just added to it until you commit the current transaction. These transaction semantics differ from any other database system in the world. You may think this is not a problem: why ever fetch records you just saved? But if you try to develop a real application, with real complex transactions, you will find very soon that this is not an option. For example, consider importing large amounts of data: you can't access the data you have imported so far in a transaction (e.g. for checking the uniqueness of the sku field). There are work-arounds for this problem, of course, like importing into a separate parent entity without using transactions, but they complicate your application a lot.

2. The Eclipse plugin does not update the server when you change the code. The only way to push new changes to the server is to stop and then start it, which can take a while. This little glitch makes the development environment almost unusable.

I'm gonna use Amazon EC2 for this application. Scalability is not a problem for me, and everything else is far better (at least I can write to files, which is not possible under GAE).

Tuesday, January 26, 2010

toInstance in Google GIN

Couldn't find anything like toInstance in GIN, so I developed it myself. Here is the patch against svn revision 135: instance-patch.diff
Usage: declare void methods named like "setXXX" in your ginjector:
interface MyGinjector extends Ginjector {
 void setSomething(Something real);
 void setAnotherThing(@Named("ID") int id);
 void setAlltogether(Something real, @Named("ID") int id);
 void setOptional(@Optional Object object);
}

Once all non-optional dependencies are set, you can call any other ginjector methods. It's that simple. I've included some tests which you can use as an example.

The curious thing is that you can chain your ginjectors together:
interface ParentGinjector extends Ginjector {
 void injectMembers(ChildGinjector inj);
}

interface ChildGinjector extends Ginjector {
 @Inject void setSomething(Something real);
 @Inject void setAnotherThing(@Named("ID") int id);
}

Note the @Inject annotation in the child injector. Now you can simply:
  ChildGinjector child = GWT.create(ChildGinjector.class);

This is a pre-built version of Google GIN 1.0 with that patch:
GWT version < 2.3: gin.jar

Thursday, October 1, 2009

RPX is not a single sign-on solution

RPXnow is a great service that lets you sign in using the most popular account providers (like Facebook, Google Accounts, Yahoo, etc.). But my experience with RPX was not that great: the problem is that you can't SHARE your sign-ups among websites. This was very disappointing when we started to use RPX on our website. We ended up creating our own single sign-on service, and that was not easy. RPX claims to act as a proxy between you and OpenID, OAuth and other services, yet it lacks the very important "smoothness" of OpenID and friends: your visitors still have to click a button in the RPX iframe to sign in to your website, even if they have already signed in to RPX or any of the third-party providers RPX supports.

We had a main website and a UserVoice forum, both using the same RPX account. When you sign in to the main website using RPX and then go to UserVoice, you find yourself not signed in there until you press that dreaded button.

I understand that RPX is popular because of the "wow" effect, but I hope they will add support for SSO in the future, so that users won't be disappointed any more.

Sunday, May 17, 2009

Ecwid: Innovating e-commerce

It's been a while since I last wrote. Not that I gave up my blog: I was just busy creating an embeddable e-commerce SaaS solution called Ecwid. If I didn't do something wrong, you should see it just above this text.

It's a large project, started in November '08, which came out with its first release in September '09. It features:

  • Pretty advanced product catalog designer
  • Integration with Google Checkout, Paypal Express checkout, Authorize.Net, 2Checkout
  • Shipping cost calculation using FedEx, UPS, USPS, DHL
  • Customizable CSS
  • Ajax everywhere (meaning it's fast)
  • E-goods, mail notifications, mobile version for SEO
Written in GWT (client side) and Java/Spring (server side).

Friday, September 19, 2008


There is a lot of speculation on the web about which is better: JSP (Java) or PHP. Some claim PHP is insecure, which is not exactly true. Others claim JSP is hard to use or slow, which is also not true, and I intend to show why.

First, I want to rebut some of the myths around PHP and JSP.

Myth 1. "PHP is insecure". Some PHP programs may be insecure because of their creators' negligence. Here are some tips to make your PHP programs secure:
  • Use PDO for database access. This will eliminate the risk of SQL injection.
  • Do not include session ID in URLs, because those URLs may be sent to other sites as 'Referrer' HTTP header.
  • Do not use shared hosting. There are a lot of virtual and dedicated hosting plans on the market starting from $10/month. (You cannot run Java applications on shared hosting either.)
Myth 2. "JSP is hard to use". Not that hard. If you already know Java, you will invest maybe 3-4 more days to learn JSF or Struts, and this investment will pay off soon enough. You can use one of the free or relatively cheap IDEs to build your JSF pages, and these IDEs are more advanced than those available for PHP development.

Myth 3. "PHP is not scalable". Neither PHP nor Java is scalable, because they are programming languages. A system can be scalable, not a programming language. There are, of course, programming languages like Erlang, Forte and the CUDA C extension that enable scalability through parallelization, but PHP and Java do not differ in this respect.

Scalability of a system is its ability to increase throughput by simply adding hardware, without changing the source code. To make your PHP software scalable, store all state, including large files and session variables, in a database, and use a database cluster with an Apache server on each node. Then use a Cisco load balancer or simple DNS round robin to distribute load between the nodes.

Myth 4. "JSPs are slow". You may get that impression because it takes some time for a new JSP to compile into high-performing native code. First, a JSP is translated into a Java class file. That class file is interpreted during the first invocation, but is incrementally compiled into native code during subsequent invocations. You will be surprised how fast a server-side JVM can be: it trades memory consumption for performance, e.g. by doing more function inlining.

There are, of course, ways to make Java slow, like using EJBs, but the same applies to PHP development.

Second, let me postulate a rule of thumb for choosing between PHP and Java for web applications. Use Java when at least two of the following are true:
  1. You do not have programmers with PHP experience handy, but you've got Java programmers;
  2. You want to use some existing functionality written in Java which is unavailable in PHP. There are a lot of (persistence, messaging, imaging, protocol implementation, file format) libraries and frameworks for Java. Also, don't overlook existing open-source applications. This may seem a controversial point, because there are more scripts and applications available for PHP than for Java. However, only a few of them are stable or maintainable. In other words, when you choose open-source software written in Java, the chances that it will work are much higher than for the same functionality written in PHP;
  3. Your projected project size is more than 50K lines of code;
  4. You will have a heavily loaded web site.
Use PHP otherwise.

Finally, here are some tips for those who have already made their choice.

You've chosen PHP
Welcome to the world of non-professional development. Most of the PHP scripts you will find on the web do not work. It is not a good idea to base a new project on existing open-source or commercial code either, unless you need only cosmetic changes or minor additional functionality. While existing software like PHPNuke, osCommerce and its clones, ShopScript, or even Joomla may look pretty, building totally different software on top of them will most likely be extremely time-consuming. Drupal is somewhat better in this respect. Consider starting new projects from scratch, gradually adding independent scripts and libraries like Smarty. At least in this case you will be able to use modern PHP to its full extent (I mean features like PDO, SimpleXML and the new OOP model). You will focus on functionality, not on a pile of legacy code written for PHP 3.

You've chosen Java
Java is a bit messy about all those APIs. Remember: JDBC, plain JSP and JAXP are not for direct use; their mere existence is either nonsense or serves the needs of middleware builders. Use JPA/Hibernate for database access, JSF/Struts for web logic, JAXB or JDOM for XML processing. Use MyEclipse or another IDE to build your web applications. Do not write JSF by hand, like most PHP programmers do!

Good luck.