So that this doesn't turn into a massively long post that's impossible to read without losing the will to live, and since you've already read the book and have a good understanding of exactly what it's saying, I'm hiding the arguments made by that book inside spoiler tags. Feel free to have a look at them, cross-reference with the book, and tell me if I'm misrepresenting anything.
Spoiler
1-No information can exist without a code.
2-No code can exist without a free and deliberate convention.
3-No information can exist without the five hierarchical levels: statistics, syntax, semantics, pragmatics, and apobetics.
4-No information can exist in purely statistical processes.
5-No information can exist without a transmitter.
6-No information chain can exist without a mental origin.
7-No information can exist without an initial mental source; that is, information is, by its nature, a mental and not a material quantity.
8-No information can exist without a will.
(Just a note on the third point for those without dictionaries close by: syntax means an established convention for formatting data (Gitt insists it must be consciously established); semantics means meaning; pragmatics means the structure of communication by the transmitter to achieve specific reactions in the receiver; and apobetics means purpose.)
Gitt then goes on to say that he is extending the mathematically proven information theory work started by Shannon (an American mathematician and information theorist):
On the basis of Shannon's information theory, which can now be regarded as being mathematically complete, we have extended the concept of information as far as the fifth level. The most important empirical principles relating to the concept of information have been defined in the form of theorems.
(basically he's saying that Shannon's work gets points 1-5 established, while points 6-8 are all his own work)
In addition to the principles above, Gitt also offers up several theorems (I'll make the first two concrete with a quick sketch just after the list):
Theorem 1: The statistical information content of a chain of symbols is a quantitative concept. It is given in bits (binary digits).
Theorem 2: According to Shannon's theory, a disturbed signal generally contains more information than an undisturbed signal, because, in comparison with the undisturbed transmission, it originates from a larger quantity of possible alternatives.
Theorem 3: Since Shannon's definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.
Theorem 4: A code is an absolutely necessary condition for the representation of information.
Theorem 5: The assignment of the symbol set is based on convention and constitutes a mental process.
Theorem 6: Once the code has been freely defined by convention, this definition must be strictly observed thereafter.
Theorem 7: The code used must be known both to the transmitter and receiver if the information is to be understood.
Theorem 8: Only those structures that are based on a code can represent information (because of Theorem 4). This is a necessary, but still inadequate, condition for the existence of information.
Theorem 9: Only that which contains semantics is information.
Theorem 10: Each item of information needs, if it is traced back to the beginning of the transmission chain, a mental source (transmitter).
Theorem 11: The apobetic aspect of information is the most important, because it embraces the objective of the transmitter. The entire effort involved in the four lower levels is necessary only as a means to an end in order to achieve this objective.
Theorem 12: The five aspects of information apply both at the transmitter and receiver ends. They always involve an interaction between transmitter and receiver.
Theorem 13: The individual aspects of information are linked to one another in such a manner that the lower levels are always a prerequisite for the realisation of higher levels.
Theorem 14: The apobetic aspect may sometimes largely coincide with the pragmatic aspect. It is, however, possible in principle to separate the two.
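A quick aside to make Theorems 1 and 2 concrete, since they describe Shannon's purely statistical measure. The following is just a minimal Python sketch; the function name and the two probability distributions are mine, made up for illustration, not anything from Gitt or Shannon. Shannon information is a number of bits computed from symbol statistics alone, and a flatter, noisier distribution scores higher, which is all Theorem 2 is saying.

# Minimal illustration of Theorems 1 and 2: Shannon information is measured
# in bits from a probability distribution alone, and a flatter (noisier)
# distribution gets a higher value. The distributions below are made up.
from math import log2

def entropy_bits(probs):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * log2(p) for p in probs if p > 0)

clean_signal = [0.7, 0.1, 0.1, 0.1]      # transmitter strongly favours one symbol
noisy_signal = [0.25, 0.25, 0.25, 0.25]  # disturbance has flattened the distribution

print(entropy_bits(clean_signal))   # roughly 1.36 bits per symbol
print(entropy_bits(noisy_signal))   # 2.0 bits per symbol: "more information" in Shannon's sense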
He also sets out four conditions for recognising information, two of them necessary (as in if there is information then the condition must be true) and two of them sufficient (as in if the condition is true then there is information):
NC1: A code system must exist.
NC2: The chain of symbols must contain semantics.
SC1: It must be possible to discern the ulterior intention at the semantic, pragmatic and apobetic levels (example: Karl v. Frisch analysed the dance of foraging bees and, in conformance with our model, ascertained the levels of semantics, pragmatics and apobetics. In this case, information is unambiguously present).
SC2: A sequence of symbols does not represent information if it is based on randomness. According to G. J. Chaitin, an American informatics expert, randomness cannot, in principle, be proven; in this case, therefore, communication about the originating cause is necessary.
4-No information can exist in purely statistical processes.
Theorem 3: Since Shannon's definition of information relates exclusively to the statistical relationship of chains of symbols and completely ignores their semantic aspect, this concept of information is wholly unsuitable for the evaluation of chains of symbols conveying a meaning.
Those are Gitt's statements; I've repeated the two just above because they're the ones that matter most for what follows. Now compare and contrast with Shannon:
The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point. Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem.
This is the first major problem with Gitt's work. What Shannon is saying is that the meaning of a message is irrelevant to the problem of getting it from A to B: a radio engineer doesn't need to know which song will be played on the radio in order to build a working radio. What Gitt is saying is that Shannon's definition of information is wrong; effectively, that the radio engineer MUST know which song will be played in order to make the radio work properly. The song cannot exist as a purely statistical collection of electromagnetic waves to be transmitted; the meaning within the song's lyrics is an integral part of how the radio functions.
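To make that concrete, here is a minimal Python sketch (the example strings and the function name are mine, purely for illustration): Shannon's measure assigns exactly the same information content to a meaningful sentence and to the same letters shuffled into gibberish, because all it ever looks at is symbol frequencies.

# Minimal illustration: Shannon's statistical information content depends only
# on symbol frequencies, not on meaning. A sentence and its scrambled letters
# get exactly the same score.
from collections import Counter
from math import log2

def shannon_entropy_bits(symbols: str) -> float:
    """Average information per symbol, in bits, computed from frequencies alone."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

meaningful = "the cat sat on the mat"
scrambled = "".join(sorted(meaningful))   # same letters, same counts, no meaning

print(shannon_entropy_bits(meaningful))   # identical value...
print(shannon_entropy_bits(scrambled))    # ...because the statistics are identical

That is Shannon's point in miniature: the engineering problem only ever sees the statistics, and nothing in the measure notices whether the message means anything.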
The second major problem with Gitt's work comes from those conditions. In the second sufficient condition Gitt notes (with mathematical support from Chaitin, another mathematician) that randomness cannot, in principle, be proven, and that therefore "communication about the originating cause is necessary." Yet in the condition right before that one he relies on being able to discern the "ulterior intention at the semantic, pragmatic and apobetic levels". In plain English, Gitt allows himself to make guesses about the intelligence and purpose behind the source of a series of symbols even though he cannot know whether that source is random. Gitt is trying to have it both ways here. He wants to assert that DNA fits his strictly non-random definition of information, even after acknowledging that randomness cannot be proven.
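Chaitin's asymmetry is easy to illustrate as well. Here is another minimal Python sketch (zlib is just standing in for "any pattern-finder you like", and the byte strings are made up): compressing a sequence can show that it is not random, but failing to compress it proves nothing, because a pattern might exist that this particular compressor simply never finds.

# Minimal illustration of the asymmetry Gitt borrows from Chaitin: you can
# sometimes show a sequence is NOT random (it compresses well), but you can
# never prove it IS random just by failing to find a pattern.
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Size of the data after compression; a small result means a pattern was found."""
    return len(zlib.compress(data, 9))

patterned = b"ATCG" * 250     # obvious repetition, 1000 bytes
unknown = os.urandom(1000)    # no pattern that zlib can see

print(compressed_size(patterned))   # far smaller than 1000: demonstrably non-random
print(compressed_size(unknown))     # about 1000 or slightly more: says nothing about its origin

The second print tells you nothing about where those bytes came from, and that is exactly the gap Gitt's second sufficient condition papers over.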
There is a deeper problem with Gitt's assertions, though, and that is that they are simply assertions. Gitt describes his principles as "empirical", yet he provides no data to back this up. Similarly, he proposes fourteen "theorems", yet never demonstrates them. Shannon, in contrast, supplies the mathematics to back up his theorems. It is difficult to see how Gitt's "empirical principles" and "theorems" are anything but arbitrary assertions. Nor do we see a working measure for meaning (a still-unsolved problem that Shannon himself deliberately set aside). Since Gitt cannot define meaning precisely enough to measure it, his ideas don't amount to much more than arm-waving.
By asserting that data must have an intelligent source to be considered information, and by assuming DNA sequences are information fitting that definition, Gitt defines into existence an intelligent source for DNA without going to the trouble of checking whether one was actually there. This is circular reasoning. If we use a semantic definition for information, we cannot assume that data found in nature is information. We cannot know a priori that it had an intelligent source. We cannot make the data have semantic meaning or intelligent purpose by simply defining it so.
And you're right, by the way: I like arguing about this sort of stuff. I did philosophy at uni, despite the terrible career prospects, precisely because I like arguing about this stuff. My enjoyment of the argument has no bearing, however, on whether my argument is good or bad; my argument stands on its own, separate from me. But mostly I don't like to see people taken in by con men and sham artists who try to pass nonsense off as science to fit their preconceived ideas about reality, sell books, and fool anyone who doesn't take the time to sit down and really deconstruct their claims. The book you got your argument from is flawed. If you can fix and improve on Gitt's work then go right ahead and do so, but please don't believe something just because some guy wrote it in a book. Think for yourself!

