Also here, in what follows, knowledge of Aristotelian-Thomistic Metaphysics is presupposed. It can be found throughout the First Part of this Website, especially as to the notion of Substance (in the metaphysical sense). Also presupposed is knowledge of the theory of the subdivision of Reality into the Explicate and Implicate Orders, a theory so far developed in our noëtic theory of organic evolution, especially as it has been expounded in the two-document Theoretic Intermezzo after Part VIII of the present Part of the Website.
In the present document we will discuss the nature of "inorganic substance" (atoms, molecules, and crystals), based on the interpretation of the results of atomic theory, largely as expounded by HOENEN, 1947.
Introductory Remark by the writer of this Website.
In the present document the notion of Substance (in the metaphysical sense) will be discussed further. "Further", because all over our Website, especially the First and Fourth Parts (and, with respect to organisms, also the Fifth and present (Sixth) Parts), the notion of "Substance" was already central anyway. Central in the ensuing discussion will be the nature of inorganic Substance, namely atoms, molecules, and crystals (the "first instance" of "Substance" remains, of course, the organism). "Substance" is not a concept of natural science. There, where one speaks of "substance", one merely means some inorganic matter, for example a volume of some chemical compound. But because our discussion is mainly philosophical, we mean by "Substance" a chunk of matter or material that is intrinsically individual and constitutionally holistic. This means that we distinguish existing things as to their degree of individuality and holism, determining their way of being as "self-being". A Substance is an independent entity which is a source of original activity, delimited from, but nevertheless connected with, its environment. Most of these (especially holism, i.e. holistic unity, self-being, and original activity) are purely metaphysical notions, not relevant in natural science -- metaphysical, because they concern the "way of being" of things. Substances are relatively stable and constant qualitative patterns, multiplied over individuals of a species. By all these features Substances are distinguished from "aggregates" (of Substances). And indeed, in contrast to Greek philosophy, the transition of some given immaterial qualitative pattern from the Implicate Order into the Explicate Order, and thereby becoming material, means an increase of "eminence of way of being".
In that transition, that "projection", the qualitative pattern, the immaterial Form, is ontologically completed by letting that Form in-form Prime Matter, to become material and individual, and therefore temporal and spatial -- that is, to exist in the Explicate Order.
As we have expounded earlier, not all Forms are able to exist materially in the Explicate Order. Some, driven by the omnipresent "aspiration" of Forms to become ontologically complete, must, before they can be projected into the Explicate Order, develop (in the Implicate Order) into "strategies". Such a strategy is a prescription of how a given Form is actively to maintain existence in the Explicate Order and so to maintain its ontologically complete constitution : in the Explicate Order such Forms then appear as organismic species, organic Substances.
Other Forms (as immaterially present in the Implicate Order) do not need to be so developed. They can maintain themselves in the Explicate Order already as long as they remain thermodynamically stable or metastable, i.e. [they maintain themselves] as long as a number of individuals of them remain so stable. They do not have to guarantee their material existence in the Explicate Order actively, they are not "strategies", they are not organisms. These are the inorganic Substances, and we will deal with them in the sequel, by following the expositions in the last Chapter of HOENEN's "Philosophy of the Inorganic Nature" (written in Dutch, and translated, paraphrased and supplemented by me). While organisms are definitely Substances in the metaphysical sense, Hoenen is going to demonstrate that also atoms, molecules, and crystals, truly are such Substances. According to Hoenen any material Substance whatsoever is a "heterogeneous continuum". It is a continuum because it is not composed of material parts, which would turn it into a mere aggregate. From this it follows that the "particles" composing the Substance are only "virtually" existing in the (actually existing) Substance, they are not particles but "qualities" of that Substance. And so every Substance is a holistic unity. While Hoenen does not, in his book, consider organisms (which are for him the primary instances of Substance anyway), he, as has been said, considers also atoms, molecules and crystals to be true Substances as opposed to mere aggregates. He underpins this by showing that the description, in natural science, of them does not have to be altered when the constituent particles in them are viewed, not as actually existing particles anymore, but as only virtually existing particles, which become actual only upon leaving the Substance. So in the Substance they are in fact the Substance's qualities, i.e. qualities not of the constituent particles, but qualities of the Substance. 
So with all this, Hoenen maintains that natural science cannot decide upon the metaphysical nature of the constituent "particles", and in fact it doesn't need to. That they are virtual particles, and thus that the whole of which they are parts is a true, but heterogeneous, continuum, and thus a true Substance, Hoenen tries to demonstrate by the presence in such wholes of so-called "totality-resultants". Whether he succeeds in this endeavor we will see below. As to atoms, he has surely succeeded. They are true holistic entities. Generally, we can say that atoms and molecules are each of them constant qualitative patterns, indicating that they must be true Substances.
In all this, when speaking of "Substances", we must realize that the primary instance of "Substance" is an organism, an organismic individual. And this is a macroscopic entity. Also crystals are macroscopic entities, whereas atoms and molecules are truly microscopic entities. This is an important difference. While organisms and crystals can be described and characterized in thermodynamic terms (because they are supposed to be a very large collection of (virtual) particles) rendering macroscopic parameters such as temperature, pressure, color, etc. to be relevant to them, atoms and molecules cannot be so described. Maybe only macroscopic entities can be true Substances. Indeed, we must see whether the arguments of Hoenen for atoms and molecules to be Substances are sufficiently convincing. Anyway, in a crystal-analogy (crystals-organisms) only macroscopic Substances are relevant.
In Fourth Part of Website, part XXIX Sequel-35, Crystal-Analogy, we have extensively compared organisms with dendritic crystals, compared, that is, as to their ability to generate complex macroscopic forms and shapes, where this ability is a feature of macroscopic true Substances (in the metaphysical sense). Later, in the present Part of Website (Sixth Part) we shall expound the theory of discrete space : At the very bottom of explicate reality we have the "omnipresent universal network of quality-points". This is a kind of ultra-microscopic point-lattice determined by the geometrically minimum distance between every two points. And although we can be sure that organisms do not in any way possess a crystallographic lattice, an "organic lattice", they still may be compared with crystals (especially dendritic crystals) because of the presence of this ultra-microscopic point-net. But because this point-net is supposed to be present everywhere in explicate reality, we must assume that organisms "mobilize" this point-net in order for them to be able to generate complex macroscopic morphological structures. So this revives the "crystal-analogy" as it was developed in Fourth Part of Website, and the final conclusion is given in the document "Crystal-Analogy" accessible by the above LINK. And this summary of the crystal-analogy is at the same time a further characterization of "Substance" (in the metaphysical sense), so it would be instructive and interesting for the reader to consult it.
Let us now follow the expositions of HOENEN on inorganic Substance.
Explanation of multitude and change in Nature was, and is, the great problem and subject of all natural philosophy. Now, in the preceding documents we have seen that in Nature there are not only local motions, which themselves consist only in extrinsic change, namely change of place of bodies, but that in addition truly qualitative, and thus intrinsic, changes do occur. The mechanical view has admitted only the possibility of the first kind of change, local motion, [this view being] based on metaphysical and epistemological reasons.
[Here, HOENEN speaks of "accidental changes". The "accidental" here refers to some given Substance, i.e. it is accidental with respect to a given Substance. The change is accidental because it does not necessarily follow from "what the given Substance intrinsically is". On the contrary, a change is "substantial" (in the metaphysical sense) if it necessarily follows from "what the given Substance intrinsically is", or we should say, "necessarily follows from the nature of the given Substance", or, even better, "necessarily follows from the intrinsic qualitative content of a given Substance". This means that qualities can in principle either belong to the very essence of a given Substance, or (in other cases) belong only to its periphery (when "having such a quality" does not necessarily follow from the Substance's intrinsic nature, but is "accidental" or "extrinsic" with respect to it). This can also be said of quantities, relations, place, point in time, etc. The latter two are exclusively extrinsic determinations of a given Substance. And indeed, "change of place" of a given individual Substance, i.e. local motion, is a fine instance of "accidental change" : The moving body (that individual Substance) has its impetus (sustaining inertial motion) coming from outside that body, and its place (its spatial position) is extrinsic with respect to its nature. All changes of a given Substance of which [changes] the cause resides outside that Substance, i.e. does not belong to the intrinsic nature of that Substance, are "accidental changes". And so "deformation" is also a fine example of accidental or extrinsic change. And, further, say, "red iron" (iron taken as a Substance) gets its color as a result of a change of state (the glowing state) having an extrinsic cause (heating). But a copper sulphate crystal has its "blue" all from its very nature and only from its nature. It is an intrinsic quality of such a crystal. So although a quality, a quantity, a place, etc. 
is always a quality, a quantity, a place, "of something", we cannot, as is done in Aristotelian metaphysics, ontologically separate the quality, quantity, place, etc. from "that something" that has them, and then call the latter "Substance" and the former "accidents". It is better to leave aside the question of inherence, i.e. of "accidents" (qualities, etc) ontologically inhering in a Substance, or, a Substance "carrying its accidents" or determinations, because some "accidents" are not accidental at all, but belong to the very nature of the Substance, and therefore do not merely inhere in that Substance (a copper sulphate crystal not merely "has" blueness, but, among other things, "is" blueness, while, as to "blue", we in both cases (accidental or non-accidental) say that the crystal "is" blue, but also that (glowing) iron "is" red). So it is better not to distinguish and identify "Substance" by contrasting it with its "accidents" (its qualities, quantities, its place, etc.), but define it as a "being that is intrinsically one" (and thus not a mere aggregate), a being that is a constant (and thus repeatable) qualitative (s.l.) pattern, and having, when it is macroscopic, long-distance correlations of its parts, meaning that it is holistic. And every change of such a given Substance, such as deformation, local motion, weathering, etc., of which the cause does not belong to that Substance's intrinsic nature is "accidental change". And then, of course, every change having its cause in that nature is a "substantial change". And it is about the latter that HOENEN is going to speak.]
Generation and decay.
These substantial changes are "generation" and "decay" in the strict sense of these words. Daily experience -- and this Parmenides knew too, as did the mechanicists -- claims to find in Nature also this "generation" and "decay" : birth and death of living beings, people, animals, and plants. Even in inorganic Nature it sees them, when in a body or system of bodies a change is seen whose result remains even when the circumstances having caused it have been reversed to the previous ones. In the first pages of a textbook of chemistry one often mentions this -- usually in inexact terms -- as the characteristic mark distinguishing a chemical change from a physical one. Sometimes it is (rightly so) described as follows : in a physical phenomenon something on matter changes, while in a chemical phenomenon matter itself changes, transforms into another. Fairly acutely we find it expressed by Ostwald (1904) :
"Solche Aenderungen lassen sich in zwei grosse, wenn auch nicht scharf geschiedene Gruppen teilen. Sie betreffen entweder nur eine oder einige wenige Beziehungen und Eigenschaften des betrachteten Körpers, oder sie sind von durchgreifender Art, so dass der betrachtete Körper verschwindet und an seiner Stelle andere Körper mit anderen spezifischen Eigenschaften auftreten. Eigenschaften der ersten Art teilt man der Physik zu, solche der letzten Art der Chemie."
"Such changes may be divided into two large, but not sharply distinct, groups. These [changes] either concern one or merely just a few relations and properties of the given body, or they are more radical, resulting in the disappearance of that body and its replacement by other bodies with different specific properties. Properties of the first category belong to physics, while those of the second one belong to chemistry."
The chemical changes, as they are described here, are changes in the Substance itself of the bodies.
In the course of their expositions, writers of these textbooks (Ostwald, at the time in which he wrote these words, not yet ; later he did) usually interpreted this view of chemical transformations away again, precisely in the manner of the atomists Leucippus and Democritus. To these Greeks the generation and decay of inorganic bodies, even birth and death, was no more than mixing and demixing of atoms. And so "generation and decay" was degraded into mere appearance or human opinion, i.e. it is no longer a natural phenomenon. And not differently, classical 19th-century chemistry has reduced all chemical changes, initially described as substantial changes, to mere changes of position of unchanged atoms. And the reason was the same : the preconceived mechanicistic view of [at least inorganic] Nature, denying the possibility of intrinsic change, and thus certainly of substantial change.
And yet the metaphysical possibility of such changes was demonstrated by Aristotle. He detected the conditions that had to be satisfied by a Substance (in the metaphysical sense) in order for it to be changeable. It had to be composed of two essential principles : (1) the being-in-tendency (or inclination) (in the case of substantial change, this is the tendency toward substantial being, toward the "being-without-qualification", the "esse primum") and (2) the entitative factor realizing (actualizing) this tendency. In the present case, that of substantial change, this tendency is called "protè hylè", materia prima, Prime Matter. The actualizing factor is then called the "Substantial Form", the forma substantialis. [If indeed a Substance is in this way constituted by two ontological elements, prime matter and substantial form, then it can truly change, namely by having its substantial form replaced by another. If it, on the other hand, consisted of only one single metaphysical element, it would not be able to change, but at most vanish into nothing, and, perhaps, later pop up again out of nothing. So in this way "substantial change" is at least intelligible.] But this metaphysical possibility does not by itself imply the actual occurrence of substantial change. It surely does imply the demand, here as in accidental change, to critically analyze the classical [physical or chemical] theories. And here again the principle of elimination will serve us well, for it is not too risky to suppose that the mechanicistic principles had here too smuggled superfluous elements into the classical [physical or chemical] theories. The modern crisis of the mechanicistic view of Nature points to the mistakenness of these elements. When they now turn out to be superfluous in the classical theories, the essence of these theories, i.e. that which has value in explaining by nearest causes, can, after the elimination of the superfluous elements, be preserved.
Then also here the crisis does not ruin brilliant mind products.
The enquiry into classical theories, insofar as they contain a denial of substantial transformations in the inorganic domain, is the more urgent because these transformations are found in transitions between inorganic and organic Nature [mainly the assimilation of inorganic materials into a living body]. There is, in the latter, first of all the unity of a human being, constituted of matter and mind. And the unity of matter and mind is already evident from a simple analysis of the sensory workings of a human being -- indeed, the sensory act is one, and yet an act of mind and matter [together]. And then the generation of the human body from inorganic materials must be a substantial change of these materials. The same will hold for the animal kingdom, in which the only trouble for many is the animal psyche which is then needed. A problem that simply comes from a wrong view of the animal psyche, namely from an unnecessary and impossible substantiation of the material substantial form [i.e. viewing such a form, in animals and plants, as an ens -- as it is viewed by HOENEN, and others, in the case of man -- instead of viewing this form as an ens quo, that in virtue of which something is (an ens)]. A view against which Aristotle, and, even more penetratingly, Thomas Aquinas, had warned.
Metaphysical composition of inorganic Substances.
Our investigation is only about inorganic Nature. But we already have a first datum from the fact that there are inorganic materials which have been taken up into the unity of an organism, which materials thus become alive. What precisely from the inorganic material is taken up into the living being, and what precisely is the "matter" out of which the new Substance, the living being, "becomes"?
That can in no way be an ens (a being). The metaphysics of Parmenides (Greek philosopher from Elea in lower Italy, 540-(after)480 B.C.) has proven this point strictly, and all subsequent philosophies had to acknowledge it [ that Being, ens, cannot change, cannot become]. Precisely because of that reason atomism had interpreted this "becoming" away. [Although "becoming" is, apparently, observed, it cannot exist, because it is unintelligible, so it was held.] And therefore [in order to make this "becoming" intelligible] Aristotle had elaborated his theory of potency and act. That which goes over out of the initial material into the living being cannot be an ens, but only "being-in-tendency", which tendency then is fulfilled by the substantial form, and so realizes a particular living being. This is simply an application of the Aristotelian theory of "generation and decay" to this special case, the "becoming" of a living being.
[Here is not meant the "actual spontaneous generation of life from inorganic material" (at some place and time in the Earth's history). It is about the intake and assimilation of inorganic materials into a living being, and it is assimilation that embodies the substantial change of inorganic material into species-specific organic material (or of species-extraneous organic material into species-specific organic material). But this is in fact no more than a number of chemical transformations, chemical reactions, in which every new reaction product is the result of a substantial change. But HOENEN is not thinking precisely along these lines. According to him, a living organism is a single Substance (in the metaphysical sense) and therefore contains just one single substantial form. So every event of assimilation is a substantial change, a change where the substantial form of the ingested material is replaced by that organism's one substantial form. We can agree with all this only partially. The organism is a holistic entity, meaning that all its activities are ordered to the maintenance of the organism as a whole. Although we may express this fact by saying that an organism has "only one single substantial form", it is perhaps problematic to bring this form under the same category as the substantial form of every chemical compound, be it organic or inorganic. Certainly, ingested materials are partly assimilated by the organism and become integrated with the rest of the organic body. But whether a description of all this in terms of "substantial forms (in-forming Prime Matter) being replaced by one other substantial form" really enhances our understanding of the inorganic-organic transition, or merely confuses it, I'm not sure. Of course, because of the holistic constitution of every organism we may speak, despite the organism's complexity, of its "one single nature". But maybe such a "nature" is not a single form, i.e.
not a single entity, but a coherent pattern of a great many qualitative elements. These elements must ultimately be points, quality-points of discrete space. Such a quality-point has zero magnitude and represents the spatial position of a primitive quality (of which there are many), and only a large collection of such points will finally constitute complex qualities, the qualities we know. Quality-points are not particles, and neither are large groups of them particles : they are local patterns of quality-points. This view of mine, the view of explicate reality (the Explicate Order) as being at its very bottom a network of quality-points, will be worked out later. In a sense, it gets support from HOENEN's attempted demonstration that all alleged constituent "particles" of any Substance (in the metaphysical sense) whatsoever (atoms, molecules, crystals, and organisms) may in fact be not particles, but qualities of such a Substance. So in this view it is all about PATTERN, i.e. patterns of quality-points and groups of quality-points. So the intake and assimilation of materials by an organism is a matter of shifting qualitative patterns. And to speak of "substantial forms" may be confusing.
But let us continue following HOENEN's text.]
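The picture sketched in the preceding editorial note -- qualities as patterns of quality-points on a discrete point-net, and assimilation as the shifting of such patterns -- can be made concrete with a small toy model. This is only my own illustrative sketch, not part of Hoenen's text or of the theory itself: the two-dimensional setting and all names (`Pattern`, `lattice_points`, `MIN_DISTANCE`) are hypothetical simplifications.

```python
# Toy model (illustrative assumption, not Hoenen's): explicate reality as a
# discrete point-net, where a complex quality is nothing but a coherent
# pattern of quality-points, each carrying a primitive quality.

MIN_DISTANCE = 1  # the geometrically minimal distance between two points


def lattice_points(width, height):
    """A small rectangular patch of the omnipresent point-net."""
    return [(x * MIN_DISTANCE, y * MIN_DISTANCE)
            for x in range(width) for y in range(height)]


class Pattern:
    """A 'quality' modeled as a pattern of quality-points, not a particle:
    a mapping from lattice positions to primitive-quality labels."""

    def __init__(self, points):
        self.points = dict(points)  # position (x, y) -> primitive quality

    def shift(self, dx, dy):
        """The same qualitative pattern at a new place in the point-net
        (an accidental change of place, not a substantial change)."""
        return Pattern({(x + dx, y + dy): q
                        for (x, y), q in self.points.items()})

    def same_quality_as(self, other):
        """Two patterns embody the same complex quality iff one is a
        translate of the other: pattern, not position, is what counts."""
        if len(self.points) != len(other.points):
            return False
        (x0, y0), (x1, y1) = min(self.points), min(other.points)
        return self.shift(x1 - x0, y1 - y0).points == other.points


net = set(lattice_points(10, 10))
blue = Pattern({(0, 0): "B", (1, 0): "B", (0, 1): "B"})
moved = blue.shift(5, 3)
print(set(moved.points) <= net)     # the shifted pattern still lies on the net
print(blue.same_quality_as(moved))  # and embodies the same complex quality
```

The point of the sketch is only this: nothing particle-like is moved around; what "moves" under assimilation is a configuration of quality-bearing points, and identity of quality is identity of pattern.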
Now all this is also instructive for understanding things in the inorganic domain itself. The inorganic world, of course, contains Substances. And every one of these is one single ens in the strict primary sense. What as matter is actually taken up into the living being cannot be anything else than the ens-in-tendency, i.e. Prime Matter, from which it immediately follows that those inorganic Substances from which the living being originates must each be composed of (1) that same Prime Matter and (2) one single substantial form for each of those inorganic Substances. So only prime matter is taken up by the organism, not the substantial forms of those inorganic Substances [Later, these substantial forms of the ingested inorganic Substances are, by HOENEN, taken to be qualities of the one organism.]. What, then, happens with these forms? The last and deepest demand of the metaphysics of "generation and decay" is : the material forms "are and are not", and this without "generation and decay". Just as they, in the generation of the bodies whose forms they are, do not themselves become, so, in the disintegration of those bodies, they do not themselves decay. Ultimately the decisive metaphysical ground is this : because the material form is not ens (is not a being), cannot in the strict sense "be", it can also not be generated and also not decay, because these [generation and decay] are the pathways leading to and from "being" (i.e. generation and decay refer to beings, not to forms). Let us, to make all this clear, think of the well-known analogy of Aristotle. The answer to the question "what happens to the material forms when the bodies whose forms they are perish?" may read : the same as what happens with the spherical form of a sphere made of clay if we knead that same clay into an ellipsoid. It disappears without disintegrating, because it itself was not [an ens] when the sphere was there. Only through it [or as a result of it] was the clay a sphere.
One Prime Matter for all ponderable matter.
The inorganic materials [inorganic Substances] that can become part of living beings consist of a common prime matter and a substantial form of their own. However, not all inorganic materials, and also not all chemical elements, are assimilated by organisms. Nevertheless all of them must be constituted by the same prime matter, because it has nowadays become clear that all chemical elements, and consequently also all chemical compounds, can theoretically be transformed into one another. Let us again note : prime matter is not what in modern science is called "matter". The latter always is some Substance, while prime matter is merely substance-in-tendency; it is not ens (being) in the strict sense, it only becomes ens when realized by one or another substantial form. Therefore, prime matter cannot exist all by itself. It always is merely a part, the potential aspect, of a body, of which one or another substantial form is the other part, the actual aspect. So prime matter is also not the aether, or something like it. The aether too is a Substance; it exists all by itself. Whether in the aether there is the same prime matter, the same potential aspect, as in ponderable bodies cannot (yet) be decided upon, because no transition of aether into ponderable matter is known. Maybe such a transition will one day be discovered. However, it appears to us, for more than one reason, improbable. [Anyway, it could be that the prime matter of ponderable matter is able to individualize forms while that of the aether cannot.]
Aristotle's theory of Prime Matter thus provides a clear solution of the problem of the unity of a human being, constituted of matter and mind. Without the datum of Prime Matter -- itself not being an ens, but yet ens-in-tendency, which tendency can be fulfilled by the soul (psyche, as immaterial substantial form ["immaterial substantial form" is here clearly the product of meddling in the discussion of the alleged matter-mind duality of the human being]) -- there can be no explanation of that unity of a human being [The explanation of "mind", "soul", "reason", or "psyche" can, certainly since the conceptual investigations of HOFSTADTER, 1979, be given in terms of complexity levels in material structures : mind is a high-level phenomenon of the human body. On the level of neurons (brain cells) there is no mind or soul; mind or soul only appears at a much higher complexity level of the material brain.]. It is a powerful argument for the depth and correctness of Aristotle's [metaphysical] theory. [...] Birth and death are transitions between inorganic and organic Nature [and these are substantial changes]. [...]
In all this, it is still not decided whether substantial changes also occur in inorganic Nature as such. Indeed, as such there is not a shadow of proof for them to occur. It might certainly be possible that substantial change only occurs in the transition between the inorganic and the organic, but not in these domains taken apart [As to the organic domain, the theory of evolution might have something to say, because, according to the conventional version of this theory, organismic species transform into other species.]. Nevertheless it would not surprise us if substantial change were also encountered in the inorganic domain. The apparent existence of substantial changes in inorganic Nature will provide a hint of how to develop the general Aristotelian principles further; new experience (knowledge of new empirical facts) and new thinking will show us this further development.
What changes, then, should first of all be investigated? Above we heard Ostwald state that in chemical changes bodies [i.e. chemical entities such as chemical compounds] disappear to give place to other bodies with different specific properties. And such expressions, albeit less acutely formulated, are generally encountered in the introductions of chemical textbooks. The origin of chemical compounds from chemical elements, or the analysis into these elements, or also the transition of one chemical compound into another -- thus chemical changes -- see here what is described as substantial change. The relationship of the chemical compound with its elements is what will first call our attention.
It is true that, in the 19th century, it was held that the problem was already solved along the lines of the mechanical view of Nature. One held that chemistry had proven that the atoms of the elements remain actually existing in the molecule of the compound, that the compound was nothing else than an aggregate of its elements, that the structural formulae of chemistry even indicated the precise location, in the molecule, of the unchanged atoms. The reaction of the so-called energetism of Ostwald, Mach, Duhem, and others, who opposed this atomism, was only temporary, and rested on less solid ground anyway. In the first years of the 20th century this opposition was broken, so that Perrin (1914) could happily state : "La théorie atomique a triomphé" ["The atomic theory has triumphed"]. At this same time Ostwald (1909) announced his "conversion" to atomism. Even sceptical minds such as H. Poincaré (1913) were convinced : "les atomes ne sont plus une fiction commode. Il nous semble pour ainsi dire que nous les voyons, depuis que nous savons les compter" ["atoms are no longer a convenient fiction. It seems to us, so to speak, that we see them, now that we know how to count them"]. Not differently thought many philosophers, including scholastics (modern schoolmen) [= philosophers, like HOENEN himself, who adhere to the scholastic tradition of the Middle Ages.]. We will, when investigating the kinetic gas-theory, have opportunity and reason to return to these historical events. It was, it is true, not the atoms of Democritus, i.e. indivisible particles, that one had found, as Perrin and also Poincaré realized. [...]
The chemical atoms are not Democritic because they are themselves composite, and even composite to a high degree : this is the reason why the victory of atomic theory was not yet one of Democritus. The reason is valid, because the thus composed atom could, with respect to its constituents, be a new Substance, not an aggregate, but what we shall call a "totality". In that case the origin of a chemical atom from these [subatomic] components (or, the other way around, the disintegration into these components) would be a substantial change.
But all these scholars and philosophers nevertheless were of the opinion that the chemical compound could in the end only be an aggregate of those atoms. Our analysis will demonstrate that this thesis does not at all follow from the data. The possibility that the chemical compound is a new Substance, a new totality, remains open in the face of these data. So for quite another reason as well the chemical atoms are not Democritic atoms.
So chemical atoms are composed, and in the course of the 20th century one was able to study the components more meticulously. As last components we can mention the electron, the positron, and the neutron [where, on this view, positron and neutron together form the proton. Neutrons and protons are, respectively, the electrically neutral and positive nuclear particles making up the atom's nucleus (except for Hydrogen, which has only a proton as its nucleus), while the negatively charged electrons reside in the atom's periphery.]. And this is thus another problem that remains to be investigated in addition to that of the chemical compound. And even if the latter turns out not to be a new Substance, a new totality, with respect to the chemical atoms, the atom could still be one with respect to its components, from which it thus could originate only by substantial change. Precisely in this problem the modern crisis of the mechanistic view of Nature broke out. There it was not able to explain the atom as an aggregate. There we can expect the first data for a definitive critique.
In all these classical theories larger complexes, larger than atoms and molecules, were automatically viewed as aggregates, and not as substantial wholes, totalities. Also this we must take to be a problem and investigate the matter.
Before we begin the investigation of these problems, we must necessarily develop the general principles of Aristotle's theory a little further, specify them a little more. If we use only the most general principles, those of prime matter and form, we still know much too little of the capacities of this system of thought to be able to apply the critique in a scientifically and philosophically precise way. The first specifications we can find in observational data of a more qualitative nature. That is why we find them already worked out in Antiquity and the Middle Ages. Later we shall also use quantitative data for further specification. It will turn out that these [data] are simply a further elaboration of what was already found in the Middle Ages. Indeed, we already saw one such specification : the Aristotelian theory of natural minima.
For this subject -- the true whole and its elements -- see also in First Part of Website the document "the Mixtum and its Elements", directly accessible by the following LINK the Mixtum and its Elements.
Modes of being-potential.
All ponderable materials are, according to Aristotle's theory, constituted from the same prime matter. The one differs from the other in virtue of its substantial form. Prime matter is potency, and, in itself, pure potency. The substantial form realizes this potency, makes, out of the ens-in-tendency, an ens, and at the same time determines this ens to a specific kind [i.e. an ens having a specific qualitative (s.l.) content]. Because the same prime matter is in potency to all these substantial forms [meaning that prime matter itself has no qualitative (s.l.) content whatsoever, being pure substrate of form, and thus able to be in-formed by any (substantial) form.], one might think that it is such a substrate in the same way everywhere, i.e. as direct potency [meaning that it has everywhere and always the same degree of directness of being something else in potency.]. If, given the same degree of directness, we had two randomly chosen species of bodies, then an individual of the one species -- under the influence of a suitable agent -- could immediately transform into an individual of the other [species]. But this conclusion is premature : it does not follow from the notion of a same potency, a same tendency toward, or a same [kind of] "being-ordered to", different realizations. This is immediately clear from a series of other potencies. A child with bright reason is a scholar-in-potency, but this potency, this tendency, cannot immediately be realized : it demands a gradual development, starting [from the bottom up, i.e.] from the very elements of science (i.e. knowledge). A lump of bronze is a statue-in-potency, but the bronze must first be heated and molten before it can be cast. Cold water can be brought into the state of boiling, but it first has to go through all the intervening temperatures below 100 degrees. And so on.
All these potencies are thus orderings to different acts, but in realizing these acts there are grades or steps in a certain line or order [processes always go their way along certain determined trajectories]. Only when one stage has been reached is the ordering-to a direct potency to the next stage in this line. A given potency may thus be more or less remote from the realization whereto it is the potency.
Principle of distribution of influence between material and effective cause.
In all these and similar cases there is, of course, in addition to the potency -- which is a material cause -- also demanded the influence of an effective cause bringing about the realization. And one may also read off from this a principle regulating the relative influence of both causes : The further the potency-to-a-given-realization is removed from that very realization, the greater the influence of the effective cause that is demanded, and vice versa. This principle is immediately clear from the notion of both causes and from the datum of a given potency admitting of grades. One may call it : the principle of the inverse-correlative influence of material and effective cause as to the origin of the effect. Every effect must have its complete "ratio sufficiens" (sufficient ground) to render it intelligible. It finds it partly in one of these two causes, partly in the other. The more is on account of the one, the less on the other. So one could also call it : the principle of distribution of the ratio sufficiens.
Stages of potentiality of Prime Matter.
How do things stand with Prime Matter, which is ordered to ponderable materials? Is it in potency immediately to each of these materials, or do there also exist stages here, so that it is first directly ordered to (one or more) less completed, elementary, realizations, and only thereafter (in one way or in more ways) directly ordered to others? Apparently we cannot decide upon it a priori. We do see the abstract possibility, but not its necessity. So experience must provide further specification of our general notion. And a qualitative experience turned out to be already sufficient. Therefore, this specification was already found in Antiquity and the Middle Ages. St. Thomas repeatedly describes this gradation of the potentiality of prime matter, as it is present in different classes of bodies. For this reason Aristotle maintained the difference, among bodies, between elements and compound materials, which difference he had found in [the writings of] Empedocles. The quantitative methods of modern science will permit us to further develop this same specification, and to arrange the various materials into a natural system.
So there are, as simple observation already makes evident, bodies or materials that cannot be divided into specifically different ones, and there are others that can be so divided, being constituted from the former. The former are called elements, the others composed bodies or materials. In chemistry, the distinction, within composed bodies or materials, between mixtures (solutions) and chemical compounds is classical and well known. We shall call the composed bodies or materials by the ancient name "mixta", for the following reason : The living beings too are constituted from material elements, and are thus in this sense composed bodies as well, and will, as to their material aspect, share properties with inorganic composed bodies or materials. They, however, are neither mixtures nor chemical compounds, and yet they still are mixta.
And then we must divide the mixta into two groups. There exist mixta of which it is already evident in immediate experience that they are not, each one of them, one single substance [in the chemical and metaphysical sense], for instance a mixture of sand and sugar. There also exist mixta that turn out not to be single substances only after scientific investigation, for example homogeneous mixtures, such as air. All these are genuine aggregates, that is to say, the components remain substantially what they were before the mixing [i.e. they remain the same Substances (in the metaphysical sense, and also in the chemical sense)]. But there do exist mixta that truly are, each one of them, one single Substance, one single totality. These are first of all the living beings. The first group, the true aggregates, is, in scholasticism, distinguished, as mixta imperfecta or mixta ad sensum, from the second group, the mixta perfecta. In the sequel we shall, unless explicitly stated to the contrary, mean by "mixta" only the last group.
In our list [genuine mixtures, organisms] we did not mention chemical compounds. Is a chemical compound, and thus something constituted from elements, one single Substance, a totality, or, as classical atomic theory wanted them to be, an aggregate of more than one Substance? We cannot yet decide upon it. After all, it is among the problems we have to solve. But this is certain : if a chemical compound is specifically one single Substance [i.e., if a given chemical compound is a species of one single Substance (in the metaphysical sense)], then it is a mixtum [perfectum], a totality, constituted from elements, and analyzable into them. [Of course, every species of chemical compound exists, or may exist, in the form of free individuals, molecules, and when they do, these molecules are the true Substances, the individual Substances.].
Definition of Aristotle.
Above we said that a composed body or material is divisible into specifically different bodies or materials. In contrast, an element is a body that is not so divisible. We applied the famous definition of Aristotle : "This, then, is an element of bodies : [itself also] a body into which the other bodies are divided, a body which is (in those other bodies) potentially or actually -- this is not yet decided upon -- present. It itself is not divisible into specifically different bodies". This definition is in fact the same as was applied also in chemistry since Lavoisier, or indeed since Boyle, and it is perfect. Let us briefly indicate some points : It is about "elements of bodies". Elements can be found elsewhere too. Next, an element is itself a body. It is not prime matter, and also not a quality (as is sometimes thought). In the definition, Aristotle does not decide whether the "presence" is potential or actual. In this he is more careful than classical chemistry used to be, which presupposed the actual presence, whereby it was already decided that a chemical compound is a mere aggregate (even worse : that the same should then hold for the living mixtum). Later, Aristotle investigates the question and proves the potential presence. Further, the element is not divisible into specifically different parts [i.e. different from the element itself]. It can, of course, be divisible into parts which are specifically the same [as the element] and only numerically different. Also the addition that other bodies too can be divided into elements is not superfluous. After all, if those other bodies were not there, there would be no mixta, and the correlative word "element" would have no meaning.
Aristotle's definition being excellent, he failed in its application. His four elements : earth, water, air, fire, are not elements at all. That he was in error as a result of his poor observational data -- and the question : "what are the elements?" can, of course, only be answered on the basis of precise experience -- he cannot be blamed for. Didn't one take atmospheric nitrogen, until Ramsay, to be an element? [while in fact it is a molecule -- and thus a compound -- made up of two nitrogen atoms]. But one did blame him for not applying the definition purely positivistically, i.e. that he not only took materials not yet actually divided [analyzed] -- provisionally -- to be elements, but that he, departing from a hypothetical structure of his elements, with the help of opposed qualities (and other data), declared them to be definitive elements and placed them into a system. Lavoisier would [according to this accusation] have done better, being satisfied with provisional elements : an element is then a body which has not yet been divided [analyzed]. This reproach, too, is unreasonable. Classical chemistry, also, has never been entirely positivistic [where "positivistic" means : Something can be taken as actually existing only after it has been observed or measured.]. It, too, has attempted to construct a system of elements from other data, as, essentially, did Aristotle. And so Mendelejeff, with his system so constructed, was able to derive the existence of still unknown elements and their properties. And the discovery of Germanium, Gallium, and Scandium showed that he was right. This would have been impossible with positivistic methods. So Aristotle's error was the result of poor experience, not of lack of scientific spirit.
What are the elements?
We saw that Aristotle was wrong in the application of his good definition, i.e. in answering the question : which bodies, now, are elements? His error will surprise us less if we realize that 19th century chemistry too, with its rich, now also quantitative, experience, has, with the success of Mendelejeff's system, fallen into the same error : The chemical elements are no genuine elements. They also turn out to be composed materials. That chemists took them for so long to be genuine elements cannot be blamed on them. The fact that these elements could be grouped so well into a periodic, "natural" system seemed to be a wonderful confirmation of their elementary nature, just as Aristotle believed to find, in the structure attributed to his elements, a similar confirmation.
True elements will then have to be building blocks of those "chemical elements". They might be : the neutron (which, as we think, should not be taken as an intrinsic bond of proton and electron), the positron, and the electron. Yet we can, and should, still call the chemical elements more or less by this name. Therefore we say : they are relative elements. After all, even if they are true composites, and thus no absolute, no philosophically-so-taken elements, they nevertheless form a very characteristic group of composites, having a structure different from the chemical compounds that originate from them. With this, they -- not the philosophically-so-taken elements -- are the immediate building blocks of chemical compounds [while the philosophically-so-taken elements, i.e. the true, absolute, last elements, are only mediately the building blocks of chemical compounds]. And they reappear upon [chemical] analysis, i.e. precisely they, and no other chemical elements [meaning that chemical analysis will always retrieve the same elements that went into the constitution of the given chemical compound.]. And after having been so retrieved they have the same properties as they had before they went to compose the compound. So in a sense they behave as elements, otherwise one would not have taken them to be such in the light of so many precise observational data. So we can legitimately call them "relative elements". This name then indicates that they are not absolute, genuine elements, but also expresses the fact that they, relative to chemical compounds, behave as elements. In this sense we continue to call them elements. [Such elements are known as various metals, such as Iron, Potassium, Gold, Tin, etc., and further as substances (in the chemical sense) such as Oxygen, Sulphur, Carbon, Chlorine, etc.]
First experimental data.
So it is characteristic of a chemical element that it, in a certain sense, is conserved in composed materials. In a certain sense. When we confine ourselves exclusively to the direct data of experience, [it is conserved] in this sense : A composed material is analyzed into the same elements (and if we use quantitative methods they are also quantitatively so retrieved) from which it was composed. This does not depend on the mode of analysis, provided that, of course, the material is completely so analyzed (and provided that the so-obtained chemical elements are not themselves further analyzed). This holds for every composed material : for mixtures, for chemical compounds, for living beings. Certainly in this sense the elements are conserved. But the precise sense must be specified still further, for it is evidently already out of the question that these elements are conserved in precisely the same way in all three groups.
First further specification.
Let us first consider mixtures of elements [like iron-filings and sulfur powder at low temperatures]. In these, the "mixta imperfecta", the elements, the components, completely remain what they were before [they were mixed]. They are aggregates of these unchanged constituents. So there the components are conserved in the strict sense of the word. They are there actually, not potentially.
On the other hand, as to mixta perfecta, in those mixta that are living beings, the material components ["material" is here added, because, according to HOENEN, an organism, or at least a human being, also contains "immaterial components", such as the "soul". A thesis which we [JB] do not accept.], and thus also the elements, cannot be actually present. A living being, because it is one single Substance (in the metaphysical sense), cannot be an aggregate of Substances, it is a totality. It has the substantial unity proper to [every] ens. Were the components actually present, then that ens, that being, would at the same time be actually one and many, and this in the same respect, which is absurd.
Long REMARK on Substance (by the author of this website) :
[ Of course it is true that a single Substance cannot actually be many Substances in the same respect. So given that, say, a living being is one single Substance, its material components cannot themselves be Substances.
But all this becomes different if we define a "Substance" (still intending to define it metaphysically) as a "constant qualitative (s.l.) pattern, i.e. a more or less stable and repeatable qualitative pattern" and, moreover, do not impose a strict unity upon it -- thus no longer defining it metaphysically, but physically -- for then a given "Substance" may certainly consist of several different constituent "Substances". In addition to organisms, such high-level "Substances" composed of other (low-level) "Substances" can be molecules or crystals : They are composed of atoms, their lower-level "Substances". And these lower-level Substances are, still according to the physical definition of "Substance", not merely potentially, but actually present in the higher-level Substance. And the difference of the latter from a mere mixture is that its structure, its compositional pattern (of constituent lower-level Substances), is non-random. It is stable within a certain range of conditions, and is, therefore, repeatable and most often actually repeated. The constituent parts in such a higher-level Substance are not merely adjacent to one another but are bonded to each other in a specific way, resulting in a recognizable pattern. The higher-level Substance is a qualitative/quantitative material pattern representing an interim end-stage of a particular dynamical system ( This relationship between Substance and dynamical system is extensively worked out in First Part of Website ). But such a "Substance", defined in this way, is then not a Substance in the metaphysical sense anymore, but is still a definite and constant pattern, different from a mere random pattern (stable only within an almost infinitesimally small range of conditions, and therefore, in the real world, not exactly repeatable). And such a pattern contrasts with its environment by its constancy and its often elaborate structure.
It may, therefore, acquire, but doesn't need to, some properties attributed to it by the metaphysical definition of Substance.
Most important, however, is the status (virtual or actual) of the elements (components) of those Substances insofar as they are recognized as such by the physical definition. If we consider, for example, a galaxy (which is a giant system of often more than a hundred billion stars, and of which [galaxies] there are many), we see a system that takes up its shape and structure from (gravitational) forces, forces not from without, but from within the system. The galaxy certainly is an interim end-stage of a huge (gravitational) dynamical system. And its structure surely has a degree of constancy and stability. All these properties make a galaxy a true Substance (as physically defined). But we know that the constituents, stars, planets, etc., are actually, not merely virtually or potentially, present. After all, we know of our own galaxy (the Milky Way) that it contains actual suns, like our sun, and that it contains actual planets, like our own, the Earth, and above all, we know of the presence (also in our galaxy) of actually existing, undoubted Substances, like, first of all, ourselves, but also all other organisms, and all crystals.
So when a given material object satisfies the physical definition, roughly outlined above, of "Substance", it may consist of other Substances being actually present in it (At the highest structural level it is one (single entity), while at lower structural levels it is many such entities). But of course we can stipulate that the notion of "Substance" is a genuine metaphysical concept, a concept that cannot be physically defined, and, seeing it this way, a galaxy cannot be a Substance because it contains actually existing Substances. Metaphysically it is an aggregate.
So if we define, as HOENEN does, a "Substance" in a strict metaphysical way, then we characterize a true "Substance" in the, of course, metaphysical sense. And how, then, would such a metaphysical definition read? Well, such a definition is that of HOENEN (and all other scholastics), reading something like : A "Substance" is a full-fledged being, which is totally and intrinsically one (single being), it is a case of "self-being", a being all of its own, a being, entirely in virtue of itself, a being having a "self" (in whatever gradation). And while the aspects of "self" can hardly be further specified and explained as to what they are, the notion of "intrinsic unity" (of a Substance) is clear, and can be applied. And it is only from this "unity-by-definition" that HOENEN can say that the components of a given actually existing Substance cannot themselves be actually existing Substances. So stating that these components are merely potentially (virtually) present in the one Substance is a direct consequence of the metaphysical definition of "Substance". So these components are only virtual when one adheres to that metaphysical definition of "Substance". And to adhere to it means that it is believed that truly holistic structures (things) do exist. It is in such holistic things that the parts do not exist by themselves but connect to one another to be the whole. So in such a structure the whole is prior to its parts, which in fact means that the whole exists actually while its parts exist only virtually. And this further means that the parts of any Substance (being such a holistic whole by definition) exist only virtually. Lots of rethinking is demanded when we think of a human body of which all its components exist only virtually. And if molecules and crystals also satisfy the metaphysical definition of Substance, then their constituent atoms also exist in them only virtually. 
All these virtually existing components are then not things in themselves, and not things residing in the Substance, but qualities of that Substance of which they are components. All this is not easy to swallow, but if truly holistic things do exist, and there are many indications that they do, then the consequence about the metaphysical status of their components must be accepted. After all, if we would insist on exclusively the physical definition of "Substance" in terms of dynamical systems and intrinsic forces, then the gravitational or electrical attraction of even already two bodies, two Substances, would result in a new Substance (for instance as it is in the union of a chlorine atom with a hydrogen atom, resulting in a hydrogen chloride molecule). And this is hard to swallow too. So in following the ensuing discussion of HOENEN about Substance and elements, the most important question is whether true holistic entities do really exist. And while it is highly probable that organisms and also crystals are such holistic entities (Discussed in Fourth Part of Website ), and while it is also probable that free atoms also are holistic entities, it is not sure whether (small) molecules are such holistic entities as well. Hoenen assumes them to be so.]
[Continuing with Hoenen's text again] The components, and thus also the elements, do not exist there actually. And yet in that unity there is a multiplicity. After all, it may be divided [analyzed] into its components. However, this multiplicity is only a potential multiplicity, and, as we already saw when dealing with the continuum : actual unity and potential multiplicity do not exclude each other. So in living beings the elements will be present only potentially, not actually. Their existence there is, however, not purely potential, in this sense : All ponderable matter contains, as potency, the same prime matter. So in every living being there is the potency toward all elements [because they contain the same prime matter, which is potency to all content]. Yet, at death there appear only these particular elements in only these particular quantities. They are the same elements from which this living being was made up. That is what the simple data of experience mentioned above have taught us. The potency of the prime matter, as it is in that particular living being, is already determined to transform, at death, into these particular elements (or compounds) and not into others. And the other way around : This particular living being demands for its origin these particular elements in a determined proportion. It cannot directly be built from other elements. So we indeed find experimentally confirmed precisely that of which we had seen above the theoretical possibility : Prime matter too is -- like the other potencies we considered -- not directly in potency to all other [kinds of] bodies. Here too there are steps or grades in realizations, so that prime matter is in direct potency first to less perfect, more elementary or strictly elementary, realizations, and only then, i.e. through these, [in direct potency] to other, more complex, realizations.
Genetic connection and connection "by nature".
The composition of various mixta [mixta perfecta, that is] from various elements -- every specific mixtum has its specific composition -- first and foremost points to a genetic connection, and with it to an order, a natural system of substances (in the chemical sense). So there exist similarities (alongside differences) between the "natures" of the various mixta and their elements. Like one animal species is more closely related to another, more distantly related to a third, in the same way one species of chemical substance is more similar to another, is further removed from a third. "Nature" or "specific nature" is here what scholasticism so aptly calls principium (ultimum) intrinsecum agendi et patiendi, i.e. that intrinsic deepest in every ens (being), every Substance (in the metaphysical sense), from which its properties, active as well as passive, come forth. So these "natures" show mutual relationships allowing them to be arranged in one system expressing that natural connection. And that order can be read off from the composition of mixta out of elements, thus from the genetic connection. The same potency, the prime matter, present in a given "nature", is therefore a direct potency to those determined other "natures", and a more remote potency to others again. Precisely this is what the notion "virtuality" expresses. Then elements are "virtually conserved" in the mixtum, in this sense that a mixtum has a "nature" (always in the above described sense) similar to the "nature" of those elements, and only from them, from which it is genetically constituted, and into which it can be analyzed again. Different mixta, related to one another by a more or less direct genetic connection, mutually show more or less similarity as to their "nature".
Unity of substantial form.
[The "substantial form" should loosely be seen as the intrinsic qualitative/quantitative pattern representing some material object. It represents what precisely that object intrinsically is. It as such in-forms prime matter, which is its ontological substrate. It is the "form" of a Substance (in the metaphysical sense).] In explaining this "virtual conservation" of the elements yet another problem popped up in the Middle Ages, a purely metaphysical problem, which found in St. Thomas Aquinas a solution, although that solution met with severe dispute, also for a time after his death. The question was : Is it necessary, in order to explain metaphysically the virtual conservation of the elements in a mixtum, to assume that the substantial forms of the elements are conserved in the mixtum alongside or under the new form, that of the mixtum, or can there be only one single substantial form in one and the same Substance -- thus in the mixtum only that of the mixtum?
The Arabian aristotelici and the older scholastici, up to St. Thomas, almost unanimously assumed the actuality of those element-forms [i.e. that those element-forms remained actually present in the mixtum]. Almost unanimously, but with many deviations in the further explanations, which are, for that matter, often worse than obscure. One should note (this is often forgotten) : It is, strictly taken, not about the continued existence of the elements themselves, as Substances, in the mixtum, because then the mixtum would automatically no longer be one single Substance, but a mixture, an aggregate. And that was not intended by these philosophers. To save the substantial unity of the mixtum, the Substance of the elements could not be preserved in the mixtum. Nevertheless, one wanted to assume a certain actual continued existence of their substantial forms. So it is not surprising that the metaphysical hypotheses one had to set up to explain this had to perish as a result of desperate obscurity, or, better, of contradiction.
St. Thomas solved the matter, and demonstrated that necessarily in one single Substance only one single substantial form can actually reside, i.e. can actually be (of course, of accidental forms, such as quantities and qualities, there can be many in one single Substance) [Here, HOENEN, following Aristotelian-scholastic tradition, sharply distinguishes between (1) the Substance-proper, which is a true ens (a true being), an ens having (and its prime matter in fact being) a substantial form representing its intrinsic "whatness", and (2) further determinations of this Substance, which is their common substrate. These determinations are the so-called "accidents" or "accidental forms", where "accidental" does not refer to these forms being connected with the Substance merely coincidentally, merely "by accident", but rather refers to the alleged fact that such an accidental form (a quality or a quantity) is not a true, fully-fledged ens but a mere "secondary, accessory or auxiliary ens". The Substance ìs not such an accidental form, but merely has it, while it truly is its substantial form, not merely having it. Whether all this, i.e. the classical Substance-Accident distinction, does represent a truly fundamental and really existing distinction, i.e. an objectively existing difference between "being" and "auxiliary or accessory being", between "substantial form" and "accidental form", I am not so sure of. I think that the type of content of the "substantial form" and of an "accidental form" is the same. Both are qualitative (s.l., i.e. including quantitative proportions) patterns. And among the "accidental forms" of a given Substance there are some that necessarily arise from what that Substance intrinsically is. And then, of course, such "accidental forms" cannot ontologically be distinguished from the Substance's substantial form. Other "accidental forms", however, do not derive from the Substance's whatness, but partly come from outside that Substance.
So in considering a given Substance, as it is in itself, it would be best to exclude from it(s description) the extrinsic accidental forms (as one traditionally does), but to take together the "substantial form" and all those "accidental forms" that necessarily derive from it (all intrinsic "accidental" forms), into one single, or should we say complete, substantial form. And as such it then represents the complete intrinsic qualitative/quantitative content or pattern of that given Substance, a pattern that is truly complex, and therefore apparently many-fold, but nevertheless one, i.e. one single form.]
From the whole set-up of the theory of Aristotle, as it is necessary for understanding the transition of one Substance into another, for understanding "becoming and decaying", it is entirely clear that the substantial form not only has the task to give the substantial specification -- a specifically different form gives another species -- but that its more profound and first task is : rendering prime matter into an e n s, a true, primary, being. And indeed, in order to solve Parmenides' problem : "how can an ens come from an[other] ens?" the theory was first and foremost set up. So where there is in the prime matter -- not potentially, but actually -- already a substantial form, this [bit of] prime matter has already become ens, become a Substance. So if there were in this same bit of prime matter more than one of such forms, then there would be more than one Substance, and the mixtum would then substantially be actually one and many, which is absurd.
The same can also be formulated differently. Above we found that from inorganic materials the living being, one single mixtum, is constituted. And since it is, after Parmenides, clear that from ens no [other] ens can originate such that the first is the matter out of which the second can originate [i.e. an ens, if it is taken to be ontologically simple, cannot transform into another such ens, because that would simply be nothing else than the annihilation into nothingness of the one and the creation out of nothing of the other.], it is, precisely for that reason, in a transition from ens to new ens, exclusively the prime matter of the initial inorganic materials, and thus not the whole ens, that is transferred to the new ens. What is not transferred are precisely the substantial forms of those inorganic bodies, substantial forms having determined them as to what they intrinsically are [meaning that these forms have made that bit of prime matter into entia]. And so it is clear that the substantial forms of the elements cannot actually be present in the mixtum.
We now skip a few pages of HOENEN's text, because they have become obsolete, and add the following more up-to-date consideration as to the relationship between elements and mixta.
According to the above "it is clear that the substantial forms of the elements cannot actually be present in the mixtum". But they, as properties, certainly remain virtually present in the mixtum formed by these elements. And that means that some of them, or some 'parts' or aspects of them may become actual again in the mixtum. But then they are not properties of the elements anymore but of the mixtum. This is the "conservation of certain properties of the elements in the mixtum". Other properties of these elements remain virtual, i.e. remain to exist only virtually in the mixtum. The truly new properties of the mixtum, i.e. properties not in any way present in the elements, are formed as a result of interaction of existing properties of the elements.
All properties involved in the transformation of elements into a mixtum are essentially chemical properties, and these reside in the periphery of atoms and molecules. And, according to the theory of electron orbitals, it is those peripheries that interact with each other in all chemical reactions.
Earlier we [JB] spoke of the difference between (1) defining, and thus recognizing something to be, a "Substance" physically (namely as : a well-defined constant and repeatable qualitative/quantitative pattern formed by internal forces and thus being an interim end-state of some dynamical system), and (2) defining a "Substance" metaphysically (namely as : being, to begin with, a physically defined Substance, but moreover necessarily being a true holistic, and therefore absolute, unity implying "self-being").
When the notion of "Substance" is defined metaphysically, then such a Substance cannot in turn consist of [other] Substances, meaning that all the parts and elements of such a Substance, i.e. all its heterogeneity, are qualities, and qualities not of those elements but of that Substance (mixtum) itself. In such a Substance certain properties of the [originally free] elements are "conserved" in the Substance composed from these elements, but now they are properties (qualities) of that Substance. And this, precisely, is what could be meant by the expression "certain properties of the elements are virtually present in the Substance (mixtum)", or, perhaps by the expression "the elements exist virtually in the Substance (mixtum)".
Let us, in order to be able to handle things well in all that is to come (in the rest of this document and in the next), dig a little deeper into this "virtual existence". The virtuality of the elements in a given mixtum (Substance) consists of the following three facts :
If, on the other hand, we acknowledge "Substance" only as physically defined, then such a Substance is not an absolute unity, and thus its parts and elements are independently (and actually) existing in such a Substance, i.e. they are not mere qualities of it. And such a physical Substance is nevertheless not a mere aggregate of its elements, because it is formed by intrinsic forces, it is a product of internal dynamics.
In fact all this is about the question of how "Substance" should be defined. And when we then finally have settled this question, then a given material thing is -- depending on whether or not it satisfies the agreed-upon definition of something being a Substance -- either an aggregate (a mixture) (of elements) or a true Substance, and let us thereby assume we have agreed upon defining "Substance" exclusively metaphysically.
However, things turn out not to be so easy : While it is clear that free atoms, crystals, and organisms are true Substances, because they satisfy the agreed-upon metaphysical definition, free molecules appear not to satisfy this definition and must therefore be mere aggregates (of atoms). But this is a dilemma, because we know for sure that there exists a fundamental difference between (1) a mixture (and thus an aggregate) of chemical elements and (2) a chemical compound of these same elements. So if a molecule (which is the minimum particle of a chemical compound) is neither a metaphysical Substance nor a mere aggregate, what is it? Well, it may be reasonable to assess the free molecule (i.e. any free molecule) to be a physical Substance, and only a physical Substance : While the molecule satisfies some necessary but not sufficient conditions for being a true metaphysical Substance, it does not satisfy the remaining conditions for being such a Substance, and we may then say that it is a physical Substance. Whether molecules in fact do satisfy these definitions, and thus whether molecules are metaphysical or merely physical Substances, is a question that will be settled in the sequel, in following HOENEN's text and commenting upon it. As to (free) molecules, things will depend on whether a free a t o m is a true holistic entity. And the demand to impose quantum-conditions upon the structure of a (stable) atom points in the direction of the atom being a holistic entity, because the constituents of an atom have turned out to be ordered to the whole atom itself, rendering the atom not merely a physical Substance but, over and above that, a metaphysical Substance, a totality where the whole is (ontologically) prior to its parts. If this is correct, then also free molecules will be true holistic entities, and thus metaphysical Substances, because the structure of molecules, too, demands quantum-conditions. 
And this will lead to the conclusion that all physical (and only physical) Substances are macroscopic bodies, because their structure does not demand quantum-conditions.
As to an organism, although there is exchange of matter and energy with the environment and therefore the organism seems to be a mere aggregate, the exchange is performed a c t i v e l y by the organism itself. So here all the internal processes and the structures making possible these processes evidently are ordered to the organism as a whole, in order for it to survive and keep on existing. Together these processes and structures form the organism's strategy-to-exist. So the organism is a true metaphysical Substance, despite the fact that its constitution from molecules, or at least from certain supra-molecular complexes, does not demand quantum-conditions. There exist certain inorganic structures (patterns) that originate as a result of internal forces, such as the well-known Belousov-Zhabotinsky reaction and the Bénard instability. Nevertheless, they are not true Substances, neither physical, nor metaphysical, because, despite the intrinsic nature of these structures they are not concrete individuals, as is demanded for any material thing to be a Substance.
As has been said, all this will be further analyzed and discussed in following HOENEN's text (on inorganic Substances) and commenting upon it, based on new ideas and more up-to-date data.
[Numerical] Unity of substantial form and heterogeneity.
We have demonstrated above that in one given Substance, and so also in one given mixtum, there can only be one single substantial form [that we [JB] take to be a constant, well-defined qualitative/quantitative pattern], directly "informing" prime matter. Yet prime matter is, as we saw, not directly in potency to all substantial forms, but directly so to those of elements, and only thereafter to the mixta that originate from these elements (and even only from these mixta to more complex mixta). This genetic connection points to a more fundamental systemic connection-by-nature, and this "natural connection" also expresses itself in the conservation of properties of elements alongside the new properties of the mixta. The one-ness of the Substantial form does not imply that there is no gradation in the potentiality of prime matter. Its realization demands steps, as do the other potencies we mentioned. This possibility of gradation in the potency of prime matter we could derive a priori, its fact only experience could teach us. This in contrast to geometric forms in which we had this insight a priori.
The [numerical] unity of the substantial form [in one and the same Substance] has seduced some to another conclusion which was just as premature as was the one that wanted to derive from that unity the equality of the potentiality of prime matter with respect to all forms. One, namely, supposed that the one-ness of substantial form would imply : homogeneity of properties of a given extensum [a spatially extended being or thing] which is one single Substance. But we already earlier saw that oneness of Substance does not exclude multiplicity of accidents, i.e. of qualities [If we do not distinguish "auxiliary beings", "accidents", such as qualities, from the (one) "substantial form" of the (given) Substance, but see the Substance as a well-defined constant qualitative/quantitative coherent pattern (which may be supplemented by extrinsic qualities and other determinations), we may say that in one and the same Substance there is one single such coherent pattern, which nevertheless, because it is a pattern, is, in some sense, a multiplicity.]. We also saw that the intrinsic oneness of the extensive Substance does not prohibit a diversity of qualities in different parts of that Substance [So such a Substance may well be qualitatively spotted all over its body], and that thus heterogeneity in one single given [metaphysically defined] continuum is possible. The numerical unity of the substantial form is demanded for the unity in Substance. Also this does not demand the homogeneity of the body. And thus that conclusion indeed was premature.
So the possibility of a heterogeneous Substance (in the metaphysical sense), especially of a mixtum, necessarily follows from these principles, not yet its factualness. This must be decided upon with the help of experience. But this much already follows with certainty : if experience lets us encounter an extensum that is heterogeneous, then from this alone it cannot follow that this extensum is not intrinsically one, not a [metaphysically defined] continuum. Indeed, the possibility of a heterogeneous continuum is certain. And we also saw that this holds for all sorts of heterogeneity, not only for temporal, but also for permanent, not only for continuously flowing, but also for abrupt heterogeneity.
That there exists in mixta a changing, temporal, heterogeneity, elementary experience teaches us. But this kind of heterogeneity is not important for the present discussion [It refers to the extrinsic changes a Substance undergoes all the time.]. Also not all permanent heterogeneity is important to us in this respect, such as a scar on a living being. Important, however, is the question : Does there exist a permanent heterogeneity in physical continua which is characteristic of the species, which thus is a true property (proprium)? According to the above, the possibility is certain, whereas the factualness must be shown in experience.
This heterogeneity is for this reason important because it may be connected with the conservation of properties of elements in a mixtum. It therefore may provide a further specification of this conservation and, as a result, of the relationships between elements and mixta. It will, in the full sense of the word, have to be called a "structure", i.e. "specific heterogeneity" is a structure.
Heterogeneity in living beings.
Already a simple observation confirms this expectation. A living being with its one single substantial form is an example [This, of course, always under the assumption that a living being is a Substance in the metaphysical sense, and therefore that a living being is a holistic unity, a metaphysical continuum]. There we find heterogeneity in the one single ens, also permanent, also specific heterogeneity. Not all heterogeneity is permanent in such Substances, not all permanent heterogeneity is specific. There is also individual heterogeneity, but the existence of specific heterogeneity there is certain. Therefore, it was the classical example in the Middle Ages.
And again, simple experience teaches us that this heterogeneity connects with the conservation of properties of the elements, here, in this simple experience : of the components. The disintegration of a living being is the "becoming" of the material remains. The latter consists of an agglomerate of chemical compounds, the direct components of the mixtum which was the living being. Well, these components do show in all their parts similarity of properties with the corresponding parts of the living body from whose disintegration they "become".
So we find, in the living body, a set of properties heterogeneously scattered over the whole, a heterogeneous structure. And the living body has such a structure precisely because these properties, found there in the body, correspond to the various properties of the different components preserved in it. These are, to a simple observation -- here still considered to be the only observation -- external properties, macroscopically observable. The result will remain the same when derived from microscopic and scientific observation (and from still deeper analysis, as we shall see). Conservation of external properties points to a conservation of internal, intrinsic, fundamental properties. The similarity in properties between the parts of the body and its components originates from the conservation of qualities of the latter. Heterogeneity originates from the just discovered fact that the properties of the components do not become melted together into one homogeneous quality. And this heterogeneity is for many -- not all -- qualities [present in a mixtum] specific, and thus is present in all individuals of a species.
Heterogeneity of chemical compounds.
[HOENEN, here, is still preparing his argument in favor of molecules [and also atoms and crystals] being true Substances, totalities. Every species of molecule is then shown to represent a new Substance (mixtum, totality) with respect to its composing atoms, which themselves are, when they are free, true Substances too. The argument will take place in the long Section on atomic theory, further down. There, atoms, molecules, and crystals will be shown to be totalities, because at least several of their properties turn out to be not (merely) aggregation-resultants (i.e. properties that are, it is true, new with respect to those of the elements, but nevertheless directly follow from the aggregation of these elements), but in fact are "totality-resultants", i.e. properties not at all of, or coming from, the elements (properties that cannot be derived or computed from these elements), but properties exclusively of the mixtum itself. All this connects with the necessity of introducing quantum-conditions in all covalent chemical bonds (as expounded in the theory of orbitals [First Part of Website : "The Chemical Bond"] ). How things are in this respect [whether giving totalities or not] with ionic chemical bonds I do not know. It might be that also in an ionic molecule, like sodium chloride (NaCl, taken as a free molecule in the vapor), quantum-conditions are demanded in the ionic bonding, because such molecules are not simply the result of the attraction of initially two oppositely charged particles (atoms) : These particles are initially neutral, but then one particle donates one [or more] electron to the other particle, resulting in two oppositely charged particles which then, subsequently, attract each other, forming a molecule, an ionic molecule.
HOENEN rightly says that if a given thing is a totality, a true Substance, then its elements are present in it only virtually, meaning that they are not present in it as individual entities. And in order to demonstrate this for the case of free molecules of chemical compounds he struggles with the demonstrated fact that in the molecule there are some properties that are in fact also present in the corresponding free atoms, properties such as those that show up in the X-ray spectrum of molecules, also in those cases where it is certain that these molecules have not been disintegrated by the action of the cathode rays bombarding the molecule and thereby inducing its X-ray emission. For some, this phenomenon of the conservation of properties of the free atoms, later forming the molecule, points to the actual existence of individual atoms (changed a bit in their periphery) in the molecule. Such conservations are explained by HOENEN by the phenomenon of "natural kinship" between the compound and its corresponding elements. So he then concludes from this natural kinship that also in the case where molecules are, each for themselves, real totalities, true Substances (in the metaphysical sense), it is to be expected that certain properties of the corresponding free atoms are conserved in the molecule. But this is a rather vague reasoning, and if we do not see it as conclusive, then for a molecule to be a true totality the only evidence that remains is the quantum-conditions that have to be introduced (and that are themselves evident from the discontinuous nature of the spectrum) in at least molecules containing covalent chemical bonding. What the necessary introduction of quantum-conditions certainly means is that it definitely points to a non-mechanical nature of a molecule (and also of an atom), but that might not, all by itself, decide whether the molecule (and the atom) is a true totality.
So if we (1) do not accept to reduce the conservation of (some) atomic properties in the molecule to be the result of mere "natural kinship" between elements and mixtum, but see it as possible evidence of the actual existence of atoms in the molecule, and (2) if we hold that quantum-conditions in molecules do not necessarily point to these molecules to be true totalities (but only could point to them to be so), then it is still possible that a molecule is not a true totality, not an intrinsic unity, not a Substance in the metaphysical sense. But we know that a molecule is certainly not a mere aggregate of its elements, so we should then conclude that a molecule is a physical, not a metaphysical Substance. It then is a constant qualitative/quantitative well-defined pattern of several actually existing atoms (with their own intrinsic pattern) together making up the molecule.
So we can expect, also after having followed all arguments of HOENEN, that the question whether molecules are true totalities or physically-defined "Substances" cannot yet be decided upon. We should note, for instance, that it is especially troublesome to assess small ionic molecules, like free NaCl, as true, metaphysically-defined, Substances.
But, of course we may simply stipulate that the "conserved properties of atoms" in the molecule are now -- in the molecule -- not properties of the atoms anymore but properties of the molecule, of the mixtum. And this means that the atoms do not actually exist in the molecule, but only virtually so. And so we then speak of the v i r t u a l conservation of properties of the atoms, virtual conservation, because nothing is in fact conserved. And we may further speculate that a thing that is proven to be not-mechanical (as a result of the quantum-conditions) is necessarily holistic and therefore a true Substance in the metaphysical sense. And so, a molecule may be a true metaphysical Substance after all.
It is good for the reader to follow HOENEN's arguments in the light of what has just been discussed. And maybe all this inspires him or her to investigate the matter still further.]
See here, what the medievals -- because here a simple observation is sufficient -- knew well, and what they knew to perfectly explain in the theory of the virtual conservation of elements (or components) and to combine with the theory of the [numerical] unity of the substantial form [one Substance - one substantial form].
In the inorganic mixta, however, they did not assume specific heterogeneity, while surely assuming heterogeneity by accident, individual heterogeneity. Where St Thomas proves the p o s s i b i l i t y of heterogeneity in one continuous body, the proof is general (and thus also holds for inorganic mixta), and his example is that of a non-living body and an abrupt heterogeneity. But where it is about specific heterogeneity, which is thus not only permanent but also characteristic of the species, he only admits it to be possessed by living mixta, not by inorganic mixta. Not by reason of theoretical objections -- the proof of the possibility of heterogeneity is general -- but only because experience seemed to demonstrate specific homogeneity of inorganic mixta. Even a much more sophisticated experience than that of the Middle Ages seemed to demonstrate the same. Indeed, microscopic and even ultramicroscopic methods still do not succeed in demonstrating this heterogeneity in chemical compounds [Today -- end of the 20th, beginning of the 21st century -- it is, for some cases, more or less possible to visually demonstrate this heterogeneity, but, I presume, these methods do involve some theoretical presuppositions as to the constitution of matter]. For this the powerful tools of X-ray analysis are needed. The same [absence of specific heterogeneity] is taught, as we will see, by the chemical theories. In the Middle Ages one even had problems with the supposed homogeneity of [chemical] mixta in connection with the conservation of properties. One found the following explanation : The various fundamental properties of the elements [whatever these elements were supposed to be] were pairs of opposite qualities, such as warm and cold. When, now, the elements enter into a compound [i.e. go to make up a compound], they, as to their oppositions, first interact with each other. 
This, then, must proceed until there has originated in the whole mass one single homogeneous quality lying between both opposites. Only then does there exist the "own disposition" for the mixtum. And this quality then, after the formation of the mixtum, remains (not individually, but as to its intensity, anywhere on the body of the mixtum). Experience, with its alleged demand of homogeneity, seemed to force one to such an explanation. Otherwise it would have been much easier to assume that this interaction of elements could surely cause a change in their properties, without that change necessarily having to end up in homogeneity. Only as a result of insufficiently sophisticated observation did the scholastics not assume specific heterogeneity in inorganic mixta.
Specific heterogeneity possible.
That this specific heterogeneity is, on scholastic principles as they have been worked out in the Middle Ages, possible or even to be expected, will now be clear.
The possibility of the existence of bodies relating to one another as elements and (more or less complex) mixta can already be deduced, without more, from the principles of potency (prime matter) and its realizations, because a potency admits steps of realization [the prime matter of the mixtum is in near potency to its elements, and only from these to the mixtum. And thus there are steps leading from elements, through complexification, to the (complex) mixtum.]. Observation then teaches us about the factualness of this relationship and the genetic connection between these bodies. The order that becomes evident from this genetic connection is an order (a system) that comes forth from grades of similarity and difference between the "natures" of these bodies [the elements on the one hand, the mixtum on the other]. But because the "nature" is the origin of active and passive properties (in the strict, specific sense), the relations between the bodies must also express themselves in similarity and difference of these properties. ( This is precisely what we directly observe. The "nature" is, as Substance, not directly observable [Here HOENEN sharply distinguishes between the "nature", the "substantial form", of a given Substance, and the "properties" of such a nature. We also did this in First Part of Website. However, now we think that there is no need to do so : Every Substance has its intrinsic "whatness" or "identity", and this is the (intrinsic and constant) qualitative/quantitative pattern, or, if one wants to call it like that, the Substance's "substantial form". Excluded from it are only all extrinsic qualities, qualities, that is, that come from outside and depend on here-and-now conditions.] ). So an observable genetic connection [of natures] must go parallel with an observable similarity of properties (in the strict, specific sense). And thus a mixtum is expected to show, alongside differences, similarities in properties with its elements. 
Among the properties of the elements there must be those that (in a certain degree, or entirely) must be "conserved" in the mixtum. Genetic connection demands this similarity, that "conservation" : This necessarily follows, given the fact of genetic connection as it is present between a mixtum and its elements.
But in the genesis of a mixtum from its elements these, as to their active and passive properties, must interact with one another until a "disposition" originates which is "proper" to the mixtum. Is it, now, a priori necessary that the result of this interaction is a same disposition in the various elements going to bond with one another [i.e. a disposition commonly possessed by the elements]? There is no reason at all for this. The theoretical possibility of a heterogeneous disposition is without doubt plausible. Heterogeneity may even be expected. When this possibility is realized, then the necessary conservation of properties will go together with heterogeneity. A heterogeneous mixtum will be the result of the genesis. Heterogeneous in the sense that one part of the mixtum has properties which recall, or are even equal to, the properties of the first element, while other parts so relate to those of the other elements. There will be a heterogeneous structure, corresponding to element-properties. And where it is about properties in the strict sense, i.e. about specific [intrinsic] properties, the resulting heterogeneity of the inorganic mixtum will be a specific one. By this, the possibility of this heterogeneity is demonstrated purely from Aristotelian principles : the possibility, and even (after specification of the virtual conservation of elements, in which experience already plays a part) the probable expectation, of this heterogeneity.
Also here, we do not yet assert that the chemical compound is a true mixtum, one Substance. But surely this is clear : if it is one, then it may, or probably will, be specifically heterogeneous. So when we, as a result of more sophisticated experience, discover heterogeneity in a chemical compound, corresponding to the different properties of the chemical elements, then this alone cannot form a reason to conclude that the compound is not a true mixtum.
Principle of heterogeneity.
We may call this specification of the Aristotelian principles of prime matter and one substantial form the "principle of the heterogeneous virtual conservation of properties", or simply the "principle of heterogeneity". [According to me this is not entirely correctly stated : The elements, not (some of) their properties, are virtually conserved, while a number of (intrinsic) properties are actually conserved in the mixtum (mixtum perfectum). But then also the "natures" of these elements must be actually present in the mixtum, because actual intrinsic properties must come forth from actual "natures". Yet, upon the formation of the mixtum, they are no longer (conserved) properties of the elements, but now properties of the mixtum.]. Later we will connect this with yet another specification of those same principles, a specification we have met earlier : the natural minima. These united specifications will prove very useful in the now following analysis of atomic theory.
Before discussing atomic theory, we must realize that it is all about the c o n s t i t u t i o n of chemical compounds (molecules), not about the actual chemical process (reaction) in which a chemical compound is actually formed from its chemical elements. Indeed, many chemical compounds cannot directly result from a reaction between their chemical elements, but only from other compounds. But also in this latter case we have to do with chemical reactions. Chemical reactions consist in the formation or breaking of chemical bonds between atoms. And although we are, here, concerned only with the actual (static) constitution of chemical compounds (out of their chemical elements), and therefore with the metaphysical status of molecules (i.e. the question of their being Substances or aggregates), it may be instructive to know what, essentially, is the driving force of all chemical reactions (including crystallization). Why do they take place at all, and what determines their direction?
To answer this, we must resort to the science of thermodynamics. Thermodynamics is the fundamental science of transformations, i.e. of any transformation, whether chemical (chemical reactions) or physical (state transitions). With thermodynamics we can in principle predict and understand whether a transformation is feasible and in what direction it will go. This has already been shown at length in Fourth Part of Website, but to explain it here concisely and in general terms we shall use the excellent and clear exposition of Ph. BALL in his wonderful book Designing the Molecular World, CHEMISTRY AT THE FRONTIER, 1994, p.57-58 :
[As to what determines the feasibility and direction of transformations] there is a universal answer which is embodied in the so-called Second Law of thermodynamics ( What, you might ask, is the First Law? It is that principle which is commonly known as the "conservation of energy" -- energy is never destroyed, but only transformed from one kind into another). The Second Law states that all realizable transformations are accompanied by an increase in the total amount of entropy in the Universe (Strictly speaking, it says that the entropy cannot decrease. There is a class of transformations -- those that can be reversed exactly -- for which the entropy content of the Universe can remain unchanged.).
These days, entropy is a term not uncommon in everyday parlance, but it has acquired a certain air of mystery. There is, however, nothing very mysterious about it at all. It can be regarded as a measure of disorder -- a pile of bricks, for instance, has more entropy than a house. Similarly, a liquid has a greater entropy than a crystal, since in the former the molecules tumble about in disarray while in the latter they are stacked in an orderly, regular pattern. The Second Law is therefore saying that the Universe is bound to become ever more disorderly. This too can appear to be a very recondite and mysterious statement, but in fact it is saying nothing more than that things tend to happen in the most probable way : there is simply a greater probability that things will become disordered than the reverse. The Second Law is therefore actually a statistical law, which does not prohibit absolutely the possibility of a change that induces a decrease in entropy, but says only that such a change is overwhelmingly unlikely when we are considering huge numbers of molecules [because these admit of many different configurations, and most of them are disordered].
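[To make Ball's statistical reading of the Second Law concrete, a small count may help. The following sketch -- our own illustration, not Ball's -- counts, in Python, the arrangements of N molecules each of which can be in one of two states :]

```python
from math import comb

# "Things tend to happen in the most probable way" : for N molecules each in
# one of two states, the evenly mixed ("disordered") arrangements vastly
# outnumber the single fully ordered one. A toy count for N = 100 :
N = 100
ordered = 1                 # exactly one way to have every molecule in state A
mixed = comb(N, N // 2)     # number of ways to realize a 50/50 split

print(f"50/50 arrangements outnumber the ordered one by a factor {mixed:.3e}")
```

[For a mere hundred molecules the mixed arrangements already outnumber the ordered one by a factor of about 10^29 ; for the molecule numbers of real bulk matter the disproportion becomes unimaginably larger, which is all the "mystery" there is to entropy's increase.]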
Uphill or downhill?
Although the Second Law of thermodynamics provides a universal arrow for specifying the direction in which change, chemical or otherwise, will occur, it is not actually of very much practical use to chemists. The problem is that the Second Law considers only the entropy of the entire Universe, which, as you might imagine, is not an easy thing to measure. In order to predict which way a chemical reaction will go, we need to know not just how the entropy of the reactants differs from that of the products, but also how the heat given off (or consumed) changes the entropy of the surroundings. How heat produced in a reaction changes the surroundings is hard to establish in detail -- it will depend on the nature of the surroundings themselves. But fortunately we do not need to worry about these details -- the entropic effect of heat dished out to the surroundings depends just on how much of this heat there is. If the loss or gain of heat by the chemical system is accompanied by a change in volume (if a gas is given off, for example), this also has an effect on the entropy of the surroundings. When there is a volume change of this sort, the chemical system is said to do work on the surroundings (this work can be harnessed, for example, by allowing the change in volume to drive a piston), and this work must also be taken into account in determining the total entropy change.
We can therefore determine the direction of a chemical change as specified by the Second Law on the basis of just the change in entropy of the reactants, the amount of heat consumed or evolved, and the work done on the surroundings. All of these can in principle be measured. Willard Gibbs expressed the directionality criterion in terms of a quantity called the Gibbs free energy, which quantifies the net effect of these various contributions on the total change in entropy during the transformation. The Gibbs free energy represents the balance [in the bookkeeping sense] between the change in entropy of the system and the change in entropy of the surroundings. The latter is represented by a quantity called the enthalpy, which is the sum of the heat change (due largely to the making and breaking of chemical bonds) and the work done (due to a change in volume).
A chemical reaction is feasible if there is an overall increase in entropy of the system and its surroundings (the latter being an effective representation of the rest of the Universe). This means that, for example, if the products have less entropy than the reactants, this decrease must be more than balanced by an increase in entropy of the surroundings due to the heat given out or the work done via volume changes. This translates into the rule that the Gibbs free energy must decrease ( Strictly speaking, this is true only when the temperature and pressure of the system are held constant [In phase-transitions, i.e. transitions from solid to liquid or to vapor, and vice versa, the temperature and pressure are constant (if all has free play)]. Under different conditions, other kinds of free energy must be considered instead of that defined by Gibbs.). The change in Gibbs free energy therefore defines the "downhill" direction for the reaction [i.e. its spontaneous direction]. In the same way that a ball perched atop a hill will run down it, thereby reducing its potential energy (the value of which depends on the ball's height above the ground), a chemical reaction will tend to proceed in that direction in which it loses free energy.
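[Ball's "downhill" criterion may be illustrated by a small computation. The following sketch is our own illustration, using approximate textbook values for the formation of liquid water from hydrogen and oxygen (2H2 + O2 ==> 2H2O) ; it evaluates the Gibbs free energy change dG = dH - T·dS and tests its sign :]

```python
# A transformation at constant temperature and pressure is thermodynamically
# feasible ("downhill") when the Gibbs free energy change dG = dH - T*dS is
# negative. The figures below are approximate textbook values for
# 2 H2 + O2 -> 2 H2O (liquid), used purely for illustration.

def gibbs_change(delta_h, delta_s, temperature):
    """Return dG (J) from enthalpy change dH (J), entropy change dS (J/K),
    and absolute temperature T (K)."""
    return delta_h - temperature * delta_s

delta_h = -571_600.0   # J   : heat evolved (exothermic, hence negative)
delta_s = -326.3       # J/K : entropy of the system decreases (gas -> liquid)
t = 298.15             # K   : room temperature

dg = gibbs_change(delta_h, delta_s, t)
print(f"dG = {dg / 1000:.1f} kJ -> {'feasible' if dg < 0 else 'not feasible'}")
```

[Here the entropy of the system decreases (gas becomes liquid), yet the heat dished out to the surroundings more than compensates : dG comes out negative, and the reaction is indeed "downhill".]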
[...] [However, not every reaction will spontaneously take place when the above thermodynamic conditions are met. I mean that these thermodynamic conditions are still incompletely described] :
The vast majority of chemical reactions that are downhill processes turn out to be hindered by a barrier that prevents them from occurring, at least at any significant rate. What determines the feasibility of the reaction is the thermodynamics -- considerations of enthalpy, entropy, and free energy. But what hinders the reaction from proceeding is the so-called "kinetics" of the transformation. [...] The initial step of a reaction is generally an uphill process : energy must be supplied [to overcome the initial peak of free energy].
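[The kinetic barrier just mentioned is commonly quantified by the Arrhenius factor exp(-Ea/RT) -- a notion not named by Ball in this passage, and used here merely as our own illustration of why a thermodynamically "downhill" reaction may nevertheless proceed at a negligible rate :]

```python
import math

R = 8.314  # J/(mol*K), molar gas constant

def barrier_factor(e_a, t):
    """Fraction of molecular encounters energetic enough to surmount an
    activation barrier e_a (J/mol) at absolute temperature t (K)."""
    return math.exp(-e_a / (R * t))

# A hypothetical barrier of 75 kJ/mol : at room temperature almost no
# encounter clears it, which is why a "downhill" reaction can still stall.
for t in (298.0, 500.0):
    print(f"T = {t:.0f} K : factor = {barrier_factor(75_000.0, t):.1e}")
```

[At room temperature only about one encounter in 10^13 carries enough energy to pass such a barrier ; heating the system raises this fraction enormously, which is why supplying energy sets the "uphill" initial step going.]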
So here, then, were sketched the main thermodynamic settings that determine whether a chemical reaction (and any other macroscopic transformation) is possible in principle and in what direction it will go. Of course, first and foremost, i.e. before even thermodynamic conditions are going to play a role, the products of the reaction must formally be derivable from the reactants (tin and chlorine atoms will not combine to iron carbonate, or any other compound not containing tin and chlorine, even if these "products" have lower free energy as compared to the "reactants" tin and chlorine). Further, it is also known that many complex chemical compounds cannot directly originate from their corresponding elements, but only through a number of intermediate stages, because the product does not formally derive uniquely from the "reactants" (a larger, but defined, number of carbon, oxygen, nitrogen, and hydrogen atoms may formally combine in a large number of ways, resulting in different possible species of molecules). But apart from these formal demands, it is the thermodynamic conditions that determine the feasibility and direction of any chemical transformation. So thermodynamics can teach us about all (at least macroscopic) changes in the World. It determines what Substances do exist here and now, there and then. It describes the dynamics between volumes of Substances (be they volumes of atoms or molecules).
On the other hand, though, the discussion of the present document is about Substances as to their static aspects only. It enquires into the general nature of the constitution of material things, and decides upon whether these material things are aggregates, physically defined Substances, or metaphysically defined Substances. So it is about the formal, physical, and chemical relationships between the elements (among themselves) of some already presupposed total of them, and between these elements and that total. Or, said a bit differently, we here, following HOENEN, investigate already existing material things as to the nature of their composition. And in this no thermodynamics is involved.
The problem that we investigate concerns the constitution of inorganic materials. Are all composed materials in the inorganic world, all mixta in a broader sense, merely aggregates, in which the components remain substantially-unchanged, i.e. remain to exist actually -- as it is in a mixture of sand and sugar, or also in an externally homogeneous mixture [such as air] -- or do there exist among the composed materials also ones that are not merely an aggregate of components, but a totality, i.e. a substantial unity, a mixtum perfectum, in which thus the components exist only virtually, materials, whose "becoming" from, and "disintegration" into, the components, are substantial changes? [For HOENEN the only alternative for something to be an aggregate is for it to be a "Substance" metaphysically defined. He does not at all distinguish between (1) a purely physically defined "Substance" (a constant and repeatable qualitative/quantitative pattern, which as such allows for such a "Substance" to consist of several other "Substances" forming such a pattern but without resulting in an absolute unity) and (2) a metaphysically defined "Substance" which is an absolute unity, one single ens, and therefore not consisting of other such Substances, and thus one [Substance] of which the components must exist merely virtually.]. Here there are several problems, that do not necessarily demand the same solution : The problem of the chemical compound as to its chemical elements, the problem of the atom of these elements as to its smaller components, and the problem of macroscopic [inorganic] bodies : crystals, liquids, and gases.
For : if the molecule of a chemical compound is one single Substance, then also as to macroscopic bodies the question must be asked whether they are aggregates of molecules, and so actual multiplicities [of Substances] with merely accidental unity, or that among them there do exist substantial unities, totalities [rendering the constituent molecules virtual] [In a physically defined "Substance" its unity, resulting from the coexistence of spatially contiguous constituent Substances, may either be accidental, but then it is not a "Substance" at all, but an aggregate of Substances, - or (that unity is) not at all merely by accident but per se. It is so, for example, when it is the necessary result of some definite dynamical system and certain initial conditions. It is then a physically defined Substance, which, as has been said, HOENEN does not consider.]. The problem of the chemical compound should first of all have our attention.
Classical atomic theory of the 19th century thought to have solved this problem definitively already long before the end of that century. According to this solution the chemical compound, every molecule, was nothing else than an aggregate of the chemical atoms actually existing in the molecule, held together, by mutual attraction, into one single (not necessarily static) whole. It, classical atomic theory, in addition held that the chemical elements were true elements, not divisible anymore, true atoms. It had its suspicions to the contrary (as Prout did), but no proofs.
That the chemical compounds could not be anything else than aggregates, classical atomic theory had a priori assumed, as told earlier, because it departed from the mechanistic view of Nature. In virtue of its principles it had to view all change, and thus also the generation and destruction of a chemical compound, as a purely local displacement of particles. Giving up the Cartesian theory of the infinite divisibility of matter, those particles had to be the atoms. That's why all these theories had a democritean slant. So the chemical molecule had to be the aggregate of atoms.
But during the course of the 19th century this philosophic fundament of science seemed to be confirmed very reliably by the truly wonderful results of classical atomic theory. Indeed, the latter consisted of a series of specifications, all the time specifications of the general democritean principles. These specifications indeed revealed, as is evident from the remarkable success they had, the nearest causes of the phenomena for whose explanation they were assumed. And because the specifications turned out to be true, also the general principles of which they are specifications had to be necessarily true. Hence the conviction of classical chemistry that compounds were just aggregates of atoms of the elements, the conviction that this was sufficiently proved by classical atomic theory.
But nevertheless one was forced to introduce corrections, though initially these didn't seem to affect the essence of the theory. Indeed, the chemical elements turned out not to be true elements, the chemical atoms not atoms. Earlier we heard Poincaré about it. Now this wasn't yet a definitive failure for Democritus. It merely was a shift of the problem toward smaller components than chemical molecules and atoms, but -- and this is what was meant by Poincaré -- it precisely took place at a time at which also doubters of the success of Democritus became convinced. Now the next problem had to be solved : Is the composed "atom" of the chemical elements a totality [in which then the components exist only virtually], or is it in its turn an aggregate of its components? So the composed nature of the atom does not yet necessarily bring with it the fiasco of the mechanical view of Nature. Maybe it means only a displacement of the limit of its "smallest particles". Hence the attempts to continue along the spirit and methods of classical physics, and constitute the chemical atom from its components, more or less like the solar system from a central body and planets.
Precisely here the famous crisis of the mechanical view of Nature, and thus of classical physics, broke loose, a crisis provoking the words of Bavink (1935) :
"Darüber sind sich jedenfalls heute alle genaueren Kenner der modernen Physik mit wenigen Ausnahmen einig, dass der alte primitive Substanzbegriff [i.e. the substantially-unchangeable matter] nicht zu halten ist. Leider können wir, wie schon bemerkt, nur nicht präzis heute sagen, was an seiner Stelle zu treten hat. Das soll ja noch erst herausgefunden werden."
"At least today all experts of modern physics, with only a few exceptions, agree that the ancient primitive concept of substance cannot be maintained. Unfortunately, as remarked earlier, we cannot today say precisely what should replace it. This has yet to be determined."
Now we think that it need not be "determined" anymore which theory should replace the mechanical view of Nature. This view collapsed, as Bavink describes correctly, because its fundamental demand of the intrinsic unchangeableness of matter, now also as Substance, turned out to be false. Then it must be replaced by the theory having proved the possibility of such a changeability, having discovered the conditions that have to be satisfied by a substantially-changeable matter : the theory of Aristotle. But also as to this theory one must enquire whether its general principles can also be specified satisfying the demands of the new and sophisticated experience. Demands that could not be satisfied by the mechanistic view of Nature.
But the crisis does not remain limited to atoms only. It also affects -- apparently definitive -- results obtained earlier. Also the molecule of the chemical compound seems not to be, as the mechanical view had it, a mere aggregate, but a totality : also here the mechanical view seems to fail.
[In these discussions it is interesting to realize what, precisely, is the consequence of abandoning the mechanical view of Nature. Does it imply the giving-up of the "reductionistic view of Nature", i.e. is the philosophy of reductionism (deriving the whole from its parts, or, equivalently, reducing the whole to its parts) identical to the mechanistic view of Nature? Apparently it is. And what, then, must take its place? Apparently it is the philosophy of "holism" (in which the parts are derived from the whole). Indeed, in aggregates the whole derives from the parts, and also in physically defined "substances" the whole ultimately follows from the initial conditions of the dynamical system generating that whole, and of course from the dynamical law of that system. So also here the whole has a derived status, i.e. derived from its elements according to the dynamical law connecting every two consecutive system states. So if indeed "non-mechanical" means "holistic", then, as to, at least, atoms, we have to do with true totalities, i.e. Substances in the metaphysical sense. In atoms it will be the necessary imposition of quantum-conditions that turns the atom into a non-mechanical object. And indeed, at least many of the properties of the electron -- often being a component of an atom -- can only be understood from the whole -- the atom -- of which it is a part. So, especially, the amount of energy an electron can have at all is determined by the discrete energy-levels of the atom, and these energy levels are a direct consequence of the imposition of the mentioned quantum-conditions. So we have the following implications : quantum-conditions ==> holism, or, spelled out, quantum-conditions ==> non-mechanical constitution ==> holistic constitution. And if all this is correct, then also molecules are true totalities, Substances in the metaphysical sense, because at least covalent chemical bonds between atoms of a chemical compound demand quantum-conditions.]
But we sometimes have the impression that many physicists and chemists do not detect the influence of the crisis in this problem [the status of molecules], and, in line with classical atomic theory, keep assuming the actual presence of atoms [in molecules]. This attitude is understandable : First of all, what Bavink has said remains valid for them : One doesn't know what has to replace the mechanical view of Nature. And [this is because] virtual [as opposed to actual] presence [of elements in the mixtum] is unknown to them. And secondly : the wonderful results of atomic theory in its explanation of chemical compounds will, of course, definitively remain valid. No error is detected in them. The case is analogous with the undulation theory of light. The periodic nature of light, experimentally proved so well, has not disappeared after one came to realize that it was no periodic change of place [periodic local motion], but a periodic change of quality. Atomic theory contains, like classical light theory, true elements. The only thing to do is to separate them from the superfluous and, as follows from the crisis, false elements. So there is good reason to also subject classical atomic theory to an enquiry with the help of the "principle of elimination" of superfluous elements, with the well-based expectation that the specifications, which, as nearest causes of phenomena, could explain so much, can be taken up into a new non-mechanical theory. These specifications, by which the classical theory had so much success, will then not have to be sacrificed. Also here the crisis will not cause ruins.
In order to introduce the Aristotelian theory also in the case of the chemical compound as a true mixtum, we will have to demand that the general aristotelian theory is able to take up the specifications having caused the success of atomic theory. In the investigation of this problem the aristotelian principles of "natural minima" and the "principle of the heterogeneous virtual conservation of elements", already making up the first specifications of the general principles, will be further worked out by the application of the modern exact quantitative experience.
See here, then, what we must investigate in this "analysis of atomic theory".
[The term "stoechiometric" may also be written as "stoichiometric".]
We must find the point of departure of classical atomic theory in the explanation it could -- since Dalton, at the beginning of the 19th century -- offer of the stoechiometric laws, especially of the laws of weight which we shall mention in due course. We begin with a short consideration of the earlier discovered law of weight : Lavoisier's Law of the conservation of mass.
Law of Lavoisier.
We may formulate this law as follows : The mass of a given chemical compound is equal to the sum of the masses of the composing elements. This law is, as especially follows from the investigation of Landolt, in such a high degree exact, that experimentally no deviation can be detected. Certainly, the thesis of the theory of relativity that energy brings with it mass, has cast doubt as to the absolute correctness of the law (indeed, in the case of a compound [i.e. its generation or disintegration] generally there will be a heat effect, and thus import or export of energy), but this deviation is surely in all cases too little to be directly detected experimentally. And what is more : this deviation, if there is one, is of no influence over our analysis to come.
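[The smallness of the relativistic deviation may be estimated in a few lines. The following sketch -- our own illustration, with an approximate textbook value for the heat evolved in forming one mole of liquid water -- computes, via E = mc², the mass carried away by that heat :]

```python
# Estimate of the relativistic "deviation" from Lavoisier's Law : by
# E = m c^2 the heat evolved in a reaction carries away a minute mass.
# For one mole of liquid water (about 285.8 kJ evolved, an approximate
# textbook figure used purely for illustration) :
C = 299_792_458.0          # m/s, speed of light
heat = 285_800.0           # J evolved per mole of H2O formed

delta_m_kg = heat / C**2   # mass carried off by the evolved heat, in kg

print(f"mass deficit ~ {delta_m_kg * 1e3:.1e} g per ~18 g of water")
```

[The deficit comes out at a few billionths of a gram per mole of water -- some eleven orders of magnitude below the mole's own mass -- which confirms the statement above that the deviation, if there is one, escapes all direct weighing.]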
Already in this law one saw a demonstration of atomism (with this word we will, in what follows, refer to the atomic doctrine in democritean sense, i.e. a doctrine supposing that the atoms are necessarily actually present in the chemical compound), a demonstration of the actual continued existence of the atoms in the compound. The "proof" may look like this, in line with the physical theories : We suppose atomism to be the fundament, and derive the law from it. Then the generation of a compound is nothing else than a change of place of atoms. Change of place, one assumes, does not entail a change of mass. So the mass will remain constant in the generation of a compound. This "proof" is without value because of two (or three) reasons. First : it is supposed that "mass" does not change during change of place, also not in atoms approaching one another. Now, if one precisely knew what "mass" really is, it might be a reasonable supposition. Now it is pure hypothesis. One may call it a probable hypothesis (which it is for us), but it remains a hypothesis. So there is no "proof", unless one demonstrates that it is the only hypothesis. (Of course one may proceed [reasoning] the other way around : From the law experimentally follows that the approaching of atoms towards one another does not cause a change of mass, but this cannot form a link in this "proof").
This brings us to the second (and third) objection [against the claim that Lavoisier's Law proves atomism -- the thesis of the actual continued existence of the atoms in the compound]. A similar hypothesis can be set up on a cartesian basis (which has to deny the existence of atoms, as a result of the indefinite divisibility). So the constancy of mass, also when displaced, cannot form a proof in favor of the atomic theory. But this is yet the lightest objection, for one cannot derive anything from it even favoring mere mechanicism in general. Indeed, also in the Aristotelian theory, i.e. if we suppose that the elements are not actually present in the compound, an equally probable hypothesis can be formed from which the conservation of mass also follows, and even more than one [hypothesis]. Here we give one : Because "mass" is something that is found in all ponderable bodies -- (a property of the genus [i.e. all 'species' of mass possess this property, namely the property of being present in all ponderable bodies]), -- it is natural to suppose that it is a property coming forth from a common principle of all ponderable bodies. That principle is prime matter only. So where there is the same prime matter, we can expect equal masses ( If our earlier hypothesis of the originally passive nature of mass is true, then we have yet another reason to derive this property from prime matter.) But at the transition from elements into a compound, the same prime matter remains. So from this the Law of Lavoisier immediately follows [And thus this law is compatible also with a non-atomistic theory.]. From the hypothesis we made, this law follows just as well as it follows from the hypothesis (mass of moving atoms doesn't change) implicitly held in atomism. And here we thus see an experimental confirmation of the thesis that also in substantial change the mass remains the same : Also in generation and corruption of living beings no change in mass has been established.
So we may safely conclude : Lavoisier's Law does not favor mechanicism in a higher degree than it favors a theory viewing the generation and corruption of [chemical] compounds as substantial change.
Laws of proportionality.
So classical atomic theory of the 19th century has its point of departure in the other stoechiometric laws of weight. In chemical textbooks usually two are mentioned, viz., that of Proust, i.e. the law of constant proportions or of constant composition, and that of Dalton, i.e. the law of multiple proportions [If the same quantity of weight of a given element combines with different amounts of some other element, then these different amounts relate to one another as a proportion of small whole numbers. See Van MELSEN, Van Atomos naar Atoom, 1949, p.133/4.]. To these we must add a third (of which that of Dalton is just a special case). Ostwald calls it "the law of the weight-of-compounds". Elsewhere we had called it "the law of proportional numbers". We may formulate it : One can attribute to every substance [in the chemical sense] (not only to elements) a "proportional weight-of-compound", having the following property : all chemical reactions proceed in such proportions of mass or weight that these can be expressed by the weight-of-compounds, or by small, whole multiples of them.
Closer characterization of the laws.
First of all we must note that here it is everywhere supposed that we have to do with "pure substances" ["substances" in the chemical sense], or, after Ostwald, with "hylotropic substances", about which one may consult textbooks of chemistry. To one point we direct our attention : the determination of these substances depends on purely experimental criteria, independent of any theory (the complication, which the existence of isotopes brings with it, does not need to be discussed here. It does not mess up the theoretical picture of our analysis).
The law of constant proportions is a purely experimental law. Some, among which Ostwald (1904), have tried to deduce it. Others seem to be of the opinion that it is an empty tautology : a determined compound has a determined composition. And this, of course, also holds for every mixture, it holds even for every physical body such as a watch, otherwise it wouldn't be this particular thing. The meaning of the law is this : in [the case of] "pure substances" [still in the chemical sense] there does not exist a continuous or practically-continuous series of compounds, made up of the same elements and differing in proportional composition. Precisely because of this, the chemical compound sharply contrasts with the mixture, the latter admitting practically-continuous differences in composition. This we cannot deduce, it is an experimental datum. So if we have more than one chemical compound, made up from the same elements [and being not isomers], then, according to the Law of Proust, they must differ abruptly as to their composition [Isomers -- compounds consisting of the same elements in the same proportion -- differ only as to their structure, and they do so abruptly].
Now, the Law of Dalton determines the magnitude of this abruptness, this gap. It is, as to proportional composition, always expressible by small, whole, numbers.
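[The Law of Dalton may be illustrated -- our own example, with rounded atomic weights C = 12 and O = 16 -- by the two oxides of carbon, CO and CO2 : per fixed quantity of carbon, the combining amounts of oxygen relate as small whole numbers :]

```python
from fractions import Fraction

# Dalton's law of multiple proportions, illustrated with the two oxides of
# carbon. Fixing 12 g of carbon, CO binds 16 g of oxygen and CO2 binds 32 g
# (rounded figures, for illustration only).
oxygen_per_12g_carbon = {"CO": 16.0, "CO2": 32.0}

# The "gap" between the two compounds is a ratio of small whole numbers :
ratio = Fraction(
    oxygen_per_12g_carbon["CO2"] / oxygen_per_12g_carbon["CO"]
).limit_denominator(10)

print(f"oxygen amounts relate as {ratio.numerator} : {ratio.denominator}")
```

[The proportion 2 : 1 is the abruptness of which the text speaks : between CO and CO2 no compound of carbon and oxygen with an intermediate proportional composition exists.]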
It will be clear to the reader that the second and third laws would make no sense if the Law of Proust did not hold. Dalton would not be able to determine the magnitude of the gap if there weren't any gap. And because the second law is a special case of the third, a theory explaining the third law automatically explains the first two.
It should further be realized that these laws belong to the most exact laws we possess in the physical sciences. Experimentally, no deviation can be found.
If energy has mass [or is in some sense equivalent to mass, according to the relation E = mc² ], there will be a small, but experimentally undetectable deviation. Also the existence of isotopes [atoms with the same number of protons in the nucleus, and thus, chemically, of the same chemical element, but with a different number of neutrons in the nucleus] forces us to a somewhat sharper formulation, except when one very strictly defines "pure" or "hylotropic" substances. As was already said, we may neglect this complication, having no influence on our analysis.
Classical explanation of the laws by atomic theory.
For this explanation we could refer to textbooks of chemistry, but usually there is a lack of necessary acuteness, and sometimes even one cannot find there essential parts of the derivation. Dalton, to whose insight we owe the theory, took as his point of departure the atomism of Democritus. But this, like other metaphysical theories, provides general principles only, from which we cannot deduce, and thus explain, the stoechiometric laws. Such general principles must, like in other theories, be specified. Dalton did this by means of adding two auxiliary hypotheses, as specifications, to the fundamental principles.
So this was the foundation of Dalton's theory : Bodies, elements as well as compounds, consist of smallest particles. Dalton called them atoms in the case of elements as well as of compounds. If we, for indicating both species of smallest particles, use the common term m i n i m a, then we further may call, where needed, according to modern use, the minima of elements atoms, those of compounds molecules. So [physical] bodies consist, according to Dalton, of minima. This is nothing else than the atomism of Democritus, the point of departure of Dalton. But he now specifies it by two auxiliary hypotheses.
The first auxiliary hypothesis is : the minima of a same body have, among themselves, the same properties, first of all the same weight. The minimum of the one sort differs from the minimum of another, again first of all as to their weight. So the atoms of one and the same element are completely equal among themselves. So also the molecules of one and the same compound. And thus individuals of such a compound have a completely equal composition out of atoms of their elements.
And here comes the second auxiliary hypothesis of Dalton, often overlooked although it is essential : One single molecule of a given compound originates from [consists of ] a small number (one, two, three, ... ) of atoms of elements, and thus not from an arbitrary number of them (that this number is a whole number, is not a new hypothesis, it follows from the ground-thesis of atomism).
From the so-specified atomism now easily follows the derivation, and thus the explanation, of the stoechiometric laws, provided that the law of Lavoisier holds, which we will, in the sequel, always tacitly assume. We may limit ourselves to the third law (because the second is just a special case of the third, and the first -- the law of Proust, constant proportions and composition -- is logically implied by the second and third). [The stoechiometric laws are, so to say, macroscopic. They deal with weights of bulk mass of chemical compounds or elements. And the third law -- comprising the others -- is then derived from the assumed atomic nature of the elements, i.e. it is derived from its supposed "microscopic version"]. For convenience -- this doesn't detract from the generality -- we limit ourselves to the generation [here in the sense of constitution] of an arbitrary compound from arbitrary elements. Then we can formulate this third stoechiometric law as follows : Any given chemical compound has a proportional weight-of-compound [later to be its molecular weight], and also the composing elements have each for themselves their proportional weight-of-compound [later to be their atomic weight], with this property : the weight-of-compound of the chemical compound is the sum of the weights-of-compound of the composing elements [as species making up the compound], each multiplied by a small, whole, number (one, two, three, ... ) [as, for instance, 2 with respect to O (oxygen) in CO2 (carbon dioxide) ].
See now, then, the derivation of this [third stoechiometric] law from the hypotheses of Dalton :
According to the first auxiliary hypothesis, the minima of one and the same compound have an equal composition among themselves, a composition out of the same atoms. From this, immediately follows (because the atoms of one and the same element have equal weights) : a chemical compound (in an arbitrary quantity) has the same proportional composition as to weight, as any one of its molecules has (a). [So from the assumption of minima do indeed follow the known macroscopic data of weight (of bulk matter).]
From the second auxiliary hypothesis immediately follows : one single molecule of a compound has a weight that is equal to the sum of the weights of the composing atoms, each multiplied by a small, whole, number (one, two, three, ...) (b). [The assumption of molecules consisting of a small number of atoms implies a corresponding relation as to the weights of these minima, and then also as to the proportions of weight in the corresponding bulk matter. So also here we have a derivation from assumed minima to known macroscopic facts about weight.]
From (a) and (b) follows : An arbitrary chemical compound has a proportional weight-of-compound, and the composing elements each have their proportional weight-of-compound, with this property : the weight-of-compound of the chemical compound is the sum of the weights-of-compound of the composing elements, each one multiplied by a small, whole, number (one, two, three, ...), -- and this is nothing less than the third stoechiometric law applied to the proportion of an arbitrary chemical compound to its elements.
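The arithmetic of (a), (b), and the resulting third law can be made concrete in a small sketch (an illustration by the writer of this Website, not in HOENEN's text; the atomic weights are modern, rounded values assumed only for the example -- Dalton's derivation itself presupposes no particular numbers) :

```python
# Rounded modern relative atomic weights, assumed for illustration only.
weight_of_compound = {"H": 1.0, "C": 12.0, "O": 16.0}

def compound_weight(formula):
    """The third stoechiometric law in miniature : the weight-of-compound
    of a chemical compound is the sum of the elements' weights-of-compound,
    each multiplied by a small whole number."""
    return sum(weight_of_compound[element] * n for element, n in formula.items())

co2 = compound_weight({"C": 1, "O": 2})   # carbon dioxide, CO2
h2o = compound_weight({"H": 2, "O": 1})   # water, H2O

print(co2)  # 44.0 : 12 + 2*16
print(h2o)  # 18.0 : 2*1 + 16
```

Note that any bulk quantity of the compound then shows the same weight proportions as the single molecule, which is exactly proposition (a) above.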
From the derivation it follows that the proportional weights-of-compound of substances [in the chemical sense] relate as do the weights of the individual molecules, resp. atoms. From this a method may be derived for a first beginning of the determination of molecular, resp. atomic weights [namely determining these weights relative to that of Hydrogen]. All this is very elementary.
Critical analysis of Dalton's theory.
Analysis along the principle of elimination.
We're now going to investigate Dalton's theory, following the previously derived principle of elimination of superfluous elements. After all, it has entirely the structure of theories to which the principle can be applied. So let us look for superfluous elements in the earlier defined sense.
The first specifying hypothesis, that of the mutual equality of the minima of one species, evidently is a necessary element of the theory. If we eliminate it, then the above proposition (a) drops out. The same holds for the second hypothesis, that of "small numbers" : it is a necessary element for the derivation of proposition (b). Without it, the law of Proust, so characteristic of the distinction between chemical compounds and mixtures, would not hold. And so both hypotheses of Dalton are necessary, not superfluous, elements.
But if indeed both hypotheses are necessary, the same must of course go for the fundamental assumption of which they are specifications, and thus the minima must be real. If the minima of one and the same species are necessarily equal among themselves, and bound together in small numbers into one single minimum of a compound, then natural minima must exist. So the fundamental hypothesis is just as necessary, and therefore many scholars saw the success of Dalton's theory as a confirmation of its foundation.
Yet we must take a closer look at it [i.e. that foundation].
And then it turns out to be composed of two elements. Indeed, Dalton, proceeding from atomism, not only supposed that matter is divisible into minima, but that it is also always divided into minima : his atoms [also in molecules] are always actual atoms, his molecules [i.e. his "atoms" of chemical compounds] always aggregates of atoms. And so we discover that to the fundamental hypothesis of the reality of minima -- this is the necessary element of the derivation that we have found -- a second one is added : the minima of the elements do always exist actually. And this addition is not necessary in order to introduce both specifying hypotheses of Dalton. Is it a superfluous element?
To decide upon this, we set up a hypothesis (in the Aristotelian spirit) that avoids this element altogether [i.e. the supposition that the atoms do actually exist, also in the molecule], and instead supposes the opposite. Then the whole theory reads : the bodies are divisible, not, however, indefinitely so, but down to certain minima, divisible, not divided. Division takes place, as we heard it above from Toledo, at (or also just before) the chemical reaction [So just before the chemical reaction actually starts, i.e. is realized at all, the bulk matter of the reactants (be they elements or compounds) divides into smaller and smaller fragments and ultimately into their corresponding minima.]. During the reaction, the minima unite into [new] molecules (each one of which is intrinsically one, not an aggregate), and then into larger complexes [i.e. larger, more complex molecules]. So here we suppose the actuality of minima only [somewhere] during, not (unless just a moment before) before and after, the reaction [Long before this reaction, they may exist virtually in some other compound. But even in between, when the elements are still not united into a compound, they are, in this vision, not actually divided into their minima. However, this cannot actually be meant by this vision, because "chemical elements actually existing as atoms" is a common phenomenon, for example in solutions containing only the one element (or others, non-reactive with them). So what, in this discussion, must be emphasized is the way of existence -- actual or virtual -- of atoms in molecules.]. But the minima are not arbitrary : we specify them by applying both additional hypotheses of Dalton. So, indeed, in describing Dalton's theory, we have only left out the element saying that the atoms are always [and thus also in molecules] actual, and replaced it by its opposite [saying that the atoms are not always actual : in molecules they exist only virtually.].
And now the reader may check that not a letter has to be changed in the above derivation of the stoechiometric laws. Reasoning and conclusions remain perfectly the same [This, of course, also holds when we suppose the atoms to exist always actually, even in the molecule. Dalton's theory, freed from presupposed democritean atomism, and thus freed from superfluous elements, does not decide upon the question of the atoms' virtual or actual existence in the molecule.].
Indeed, the classical idea of "always actually being divided", i.e. of "the always actual existence of the minima", turns out to be a truly superfluous element of the theory. And then we can conclude according to the principle of elimination : This element [of the theory] is not merely left unconfirmed by the wonderful confirmation of Dalton's theory, explaining the general stoechiometric laws, -- it hasn't gained any probability at all. So we must not merely say : atomism has not yet become certain as a result of the successes of Dalton's theory, -- no, from our analysis it follows : it has gained nothing at all as to its probability.
Of course, the opposite element too, the one we introduced as a test, namely the hypothesis excluding actuality [of atoms in molecules, i.e. the hypothesis that atoms exist in the molecule only virtually], is a superfluous element. It too finds no confirmation from the success of Dalton's theory. So as to the explanation of the stoechiometric laws we must be content with a more general theory, leaving, for the time being, the question undecided whether the minima are actually or only potentially [virtually] present in the whole [bulk of an] element or chemical compound. For the time being, -- for we must later attempt to further specify this general theory, containing only the necessary elements, from other data, either into the one direction or into the other.
Results of Dalton's theory.
The only element that turned out to be superfluous in Dalton's theory is thus precisely the mechanistic element which a priori assumed the imperishability of atoms [i.e. they always and only exist actually, and only their spatial combination can vary, representing all change in the material world.]. The fact that, accordingly, the mechanistic view of Nature could not find the slightest confirmation in Dalton's theory -- in contrast to what was thought in the 19th century, and possibly still is today -- does not, however, mean that this theory has not produced results. On the contrary : It has produced many of them, important ones, philosophic and scientific ones.
First, as a result of its success, it is now certain that matter indeed is not indefinitely divisible, and that accordingly the theory of Descartes, identifying matter with [pure] extension and thus assuming its indefinite divisibility, must be abandoned and be replaced by a theory that posits minima.
Dalton's theory moreover demonstrates that the minima have the properties attributed to them by his specifying hypotheses (this brings with it still other philosophic consequences, as we will see in due course). And it teaches us yet more : It provides the first possibility to also determine the magnitude (the weight) of the minima. Earlier, when we developed the aristotelian theory of natural minima, we heard a 16th century philosopher, B. Pereira, already expressing the desire to be able to determine the magnitude of those minima for every species, a desire accompanied by the despondent statement that this would be very difficult if not impossible to realize. The theory of Dalton provides the first possibility to fulfill this desire, to determine atomic weights. But still with two restrictions : 1°, from it only methods for determining the relative atomic weights follow, and 2°, the indefiniteness residing in the expression "small numbers" has its influence, shortly expressed, this influence : With only these methods one cannot decide whether the proportion H : O [hydrogen to oxygen] is 1 : 16, or 1 : 8 (or other proportions for which the law of "small numbers" still holds). Later development of the theory, with the help of other data, did overcome these restrictions.
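The second restriction can be illustrated with a small sketch (an illustration by the writer of this Website, not in HOENEN's text) : water consists, by weight, of 1 part hydrogen to 8 parts oxygen; which relative atomic weight then follows for oxygen (with that of hydrogen set to 1) depends entirely on which "small whole numbers" one assumes in the formula of water.

```python
# Measured weight proportion of hydrogen to oxygen in water : 1 : 8.
H_WEIGHT_FRACTION = 1.0
O_WEIGHT_FRACTION = 8.0

def oxygen_weight(n_H, n_O, H_atomic_weight=1.0):
    """Relative atomic weight of O implied by an assumed formula H(n_H)O(n_O).
    The measured bulk weight ratio must equal (n_O * W_O) / (n_H * W_H)."""
    return (O_WEIGHT_FRACTION / H_WEIGHT_FRACTION) * n_H * H_atomic_weight / n_O

print(oxygen_weight(1, 1))  # assuming the formula HO  (Dalton's own choice) :  8.0
print(oxygen_weight(2, 1))  # assuming the formula H2O (the true formula)    : 16.0
```

Both answers satisfy the law of "small numbers"; only further data (such as Avogadro's hypothesis, discussed below) can decide between them.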
Philosophic analysis of Dalton's theory.
We must deepen our analysis still further. We have found out that the hypotheses of Dalton, in classical theory taken as specifications of atomism, can just as well be taken as specifications of an aristotelian theory not supposing dividedness but only divisibility up to minima. The results, then, didn't change. "Just as well" we said, but this must be investigated still further, for we only considered the consequences, and did not yet precisely enquire whether the imposition of these specifications onto the fundamental theory, i.e. onto its antecedents, is equally smooth in both cases. If there were [already] a [single] case in which a necessary element of a theory resists being a specification of some general hypothesis, then this hypothesis is thereby doomed to fail. So let us finish our analysis along this line.
Let us first consider atomism [as such a fundamental hypothesis or theory]. The fundamental theory is that of Democritus, in which the atoms had all possible magnitudes and shapes, perhaps with some restriction excluding atoms that are too big, but anyway such that below that limit all possible magnitudes do occur, infinite in number. And now, forced by the stoechiometric experience, we assume with Dalton's first hypothesis that the atoms of one and the same chemical element are completely identical to each other. So how many magnitudes do we encounter in Nature? Just as many as there are chemical elements, i.e. -- here the isotopes can surely be taken as different -- a few hundred, i.e. instead of infinitely many, just a small number. And [arithmetically possible] magnitudes lying between two consecutive ones are lacking [because there are only finitely many of these magnitudes]. So the first hypothesis of Dalton, which renders atomic theory scientifically useful, is not a specification of Democritus' principles, but a correction of them. The second hypothesis of Dalton can easily be considered to be a specification.
We may now ask whether this necessary correction [only a few species of atoms, instead of infinitely many] is essential or merely subordinate. The answer will be : partly it is philosophically very essential, partly it is only subordinate.
For the fact that Democritus assumed infinitely many species of atoms is essential to his theory. There, any principle that, of all possible magnitudes and shapes [of atoms], prefers only a small number and realizes them -- and which would therefore be a final and effective principle -- is excluded. All is delegated to chance. From such a theory it follows -- as Democritus saw very well -- with iron necessity : all magnitudes and shapes of atoms must be present in Nature, because there is no "ratio sufficiens", no sufficient reason, to prefer only a few of the infinitely many [possible magnitudes and shapes]. This is just an ordinary principle of the theory of chance. This is very acutely derived from the principle of sufficient reason by Leibniz : "therefore there is also a reason [for the existence] of eternal things (such as the atoms of Democritus). If we would imagine a world that existed from eternity and in which only little spheres existed, then there would have to be a reason why they were little spheres rather than cubes."
So if Democritus, in virtue of his philosophical principles, had to demand infinitely many species of atoms, and Dalton's hypothesis, confirmed by experience, demands a small number of them, then this correction in Democritus' system is philosophically essential.
The spirit having inspired the atomism of Democritus is the same as what we encounter in classical atomic theory, namely the spirit of eleatic metaphysics, the metaphysics of the unchangeable ens.
Therefore the correction [only a few atomic species, not infinitely many] is on the other hand not essential. We still may, after Dalton, construct an atomism accepting effective and final causes of atoms [These causes of the atoms then have as their consequence that atoms only exist in the form of a few species.] while still being mechanistic in spirit as to changeability [change being there exclusively a rearrangement of eternal atoms], a spirit that still holds the atoms to be always actual entities. For such a theory the first hypothesis of Dalton is no correction anymore, but a true specification of the atomism of Democritus.
B. The theory of minima.
Remark. Is Dalton, in his introduction of his auxiliary hypotheses, also historically influenced by the peripatetic theory [= Aristotle-oriented theory]? In all probability this is the case. It is certain that the theory of the minima naturalia, after the Middle Ages, was accepted also outside proper scholasticism, so by the physician J.C. Scaliger (1484-1558). Under his influence, in atomistic circles, a doctrine was developed taking atoms in the peripatetic sense. D. Sennert (1572-1637) is wrongly taken to be an atomist. Sennert influenced R. Boyle, who had worked out a sort of intermediary theory, philosophically far from clear. Through Boyle these visions ended up in physical and chemical writings. And then it is not surprising that Dalton easily came to correct Democritus with the help of peripatetic principles. (See for all this the Utrecht dissertation of Dr. A. van Melsen, Het wijsgerig verleden der atoomtheorie [The philosophical past of atomic theory], Amsterdam, 1941.)
State of the investigation.
Scientific atomic theory has its origin in Dalton, and the immense success of this theory has confirmed its necessary elements forever. Before Dalton, atomic theory had remained for several centuries without scientific result. From the foregoing, the following theses now follow automatically :
The Cartesian philosophy of indefinite divisibility [at least under qualitative constancy], which doesn't allow for minima, must be rejected.
Atomic theory, purified from superfluous elements, is as to its origin rightly Aristotelian and not Democritean, in one essential respect, namely that it assumes equality of minima of one and the same species.
Classical atomic theory was meant to be Democritean-mechanical. But precisely that which was removed through our analysis with the help of the principle of elimination, as irrelevant to the derivation of the stoechiometric laws, is the mechanical presupposition of the all-out actuality of atoms. Accordingly, the mechanical view of Nature doesn't find the slightest confirmation from the abundance of facts explained in this theory. And what is no less important : Should the mechanical view of Nature fail as a result of more detailed experience, it will not take this corrected atomic theory down with it in its fall, because this theory is, as is evident from our analysis, completely independent of the mechanical view of Nature. This stable part of Dalton's theory contains the nearest causes of the phenomena summarized in the stoechiometric laws. And these causes evidently are the true ones. Classical atomic theory got it wrong only in that it considered Dalton's hypotheses as specifications of atomism, whereas they actually, and rightly so, are specifications of a more general minimum theory. So of classical atomic theory there remains true what it has indicated to be these nearest causes.
From our analysis follows yet another demand that should be fulfilled in education, elementary as well as higher. One should not present Dalton's theory as being a proof of atomic theory in the mechanistic, and still less in the Democritean, sense. Indeed, the strict theory of Democritus is, as we saw, in fact refuted by Dalton. The mechanistic view with its ever actual atoms doesn't find any trace of proof in it. Even if it would follow, from other phenomena to be studied later, that atoms are always actual, this is not because it could be confirmed on the strength of the stoechiometric laws, for it cannot be so confirmed. Already the scientific "Sauberkeit" [cleanliness] of the reasoning demands this. The fact that Aristotle and the scholastics had constructed a minimum theory, and the fact that already in their theory Dalton's hypothesis of the equality of minima of one species [of element or compound] formed an integral part, should certainly be mentioned [when teaching Dalton's theory].
Items, further to be investigated. So there are two theories that can account for the stoechiometric laws : the corrected Democritean theory -- corrected in order to be able to allow for the equality of Daltonian minima -- [but still asserting the actuality of atoms also in molecules], and the Aristotelian minimum theory [asserting the merely virtual existence of atoms in the molecule]. To the first theory, the molecule of the chemical compound is necessarily an aggregate, consisting of actual atoms unified into an extrinsic whole. Necessarily [an aggregate], because the essence of the system [the system of mechanistic thought] demands this [But demanding actual existence of atoms in the molecule does not necessarily turn the molecule into just a mixture of atoms. The latter together may form a coherent and repeatable system, a system of actual atoms, not just a contingent aggregate of them. Such a system is, it is true, not a genuine Substance in the metaphysical sense, but some kind of inherent unity nonetheless. Such an intermediary form of unity, lying between "aggregate" and "Substance", is not considered by HOENEN, although it should.]. To the Aristotelian minimum theory, on the other hand, the minimum of the chemical compound [the molecule] can be a substantial unity [i.e. the Aristotelian theory allows for this possibility], can be a totality, a new substance in the chemical sense. So if it is not an aggregate [and it isn't because it is a constant and repeatable qualitative/quantitative pattern], the Aristotelian theory of potency and substantial form [as act of this potency] together with the specifications we had found earlier, must be applicable.
[And indeed we see here that HOENEN does not consider the just mentioned possibility of some intermediary material unity. But maybe he is justified after all in not considering it, because (1) of the fact that molecules are not only individual (as all material entities are) but true i n d i v i d u a l s, and (2) [because] it may be true that a system in which the elements exist actually can never be a true individuum, because the multiplicity of actual elements precludes it from being one, and therefore from being a "s e l f". And because all true beings, all true entia, must be true individuals, a "system" of actual components cannot be a true being at all, because it is not a true individual. And thus, if our discussion is about true b e i n g s only (and all aggregates are ultimately aggregates of true beings, Substances), we can, with HOENEN, dismiss these "systems" from our enquiry, as we also do with "aggregates". So we may say that from the viewpoint of true Substance these "systems" are "aggregates", just as true mixtures are aggregates for the same reason.
It may further be noted that the Hoenenian view that takes all molecules to be true Substances in the metaphysical sense, supports the view that also organisms are true Substances if we adhere to the theory that each organism is itself one single giant molecule embedded in an aqueous serum-like medium as its direct existential condition. This conception, named "Unimol", will be dealt with later as a supplementary theory (not dealt with by HOENEN, because he's dealing with anorgana only). The very organism as one single molecule is, with small temporary interruptions, chemically continuous, meaning that all its "parts" are chemically bonded with each other into one single organic living molecule. This molecule is, in contrast to all "small" molecules, not characterized by a fixed molecular weight (it varies between limits, it varies between developmental stages, and it varies slightly between individuals of the same species) and also not characterized by a fixed stoechiometry. After all, molecular weight and stoechiometry originate from the investigation of small molecules, including "large" molecules such as proteins and DNA. Because the living molecule is embedded in, and interacts with, the mentioned serum-like medium, and because this medium contains millions of free atoms [ions] and small molecules, the idea of seeing the organism as a "system" easily comes up, although in fact (i.e. according to Unimol) the true organism is not a system (nor an aggregate) but one single molecule, one single being, one Substance in the metaphysical sense. It differs from all other molecules by the fact of its functional complexity, by the fact that it embodies a single indivisible function that a c t i v e l y maintains the organism's existence by directly interacting with the serum-like medium (which has about the same volume as the living molecule itself) and indirectly with the outer environment, i.e. 
the existence-function is such that a living molecule interacts with its outer environment through its inner environment. And it interacts not (only) passively, like non-living molecules do, but, in addition, also a c t i v e l y according to a determined specific "strategy". And it is this active "strategy-to-exist" that is the just mentioned function of the living molecule, lacking in the non-living molecule.
When we take, for whatever reason, an organism to be a true Substance, it then necessarily follows that the constituents of each organism are, not actually existing particles or parts, but qualities of the organism and only of the organism. These qualities are no more than determinations of the organism, they are not independently existing things. So these constituents, elements, exist only virtually in the organism (they are virtual particles, but actual qualities, qualities not of these particles, not of these parts, but of the organism itself). In itself this is hard to swallow (thinking of the organic tissue-cells and free blood-cells, their content, and also thinking of organs, free molecules in the organism, etc., all being mere qualities!), but the Unimol view of organisms gives extra support to this conception of virtuality, because in this view the organism is itself also a molecule.
And, if we realize further that in the Explicate Order only true Substances exist (all aggregates are aggregates of Substances) separated by pure aether, and if we now also realize that each such Substance is exclusively a qualitative/quantitative pattern, separated from other such patterns by patches of pure aether, then this also provides support for the theory that Reality is one giant and complex sort of cellular automaton (CA) for which the rules reside in the Implicate Order and determine the qualitative/quantitative patterns in Reality's "display window", the Explicate Order. Also this "CA-theory", based on the discreteness of space, and on that of extension itself, will be dealt with later on, after having followed HOENEN's text (which doesn't discuss the "CA-Theory").
As has been said, we will consider these two intriguing themes (Unimol and CA) more extensively later on, and, no doubt, the present discussion of HOENEN about the metaphysical status of molecules is very interesting indeed in this respect.]
Now, then, the items to be investigated. There are three of them.
1. To the corrected Democritean theory, first of all the (extrinsic) unity of the molecule is a solvable problem. It is certain that that unity is not the same as that of the multitude of atoms or molecules forming a mixture. It is more stable. It follows the stoechiometric laws. The quite accidental unity of a mixture results from the effect of the walls enclosing the mixture, and from forces such as the ones van der Waals placed at the foundation of his theory. The unity of the molecule of the chemical compound evidently is not like that. However, that unity can, in virtue of the principles of the mechanistic system of thought, only exist in a static or (and especially) dynamic equilibrium resulting from the forces exerted by the essentially unchanged components, the actual atoms, onto one another. So the problem of this (mechanical) theory is : to detect these forces. Of course one could begin with setting up hypotheses. But these should then be verified by their consequences, if one wants to draw confirmed conclusions from them. If, in some case, the properties of the individual components are known, i.e. if one knows how these components interact with and influence one another, then a result is quickly obtained : If the properties of the compound can indeed be calculated from them [i.e. from the properties of the components] (stepping over possible purely mathematical difficulties), then the compound is indeed what mechanicism supposes it to be, i.e. nothing more than an aggregate. If, on the other hand, the calculation results in other properties, i.e. properties different from those actually possessed by the compound, then it is demonstrated that the compound is not an aggregate, but a totality. This necessarily follows as a criterion from the principles of the (mechanicistic) system of thought [such a "compound" would then be an "atom" in the sense of atomism]. All this is clear.
Let us note that this difficulty will not decrease during further development of atomic theory, on the contrary, it more and more increases.
The unity of the molecule, if it turns out not to be an aggregate but a new Substance (in the metaphysical, but here also the chemical sense), a totality, - is not a new problem in the Aristotelian system [of thought]. It is nothing else than the metaphysical basic problem that has led to the setting up of the aristotelian theory in the first place : How, i.e. in virtue of what, can from a given "ens", a given being (or from more entia), originate a new ens? The answer was : It can, because it is [ontologically] composed of "tendency-to-being" and "form-of-being". Then, automatically, in virtue of this form, the tendency-to-being becomes ens, i.e. one. It is not a new problem. And also this is completely clear.
2. There is yet another problem, the problem of the difference in properties between elements and their compound. At first sight this difference is in most cases very large indeed, so large, that it can cause troubles in both systems [of thought, namely the mechanicistic system of atomism, particularly assuming actual atoms in molecules, and the, we may say, holistic, aristotelian or peripatetic system, particularly assuming virtual existence of atoms in molecules]. These we already saw earlier.
To atomism, assuming unchanged continuance of existence of atoms in chemical compounds, the problem is : Why can the properties of the compound differ so much from those of the [corresponding] elements which continue to exist in it substantially unchanged with their properties? [An answer is given in the orbital theory of the chemical bond. But there the changes of atoms are not of a mechanical nature. And in this theory molecules can be considered to be "multi-nuclear atoms". See "The Chemical Bond" in First Part of Website.].
To the peripatetic (= Aristotle-oriented) view of the molecule as being a true mixtum, the presence of new properties is no enigma. After all, a new Substance has appeared in the place of the previous one, so it is no wonder that there appears a change in properties. But for the peripateticus the problem then arises : Where, then, is the effective cause to be sought? Indeed, according to the "principle of distribution", the less the material cause is responsible for the effect, the more of it we must attribute to the effective cause [and because we here have to do with the replacement of a substantial form by another, there is little or no participation of the material cause.]. But this problem is more frightening to the atomist (at least it should be) than to the peripateticus, for if there has indeed appeared something essentially new, as superficial observation seems to confirm, then the atomistic position is already refuted, even when the peripateticus fails to find the effective cause. But nevertheless this still remains a problem.
Fortunately, subsequent development of atomic theory demonstrates, as we will see, that the difference in fundamental properties [between elements and compounds] is not so vast as the impression we have of the global ["external"] properties makes us believe, albeit a difference that is significant. [This "not so vast difference" becomes evident in X-ray analysis].
3. And then we have the question of the similarity in properties [between compounds and elements] demonstrated by this subsequent development of atomic theory. To the atomistic position this generally is not a problem. After all, if atoms, essentially unchanged, continue their existence in the molecule, then they must be there with their properties. So only the special problem remains, the problem whether this general finding continues to hold also as to details exposed by more sophisticated experience.
To the peripateticus this [similarity in properties between compounds and elements] is not directly self-evident, not derivable purely a priori from his general principles. Surely, he can foresee the possibility of conservation of properties [of the elements in the compound]. This can be derived from the notion of a potency allowing stepwise realizations, different realizations, in a certain consecutive order. When experience then demands a certain conservation of properties, this can easily be accounted for by the theory. This we already have demonstrated earlier, and we even proved there that the conservation of properties of elements in their compounds, along purely peripatetic lines, may even lead to specific heterogeneity in these compounds. So this general problem is already solved also for the Aristotelian view of the compound as being a true mixtum. But also here the particular, specific, problem remains whether this view can account for the similarity in properties precisely in that particular way in which it, according to modern sophisticated experience [observation], is actually realized. [Just to mention an example of what all this is in fact about : Table Salt, NaCl, being a chemical compound, has entirely different properties than the respective free elements chlorine (the gas Cl2) and the metal Sodium (Na) do, while X-ray diffraction analysis shows that nevertheless relatively many properties of Cl and Na are conserved in the compound.]
So in our further analysis of atomic theory especially three problems must be considered :
Atomic theory and kinetic molecular theory are connected, but they are not identical. This is a proposition that every expert on the matter will admit as long as it is formulated in these general terms. But if one considers the different meanings it may yet have, a further analysis and comparison with observational data will turn out to be necessary in order to find out whether the proposition can be affirmed in every respect. This further analysis is even necessary in order to avoid an error that has in fact occurred historically.
If one takes "atomic theory" in the above determined sense, as a theory affirming Daltonian minima without deciding whether the atoms are actually present in the molecule, then there is an intimate connection with the kinetic molecular theory of gasses and liquids, which then contains a valuable development of atomic theory.
If, on the other hand, one takes "atomic theory" in an atomistic sense, and so as a theory affirming the actuality of atoms in the molecule, then it will turn out that there is no connection between them [i.e. between atomic theory and kinetic molecular theory]. The error to which we alluded is precisely that the success of the molecular theory was seen as a proof of that atomism -- just one century after Dalton.
Origin and content of the theory.
The theory has its first observational fundament in the stoichiometric volume-law of Gay-Lussac. Soon after the discovery of this law, Avogadro (1811) and Ampère (1814) came up with a hypothesis as explanation. This explanation consists in the assumption that all gasses, under equal circumstances of pressure and temperature, contain in equal volumes an equal number of molecules. In this, molecules are sharply distinguished from atoms. A molecule can consist of more than one atom, also in chemical elements [such as the molecules O2, H2, etc.]. So the number of atoms need not be equal in equal volumes, and often it isn't. Molecules are taken to be mutually separated, i.e. individual, particles.
This basic thesis has been worked out in the course of the 19th century with the help of kinetic hypotheses, and has resulted in the famous gas theory. This theory assumes that these molecules are in a state of violent thermal motion. They collide with one another and with the walls of the container, the latter collisions resulting in pressure. The number of these molecules could be determined, and it turns out to be very large indeed. If we take of some given gas one gram-molecule [i.e. a number of grams equal to the molecular weight of that species of gas], then this number, N, -- "Avogadro's number" -- has the value of 68 x 10²². Here we shall not evaluate this theory. Our judgement does not deviate from that of the physicists. We are of the opinion that it is demonstrated by the series of experiments about which we'll say a word later.
Pereira's wish fulfilled.
And here, then, is the clear connection with atomic theory. The theory of Dalton still has two gaps [i.e. two not-yet-answered questions] : 1°, it only allows for relative atomic weights ; 2°, even these relative atomic weights are, by Dalton's results alone, not yet completely determined. There still remains a possible choice between different proportions.
The second gap is already closed by the simple hypothesis of Avogadro -- without considerations concerning kinetics. The "rule of Avogadro", for determining relative molecular weights, also establishes relative atomic weights. From this there already follows the possibility of setting up "empirical" and "structural" formulae, and the notion of "valence".
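Avogadro's rule can be illustrated numerically. Since equal volumes of gas (at the same temperature and pressure) contain equal numbers of molecules, the ratio of the masses of equal volumes equals the ratio of the molecular weights. A minimal sketch, using rounded modern gas densities for illustration (these figures are the present writer's, not HOENEN's) :

```python
# Avogadro's rule in miniature : equal gas volumes (same T, p) hold equal
# numbers of molecules, so the mass ratio of equal volumes equals the
# ratio of the relative molecular weights.
# Densities in g/L at 0 degrees C and 1 atm (rounded modern figures).
rho_H2 = 0.0899   # hydrogen gas
rho_O2 = 1.429    # oxygen gas

M_H2 = 2.016                       # hydrogen taken as the reference
M_O2 = M_H2 * (rho_O2 / rho_H2)    # relative molecular weight of oxygen

print(f"relative molecular weight of O2 : {M_O2:.1f}")  # close to 32
```

So from nothing more than weighings of equal gas volumes, the relative molecular weights follow, and from the molecular formulae the relative atomic weights.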
The first gap is closed by the kinetic theory. Indeed, it makes it possible to know the value of N. If one divides the absolute weight of a gram-molecule of some given substance by N, then one knows the absolute weight of one molecule, which in turn gives the weight of a single atom.
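The computation just described is simple enough to spell out. A minimal sketch, using the value of N quoted in this text (68 x 10²² ; the modern figure is about 6.022 x 10²³) :

```python
# Dividing the weight of one gram-molecule by Avogadro's number N yields
# the absolute weight of a single molecule; the molecular formula then
# gives the weight of a single atom.
N = 68e22  # Avogadro's number as quoted above (modern value ~6.022e23)

def molecule_weight(gram_molecule, n=N):
    """Absolute weight in grams of one molecule."""
    return gram_molecule / n

m_H2 = molecule_weight(2.016)  # one H2 molecule, from its 2.016 g gram-molecule
m_H  = m_H2 / 2                # one hydrogen atom
print(f"one H2 molecule : {m_H2:.2e} g")
print(f"one H atom      : {m_H:.2e} g")
```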
These methods allow us to fulfill completely the wish of old Pereira. So there is a connection between the two theories. The kinetic molecular theory provides a valuable development of atomic theory.
What doesn't follow from it.
Proceeding in this way, we still do not find a solution of our problem, the question whether the atoms actually exist in the molecule. But the theory does make clear that a gas is not a continuum but consists of actual particles, the molecules (for some elements, such as Argon and Helium, the gas consists not of molecules but of atoms) [the theory concludes to molecules as actual particles because it can explain pressure as caused by the constant collisions of particles against the walls of the container]. And, according to van der Waals, by reason of the "continuity of the gaseous and liquid states", the same must be said of the liquids which can originate from gasses in this way. But when these molecules, as is always the case in chemical compounds, consist of more than one atom, it does not follow that the molecule itself is not a continuum, does not follow that the atoms are actually existing in the molecule. And precisely this is our problem.
So there is a big difference between the kinetic molecular theory, which assumes discontinuity between the molecules of a gas (and proves it), and the atomistic atomic theory, which assumes discontinuity in the molecule, between the atoms of one molecule. From the first, the second certainly does not follow ; there is not even a trace of evidence for it to be found. So in this sense there is no connection between the two theories.
We could let the above remark be sufficient, were it not that many physicists and chemists, and also philosophers, after having, at the beginning of the 20th century, obtained these definitive results of the molecular theory, saw in them a final proof of atomic theory in the atomistic sense, thus a proof of the actuality of the atoms in the molecule. That was the meaning of the earlier quoted words of Perrin, who himself had contributed significantly to these results : "la théorie atomique a triomphé", and from Poincaré we heard the same.
In 1910 two meetings of the "Société française de Philosophie" were devoted by physicists and philosophers to discussing these results, and there, unanimously, also by philosophers such as Couturat, Meyerson, Brunschvicg (with a few doubts and nuances), the thesis was accepted that atomism was definitively proved -- atomism in the sense that the atoms continue to exist actually in the chemical molecule. Against Democritus only the restriction was taken to apply which we already mentioned above : these atoms are not Democritean atoms, "atomoi", -- but this not because they do not continue to exist in the molecule, but only because every one of them is itself still composed, is still a "monde" [and thus not an "atom" in the strict sense of the (Greek) word].
If this acceptance expressed the truth, then our problem would be solved : we would have reached the goal of our investigation as to the status of the chemical compound. However, closer analysis teaches us that this conclusion drawn from the results of molecular theory is completely illegitimate.
Applying the principle of elimination.
Perrin divides his arguments in favor of the kinetic molecular theory into two series. In the first are discussed : several experiments concerning Brownian motion, the blue color of the sky, the so-called "critical opalescence", the viscosity of gasses, the radiation of the absolutely black body. One can see that there were many contributions, from different areas of physics, to solve the problem, surely increasing the degree of certainty of the result.
We may now succinctly summarize the reasoning : if gasses consist of individual molecules, as described above, molecules that, in a certain sense, each for themselves act as an individual, then all the phenomena can be quantitatively determined, i.e. calculated from them. From each measured result in each of these areas a value of N, Avogadro's number, follows. And the values of N, calculated according to all these methods, wonderfully coincide, and as such make up the experimental confirmation of the kinetic molecular hypothesis. In our very short summary all this sounds quite matter-of-course, but whoever has consulted the reasoning and its abundant data must be impressed by the beautiful result.
But in this reasoning, having the typical structure of an explicative theory, there is no sign of an assumption of actual atoms in the molecule, whereas it is about actual molecules. Indeed, the assumption, when nevertheless made, is completely superfluous, it is not needed for any confirmation of the molecular theory, and so can be eliminated where it was added.
And there is something in the reasoning pointing in the opposite direction. We do not say that it proves the opposite ; that would be false. It is this : the molecules must, for the reasoning to be valid, necessarily act as wholes, as units, in a sense as individuals. Not the number of atoms counts, but the number of molecules. This emphasizes the problem of the unity of the molecule if it consists of more than one atom. And we saw that here a difficulty arises for atomism. It is, with all this, not yet demanded that the molecule must be a single individual in the strict sense, a totality : then the problem would be solved, contradicting atomism. But it surely sharpens the demand of unity of the molecule.
While in the first series of Perrin's experiments atoms were not even mentioned, except to point out that they do not act individually in the causation of these phenomena, they are mentioned in the second series. But note : not as atoms of which a molecule consists, but into which it can be divided. It is about electrolytic analysis of a chemical compound as a result of an electric current. So, hydrochloric acid (HCl) is split up into the chemical elements hydrogen (H) and chlorine (Cl). The amount of electricity a current must supply to split up one gram-molecule HCl can easily be measured. Let the value be F. Now, in electrolysis of that one gram-molecule, N hydrogen atoms are produced. So for the release of one atom the needed amount of electricity is F/N. Now one supposes that for this one elementary quantum of negative electricity is needed, the electron, having the charge e. The value of e is precisely known from other purely electrical experiments (among others, those of J.J. Thomson and Millikan). And this value must be equal to F/N, resulting in an equation from which N, Avogadro's number, can be computed. And behold : the result again matches the results of the first series of experiments.
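The equation e = F/N from the paragraph above can be solved numerically. A sketch with the modern values of the constants (Perrin and his contemporaries worked with slightly different measured figures) :

```python
# Electrolysis argument : the charge F needed to liberate one gram-molecule,
# divided by the elementary charge e (known from independent electrical
# experiments), must equal Avogadro's number N.
F = 96485.0    # Faraday constant, coulombs per gram-molecule (modern value)
e = 1.602e-19  # elementary charge in coulombs (modern value)

N = F / e      # solving e = F/N for N
print(f"N = {N:.3e}")  # about 6.02e23, agreeing with the kinetic-theory results
```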
So we again have a new, independent confirmation of the kinetic gas-theory. But also here again it is clear : the assumption that the atoms would actually reside in the molecule is again totally irrelevant, is a superfluous element of the theory, and where it is added it can be eliminated. In the gas-theory there is no evidence whatsoever of the actual existence of atoms in the molecule.
So while in these experiments we may read a confirmation of the hypothesis that in gasses and liquids there is discontinuity between the molecules of chemical compounds, we cannot find in them any confirmation that there is discontinuity within the molecule, as atomism assumes. So our problem has not at all come nearer to its solution, except insofar as the peculiar unity of the molecule is more strongly emphasized -- as regards the stability, the difference between the bonding of atoms in one molecule and that of molecules in one gaseous or liquid mass, and the fact of chemical valence.
Cause of the false interpretation.
But surely now -- so clear is this result -- another question must arise : why did it happen that so many physicists and philosophers saw in these decisive results of molecular theory also a decisive victory of atomism, in this particular sense that now it must also be assumed that the atoms -- albeit themselves still composed, a "monde" -- are actually present in the molecule of the compound? The symposium of the "Société française de Philosophie" seems to have accepted this conclusion unanimously. The historical circumstances answer this question and more or less explain the inexplicable.
The classical physical theories, such as atomic and molecular theory, wanted to be genuine explicative theories, wanted to discover the causes of the phenomena. They were mechanistic -- and eventually became untenable [as a result of subsequent quantum-theoretic findings] -- but this precisely accentuates their nature as causal theories. Against this explicative, and in essence metaphysical, character of the theories a positivistic reaction emerged, which, at the end of the 19th century, became rather strong and was widely supported. Sometimes it was asserted that a self-respecting chemist should not think of atoms and molecules as realities [because they were inferred, not observed].
The origin [of positivism] must be looked for at the beginning of the 19th century in the positivistic philosophy of A. Comte, who was willing to accept in physics only laws, not causal explanations of these laws. And the influence, if not of Comte himself, then certainly of related ideas, has been great in many scientists of the 19th century, especially at its end. Meyerson, devoting much attention to these ideas, and especially trying to discover opposite tendencies and opinions as inhering in the human mind, had nevertheless to say of Comte : "son influence sur ses contemporains et surtout sur les générations qui ont suivi, a été immense". While this was especially the case in France and England, elsewhere, where Comte remained unknown, as in Germany, similar theories were taught by Kirchhoff, Mach, and Ostwald. Indeed, the latter author explicitly stated that he found the opinions, developed by him without knowing Comte, later in the master of positivism himself. So this positivism wanted to ban all explicative theories from science, just as Comte himself opposed the optical theories, the emission as well as the undulation theory, precisely because they pretended to explain the phenomena. A causal explanation invokes metaphysics, the spectre of all positivism.
But because all classical theories were indeed explicative, these tendencies, threatening to erase all deductive chapters from science, would not have gained so much influence if there had not been developed, in the 19th century, theories of another type, which, without supposing anything about the essence of things or their structure (thus without explanation), were nevertheless able to deduce anyway. We mean pure thermodynamics and theories of the same type. These theories took their point of departure from a few very general principles, processing only experimental notions, principles of which one became convinced along a purely experimental route : conservation of energy in all circumstances [First Law of Thermodynamics], increase of entropy in well-defined circumstances [Second Law of Thermodynamics]. These principles, as has been said, allowed themselves to be expressed in purely experimental data, and thus did not suppose anything about the essence of matter. And yet they allowed for remarkable deductions, making up pure thermodynamics, and also allowed themselves to be extended to other physical and chemical areas. Thus the "doctrine of phases" of Gibbs became a theory of wondrous fertility for the study of physico-chemical equilibria in the hands of Bakhuis Roozeboom and Schreinemakers.
Theories of this type were seized upon by later positivism to press ahead with its demand "no more explicative theories". Indeed, all explicative theories, being "metaphysically contaminated", were worthless. A deductive science without such theories had turned out to be possible. So all of science had to be of the thermodynamic type. That was the stance of positivistic energetism, so popular in many countries at the "fin de siècle" (19th-20th century). Rankine in England had started it. Kirchhoff, who in Germany was more or less a forerunner, restricted the task of science to description [thus abstaining from any explanation]. Mach viewed "theory" as nothing more than an "economy of thinking". Ostwald chiefly demanded energetic considerations and a "hypothesen-freie Wissenschaft". For Duhem in France the task of science was the setting up of a "classification naturelle". The last three acknowledged the similarity of their positions in the diversity of formulations. Mach and Ostwald truly were positivists and denied the value of metaphysics (although Ostwald got engaged with it nevertheless). Duhem fully acknowledged metaphysics, but wanted to set up a physics which, contrary to classical physics, did not presuppose a metaphysics of whatever sort. In essence he wanted, what is sometimes overlooked, to obtain metaphysical results through physics, results that apprehend reality in its essence, but in fact only so at the end, when that "classification naturelle" has been -- who knows when? -- completed. Also Duhem did not escape the sceptic tendencies characterizing the "fin de siècle" of the 19th century as to the physical sciences. His booklet Le Mixte et la Combinaison chimique of 1904 carries their signs.
One should not confuse thermodynamics and its applications with positivistic energetism. The thermodynamic theories are legitimate and fruitful methods, but they are not "the final answer" of physics, and they are not hostile to explicative theories. Thence the attempts to find an explanation with ontological causes from the thermodynamic principles and results themselves : the work of Gibbs, Boltzmann, Lorentz, and others. Thence the theories connecting thermodynamic principles and theses with explicative theories, atomic and molecular theory, as took place in the "école hollandaise", as Duhem calls it with much praise. Also the writer [HOENEN] has developed such theories, in which both elements, thermodynamic and molecular, go together harmoniously.
At the same time there were also many scholars who, without being energetists, were sceptical of the classical explicative theories. This was "la crise de scepticisme dont, à la fin du 19e siècle, Henri Poincaré s'est trouvé, à son corps défendant d'ailleurs, le représentant principal devant l'opinion" (1922). This scepticism promoted positivistic tendencies, as it was itself in turn influenced by them : "les anathèmes de Comte et de M. Mach y sont certainement pour quelque chose". That scepticism, originating from dissatisfaction over unsolved difficulties, may lead to positivistic theories, we again witness in our days in the blossoming of neo-positivism, which, even after the definitive demise of positivistic energetism at the beginning of the 20th century, found a new breeding ground in the difficulties of the quantum-doctrine.
In addition to energetists and sceptical minds, there were of course still enough scientists defending the value of explicative theories. In Holland they were even, especially as a result of the influence of Van der Waals and Lorentz, still numerous. They were hardly affected by the crisis. And this position has now, in the beginning of the 20th century, won the dispute thanks to the definitive victory of molecular theory. Indeed, there exist explicative theories of classical science that discover reality and are essentially correct. However, they should -- we saw many examples of it and have one now again -- be critically purified along the logical demands of the principle of elimination. What, then, has to be eliminated as superfluous elements usually turns out to consist of elements of the mechanical view of Nature, which were always supposed without any fundament in experience. Here, in the present discussion, that view turned out to be applicable in one part, namely in the supposition of discontinuity between molecules, but only here.
These, then, were, in inorganic natural science, the positions of mind up to the turn of the century [19th-20th]. There were two opposed views -- or three, if scepticism can be called a view -- disputing the question : may science set up explicative theories, or should it be content with non-explicative, only experience-summarizing and classifying, especially energetic, theories? As to one point, however, one was in full agreement : as explicative theories only the classical ones were accepted, i.e. those adhering to the mechanical view of Nature, ultimately presupposing the Eleatic metaphysics. Other theories, such as those of Aristotelian inspiration, were not known anymore. Thence the only opposition : atomism against energetism.
[Energetism : a view founded by Ostwald, replacing mechanicism. According to energetism there is no independent matter (and thus no molecules, no atoms). Matter is nothing more than a spatially ordered group of different energies. All events consist of migration of energy in space or of transformation of different energy forms into one another.]
And then suddenly appeared the definitive confirmation of the kinetic gas theory, a typical explicative theory, which, at least in its actual violently moving molecules, complied well with the ideas of the mechanical view of Nature, thereby giving a final blow to positivistic energetism. Suddenly, because that was the impression of many. Meyerson, at the mentioned symposium, expressed it as follows : "tout d'un coup, un changement surprenant s'est produit". Brunschvicg, in his L'expérience humaine, after having spoken of "une époque", "l'époque où l'humanité a vu l'un de ses rêves millénaires descendre dans la réalité sensible et s'y incorporer", expresses this same opinion : "tant le succès de l'atomistique s'était révélé vaste et foudroyant".
So where only two views in natural science stood against each other, while a third possibility was not known, where there was only the choice between atomism and energetism, automatically only the first remained after the demise of the second, so that with the demonstration of the reality and actuality of molecules, also that of atoms in the molecule was taken to be demonstrated. Thence we understand the only restriction in the acknowledgement of the victory of atomism, the restriction which we heard above from Perrin and Poincaré : these chemical atoms are not Democritean atoms, they are still composed -- composed, of course, of actual particles.
Had one known Aristotle's theory, especially the fact that Aristotle and the Middle Ages had worked out a minimum theory which was compatible with the view of the compound as a new Substance, as one single continuum, as totality, then one would immediately have seen the clear distinction between discontinuity between molecules and discontinuity in the molecule, and one would have spoken of the victory of molecular theory -- against the view of energetism -- but not of a victory of the atomistic view. In addition to a mechanical explicative theory, an Aristotelian theory [i.e. a non-mechanical explicative theory] is possible. Both oppose energetism. But is it possible at all that something valuable comes from the Middle Ages? Certainly, without this prejudice an error would have been avoided.
The "conversion" of Ostwald.
Most scientists have accepted the arguments in favor of the molecular theory. So too did an opponent of the first hour, Ostwald. We want to sketch his turnabout because it throws so much light onto this history, and no less onto our problem.
Nobody was more opposed to the atomic theory than Ostwald. In 1895 he held his controversial speech at the meeting of German scientists and physicians in Lübeck about "die Ueberwindung des wissenschaftlichen Materialismus". That "scientific materialism" was here nothing else than the atomic theory. The same thoughts he has since published in many writings. Not only was the reality-value of atomic theory denied, but the theory was also held to be detrimental to the progress of science.
His arguments are made up not only of the general reasons of energetism -- that an explicative theory, and atomic theory is certainly of such a type to him, is metaphysical, imaginary, and detrimental -- but also of special reasons. See here a decisive one (italicization HOENEN's) :
" Vielmehr verlangte die [atomistic] Ansicht die Annahme, dass, wenn auch beispielsweise alle sinnfälligen Eigenschaften des Eisens und des Sauerstoffes im Eisenoxyde verschwunden waren, Eisen und Sauerstoff in dem entstandenen Stoffe nichtsdestoweniger vorhanden seien und nun eben andere Eigenschaften angenommen hätten. Wir sind jetzt an eine solche Auffassung so gewöhnt, dass es uns schwer fällt, ihre Sonderbarkeit, ja Absurdität zu empfinden. Wenn wir uns aber überlegen, dass alles, was wir von einem bestimmten Stoffe wissen, die Kenntniss seiner Eigenschaften ist, so sehen wir dass die Behauptung, es sei ein bestimmter Stoff zwar noch vorhanden, hätte aber keine von seinen Eigenschaften mehr, von einem reinen Nonsens nicht sehr weit entfernt ist."
" Rather, the [atomistic] view demanded the assumption that, even when, for example, all perceptible properties of iron and of oxygen have vanished in iron oxide, iron and oxygen would nevertheless be present in the generated compound and would now simply have acquired other properties. Today we are so accustomed to such a view that it is difficult for us to perceive its peculiarity, yes, its absurdity. But when we consider that all that we know of a given substance consists in the knowledge of its properties, we see that the assertion that a certain substance is, it is true, still present but has lost all its properties, is not far removed from pure nonsense."
So Ostwald urges precisely what we have indicated to be our second problem : the difference in properties between the compound and its elements. And if indeed this difference were as great as he describes, then his conclusion would necessarily follow. The compound would indeed be a completely new substance as compared to its components. It would be absurd to assume the continued existence of the elements [in the compound]. But then we should decide, not in favor of energetism, but in favor of the theory of Aristotle, which has indicated the conditions for such a change [namely the conditions for substantial change]. But also for this theory the complete disappearance of element-properties would bring difficulties with it. And the problem would remain to detect the effective cause of the change. But this [candidate-]solution was totally unknown to Ostwald, as to the others.
But fourteen years later, in 1909, one year before the symposium we spoke of, Ostwald had become convinced of the legitimacy of atomism. He announced this in the preface of his Grundriss der allgemeinen Chemie, where he declared :
" Ich habe mich überzeugt dass wir seit kurzer Zeit in den Besitz der experimentellen Nachweise für die diskrete oder körnige Natur der Stoffe gelangt sind, welche die Atomhypothese seit Jahrhunderten, ja Jahrtausenden, vergeblich gesucht hatte."
" I have become convinced that we have recently come into possession of the experimental demonstration of the discrete or granular nature of substances, for which the atomic hypothesis had searched in vain for centuries, yes, for millennia."
Again, no distinction is made between discontinuity in the molecule and between molecules. The demonstrations are taken to hold for atomism. These demonstrations were those same demonstrations as presented by Perrin :
" Die Isolierung und Zählung der Gasionen [electrically charged atoms] einerseits, welche die langen und ausgezeichneten Arbeiten von J.J. Thomson mit vollem Erfolge gekrönt haben, und die Uebereinstimmung der Brownschen Bewegungen mit den Forderungen der kinetischen Hypothese anderseits, welche durch eine Reihe von Forschern, zuletzt am vollständigsten durch J. Perrin ermessen worden ist, berechtigen jetzt auch den vorsichtigen Wissenschaftler, von einem experimentellen Beweise der atomistischen Beschaffenheit der raumfüllenden Stoffe zu sprechen."
" The isolation and counting of gas ions [electrically charged atoms] on the one hand, which have crowned with full success the long and excellent works of J.J. Thomson, and the agreement of the Brownian motions with the demands of the kinetic hypothesis on the other, which has been measured by a series of investigators, finally and most completely by J. Perrin, now entitle even the cautious scientist to speak of an experimental proof of the atomistic constitution of space-filling substances."
So we have here partially the same arguments as Perrin had put forward. Do these arguments contain an explanation of the fact of the new properties of the chemical compound and the vanishing of those of the elements? In these experiments of Thomson, of Perrin, and others, there is no sign of any indication in this respect, nothing that makes the difference in properties disappear or even diminishes it, nothing that now clarifies what for Ostwald in 1895 was a "Sonderbarkeit", an "Absurdität", "von einem reinen Nonsens nicht sehr weit entfernt". [We will later see that the elements in the compound do not simply lose their properties while these elements are still actually present in the compound. The elements exist in the compound only virtually (they can be actually retrieved from it), and the new properties are now, not the properties of the elements, but properties of the compound, the new Substance (in the metaphysical and chemical sense).]
What, then, should have been Ostwald's conclusion after having heard of these successful experiments? Nothing other than this : gasses and liquids consist of individual molecules [so matter is real]. That is demonstrated by the data of experience, and therefore I give up my energetist position. But the difference in properties [between compound and elements], which has not disappeared or become smaller, is for me a decisive argument that the atoms of the elements do not have continued existence in the molecule, and that the molecule is, accordingly, a new substance [in the chemical sense] as compared with the elements. Indeed, the opposite is absurd to me.
From which this is clear : only ignorance of the Aristotelian theory could lead to that turnabout [that of Ostwald]. And also this : even after the success of molecular theory our problem, the problem of the chemical compound, still remains unsolved, and we shall, in order to make progress, have to investigate the proportion of these properties [i.e. the nature of the difference between the properties of the compound and those of its elements] still further.
The position of Duhem.
A few words about Duhem, because his influence, also on philosophers, has been great and has still not disappeared.
After the definitive demise of energetism he no longer expressed himself on the matter. Initially, his position differed a bit from those of the other energetists. He acknowledged, in contrast with the positivists, the possibility and value of metaphysics. But as regards his view of physical theories, he agreed with them [the positivists] in practice, though not entirely in theory. Not entirely, because he assumed that in the end -- i.e. when the natural classification, being the end goal of the physical theory, is completed -- the results of the physical theories would be subservient to metaphysics, which deals with the essence of things. The physical theories did not do that themselves, and legitimately all were of the thermodynamic type, avoiding all considerations about the essence of things and the causes of the phenomena, and, in their derivations, only departing from some general experimentally established principles. Thence Duhem, like Ostwald, so vehemently opposed atomic theory and molecular theory. These, evidently, are explicative theories, purporting to know something about the essence of matter, about the real causes of phenomena. Their explicative nature cannot be denied.
There were, in Duhem's theory of science [epistemology], two oddities that are still influential :
First, his argument for the symbolic nature of physical laws, and why the theories do not intend, like metaphysics, to discover reality : Reality is qualitative, whereas the modern physical theory is mathematical, is quantitative. Therefore it is in the end symbolic. And in his followers this even became : Therefore it cannot apprehend reality. That this last assertion is a fundamental error we have already seen earlier [Although we must realize that the qualitative can, as it seems, only be measured -- and perhaps perceived at all -- by its extensive, that is, quantitative, effects. But then, thanks to this, qualities can be investigated in physical theories after all.]. That which is purely qualitative, purely intensive, can, as we have shown, be represented by numbers without losing anything of its qualitative nature, without the intensive becoming the extensive. After all, Duhem has not been consistent with himself in these things, as we already indicated above. It is queer that this idea is still circulating.
The second oddity is this : Duhem presented positivistic energetism as a return to Aristotle by physics. And it is still more peculiar that one has believed him. Concerning atomic theory, being the subject of our investigation : The minimum theory of Aristotle, Simplicius, and St Thomas has totally escaped Duhem's attention. He only encounters a formulation of it in Aegidius Colonna (a pupil of St Thomas) and traces of it in later medievals. And he assumed that this was Democritean import, accepted by only a few, and carefully preserved. Earlier we saw that the atomic theory of Dalton precisely introduces Aristotelian elements into chemistry. Duhem's resistance against the atomic theory is, instead of being a return to Aristotle, an attempt to eliminate Aristotelian elements from science.
And it is no different with his energetism in general. Surely, in its battle against mechanicism, which rejects qualities, it is, like Aristotle, anti-mechanistic, because it admits the possibility of qualities. But this is something purely negative. In the positive, i.e. in the demand that natural science must be explicative and must be willing to try to understand reality and not merely classify it, Aristotle, together with mechanicism, opposes Duhem's energetism. Truly, it is inconceivable that a man like Meyerson has simply accepted Duhem's assertion that energetism brings with it a return to Aristotle. He does it, it is true, to be able to condemn also Aristotle in the failure of energetism, and then, with it, to exclude for ever a return to Aristotle -- all this said with more power of expression than of argument. Even stronger : there have been also neoscholastic philosophers who, following the authority of Duhem, took physics to be a positivistic-energetic construction, and held this to be an Aristotelian-Thomistic position. And more so, this influence, after the demise of energetism, is still to be found here and there. It seems to be very difficult to read off the intrinsic and essential spirit of a science and of a doctrine from its principles and development.
Before we continue with HOENEN's text (pp. 375, concerning the internal structure and wholeness of crystals, atoms, and molecules, and the establishment of them being true Substances in the metaphysical sense (next document)), we shall ponder the nature of molecules as established by the kinetic gas theory. While in some gasses the "molecules" are one-atomic (such as Argon and Helium), in many of them they consist of two or more atoms, such as H2 (hydrogen), O2 (oxygen), but also NH3 (ammonia), CO2 (carbon dioxide), and many others. So gasses all consist of individual atoms or molecules. The same can be said of liquids. And it can now be inferred that also all solid materials consist either of individual atoms (such as in metals) or of individual molecules (in many cases these molecules have become very large indeed as a result of internal repetition of certain chemical units : crystals). Molecules can be simple, but also, especially in the domain of "organismic chemistry", very complex and even functional (specialized to perform a certain chemical task).
We will see that atoms are the true building blocks of matter (the so-called elementary particles are not truly matter, but merely fragments of matter), and they also are true Substances in the metaphysical sense. Chemical bonding of atoms results in molecules and crystals. And they too turn out to be Substances in the metaphysical sense. One of the main physical reasons to view them as Substances, wholes, totalities, is the fact that the bonding of their constituents involves quantum-conditions rendering the atom and the chemical compounds to be non-mechanical structures, which certainly means : holistic structures, intrinsic wholes, and thus true Substances. And now we know that organisms represent the supreme example of Substance (still in the metaphysical sense), and we might wonder whether their constituents are also held together by something that implies quantum-conditions, i.e. by chemical bonds (especially covalent bonds). And because we know that the essence of an organism is to be a single chemical "machinery", we might speculate that an organism is in fact one single, giant, molecule (instead of a system of molecules), albeit a molecule with changing chemical bonds, a molecule not having a fixed molecular weight, and not following precisely the stoichiometric laws as they were established for smaller molecules, and moreover, a molecule embedded in and soaked through by an aqueous serum-like medium representing the molecule's nearest existential condition, a medium of about the same volume as that of the molecule itself. Such a living molecule actively maintains its existence and interacts with the aqueous serum-like medium and through it with the outer environment. And now indeed we have a being, the organism, that certainly is a true Substance, because it (too) is a molecule. And indeed, all "wholeness-phenomena" that we encounter in all organisms become much clearer when the organism is viewed as a single molecule instead of as a "system-only".
Further, this view of the unimolecular state of organisms is very well compatible with the view that the Explicate Order is the "display window" of the "World Cellular Automaton" with its rules residing in the Implicate Order (these rules regulating the "projection-event"), because it supports the view that the material world exclusively consists of quality patches and patterns, and indeed, unimolecular organisms have their constituents, atoms and larger groupings, in a virtual state. Conserved properties of these constituents have become qualities, not of these constituents, but of the organism, the molecule, itself. So an organism is, like any other molecule, an intrinsic patchwork of qualities making up a true individual whole (separated from other such wholes by pure (not penetrated) aether, the medium of localization).
This doctrine of the unimolecularity of organisms ( Unimol ) was first hinted at by Pflüger in 1875, but was later fully worked out (as far as possible) by Oskar Müller in 1959, worked out in "Division III" of his comprehensive work on natural philosophy. This Division is called "Ein-molekülares Leben ". Here, in the present document, we will quote (or paraphrase) some passages from that Division in order just to introduce the idea of Unimol, first in (very) general terms, then taking over some ideas about molecules in general, and finally some more about Unimol :
Life begins there where, in the stock of entities, the existence of molecular compounds is no longer simply given [as it is for all non-living molecules], but has to be guaranteed by active self-acts [i.e. when descriptions of Substances become strategies-to-exist]. It is founded in the chemical-architectonic structure of a class of substances whose individual one-whole representatives are precisely the living organisms. In the organismic matter-aggregation of these organisms the "desire to exist" has obtained material limbs and tools to manifest and realize this desire. The living organismic forms of life are the existential conditions of the self-synthetic chemistry of life and its proper order and regularity.
The representatives of organismic life, and with them also humans, we view as special cases of individualization as molecular units. The organismic species, and also humanity, is a "chemical substance", unlike the others, but comparable to them at least in many features. Unimol creates -- rightly understood and seriously thought through -- a supreme feeling of existence.
(For the nature of the chemical bond, resulting in molecules, see "The Chemical Bond" in First Part of Website.)
Molecules are, against the "normal" environmental contact-influences, temporally stable, permanent neighborhoods of certain determined atoms, and represent an individualized, strictly delimited, qualitative substance. Disturbance from inside (disintegration) is, as a rule, not possible, but only, initially, from without, and also here "rarely", as a result of the improbable conditional coincidence of the factors of disturbance. The molecular internal connection -- temporally guaranteed by a lack of separation-energy, and accidentally made possible by the fine-structural properties of the partner-constituents -- is expressed by the so-called chemical bond.
Chemical bonding is present when the electronic charge-densities between two adjacent atomic nuclei, that is, along the "line" between the nuclei, have substantially increased (so-called overlap in bond direction). Visually imagined, one may see things such that an intra-atomic space-element, lying in the vicinity of the just mentioned connecting line, finds itself "now" in a field-value state as if it were much closer to the nucleus (which formally is indeed correct).
The molecular state is in its purest and prevailing form present in a gas (freedom, ability to be isolated, best matching with theoretical constants, molecular weight, etc.). This concept of a molecule is by Unimol -- apart from a certain necessary modification -- in no way loosened. Such a loosening would be much more appropriate to attribute to salt-like, [electrically] polar, and to typical "complex compounds", which in spite of recent restricting new knowledge are still taken as (at least formally) genuine molecules.
Molecules are the smallest not further divisible qualitative units of all defined substances ["not further divisible" in the sense of not divisible under constancy of qualitative properties]. They already have some sort of shape -- generally not spherical anymore -- and, as genuine matter, occupy a space-volume impenetrable to their own kind and to other molecules. In a few cases (inert gases, and, more or less approximately, also metals) the molecules are at the same time single atoms, but normally they consist of atoms directly bonded to one another, atoms of the same or of different chemical elements, and [the molecules] derive their properties indirectly from these elements' [atoms'] nature and number, and show, in certain physical constants, even a direct summation of the relevant constituent-properties [properties of the constituents].
Not considering the fact that many chemical elements also occur in a molecular state (being, however, still an elementary condition), one may say that the transition from free elements into compounds with one another, i.e. into molecules, is connected with so abrupt a change in properties that one cannot "see" -- even when possessing the most precise knowledge of all the elements involved -- the composition of any compound, that only the experienced chemist may, in a few cases, have useful conjectures, and that an analyzing craft, having taken centuries to develop, is needed to identify the nature and proportion of the constituents. Today, for almost all "small" molecules (small number of constituents) this is possible with unimaginable absolute precision. For the so-called macromolecules it is possible with a practically sufficient precision, and for the organismic giant-molecules only in the sense of preliminary orientation [Today, in the 21st century, the analytical methods have vastly improved, as is evident in today's sequencing of nucleotides in DNA and amino acids in proteins, and further in identifying enzymes.].
Because at high temperatures, and therefore in all hot stars, there are no molecules, and about half the mass of the Universe is free of them, consisting only of atoms of elements as potential constituents of molecules, the formation of molecules is a very conspicuous event, taking place preferably there where we ourselves are and to which [formation] we ourselves also belong.
It is most simple, but at the same time also least precise, to postulate a tendency of formation of molecules, i.e. attributing to the elements a striving for positioning themselves next to one another and forming stable aggregations. Because this event proceeds absolutely specifically and systematically, and also spontaneously when causal conditions are present, and because often -- in the overall framework of conditions even always -- energy is released in the process, and thus a certain determined (having the nature of a fall) end-product is involved, the "striving for" is conceptually justified. Indeed, the opposite cannot be proved, but one can, purely empirically, proceed without this conception.
[While atoms are mono-nuclear entities (entities with only one single nucleus) with an electronic shell, molecules can be viewed as poly-nuclear "atoms" (atoms with more than one nucleus) with a common electronic shell.]
In molecules resulting from true chemical bonding, i.e. molecules as polynuclear combinates, the nuclei of the individual elements [atoms] are embedded in a common electron plasma. The submersion -- as to properties -- of the atomic individuals in combinational or superpositional structures of the resulting molecule favors such a picture. One may also speak of poly-nuclear systems, as one may generally view all individuals as bonding-systems as opposed to mixing-systems [mixtures] in which the partners are chemically free.
The organismic giant-molecules [i.e. organisms] are, as existing entities, individually variable, but only so in insignificant features such as precise molecular weight. The individual variability surely goes so far that two organismic molecules of the same species are never qualitatively/quantitatively equal. In inorganic small-molecules this equality is only realized with respect to criteria known and usual to us. And we don't know whether, according to other, unfamiliar criteria, also here there is such a variability. At least a weak (functionally entirely insignificant) variation among all molecules (and atoms) already results from the overall energetic system of energy distribution in all degrees of freedom and from the electronic states at a given moment, so that in normal conditions of temperature (and thus, for instance, not at absolute zero) there is practically no simultaneous equality of two individuals, and, more so, the same molecule is not even equal to itself at different times. But relative to us, and to our qualitative criteria, they are functionally "precisely" equal.
Our here presented description [from which, in the present document, we [JB] have selected only a few passages] was only meant to avoid a distorting, too great simplification of a definitely not simple phenomenon [the molecule], and [was meant] to provide a sustaining thought in order for one to intellectually enter areas far off from everyday analogy. This picture also helps one to more easily recognize the molecular, high-molecular, and organismic-giant-molecular or systemic-molecular construction as a systematic follow-up development of true stellar element-formation. From atom to human -- as organismic molecule -- we then have a certain unity and comparability of historical and physical constructionality.
If we want, to conclude matters, to say something about the "sense" of the molecule, evident from its effect, then we depart from the observation in colloid-chemical systems or in the simple molecular collision, meaning that we, in the molecular mixture, have, with the highest possible degree of mutual approach, to do with the highest degree of constituent interaction [between molecules]. So the spatial approach of action-centers is very important for "effecting".
Now the chemical bonding in the molecule creates a spatial nearness among the constituents otherwise impossible (they are, so to say, "opened up" and in this open, exposed form combined with one another). It is therefore a gathering principle, at the same time guaranteeing the "with one another" [of atoms].
The homopolar (covalent) chemical bond [the most common bond, especially in organic compounds], resulting in the unequivocal true molecular forms, creates an especially intimate connection between the individual partners, which, similar to the intra-atomic structure, blends them into a "boundless" unity. Here it is more than a spatial coexistence, and it is even more than a supra- and intra-entanglement of effects, because parts come together into units and only then unfold effects. The chief moment, in addition to the "opening up", is the backward fixation. Overcoming the repulsion in the electronic shells, a common shell is created, and this common electronic mantle acts like a kind of "protecting coat", under which also the special potencies -- not electron-emissional, but organismically working themselves out, "hidden", in fact, however, not so hidden -- can be actualized and developed.
The one-molecular view of organisms (Unimol).
[Unimol is the view that every individual organism is a single giant molecule embedded in some aqueous serum-like medium. And although an organism can thus be compared with known non-living molecules, organisms are not simply such molecules, because of their different internal structure and their special existential conditions. Unimol, however, assumes a -- although intermittently varying -- chemical continuity throughout the organism-proper.]
An experimentally verifiable support for Unimol is certainly the extraordinarily high speed of stimulus-conduction of motor and sensory chief nerves in higher mammals and humans, which speed is of an altogether different order than electrolyte shift in an aqueous medium. Different conduction speeds already all by themselves speak for differently constructed "wires" and not for electrolyte media [an electrolyte dissociates into electrically charged atoms or atomic groups (ions), such as H+, OH-, NH4+, etc.]. The capacity for conduction of stimuli, with its high frequency and the quick restitution, points to real, continuously running organic wires, and the stimulus conduction can be viewed as an uninterrupted molecular-mesomerous reversible serial repositioning. If we thus conclude -- from the phenomena of stimulus conduction, and further from the experiences of neural damage in neuro-surgery : electrolytically, nerves could be fused more quickly than by bondingly growing them together -- that the nerve fibres possess genuine chemical bonds, then, already by the fact alone that all parts of the living organism are penetrated by fine and finest connected nerve fibres, the organismic body is made up of a continuous structure, of a complex but single unimolecular "interlacement" of living substance soaked through by a serum-like medium.
The unimolecular organismic connection is thus symbolized, by reason of especially clarifying relationships, by the nervous interlacement, but we must be convinced that also between nerve-substance and cytoplasm there exist further true bondings.
That the bonding relationship is not present in the partly rather extensive serum-like medium with its many individual substances, and that there exist trillions of classical small- and macro-molecules, do [both] not oppose the fundamental unimolecular view; this is presupposed to be self-evident.
When we now speak, concerning the first living beings as well as all [fossil and] recent organisms, of "molecules", then it is necessary to change the usual physical concept of "molecule" accordingly. First and foremost as to constant molecular weight and most of the properties made use of in the classical methods of determining molecular weights (also those for macromolecules). This is not significant, not decisive, for the mentioned methods have been worked out with the stoichiometric non-living small-molecules [of classical chemistry] and were sufficient for the determination of their molecular weights. On the other hand, one should maintain the important bonding-criterion for recognizing the molecular boundary : The molecular boundary is always there where the continuous chain of main valence bondings stops. From many data we derive the existence of continuous chemical bonding, but a bonding that may at times, and at a number of interruption sites, [temporarily] open up [all this during the processes of self-maintenance of the living molecule], so that, at least as a result of the overall nervous relationship and [as a result of] the omnipresent true bonding-transitions, the entire organism is "soaked through" by one single giant molecule. Living molecules are then such that among them there is only some principal similarity, not equality, or, expressed better : not a precise stoichiometric, but only a functional equality and correspondence. And precisely in this respect, i.e. functionally, they are well defined molecules even in the strict sense of the word [if every chemical bond, also in non-living molecules, is taken functionally].
So the one-molecular view of organisms, Unimol, maintains that within what is taken to be living substance, i.e. between all points of the organism, there exists a real, or at least potential, uninterrupted valence-bonding sequence, which we think, schematically idealized, to be realized by the nervous apparatus, which indeed connects all points of the body with one another. As a result, there exists -- alongside all the, surely numerically preponderant, independent molecular forms [free molecules] of the organismic system containing water, salts, enzymes, etc. -- between every two arbitrary points from head to feet an uninterrupted chain of bonds, so that the molecular boundaries coincide with the body boundaries, and so the whole body represents one single molecule.
The relations of order between organic parts and events ultimately have a physical-chemical equivalent in the form of a true chemical bonding relationship among the partners, which [partners] thus, because themselves consisting of molecular connections, together make up one single supermolecule in effect coinciding with the organism. So organismic consideration is -- if it gets rid of the superfluous metaphysical [here indicating "vital forces" and the like], and through its concept not only wants to formulate a fact, but also wants to give it a foundation -- a Unimol view of life. Unimol neither denies nor overlooks the formally chemical steering as a further mechanism, for which there is nothing equivalent, and which [chemical steering] also cannot be realized by the bonding-relationship alone.
The organismic order may give up the otherwise necessary ranking order such as the stages [levels] of constituents (macromolecules, micelles, cells, tissues, organs, [antimers, metamers] ), because it immediately results from the unimolecular framework, and, what is especially important : remains in it. Or, said differently : the real organismic rank order of constituents is no other than that of table-salt, urea, or sugar molecules.
The (pure) system-view [as opposed to the unimolecular view] is a not particularly emphasized, but clearly implied, "knocking together" of a classical colloid-chemical system, of which there are indeed remarkable and powerful ones, and the additive system of empirical data, which both have conceptually been fused into a unity. So the system-view is an all-out pure form of mechanicism, supplemented with an "essence" consisting of assumptions, speculations, convenient faith, and in the end of a not admitted vitalism, thus representing a unique example of methodical imprecision. Unimol realizes this essence simply and rationally.
[If we consider the organism as a "dynamical system with initial conditions and dynamical law of system-transitions" -- and thus adhere to a form of system-view -- as we have tried in First Part of Website, -- the organism is then a system-state following from initial conditions and dynamical law. And so it can in principle reductionistically be described. It is "mechanical" in the sense of mechanicism. In this view the unity of the organism is rooted in the one dynamical law. So the constituents of the organism are indirectly connected with one another, they are connected through the one dynamical law, they have descended from one set of initial conditions, of initial elements, meaning that they are not d i r e c t l y connected with each other. In this case no vitalistic element is added. The dynamical law + initial conditions embodies an organic strategy-to-exist. So there is nothing wrong with such a form of system-view of an organism. But it has a shortcoming : the components of the organism exist [in this view] actually, rendering the organism not to be a true unity in the strongest sense, not a genuine Substance in the metaphysical sense, but only a physically [not chemically] defined Substance. But if we insist that an organism is a true Substance in the metaphysical sense [and this is legitimate, because such an organism has a true SELF ], then we must at the same time insist on the strict unity of it, meaning that its (corpuscular) components exist only virtually in the organism, meaning simply that the conserved properties of these components have now become qualities of the organism. And such a holistic constitution of an organism is best expressed by it being a single living molecule.]
If one considers life, respectively its unit form, the cell, as a summative living system, then at once all those problems crop up that have long plagued research, yielding only moderate results, problems we cannot rationally get rid of, and whose description renders experienced life the most obscure phenomenon.
In all this, one might ask what role in the Unimol conception is played by the DNA-RNA-Protein System, so fundamental and important in the process of Life. Does this system belong to the single living molecule? Is this system "alive", or is it an auxiliary system to make Life possible at all? In order to be able to discuss this, it is necessary to consider "life" as being equivalent to [any degree of] "self-consciousness", to the "feeling of existence", to the "care of remaining in existence", and thus to embody an "active strategy-to-exist-and-persist". All this in contrast to inorganic beings. The potency to produce Life is supposed to be already present in the organogenous chemical elements C, O, N, H. Further, we must realize that the truly "living substance", i.e. truly living matter, is largely protein-similar, "protoplasma". And, of course, we now know that DNA codes, via RNA, for (fermentative and constitutive) proteins, and synthesizes them from elementary compounds (amino acids) present in solution, and that the central structure -- the backbone -- of proteins is a chain of amino acids chemically bonded to one another by the "peptide-bond" and making up a peptide chain.
[... then many difficulties can be avoided if one attributes to the nucleic acids [DNA, RNA] a life-substance-producing function, and a subsequent-delivery function, and if one lets the "nucleic acid free" true life-substance, existing in every germ [organism], constantly be connected with that subsequent delivery. [In the sequel we use "NA" as a shorthand for "nucleic acid" or "nucleic acids", and only when we want to discriminate between them we use "DNA" and "RNA". The DNA sends copies, in the form of mobile RNA-strands, of certain sections of itself to the site where proteins are being produced, meaning that now the particular RNA-strand codes for the protein to be produced. This first of all results in the synthesis (amino acid by amino acid) of the protein right on the back of the RNA-strand (in the sequel more generally referred to as "NA"). Subsequently, the synthesized protein is released from its RNA matrix.] As to the relationship between gene (DNA) and organism, i.e. between genotype and phenotype, Oskar Müller (1959) further remarks :
A certain refinement of the earlier presented picture, seeming important to us, may be realized by the following consideration : The individual amino acid that, in chemical-bonding fashion, lies against the NA is, from this moment on, and still before it concatenates with other amino acids, a bound amino acid with a fine-structure deviating greatly from the free form, an amino acid that, with respect to the particularly versatile internal-structural characters of NA, may be similar to a statu-nascendi form, and that could also be similar to the fine-structural form of the amino acid residues within the protoplasmatic mega-molecule [the living substance]. One could also say that the conceptually isolated protoplasmatic continuity-structure of the "individual group" is already present in the nucleotidic connection. [...]
In at least the case of protoplasm of recent organisms, one may suppose that the subsequently dislodged units [produced proteins] become directly connected with the one living mega-molecule and are therefore protected against denaturational decay.
In the subsequent peptidic concatenation this [just described] similarity is enhanced, and the NA-molecule-"corresponding" complete peptide molecule possesses -- as primary product -- the overall-chemically fully-equivalent ground-structure of living substance, and, at the same time, the fine-configurative partly-equivalent structural principle of living substance. This peptide molecule does not yet live, because it cannot live at all. For it is still bonded [to the NA], and life = consciousness = feeling of existence can only occur when it is embodied in a self-directed free molecular individual, having, in addition, an environment-threatened particular structure [i.e. a structure still threatened by its environment], a structure such that it only exists when it lives, and thus feels its condition, so to say, with anxiety.
This condition may obtain at the moment in which the peptidic combinate is released from its NA-matrix and now, as a result of the removal of the "harness", either is (and remains) free, or disintegrates (i.e. it either, as such, i.e. as separated from the NA, continues living, or perishes). For it is always a form that is unimpededly existent as long as the bonding with the NA-support-construction is preserved, and which [form] cannot exist all by itself when that support-construction is cancelled. All common substances too will, insofar as a deviant state of them were possible at all, soon fall back [into their regular state]. And only in the C-O-H-N-S systems -- always having abundant special potencies [i.e. systems built up with organogenous chemical elements] (in which [systems], and not in the NA, all the essential resides) -- does there exist the, generally very rare, possibility of a prolonged existence under the phenomena of life and [existence] as life. The moment of release will then also be the primordial conscious experience, accompanied by being affected by anxiety.
We do not want to suppose that the increase of living substance in today's organisms unfolds as a continuous sequence of micro-dramas. As an individual act it may again throw light on the primordial-generation-of-life experience [Urzeugungserlebnis]. The present mechanism of increasing the amount of living substance [by repeated transcription and translation] has still preserved the original essential features, but has now made it 1000-fold secured, automatized, embodied, and anxiety-free. If one applies the above described process to a presently living organism, then, after strongly reducing and simplifying things, we obtain the image of a long track of material at whose far end we find NA as a knitting machine.
The overall function of this NA might be described as, first, the function of interference expressing itself in the anti-crystalline order of configuration of the substrate, and then the fixation of the sequence which, with its anti-periodic nature, guarantees the further strengthening of the anti-crystalline tendency. In addition we of course have the general function of material autoreproduction.
With all this we have concluded our picture of the essential significance of the NA, as one might see it according to the present stage of knowledge. It is not complete and never will be. But we may try to "physicalize" it a bit, insofar as such a concept [physicalizing] is legitimate at all in a pictorial image.
We already said that the NA-molecule is in no way a living molecule (also not formally, or by extending the concept, or by any other way of circumventing or smuggling-in. NA is, for example, also not denaturalizable in the usual sense [whereas any living substance is denaturalizable]). It only -- together with a series of further fermentative helpers and intermediate forms, which all by themselves are just as little "alive" as is the NA -- makes it possible for a molecule to be alive. This is a very remarkable event. On the in every respect lifeless NA (at least when we consider it to be protein-free) a sequence of rational chemical processes takes place, concluding with the fact that their result, namely a synthesized protein, is released (by breaking up bonds) and maintains a condition to which it is generally not entitled. Because the aggregate-form "NA-Protein" was "dead", life originates in that from this dead aggregate-form a characteristic part is taken away (namely the NA-gear, which, apparently, recovers itself unchanged from the aggregation, and which also remains dead even when its reproduction again takes place with the support of living protein or at least of protein-ferments), as a result of which the residue is [now] living. Therefore, the living appears as a special form of the dead, or better : the initially dead was a special form of the living, which [living] was totally dissimilar to the dead.
The pure-form of DNA [i.e. that sort of NA which permanently resides in the cell nucleus], which today is generally agreed upon to be the carrier of genetic properties and (through RNA) to be the chief actor in all protein formation, is not at all the ultimate biological unit.
One may clearly formulate two extremes : NA as central substance, or as sophisticated micro-mechanical auxiliary tool.
Certainly it is best to hold an "intermediate" position that lets the two mutually imply one another, as the result of a billion-year-old association.
In the case of the centrality of the NA, the biological cell would offer the nucleoproteids (NP) [I [JB] assume all DNA and RNA and their free building blocks and all auxiliary proteins] a proper environment, and the protoplasm would then be a product of metabolism or excretion, which [protoplasm] possesses precisely so much independence as to be able, with the help of the substances produced by the nucleoproteids, to care for itself and for the NP as supplier of building blocks.
Although this view could be formulated without contradictions, it would never get rid of a certain pitifulness of moving around in circles. Indeed, there is no principle of Nature that forces us not to adhere to the view most favorable to a philosophically "truthful" judgement, if no other important objections can be found.
[During the origin of Life] the nucleic acid (NA) was in a certain sense a "target" compound in the abiogeneous earthly overall synthesis, adjoined by several other compounds (porphyrins, carbohydrates, lipoids). All other remaining compounds were, so to say, insignificant to the origin of Life and have perished. Also no further complexification was needed anymore, for [together] with the primary origin of Life also its necessities and "wishes" had been generated, which, now with the directing mechanism of the selection of the fittest, on their own account took care of what we today call complexity.
A very broad field, connected with and subsequent to the primary development, consists of the formation of metabolic types and the homologous and analogous distribution of function onto the already generated entities and onto the entities resulting from disintegration. All this concerns change as well as specialization and also summation.
This period may be of considerable length. At its beginning there were truly living heterotrophous nucleotidic combinates (not of the nature of viruses, which "bungle" in a short-cut process where once there already was fully-fledged life). At its end there are the uncountable pre-cellular specialists, subsequently combining symbiotically with one another (i.e. chemically bonding with one another) and experiencing body regressions, as today we see them in the form of genes [and cell-organelles] and partly as enzymes (in this case as decapitated torsos), etc. [Here is described the later theory of Margulis that cells are evolutionarily constructed by symbiosis of what we now call cell-organelles and the like.] That is, the chemical bonding lets them degenerate into tool groups and internal organs, and the totality of them appears as a spatial ordering which -- still including the environment -- introduces the cellular condition.
Perhaps someone asks the question why precisely those sophisticated NA exist, giving rise to such powerful and differentiated organisms. Well, such questions are [according to O. Müller] easy to answer, ever since Darwin formulated the principles. The NA, being synthesized intra-organismically, stand and fall with the biological fitness of their organisms under natural selection. So precisely those variants of NA are preserved whose mutational deviations gave their organisms corresponding advantages, or at least no disadvantages.
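The Darwinian logic just stated -- variants whose deviations confer an advantage come to predominate -- can be caricatured in a few lines of code. This is an illustrative sketch only, not anything from Müller's text: the variant names and fitness numbers are invented, and real NA-selection is of course incomparably more intricate.

```python
# Toy replicator-selection loop (illustrative assumptions only):
# three hypothetical NA variants with invented relative fitnesses.
population = {"advantageous": 10.0, "neutral": 10.0, "deleterious": 10.0}
fitness = {"advantageous": 1.1, "neutral": 1.0, "deleterious": 0.9}
POP_SIZE = 30.0  # total population held constant, so variants compete

for generation in range(50):
    # Each variant multiplies in proportion to its fitness ...
    grown = {v: n * fitness[v] for v, n in population.items()}
    # ... and the whole population is scaled back to a fixed size,
    # so fitter variants displace the less fit ones.
    total = sum(grown.values())
    population = {v: POP_SIZE * n / total for v, n in grown.items()}

# After 50 generations the advantageous variant has almost completely
# displaced the neutral and deleterious ones.
print({v: round(n, 3) for v, n in population.items()})
```

Nothing more than compound interest is at work here: a constant 10% per-generation edge suffices, under a fixed population size, to drive the other variants toward extinction.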
It can be noted that now the macro-morphological kinship of all organismic life (including all more detailed serological, microanatomical, microhistological and physiological kinship) grades down into the fine-morphological chemical kinship of a single type of substance, the NA [i.e. grades down into the chemical kinship relations between the different representatives of the nucleic-acid type]. But, taken strictly, this is a confusion with which the adherents of the system-view of organisms have a hard time dealing, but which [confusion] can be overcome directly by Unimol. [Here it is about the problem of the precise causal connection -- causal chain -- between the macroscopic properties of an organism and its genes, and thus its DNA, and especially whether there always must be such a connection. All we directly know of the function of DNA is the fact that it codes for proteins and produces them at the right times and in the right amounts. The path from here to the macroscopic features is obscure, especially when we take into account the fact that many macroscopic features are not determined by genes at all (as we saw earlier in the case of Bonellia and of the angler fish, where we encounter in the male and female, having virtually identical genomes, completely different organisms as to size, morphology, behavior, etc.).]
One may, for that matter, await the man -- [and this man indeed did arrive, in 1976 : Richard Dawkins, with his book The Selfish Gene] -- who declares : In addition to all other known molecules, once [upon a time, in primordial times] also a NA-molecule had been formed which replicated uninterruptedly and hardly changed, and its functional appendages, only to be used for its convenient and secure preservation [as a result of replication], known as organisms, as plants, animals and humans, now have to bear the burden of unimaginable, never-ending amounts of distress, misery, care, battle, robbery and murder, just in order to preserve this one NA-molecule as long as possible.
If one does not want to attribute to the NA, or to a precursor similar to it, the function of creating life, then one can say that the molecular-chemical life forms would gear themselves to it. It is the symbiotic apparative tool for realizing a molecular combination which, when released, wants to preserve itself -- a molecular combination that lives.
So these were some words about the unimolecular conception of an organism (Unimol). It clearly connects with HOENEN's discussion of the metaphysical status of molecules. By considering the structure of atoms as to their wholeness status (next document), HOENEN views the necessary introduction of quantum-conditions as one of the indicators that the atom is not mechanical in its constitution, but a holistic entity, a true Substance in the metaphysical sense. And because such conditions also play an important role in the formation and constitution of molecules, they too can be considered to be such holistic entities. And although HOENEN does not thematize organisms [his book is about the philosophy of the inorganic world], he considers organisms to be true Substances in the metaphysical sense. So by introducing Unimol the organisms naturally connect with molecules and free atoms. We shall thematize Unimol in Part XVg of the present Series.
In the next document we shall continue with HOENEN's text on the metaphysical status of crystals, atoms, and molecules, respectively.
To continue the study of the general features of Inorganic Nature as the natural context of organisms and organic evolution, click HERE for Part XVf.