Thoughts on Measurement Systems and Analytical Procedures
April 3, 2025 — Brad Venner
The pharmaceutical sector has brought systems engineering concepts into analytical chemistry. ICH Q14 explicitly relates systems life cycle concepts to traditional analytical method development, while ICH Q2 relates them to analytical method validation.
A USP document entitled “Distinguishing the analytical method from the analytical procedure” distinguishes a method from a procedure by the use of the latter in decision-making.
One could argue that this is a practical illustration of the need to bring measurement and systems concepts into greater communication. The development of measurement concepts within systems engineering shows a tendency to “re-invent” concepts already developed in metrology. Conversely, the notion of “system” in the metrology literature is often taken as a primitive term and given little development. The hypothesis of this project is that both “transdisciplinary” frameworks are valuable and can be fruitfully combined.
Petri’s article on “Quality of measurement information in decision-making” [@petri:2021:quality] develops the concept of “quality of measurement information” as an alternative to the “analytical procedure” approach developed by ICH. They state that
we will argue that several other factors, together with measurement uncertainty, need to be considered to ensure that the information provided by measurement “fits for purpose”
This provides an operational distinction between “method” and “procedure”: the former considers only measurement uncertainty, while the latter considers the “several other factors” necessary for “fitness for purpose”. This conflicts, however, with the notion of method validation as itself targeting fitness for purpose.
There seems to be a parallel with the distinction between “validation” and “verification”: validation is more closely related to “fitness for purpose”, while verification is focused on the “system meeting requirements”. Petri distinguishes “internal” and “external” quality as “conformance to specifications” and “fitness for use”, respectively. Verification/validation is then the process of gathering evidence to demonstrate internal/external quality.
The EPA has also developed parallel notions of “data quality” intended to ensure that measurements are fit for purpose, developed largely out of experience with over-budget Superfund projects.
Petri distinguishes syntactic, semantic, and pragmatic information. Syntactic information is called “data”. In Peirce’s classification, data would be related to the “sign vehicle”. The quotes from Morris on syntactics, semantics, and pragmatics are surprisingly close to Peirce’s classifications, with semantics defined as “the relations of signs to the objects to which the signs are applicable” and pragmatics as “the relations of signs to the interpreters”. Petri refers to syntactic, semantic, and pragmatic as “layers”, which fits the categorical semiotic perspective and may fit the “process view” that signs, objects, and interpretants develop in a sequential process. I seem to recall that a similar process view was developed by Poinsot - might be good to check Deely for this. Petri describes the comparison of actual and target uncertainty as being essential to the “pragmatic” layer of measurement quality information (which kind of assumes a total order).
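As a minimal sketch of that last point (the names, values, and units here are hypothetical illustrations, not drawn from Petri), the pragmatic-layer check reduces to comparing an actual standard uncertainty against a target uncertainty set by the decision context - a reduction that only works if uncertainties sit on a single totally ordered scale:

```python
from dataclasses import dataclass

@dataclass
class MeasurementResult:
    """A measurement result together with its standard uncertainty."""
    value: float
    uncertainty: float  # standard uncertainty, in the same units as value

def fit_for_purpose(result: MeasurementResult, target_uncertainty: float) -> bool:
    """Pragmatic-layer check: is the actual uncertainty within the target?

    Collapsing "fitness for purpose" into one comparison assumes
    uncertainties are totally ordered -- the simplification noted above.
    """
    return result.uncertainty <= target_uncertainty

# Hypothetical example: a concentration of 4.2 mg/L measured with
# u = 0.3 mg/L, against a target uncertainty of 0.5 mg/L.
print(fit_for_purpose(MeasurementResult(4.2, 0.3), 0.5))  # True
```

The point of the sketch is what it leaves out: once the “several other factors” enter, fitness for purpose is no longer a single scalar comparison.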
Aside: the fact that the “use” of any tool (including a measurement result) can never be completely specified and is inherently open (Stuart Kauffman made this point, and I should try to find the paper) means that a variety of different uses must be anticipated in any measurement quality framework. Petri calls this
the component that reflects implied, unidentified needs, called latent quality.
Does this imply an “inherent” governmental responsibility for measurement? I need to assign myself the task of defending this position.