Terry Elmaleh

The validity and reliability of forensic handwriting examination


RELIABILITY AND VALIDITY OF HANDWRITING EXAMINATION

This story says it all: a drunkard was searching for his key under a street lamp, well away from where he had dropped it. When asked why he was not looking where he had dropped the key, he replied, “Why, it’s much lighter here, of course.” (Galbraith)

Reliability and validity refer to the best approximation to the truth or falsity of propositions derived from tests and/or handwriting examinations, and as such provide the pivot point by which any test or examination is interpreted.

Reliability

“A stuck clock, for example, is reliable but not valid and only accurate twice a day” (Cole, 2006, p. 110)

Reliability refers to the stability of a measure or technique in providing consistent results given the same scenario. Reliability is encapsulated in reproducibility and repeatability: does the examination give the same results when different examiners are given the same case (reproducibility), and does it give the same results when the case is given to the same analyst at different times (repeatability)? (A brief sketch of how such agreement can be quantified follows below.) Further questions also need to be considered: Is the quality of the specimens good enough for examination? What discriminatory power does the evidence provide? Has the forensic scientist followed the proper procedures and techniques and truthfully made objective inferences based on the facts? The American judicial system has introduced the Daubert criteria to determine the reliability of expert scientific evidence.
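To make reproducibility concrete, the short Python sketch below (an illustration of my own, not a published forensic protocol) scores the agreement between two examiners who each reach one of three conclusions on the same set of cases, using Cohen’s kappa, which discounts the agreement expected by chance. The conclusion labels and case data are invented for illustration.

```python
# Illustrative sketch: reproducibility measured as inter-examiner agreement.
# The conclusion labels and case data below are invented for illustration.
from collections import Counter

def cohens_kappa(examiner_a, examiner_b):
    """Cohen's kappa for two equally long lists of categorical conclusions."""
    n = len(examiner_a)
    # Observed agreement: proportion of cases with identical conclusions.
    observed = sum(a == b for a, b in zip(examiner_a, examiner_b)) / n
    # Chance agreement: overlap expected from each examiner's own tendencies.
    freq_a, freq_b = Counter(examiner_a), Counter(examiner_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

examiner_1 = ["identified", "identified", "excluded", "inconclusive", "identified",
              "excluded", "identified", "inconclusive", "excluded", "identified"]
examiner_2 = ["identified", "inconclusive", "excluded", "inconclusive", "identified",
              "excluded", "identified", "identified", "excluded", "identified"]

print(f"Cohen's kappa: {cohens_kappa(examiner_1, examiner_2):.2f}")  # 0.66 here
```

A kappa near 1 indicates that different examiners reach the same conclusions on the same material; repeatability could be checked the same way by comparing one examiner’s conclusions on the same cases at two different times.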

Previously, under Frye v. United States (1923), the admissibility criteria were:

  1. Is the principle or test generally accepted to be reliable - is the scientific principle or novel method generally accepted within the relevant scientific community?

  2. Is the analyst qualified?

  3. Did the analyst follow the generally accepted procedures and methods when performing the specific test in the case?

The phrase “to be reliable” in point 1 above opened the door to courtroom challenges of the evidence submitted by expert witnesses. In its ruling in Daubert v. Merrell Dow Pharmaceuticals (1993), the Supreme Court handed down a new set of criteria for the admissibility of scientific evidence, later expanded to include all expert opinion testimony. The court held that the reliability of scientific evidence must be evaluated under Rule 702.

The court also held that the judge exercises a “gatekeeper” function and must consider five factors with respect to the scientific principles upon which a test or conclusion is based and the specific methodology that was applied:

  • Is the principle or methodology generally accepted within the relevant scientific community?

  • Has the principle or methodology been tested and validated?

  • Has the methodology or principle been subjected to peer review and/or publication?

  • Do standards exist, and are they maintained, that dictate the techniques and methodologies?

  • What is the known or potential rate of error of the principle or methodology?

Validity

Validity in the context of forensic science refers to the soundness of the conclusions drawn from the methods used in the analysis. Was the appropriate test correctly applied to the examination? Put another way: does the method do what it claims to do, and was the process used to arrive at the result sound and justified? Is there a way to measure the weight of the evidence? Several kinds of validity are commonly distinguished:

  • Statistical conclusion validity: there should be sufficient covariation in the data, i.e. the variation should be observable; the sample size should not be too small (see the sketch after this list); and the reliability of the measures used in the investigation should be high.

  • Construct validity: how well the investigation measures what it claims to measure.

  • Internal validity: this deals with determining the nature of causality. The key question is whether the observed criteria can be attributed to specific explanations.

  • External validity: the extent to which the results of the investigation can be generalised to other instances of the same phenomenon.
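To illustrate the sample-size point above, here is a hedged Python sketch (my own illustration, not a prescribed forensic procedure) using the standard two-proportion sample-size formula. The error rates, significance level and power in it are assumptions, chosen only to mirror the magnitudes discussed in the next section.

```python
# Illustrative sketch: how large must a validation study be so that its
# statistical conclusions are trustworthy? Standard two-proportion formula;
# all input values are assumptions for illustration.
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Cases per group needed to detect error rate p1 versus p2 (two-sided z-test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # value for the desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Assumed error rates: 0.5% for trained examiners vs 6.5% for laypersons.
print(n_per_group(0.005, 0.065))  # about 147 comparisons per group
```

A study with far fewer comparisons than this could not reliably separate the two groups, which is what “the sample size should not be too small” means in practice.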

Accuracy / Error Rate

An accurate decision “is one that reflects the true state of evidence”. An error rate, in the context of a scientific discussion, is defined as a continuous, repeatable, consistent action that yields a predictable level of false positive or false negative results in casework (Budowle et al., 2009). Langenburg (2012) proposes that there cannot be a zero error rate in the ACE-V method (Analysis, Comparison, Evaluation, Verification; see separate article) used by handwriting examiners, because the method is inextricably linked to the analyst who applies it. According to Lewis (2014, 130), “....forensic document examination does not have an error rate because it is impossible to calculate as each case’s evidence and every examiner examining the evidence are unique”.

Kam’s research (1994) provides evidence that, as long as the document examiner has a thorough understanding of the principles, techniques and methodologies of handwriting examination, there is a significant difference between a writer identification conducted by a professionally trained examiner and one conducted by a layperson. Laypersons are almost six times more likely than professional document examiners to incorrectly identify the unique features pointing to a particular writer. His 2001 study showed that the error rate for professional examiners incorrectly identifying handwriting was 0.49%, against 6.47% for laypersons; his 2003 study showed error rates of 9.3% for document examiners and 40.45% for laypersons (a short numerical sketch of how such rates are computed follows below). There is further research confirming that the expert should be allowed to give an opinion. The various strategies that forensic handwriting examiners can use to reduce error rates are the subject of my next article….
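As a closing numerical illustration (my own sketch, not drawn from any of the studies cited above), the Python snippet below shows how false-positive and false-negative rates are computed from proficiency-test results scored against known ground truth; the counts are invented for illustration.

```python
# Illustrative sketch: false-positive and false-negative rates from
# hypothetical proficiency-test counts (all numbers invented).
def error_rates(tp, fp, tn, fn):
    """Return (false-positive rate, false-negative rate) as fractions."""
    fpr = fp / (fp + tn)  # wrongly attributing writing to the suspect writer
    fnr = fn / (fn + tp)  # wrongly excluding the true writer
    return fpr, fnr

# tp: correct identifications, fp: false identifications,
# tn: correct exclusions,      fn: false exclusions.
tp, fp, tn, fn = 190, 1, 200, 9
fpr, fnr = error_rates(tp, fp, tn, fn)
print(f"false-positive rate: {fpr:.2%}, false-negative rate: {fnr:.2%}")
# -> false-positive rate: 0.50%, false-negative rate: 4.52%
```

Tracking the two rates separately matters because a false identification and a missed identification carry very different consequences in casework.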

  • Cole, S.A. (2006). “Is fingerprint identification valid? Rhetoric of reliability in fingerprint proponents’ discourse”. Law & Policy, 28(1), 109-135.

  • Galbraith, O. III, Galbraith, C.S., & Galbraith, N.G. (1995). “The Principle of the ‘Drunkard’s Search’ as a Proxy for Scientific Analysis: The Misuse of Handwriting Test Data in a Law Journal Article”. International Journal of Forensic Document Examiners, 1(1), 7-17. (A re-evaluation of results from a handwriting proficiency test program.)

  • Kaplan, A. (1963). The Conduct of Inquiry. New York: Harper & Row.

  • Kam, M., Fielding, G., & Conn, R. (1997). “Writer Identification by Professional Document Examiners”. Journal of Forensic Sciences, 39(1).

  • Langenburg, G.M. (2009). “A Performance Study of the ACE-V Process: A Pilot Study to Measure the Accuracy, Precision, Reproducibility, Repeatability, and Biasability of Conclusions Resulting from the ACE-V Process”. Journal of Forensic Identification, 59(2), 219.

  • Langenburg, G.M. (2012). A Critical Analysis and Study of the ACE-V Process. Doctoral thesis, University of Lausanne.

  • Rosa, C. (2016). Forensic Handwriting Examination, Court Preparation & Expert Testimony.
