Analysis versus Synthesis: are we attuned to each kind of thinking?

Posted on March 2, 2018 by Rick Jelliffe

Does some of the supposed discrimination in the hiring policies of high-tech companies actually have a common root cause: that while the companies’ hiring regimes are brilliant at identifying useful analytical thinkers, they are weak at finding useful synthetic thinkers?  (In fact, the regimes may actually weed out useful synthetic thinkers.)  What is the difference?  Why do we need both?

With these questions in mind, this article looks at which technologies help developers with each kind of thinking, and where Schematron fits in.

Analysis and Synthesis

Analysis and synthesis are opposite ways of thinking about a thing.  Understanding synthesis is a key to understanding what Schematron is about. But I recently came across a startling proposition: your ability (and therefore habit, success and inclination) in short-term memory retention (and/or working memory and/or memory consolidation) may primarily determine whether you can be a good analytical or a good synthetic thinker.  A person with a strong short-term memory may find analytical thinking naturally easy and never develop their synthetic thinking abilities: a matter of relative advantage rather than laziness.

There is a nice introductory mini-video by the famous Prof Russell Ackoff (“the Einstein of Systems Thinking” and associate of Deming) for this particular usage of the terms (which is slightly different from the use in logic); watch it to be set up for this article.

Analysis is dissection of a thing; synthesis is fitting it into a larger whole.  (You might say that analysis works top-down, going down into the thing; while synthesis works bottom-up, going up and out from the thing.)  Dr Ackoff says both involve three stages:

  • Analysis: take the thing you want to understand apart; try to understand what each part does; then assemble that understanding of the parts into an understanding of the whole.
  • Synthesis: identify the containing whole of which the thing is a part; try to understand that containing whole; then explain the thing in terms of its role or function within that whole.

Dr Ackoff thinks analysis only deals with how something works and cannot deal with why it works, which requires stepping back outside the thing itself.  With analytical thinking, the outside world is the black box.

Angle for Schema Languages

When we look at schema languages for XML, going way back to SGML, we can see that all of them (except Schematron) are based on analytical thinking. You start with a grammar: an element is valid if its child elements and attributes are valid against the grammar; an attribute is valid if its data value is valid against its datatype. For DTDs and RELAX NG, a document is valid if its top-level element is valid against the grammar.  Top-down, taking things apart.
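To make the grammar approach concrete, here is a minimal RELAX NG sketch (the element and attribute names are invented for illustration): validation starts at the top-level element and works down, checking each child element against its content model and each attribute value against its datatype.

```xml
<!-- Top-down, grammar-based validation: the document is valid if its
     top-level element (and, recursively, every part) matches the grammar -->
<grammar xmlns="http://relaxng.org/ns/structure/1.0"
         datatypeLibrary="http://www.w3.org/2001/XMLSchema-datatypes">
  <start>
    <element name="prescription">
      <!-- an attribute is valid if its value is valid against its datatype -->
      <attribute name="patient-id"><data type="token"/></attribute>
      <!-- an element is valid if its children are valid against the grammar -->
      <element name="dose"><data type="decimal"/></element>
    </element>
  </start>
</grammar>
```

Note that nothing in the grammar can refer outside the document being validated: the dissection is entirely self-contained.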

Looking at Schematron, we find instead it fits in with synthetic thinking.

  • The natural language assertion—the essence of Schematron—lets you describe the constraint in terms larger than the document:  hence the preferred form for assertions, “An X should have a Y because Z”. And Z is often in the containing system.
  • Schematron does not have a simple hierarchy of elements: instead patterns may overlap and may include information from external documents (retrieved by the document() function in XPath)—sometimes I comment that Schematron is the only WWW schema language, because the grammar-based languages (DTD, RELAX NG, XSD) are strictly limited to the information in the document in question.  Something like the UBL code list methodology, where part of the information is managed by a different party, has no affordance in those languages.
  • Schematron has value-add attributes like role, and elements like diagnostic and property that allow an assertion to carry extra information useful for interacting with the wider system.
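Putting those three points together, here is a minimal Schematron sketch (the element names and the dosage rule are invented for illustration) showing a natural-language assertion in the “An X should have a Y because Z” form, with a role attribute and a diagnostic carrying extra information for the wider system:

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern id="dosage-checks">
    <rule context="prescription">
      <!-- The assertion speaks in terms of the containing system, not the markup -->
      <assert test="number(dose) &lt;= number(max-dose)"
              role="fatal" diagnostics="over-dose">
        A prescription should have a dose no greater than the maximum dose,
        because the patient must be given the correct dosage otherwise they
        may lapse into a coma.
      </assert>
    </rule>
  </pattern>
  <diagnostics>
    <!-- Extra, system-facing information attached to the assertion -->
    <diagnostic id="over-dose">The dose given was <value-of select="dose"/>,
      but the maximum is <value-of select="max-dose"/>.</diagnostic>
  </diagnostics>
</schema>
```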

There is a good historical reason why SGML/XML validation languages were so stolidly analytical: it springs from the radical separation of presentation and content.  In this methodology, your document should not have any style information (total abstraction); it all goes into a stylesheet (formal abstraction).  The act of document analysis would be both analytical (go through and remove the style information) and synthetic (and name the elements according to their meaning). This methodology makes it natural that the grammar for the document only has information relating to content, not styling: however, the methodology has no place for context, and so it treats everything to do with processing the document as non-schema-related.

<aside>SGML/XML’s radical separation of presentation and content was a brilliant idea, but I (and others) have long criticized it on several grounds:

  • It does not allow for defaults or prototypes, so you have to re-invent the wheel. There is a good reason why so many people choose to mark up their documents by starting with HTML and adding “semantic” information in class attributes: it means they don’t have to create the basic rendering infrastructure.  They don’t need to write code or configure anything to say that a paragraph element should be rendered as a kind of block. Similarly with JSON: there is no additional mapping step.
  • Documents are almost never designed with no application in mind. Designing a semantic format which might be the ideal neutral form, equally translatable into any application, may in fact mean the form is not easy for any particular application.  I think it is often better to design a format so that it at least satisfies the known requirements of the major application: so that it is definitely good for something rather than theoretically good for everything.
  • When thinking about how users (human and artificial) interact with a document, it is always in the context of some larger system. So it is the height of uselessness to provide users with validation messages couched in terms which, by their nature, must ignore that context and that system. (Can you spot the difference between “The element bffzzsdsd had an unexpected value” and “The patient must be given the correct dosage otherwise they will lapse into a coma”?)

Angle on Memory

Very few people can do both kinds of thinking equally well, and I don’t see why we should expect to.  Know your strengths. And this may not just be a personality issue, but a brain issue: if you have a good short-term memory then you may have the equipment for analysis, while if you instead have a relatively good long-term memory then you may have wetware better suited for synthesis.

<aside>One of the theories about dyslexia/ADD/ADHD is that, to some extent, they relate to short-term memory processing: it is not that the brain does not remember, it is that it forgets too eagerly; several dyslexia websites comment that synthesis is easy while analysis is hard.

Which brings me to two thoughts.

First, why are high-tech companies’ employment tests for developers so entirely geared to analysis (computer science) and not synthesis (software engineering)?

  • My guess is because it is easier to formulate questions on queueing or algorithms and get some objective result.  And easy to outsource to companies like HackerRank.  High-tech companies would do well to consider whether their hiring process actually excludes (and discriminates against?) synthetic thinkers.
  • Why would you want synthetic thinkers as well as analytical?  Consider that for project management, the waterfall method is purely analytical: divide the problem as a whole into tasks, and solve each one. But agile methods, notably Scrum, are constantly synthetic:  keep on relating to the system outside the project, involve the stakeholders, get feedback, measure and adjust velocity, refine requirements as you go. Most software companies now realize the inherent flaws in the waterfall method (except in limited repeated projects) and how agile methods help avoid those problems (though of course they have their own characteristic pain points); but how many CIOs, project leads and stakeholders look harder and see that, at heart, it is the inappropriate application of analysis rather than synthesis?
  • Is there a hidden and unnecessary discrimination caused by testing analysis but not synthesis, if they are really cyphers for short-term versus long-term memory capabilities?  For example, against dyslexics whose brains are tuned to forget fast, or autistics whose brains take time to process information, not to mention adults or the chronically flustered.  My suspicion is that analytical tests favour neurotypical teens…

<aside>Quotation detectives trace Edison’s line about genius, perspiration and inspiration to reported comments: “Two per cent. is genius and ninety-eight per cent. is hard work. … Genius is not inspired. Inspiration is perspiration.”  But isn’t that just another way of saying that relying on coming up with good solutions off-the-cuff is magical thinking?

Second, if we often do have a relative innate strength in one or the other of these modes of thinking, are some technologies accordingly well or poorly suited to us?

  • For synthetic thinkers:
    • Developing:  Declarative programming languages, languages with no re-assignment of variables, functional languages, and pipelines may be good for people who don’t have strong short-term memory and who consequently find it difficult to keep track of values in long linear chains of program execution.  For example, a language like XSLT might be suitable for an extreme synthetic thinker.  And perhaps language features like libraries and generics, so that you don’t need to implement algorithms yourself, fit in too.  Perhaps assert() statements and annotations fit in here as well: they reduce the load on short-term memory by (perhaps otiose) explicit statements of what must be true about some value or variable at a certain point in the code.
    • Assisted by: But if you have to use a programming language that does not suit your thinking style, you need tools.  For example, a synthetic thinker who has to deal with a Java program that jumps hither and thither would of course take advantage of debuggers and static analyzers (how brilliant does FindBugs continue to be!) to help.  Is the rise of NoSQL databases relevant here too?  They operate in areas such as caching, persistence and fast access, which may be the forte of (detail-oriented) analytical developers rather than (purpose-oriented) synthetic thinkers.
  • For analytical thinkers:
    • Developing: If we think of the 50s/60s/70s generations of computer languages like assembler, FORTRAN and BASIC, which relied heavily on global variables, coding in them relied intensely on your memory’s ability to juggle many balls at the same time: most of the successful innovations in computer languages come down to tools that use our short-term memory bandwidth more effectively (for example, by avoiding GOTOs, by scoping rules for variables, by classes, and so on).
      The extreme example that springs to my mind is 1990s-style Perl, where you can only understand a program by tracing through it from start to finish: just taking a chunk and figuring out what it does is often unreliable (no flames please: I saw this maintaining Perl code decades ago). Perl is mocked for being “write only”: is that really a euphemism for code that requires above-average short-term memory retention?
    • Assisted by: But (unless you are a pure researcher) the practical world will intrude into the analysis.  So tools such as smoke tests and regression tests prevent a developer spinning out of the context of the project and team.  Approaches such as Scrum and Acceptance Test Driven Development are relevant, as are User Stories in the form “As a {user X}, I want to {do this Y} in order to {achieve my goal Z}”.  Schematron, obviously. Surprisingly, I think frameworks actually fit in here: they allow the developer to concentrate on the immediate functionality and reduce the background knowledge needed (or that is what their value proposition is supposed to be…)
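As a small illustration of the declarative, no-reassignment style suggested above for synthetic thinkers, here is a minimal XSLT sketch (the section and title element names are invented): each template rule stands alone, so there is no long chain of mutable state to hold in short-term memory.

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Each template says what to do with one kind of node, in isolation;
       variables, once bound, are never re-assigned -->
  <xsl:template match="section">
    <div><xsl:apply-templates/></div>
  </xsl:template>
  <xsl:template match="section/title">
    <h2><xsl:value-of select="."/></h2>
  </xsl:template>
</xsl:stylesheet>
```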

But analyticals and syntheticals need each other: is this the root of successful pair programming?

We have heard a great deal about introverts and extroverts over the last couple of years. As the most extreme introvert you are likely to meet, except on a stage, that interests me! But, at the risk of proposing pop-psychological pap, for employment and management, getting a balance of analytical and synthetic thinkers and tasking them appropriately may be more important.