The Awakened Enterprise: Part 3

Awakened Enterprise: adaptation by design to mitigate uncertainty and incomplete knowledge.

Richard Arthur
Jul 3, 2024
“Dollar Bill Eye of Providence” by Mark Turnauckas is licensed under CC BY 2.0

This article pulls together groundwork that began in 2020 with Provenance for Decision-Making and Machine-augmented Mindfulness. Rapid technical evolution in recent years delayed the writing, but advances such as large language models (LLMs) like ChatGPT, and their integration into knowledge applications, considerably boost the viability and capability of the vision.

Imagine: you have a clever idea and you work for a company that cherishes and protects its intellectual property. Maybe your idea becomes a patent filing — but two or more years later, when the patent is finally granted (or not), will anyone know what to do with it? What if you have transferred into a different role or left the organization? Perhaps the idea was instead labeled a trade secret — under what conditions should that status be re-evaluated?

Imagine: you are a cancer patient, undergoing a variety of medical tests and treatments over many years from multiple specializations. Often your appointments are separated by months, during which many changes are possible — from the practicing physician to medical insurance coverage, to new diagnostic and therapeutic options, to your personal health. How may novel tools and practices improve continuity of thought for practitioners?

The Awakened Enterprise envisions a knowledge infrastructure, a framework and culture that empowers organizations to act despite uncertainty.

The proposed vision applies context to establish qualified accountability, operationalizing a coherent frame of reference to mediate the clouding of information, the paralysis of doubt, and cognitive susceptibility to inconsistency over spans of time and across organizational divides.

The system infrastructure improves understanding of past decisions by supplying searchable context metadata. It also enables acting with future insight, focusing ongoing situational awareness for adaptation by design.

Clarity vs. VUCA Fog of War

Four features impairing clarity when navigating reality’s murky operational landscape are: Volatility, Uncertainty, Complexity, and Ambiguity (VUCA). For at least 2500 years (Sun Tzu), strategies and tactics have been developed by commanders, priests, philosophers, and scientists to confront VUCA.

Complexity can be overcome by learning from past experiences: codifying choices and processes where we can bound and simplify options and context into learned best practices as standard operating procedure (SOP) or “standard work.”

Powerful analytical tools can tame even greater complexity, deriving models from vast data by applying techniques such as machine learning (ML). But even the most sophisticated of these methods will inadequately simplify problems whose context and alternative options prove difficult to sufficiently bound. For example, they are confounded by the inherent uncertainties and ambiguities of operational theaters characterized by the “fog of war.”

Even the clarity from a miraculous moment of perfect knowledge piercing such fog will fade, as volatility in key factors renders that precious temporary confidence fragile and obsolete [1]. Entropy is everywhere!

Some information and data remain unknowable until future events unfold, with desired confidence afforded only through the passing of time.

Therefore, the applicability of SOPs, standard work, best practices, and even machine learning’s data-derived models will be limited to the time period of their framing and the data from which they are derived.

The effectiveness of a technical knowledge infrastructure to address the VUCA environment requires corresponding cultural commitments. To achieve the ability to act despite uncertainty, we set the following goals:

  1. Improve confidence in making decisions by formally recording caveats.
  2. Monitor commonality of assumptions and unknowns across all decisions.
  3. Re-assess assumptions and unknowns vs. emergent change & discovery.
  4. Systematize adaptation by design to future change.
  5. Safeguard the value of learning to mitigate reluctance to accountability.

Beyond VUCA, further obstacles to realizing this envisioned mindfulness result from human and bureaucratic limitations that erode consistency in decisions and actions.

Mindfulness vs. Discontinuity in Time & Place

Many consequential decisions are but a single link in a sequence of decisions occurring over a period of time — or have been composed by multiple stakeholders across distinct organizations. This discontinuity creates ample opportunity for inconsistencies link-to-link as we consider:

  • fallibility of the human brain applying persistent focused attention and precise and accurate memory over numerous events or lengthy timespans,
  • variation among the authorities making, interpreting, and acting upon decisions — including due to turnover, reassignment, or restructuring,
  • discrepancies from discretion in interpretation, divergent incentives across entities, fluctuating socio-political and economic norms, etc.

The mind of an individual can struggle with constraints imposed by time: poverty of time afforded for agility or urgency, or long intervals of time fading coherence of thought into memory with notoriously unreliable recall.

The attention and decision-making of individuals compose the collective coherence of thought across an organization, presenting challenges to coordinate between subgroups or distinct (and role-changing) individuals.

In the cancer patient example above, there are plentiful opportunities for inconsistency when tests and consultations may be separated by years and involve several different specialists, alongside changing reimbursement policies, medical technology and practices. Other examples, such as legal processes and project execution are discussed in “Irresponsible Caution.”

Mindfulness refers to a state of holistic awareness. Meditation and yoga practices center focus on one’s immediate place and moment in time — as a cornerstone of coherence from which to contemplate context. Mindfulness prepares a means to see and think more clearly despite the overwhelming VUCA of one’s surroundings.

Systematic capture and curation of decision context provides insight into the limited knowledge for decisions made in the past. These records form a coherent frame of reference from which to understand, learn, and adapt over time, offering decision-makers in the present the opportunity to act with the advantage of future insights. Applying this machine-assisted augmentation of human perception and cognition can nurture institutional mindfulness.

“The Minding Organization” describes problem-solving strategies to envision the (desirable) future and bring it into the present [2]. This facility to shift perspective through time and across organizations with a coherent frame of reference develops a “continuum mindset.”

A continuum mindset promotes continuity and coherence in thought that durably persists over spans of elapsed time, distinct events, and organizational divides.

The ability to employ a continuum mindset promotes durable continuity of thought to span discontinuities in time or place and precocious awareness to robustly adapt to emerging knowledge. This uncanny agility can mediate the effects of VUCA that foster inconsistency, wasteful deliberation, and the mediocrity that results from hedging against accountability.

Knowledge Infrastructure

Organizations successfully competing in strategic global markets and delivering products and services with reliability, robustness, and resilience must proficiently build and maintain preparedness to apply with timely agility — guided by jointly-established capabilities to recognize opportunity.

“Luck is what happens when Preparation meets Opportunity” — Seneca the Younger (allegedly)

Infrastructure for Understanding

Crucial facets of successful preparation to act include:

  • Knowledge: trained expertise & gained experience, maintained through established practices and active & continual learning,
  • Models: embodying knowledge to guide its application in observation, orientation, decisions, and actions — the “OODA loop”, mental models, physical models, probabilistic/data-derived models, etc., and
  • Confidence: verification & validation and uncertainty quantification through testing and sensitivity analysis to bound predictive accuracy.

These compose an infrastructure for understanding, requiring continual investment to safeguard against becoming inadequate (as inconsistent, incomplete, inaccurate, invalid, or outdated). While knowledge may stay primarily embodied in the minds of people, those mental models must be extracted into explicit and specific forms to enable communication and collaboration. Historically, those forms may have been documents, but in modern systems the form taken will often be a digital model (even if no more than a spreadsheet).

Once digital, the models can harness decades-long exponential progress in computing that has provided the capability, affordability, and abundance of processing, storage and networking to drive advances in supercomputing, cloud (on-demand elastic) computing, and ultimately the scalable analytics embodying artificial intelligence (AI) and machine learning (ML).

Combined with abundant, affordable, and pervasively-accessible storage, AI/ML technologies offer powerful tools to map and apply knowledge, to tame beyond-human perceptive and cognitive complexity, and to explore vast combinations of trade-offs for optimization and sensitivity analysis.

State of the art modeling, analysis, and synthesis can also offer insight into gaps in knowledge and insufficiencies of confidence. That insight presents opportunity to improve knowledge, models, and confidence through data gathering, uncertainty quantification (UQ) and directed experimentation.

Though AI/ML tools provide the ability to evolve and adapt at speeds and scales superior to humans, there are shortcomings in their ability to experiment. Experimentation is crucial for cultivating versatility and robustness in anticipation of encountering unfamiliar circumstances [1].

The derived conceptual universe of AI/ML tools remains confined to the limited contexts and consequent responses (knowns) within the data forming the basis for training the models — and ignorant of their unknowns.

As a result, infrastructure for understanding must rely upon human facility for counterfactual reasoning (conceptualizing “what if” scenarios).

“Counterfactuals represent our cognitive capability to grasp situations we haven’t encountered before and to use them to improve our understanding to inform our decisions.”
― Kenneth Cukier, co-author of “Framers” [3]

Counterfactual imagination builds upon human proficiencies where AI yet lags — applying pragmatic constraints and causal understanding to frame strategically relevant scenarios and their evaluation [3].

Jack Welch, Chairman & CEO of General Electric 1981–2001

Infrastructure for Awareness

Complementing preparedness through the framework for understanding, a corresponding infrastructure for Situational Awareness (SA) gathers and delivers information for decision making — sufficiently recent, consistent, complete, and contextualized to promote the confidence to act swiftly.

Crucial capabilities for awareness to act include:

  • Detection: sensing emerging information through surveillance of data feeds and workflows, opportunistically directing focus of attention,
  • Recognition: processing information contextualized by prior knowledge and models into identification and observation of novelty, confirmation, contradiction, or anomaly, and
  • Exploration: active reconnaissance research seeking to improve future knowledge (insight / discovery), models (calibration / capability), or confidence (reduction of uncertainty / ambiguity).

A knowledge infrastructure can compensate for human and organizational deficits to promote a continuum mindset by enlisting rapidly-evolving capabilities of digital technology to counter distraction, memory lapse, institutional inconsistencies, and the overwhelming scale, complexity, and volatility of data.

Upon this gifted age, in its dark hour
falls from the sky a meteoric shower of facts;
They lie unquestioned, uncombined.
Wisdom enough to leech us of our ill is daily spun,
but there exists no loom to weave it into fabric.
— Edna St. Vincent Millay (1939) “Upon This Age”

Decision Provenance

To minimize behaviors that can motivate mediocrity, capture and document the factors anticipated to prompt revisiting the decision in the future: for example, the assumptions, unknowns, and evaluation criteria relating to how the decision was made. These metadata compose “decision provenance” and provide transparency into context at the time of decision-making.

Archiving decisions and their supporting data and provenance metadata into an enterprise knowledge steward establishes a recognized authority to query for considered sensitivities and limitations current to the decision.

Socialized awareness of the knowledge steward as a reference for clarity and transparency into the decision-making context offers an opportunity to establish qualified accountability — allaying fears of unfair future judgement in hindsight and promoting candor and confidence to act in the present.

Figure 1: At the point of decision-making, capture Decision Provenance metadata into Knowledge Steward.
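
As a concrete illustration of Figure 1, here is a minimal sketch (in Python) of what a decision-provenance record and a knowledge steward’s capture interface might look like. The field names follow the article (assumptions, unknowns, evaluation criteria, alternatives); the class names, methods, and in-memory storage are hypothetical simplifications, not a prescribed implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List


@dataclass
class DecisionRecord:
    """Decision Provenance metadata captured at the point of decision-making."""
    decision_id: str
    title: str
    decided_by: str
    decided_at: datetime
    assumptions: List[str] = field(default_factory=list)   # asserted but unverified
    unknowns: List[str] = field(default_factory=list)      # known unknowns to monitor
    criteria: List[str] = field(default_factory=list)      # evaluation criteria applied
    alternatives: List[str] = field(default_factory=list)  # options considered, set aside
    tags: List[str] = field(default_factory=list)          # index terms for later search


class KnowledgeSteward:
    """Toy in-memory archive standing in for an enterprise knowledge steward."""

    def __init__(self) -> None:
        self._records: Dict[str, DecisionRecord] = {}

    def record(self, rec: DecisionRecord) -> None:
        """Archive a decision with its provenance metadata (append-only by convention)."""
        self._records[rec.decision_id] = rec

    def find_by_tag(self, tag: str) -> List[DecisionRecord]:
        """Retrieve all archived decisions indexed under a given tag."""
        return [r for r in self._records.values() if tag in r.tags]
```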

Establishing the system and processes for capturing and querying these metadata can offer leap-forward improvements even in the absence of the advanced capabilities described in the following sections. For example, the ability to confidently recall assumptions, unknowns, evaluation criteria, and alternatives relevant to root cause analyses (RCA) of a problem.
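
Continuing the same hypothetical sketch, recalling recorded context for a root cause analysis becomes a query against the steward rather than an exercise in memory. The example decision, vendor names, and tags below are invented purely for illustration.

```python
steward = KnowledgeSteward()

steward.record(DecisionRecord(
    decision_id="D-2024-017",
    title="Select vendor X for turbine blade coating",
    decided_by="materials-engineering",
    decided_at=datetime(2024, 3, 5, tzinfo=timezone.utc),
    assumptions=["Vendor X retains its aerospace certification through 2026"],
    unknowns=["Long-term coating fatigue behavior above 900 C"],
    criteria=["cost per part", "lead time", "certification status"],
    alternatives=["Vendor Y", "in-house coating line"],
    tags=["coating", "vendor-selection"],
))

# Later, during a root cause analysis of a coating failure:
for rec in steward.find_by_tag("coating"):
    print(rec.title)
    print("  assumptions:", rec.assumptions)
    print("  unknowns:   ", rec.unknowns)
```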

Figure 2: Revisiting a decision in context of its provenance metadata stored in the Knowledge Steward.

Or consider preventing inconsistencies at the point of decision-making: the system could warn of assumptions, unknowns, and criteria contradicting previously-recorded decisions (where unreliability can result from long separations in time or between disjointed institutional divides).
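
A point-of-decision consistency check could begin as simply as the hypothetical helper below, which flags any assumption of a proposed decision that matches one already marked as refuted. Exact-string matching stands in for the normalized or semantic matching a real system would need; the example data are invented.

```python
from typing import Iterable, List, Set


def conflicting_assumptions(new_assumptions: Iterable[str],
                            refuted: Set[str]) -> List[str]:
    """Return assumptions of a proposed decision already marked refuted/contradicted.

    `refuted` stands in for assumptions that stakeholders or monitoring agents
    have flagged as no longer holding.
    """
    return [a for a in new_assumptions if a in refuted]


refuted = {"Vendor X retains its aerospace certification through 2026"}
warnings = conflicting_assumptions(
    ["Vendor X retains its aerospace certification through 2026",
     "Coating demand stays below 10,000 parts/year"],
    refuted,
)
for a in warnings:
    print("WARNING: previously recorded as contradicted:", a)
```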

Adaptation by Design

Uncertainties and incomplete knowledge foster risk/reward asymmetry. Mitigation of this bias toward risk aversion might be achieved through an uncanny ability to act with the advantage of future insights.

“It’s a poor sort of memory that only works backward” — Lewis Carroll

The customary use of digital infrastructures to retrospectively search archived data is well-established. But advances and affordable abundance in today’s digital technologies offer the opportunity to build far more capable information infrastructure. To achieve adaptation by design, we need to persistently search emerging data — to react to changes (volatility), developments & discoveries (uncertainty), and clarifications (ambiguity).

Entwining decisions with contextual provenance enables us to perform directed semantic searches on unstructured data for information valuable for clarification and consistency. Beyond internal archives, these may also reference diverse information feeds — from open public sources, to paid subscriptions, to data shared by collaborators via digital thread.
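
The snippet below sketches how such a directed search might work: the text of a recorded assumption serves as the query against incoming unstructured documents, and the closest matches are surfaced for review. It uses scikit-learn’s TF-IDF vectors as a lightweight, runnable stand-in for a true semantic embedding model (an LLM or sentence-embedding service would be the more capable choice); the documents are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Recorded assumption from a decision's provenance metadata (illustrative).
query = "Vendor X retains its aerospace coating certification through 2026"

# Emerging unstructured information from monitored feeds (illustrative).
documents = [
    "Regulator announces review of aerospace coating certifications issued to Vendor X.",
    "Quarterly earnings: Vendor Y expands in-house coating capacity.",
    "New fatigue data published for thermal barrier coatings above 900 C.",
]

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Rank documents by similarity to the assumption and surface the best matches.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
for score, doc in sorted(zip(scores, documents), reverse=True):
    print(f"{score:.2f}  {doc}")
```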

Referencing the knowledge steward, an assiduous agent infrastructure can persistently monitor emerging information to re-assess prior assumptions, criteria, and unknowns in the decision provenance metadata captured for consequential decisions.

Figure 3: Assiduous Agents alert decision-maker on discovering a contradiction to an assumption.

When adequately confident that it has discovered a meaningful insight (such as a contradiction of an assumption), an agent can prompt stakeholders to revisit stored decisions relating to that context. From the vantage of the past decision, the agent has effectively performed a search into the future, allowing correction of a flawed assumption and revision of related decisions.
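
A minimal sketch of such an assiduous agent is shown below: one monitoring pass scores emerging items against each monitored assumption and emits an alert whenever the score crosses a confidence threshold. The word-overlap scoring function, the threshold, and the alert structure are placeholders; a production agent would pair semantic matching (as sketched earlier) with human review.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class MonitoredAssumption:
    decision_id: str          # decision whose provenance asserted this assumption
    statement: str            # the assumption as recorded


@dataclass
class Alert:
    decision_id: str
    assumption: str
    evidence: str
    score: float


def scan(assumptions: Iterable[MonitoredAssumption],
         emerging_items: Iterable[str],
         relevance: Callable[[str, str], float],
         threshold: float = 0.5) -> List[Alert]:
    """One monitoring pass: flag emerging items relevant to recorded assumptions."""
    alerts = []
    for item in emerging_items:
        for a in assumptions:
            score = relevance(a.statement, item)
            if score >= threshold:
                alerts.append(Alert(a.decision_id, a.statement, item, score))
    return alerts


# Placeholder relevance function: word overlap; swap in semantic similarity in practice.
def word_overlap(assumption: str, item: str) -> float:
    a, b = set(assumption.lower().split()), set(item.lower().split())
    return len(a & b) / max(len(a), 1)


alerts = scan(
    [MonitoredAssumption("D-2024-017",
                         "vendor x retains its aerospace certification through 2026")],
    ["Regulator suspends aerospace certification held by Vendor X"],
    word_overlap,
    threshold=0.3,
)
for alert in alerts:
    print(f"Review {alert.decision_id}: '{alert.evidence}' may affect "
          f"assumption '{alert.assumption}' (score {alert.score:.2f})")
```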

Plan as a verb — not a noun.
— Moshe Rubinstein, The Minding Organization [2]

This capability empowers the making of consequential decisions with more confidence, reducing the undesired time, effort, and conservatism imposed by prevailing uncertainty. In effect, the system acts as a safety net for hedging upon a decision conditionally, aware that concerns of consequence (known unknowns or asserted assumptions) will be recorded, persistently monitored, and will prompt prescribed review upon confirmation (or contradiction).

Pursuing a continuum mindset, the assiduous agents continually synthesize a coherent frame of reference for enterprise mindfulness — persistently clarifying connections between decisions related by provenance metadata.

Transformation Obstacles

The awakened enterprise vision seeks to compensate for human and institutional deficits through a coherent frame of reference maintained by digital infrastructure for understanding and situational awareness.

But the very notion of “an enterprise” presumes a commonality of culture and processes on which to build. Large and long-established institutions like universities, research foundations, multinational corporations, and government agencies have often evolved into complex bureaucratic hierarchies with the deep organizational divides and distinctions we often characterize as “silos”.

Without active direction by top leadership toward inter-organizational goals and shared credit, the default behavior will remain aligned with legacy performance metrics to “do more with less” silo-by-silo.

“Spinning the Digital Thread” anticipates cultural and institutional barriers that discourage efforts to improve cross-functional integration, data hygiene, or common infrastructure. Despite potential benefits to upstream and downstream stakeholders, misaligned incentives serve to confine impact within budget-native silos and executive promotion timeframes.

The Patient Inertia of Legacy

Perhaps the greatest obstacle to confront is legacy — in terms of technical infrastructure (as legacy code (software), hardware, and data) as well as the social infrastructure (in legacy practices, procedures, contracts, roles, incentives, organizational structures, learned behaviors, etc.) embodied in established institutional culture, training, and precedent.

Technical failure will result from transformation that lacks corresponding change in the social infrastructure, while attempts to change the social infrastructure may founder on an inability to execute when overly chained to legacy software, data, and their workflows. Successful transformation will require addressing both the technical and the social infrastructures.

This status quo defines a basis against which incremental costs, resources, and operational risk must be justified — that is, the magnitude of obligation for risk-adjusted long-term impact relative to the short-term default.

Inertia can also accumulate through behaviors resulting from perceived loss of control and corresponding exposure to accountability. For example: an aversion to conceding and writing off sunk costs that may be considered “insufficiently aged,” or a reluctance to pay down procedural and technical debt that impedes progress toward interoperable workflows, data quality and hygiene, and enterprise productivity and mindfulness.

Leadership commitment to strategic transformation can be demonstrated through regular active and visible review of the asserted priorities, allotted schedule, and invested resources — tempered with appropriate toleration for risk. In the absence of these, the patient inertia of legacy will triumph.

Institutionalized Mediocrity

Fear of future judgement in hindsight motivates mediocrity in the present without some form of qualified accountability to safeguard against liability.

Consider the guarded caution with which patient medical records are now recorded in learned response to the proliferation of malpractice litigation. This hedge characterizes one of the tactics employed in the practice of “defensive medicine” to safeguard practitioners. Unfortunately, side effects can include reduced quality of care, burden of increased tests and costs, and diminished candor in communications between physician and patient.

These detrimental consequences illustrate how incentives for avoiding blame [4] can motivate behavior toward irresponsible caution. Adaptation by design (in the future) offers one approach to promoting candor, clarity, and confidence to act (in the present) — by establishing qualified accountability. However, any strategy for counteracting the institutionalization of mediocrity will require cultural leadership (and perhaps legal reform) to achieve acceptance in practice.

Intellectual Property, Debt, & Integrity

Any system accessing sensitive information must implement verifiable controls to enforce the required data protection policies. But a natural tension exists between desirable transparency and availability of information vs. undesirable theft or loss of protected information.

For example, the U.S. Department of Defense has undertaken great efforts to enable access for speed and agility, while still safeguarding classified, controlled unclassified, and trade secret data. Similarly, the HIPAA regulations for protected health information balance intentional privacy protections with compliant access in delivering needed care to patients.

When capturing decision context (with candor and clarity), the provenance metadata merits at least the sensitivity assessment of the underlying data and decisions themselves. By exposing the thought processes and state of mind framing consequential decisions, these assumptions, unknowns, evaluation criteria, and alternatives can require even more confidentiality.

To assure credibility, confidence, and compliance, the infrastructure to support the awakened enterprise must control and audit: classifications, roles, access, and authority.

Additionally, the adaptive and dynamic nature of the system creates opportunities to undermine integrity without reliable and robust version controls and immutable audit records. In particular, the establishment of qualified accountability requires trusting the authenticity of the system record. Unethical actors must not escape liability by rewriting history.
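
One well-known pattern for such tamper evidence is a hash-chained audit log, sketched below: each entry commits to the hash of the previous entry, so any rewrite of history invalidates every later hash. This is an illustrative sketch only; a production system would add digital signatures, trusted timestamps, and replicated or write-once storage.

```python
import hashlib
import json
from typing import Dict, List


def _entry_hash(prev_hash: str, payload: Dict) -> str:
    """Hash of this entry, committing to the previous entry's hash."""
    blob = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    return hashlib.sha256(blob.encode("utf-8")).hexdigest()


def append(log: List[Dict], payload: Dict) -> None:
    """Append an audit record chained to the current head of the log."""
    prev_hash = log[-1]["hash"] if log else "GENESIS"
    log.append({"prev": prev_hash, "payload": payload,
                "hash": _entry_hash(prev_hash, payload)})


def verify(log: List[Dict]) -> bool:
    """Recompute the chain; any rewritten entry invalidates every later hash."""
    prev_hash = "GENESIS"
    for entry in log:
        if entry["prev"] != prev_hash or entry["hash"] != _entry_hash(prev_hash, entry["payload"]):
            return False
        prev_hash = entry["hash"]
    return True


log: List[Dict] = []
append(log, {"decision": "D-2024-017", "action": "recorded", "by": "materials-engineering"})
append(log, {"decision": "D-2024-017", "action": "assumption contradicted", "by": "agent-7"})
print(verify(log))                        # True
log[0]["payload"]["by"] = "someone-else"  # attempt to rewrite history
print(verify(log))                        # False
```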

Finally, while advancing artificial intelligence (AI) capabilities can grant near-magical powers to tame complexity and make useful predictions, “with great power comes great responsibility.”

The data-derived models of machine learning (ML) can accrue what Jonathan Zittrain describes as “intellectual debt”: a system usefully applies knowledge, but the corresponding understanding of why it does what it does remains absent — including the assumptions and limitations of the model. That is, we do not know:

  1. what we do not know,
  2. how the model will perform in unusual situations, or
  3. whether applied inferences are causal or merely correlated in nature.

Closing the gap between opaque knowledge and understanding for ML applications will require corresponding efforts to develop trust in models through explainability, transparency, and sensemaking.

Invention & Implementation

While many of the needed capabilities can be readily delivered through current technologies, some gaps remain in the functionality needed to compose the awakened enterprise architecture.

While metadata are well-established for defining structure, content, statistics, and licensing for data (such as in data provenance and supported by FAIR data principles), these tend to be system-generated or follow policy guidelines. Decision Provenance metadata differ in requiring more human attention — which poses risks (to consistency and completeness) and presents obstacles (learning, additional cognitive labor, integration into workflows).

Decision Provenance capabilities needing attention:

  1. Low-friction (intuitive, courteous) capture within existing decision-making workflows. (Potential solution: conversational AI/chatbots).
  2. Terminology normalization (ontology) to promote correctness and consistency in indexing for search matching and interoperability (a minimal sketch follows this list).
  3. Guidelines for structural formalism by application domain. For example, in engineering, features for model scope, representation, intended use, assessment, and lifecycle (see: ASSESS ESMS [5]).
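
As one illustration of item 2 above, a minimal normalizer can map free-text variants onto canonical ontology labels before indexing. The synonym table below is an invented toy; a real deployment would draw on a maintained domain ontology.

```python
# Toy synonym table mapping free-text variants to canonical ontology terms.
CANONICAL = {
    "finite element analysis": "FEA",
    "fe analysis": "FEA",
    "fea": "FEA",
    "computational fluid dynamics": "CFD",
    "cfd": "CFD",
    "uncertainty quantification": "UQ",
    "uq": "UQ",
}


def normalize_terms(terms):
    """Map each recorded term to its canonical form (or keep it, flagged, if unknown)."""
    normalized = []
    for term in terms:
        key = term.strip().lower()
        normalized.append(CANONICAL.get(key, f"UNMAPPED:{term.strip()}"))
    return normalized


print(normalize_terms(["Finite Element Analysis", "UQ", "digital thread"]))
# ['FEA', 'UQ', 'UNMAPPED:digital thread']
```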

These can be leveraged within mature semantic search engine technology for retrospective queries. But persistently monitoring asserted assumptions and unknowns demands developing an Assiduous Agent infrastructure:

  1. Automated and cost-effective access to feeds from all relevant sources of data, ideally filtering for new (or revisions to existing) content (see the sketch following this list).
  2. Data architecture and storage topology to efficiently perform matching of provenance conditions with emerging information from the feeds [6].
  3. Dedicated computing resources performing the scans of triggering conditions and evaluating confidence to consequently send alerts.
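
For item 1 above, a simple way to filter feeds down to new or revised content is to remember a fingerprint of each item already processed. The hashing scheme below is a minimal sketch; a real pipeline would also track source timestamps and document versions.

```python
import hashlib
from typing import Dict, Iterable, List, Tuple


def filter_new_or_revised(items: Iterable[Tuple[str, str]],
                          seen: Dict[str, str]) -> List[Tuple[str, str]]:
    """Yield only feed items whose content is new or has changed since last seen.

    `items` are (item_id, content) pairs; `seen` maps item_id -> last content hash.
    """
    fresh = []
    for item_id, content in items:
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        if seen.get(item_id) != digest:
            seen[item_id] = digest
            fresh.append((item_id, content))
    return fresh


seen: Dict[str, str] = {}
batch1 = [("feed/42", "Vendor X certification under review")]
batch2 = [("feed/42", "Vendor X certification under review"),   # unchanged: filtered out
          ("feed/42b", "Vendor X certification suspended")]     # new: passed through
print(filter_new_or_revised(batch1, seen))   # item passes on first sight
print(filter_new_or_revised(batch2, seen))   # only the new item passes
```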

Visionary digital systems projects notoriously creep into extravagant, unduly-complex, and overdue implementations that result in byzantine workflows and ineffable data. Learned reluctance to such undertakings, however, can overly constrain scope and impact, and thereby undercut abundant network advantages attainable by deploying at enterprise scale.

Placing pragmatic utility at the root of the design of our systems and data is paramount to the vision of the Awakened Enterprise. Disciplines such as LEAN offer strategies and tactical focus for delivering solutions with valuable outcomes — yet also sustainable through continual enterprise learning and operational improvement.

The Awakened Enterprise vision describes a culture for mindfulness and qualified accountability, operationalizing a coherent frame of reference to improve decision-making confidence and consistency despite separations over spans of time and between institutional divides.

An infrastructure for understanding employs a knowledge steward to capture and curate decisions of consequence, referencing supporting data and analytics and context-setting decision provenance metadata. The system overlays enterprise knowledge and data with a decision-centered index.

A corresponding infrastructure for situational awareness can be added to task assiduous agents to continually monitor ever-emerging data, information, and knowledge updated from external and cross-enterprise sources — to send alerts upon discoveries framed by the archived decision provenance.

The alerts prompt reevaluation of decisions of consequence correlated to the revised emergent context, enabling adaptation by design — offering the opportunity for actions to gain the advantage of future insights.

This machine-augmented mindfulness empowers decisions to be made with a continuum mindset, enabling adaptation by design, to act despite uncertainty.

© 2024 All Rights Reserved.

All views expressed in this article are the personal views of the author.

The Awakened Enterprise: A Vision for Machine-augmented Mindfulness

  • Presentation (PDF — from Power Point version with animations)
  • Article (Medium) (LinkedIn)
  • Video (YouTube) — in progress.

References

  1. Gasser, U., Mayer-Schönberger, V. (2024). Guardrails: Guiding Human Decisions in the Age of AI. Princeton University Press. ISBN: 9780691256351
  2. Rubinstein, M., Firstenberg, I. (1999). The Minding Organization: Bring the Future to the Present and Turn Creative Ideas into Business Solutions. Wiley. ISBN: 0471347817
  3. Cukier, K., Mayer-Schönberger, V., de Véricourt, F. (2021). Framers: Human Advantage in an Age of Technology and Turmoil. Dutton. ISBN: 0593182596
  4. Hood, C. (2010). The Blame Game: Spin, Bureaucracy, and Self-Preservation in Government. Princeton University Press. DOI: 10.1515/9781400836819.3
  5. Engineering Simulation Metadata Specification. NAFEMS ASSESS. (2024)
  6. Arthur, R., DiDomizio, V., Hoebel, L. (2003). Active Data. In: CoopIS, DOA, and ODBASE. OTM 2003. Lecture Notes in Computer Science, vol 2888. Springer. DOI: 10.1007/978-3-540-39964-3_91
