<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Cognitive Inheritance</title>
    <description>The Application Development Experiences of an Enterprise Engineer</description>
    <link>http://www.cognitiveinheritance.com/</link>
    <docs>http://www.rssboard.org/rss-specification</docs>
    <generator>Prehensile Pony Tail 1.0</generator>
    <language>en-US</language>
    <atom:link href="https://cognitiveinheritance.com/syndication.xml" rel="self" type="application/rss+xml" />	
    <item>
  <title>What Counts as AI‑Generated?</title>
  <description>&lt;p&gt;I still have the first camera I ever used - a 126 box camera, similar to a &lt;a href=&quot;https://web.archive.org/web/20251216145350/https://historiccamera.com/cgi-bin/librarium2/pm.cgi?action=app_display&amp;amp;app=datasheet&amp;amp;app_id=3988&quot;&gt;Hawekeye II&lt;/a&gt;, that was basically a toy even in its own era. I shot with black‑and‑white film because that&apos;s what a kid could afford, and it produced the kind of photos you&apos;d expect from a plastic lens and a shutter that felt like it was powered by hope. One of those photos captured Thurman Munson, the Yankees catcher who would later die in a plane crash, making him something of a larger-than-life figure in my experience. It&apos;s not a great photo. It&apos;s grainy, off‑center, and full of the accidental foreground clutter you get when you&apos;re small, excited, and holding a camera that doesn&apos;t care about your artistic intent.&lt;/p&gt;
&lt;p&gt;Recently, I ended up with three versions of that same moment:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;The original&lt;/strong&gt; - a scan of the actual frame I shot as a kid.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A cleaned‑up version&lt;/strong&gt; - run through an AI tool that removed some shadows, centered Munson, and erased the stray arms of the people next to me.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A colorized version&lt;/strong&gt; - also AI‑assisted, adding color to a scene that never existed in color on film.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All three images are real in the sense that they correspond to something that actually happened, and all three are altered in the sense that every photograph is shaped by the tools available at the time. When I show any version of these images, I could be asked, &lt;strong&gt;Is it &amp;quot;AI‑generated&amp;quot;?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;../Images/ThreeMunsonPhotos.png&quot;&gt;&lt;img src=&quot;../Images/ThreeMunsonPhotos-800x276.png&quot; alt=&quot;3 Images of a man wearing a NY Yankees baseball uniform in the outfield of a ballpark, shot from a few feet above him&quot; /&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Unfortunately, that question really can&apos;t be answered without a lot more context. All three images &lt;strong&gt;used AI&lt;/strong&gt; as part of the pipeline in some form or another, because depending on how you define AI, even the act of scanning the original likely used a model. The question we really need to answer is: &lt;strong&gt;what do we mean when we say something is &amp;quot;AI‑generated&amp;quot;?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The cleaned‑up version of this photo didn&apos;t invent anything. It didn&apos;t fabricate Munson&apos;s face or change the moment. It just did what darkroom techniques, Photoshop, and restoration tools have always done. The colorized version added something new, but colorization has existed for more than a century. The only difference is that a machine did the brushwork instead of a human. What about the original? It&apos;s still the moment I captured as a kid with a box camera. The digital version may have passed through modern software on its way to the screen, but the instant in time remains intact.&lt;/p&gt;
&lt;h2&gt;Even &amp;quot;true&amp;quot; photos can mislead, with or without AI&lt;/h2&gt;
&lt;p&gt;This is where things get tricky. Any still or moving image can create false impressions in the viewer. Strange lighting, unusual shadows, a frozen instant in time that doesn&apos;t really capture the essence of the situation. All of these things happen, and we&apos;ve experienced them. How many times have you taken a photo of someone who was happy, but looked sad or angry in the shot? Was &lt;a href=&quot;https://en.wikipedia.org/wiki/The_dress&quot;&gt;the dress&lt;/a&gt; blue and black, or white and gold?&lt;/p&gt;
&lt;p&gt;In my three images above, the event happened nearly entirely as presented in those photos. Despite that, any of these versions can still create false impressions in the mind of the viewer.&lt;/p&gt;
&lt;p&gt;For example:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It is possible that Munson is talking to someone, or perhaps yelling at them in a way not captured by this frame.&lt;/li&gt;
&lt;li&gt;When I took the picture, there may have been one or more other people just outside the frame, changing the context.&lt;/li&gt;
&lt;li&gt;The cleaned‑up version might imply the scene was less crowded than it really was, because the tool removed the arms of the people next to me.&lt;/li&gt;
&lt;li&gt;The colorized version might imply the grass at Yankee Stadium looked a certain way that day, when the original didn&apos;t capture that detail.&lt;/li&gt;
&lt;li&gt;The colorization might suggest Munson wore an undershirt of a particular shade, a detail the model had to invent.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;None of these facts are necessarily germane to the image, but they absolutely can alter its &lt;em&gt;interpretation&lt;/em&gt;. Still images can present scenes in a framing that doesn&apos;t completely do them justice, while AI can introduce confident, plausible details that were never in evidence, whether done maliciously or not.&lt;/p&gt;
&lt;p&gt;This is why labeling matters. Not because AI involvement is inherently bad, but because, in most cases, viewers deserve to know which parts of an image are grounded in reality and which parts were reconstructed, inferred, or imagined. However, defining those rules is an area where a poor definition could let some people get away with anything, while the rest of us end up having to tag everything as AI‑generated, turning the label into just more noise.&lt;/p&gt;
&lt;h2&gt;This isn&apos;t even touching the copyright issues&lt;/h2&gt;
&lt;p&gt;Everything above is about &lt;em&gt;truth&lt;/em&gt;: what happened, what didn&apos;t, and what an image implies, but there&apos;s a whole separate dimension we haven&apos;t entered: &lt;strong&gt;copyright&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Questions like:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What training data was used to create the model?&lt;/li&gt;
&lt;li&gt;Who owns the derivative works?&lt;/li&gt;
&lt;li&gt;When does enhancement become transformation?&lt;/li&gt;
&lt;li&gt;What rights do I retain over my own childhood photo once an AI model has touched it?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;These aren&apos;t footnotes. They&apos;re large, unresolved questions that deserve their own analysis and probably their own regulatory framework. Mixing them into the &amp;quot;AI‑generated vs. not&amp;quot; debate only makes everything muddier. So for this post, I&apos;m deliberately setting copyright aside; not because it&apos;s unimportant, but because it&apos;s &lt;em&gt;too&lt;/em&gt; important to treat as a parenthetical.&lt;/p&gt;
&lt;h2&gt;The Hard Part Is Defining What Matters&lt;/h2&gt;
&lt;p&gt;The reasons why blanket rules about &amp;quot;AI‑generated content&amp;quot; fall apart are complicated. The line between &amp;quot;generated,&amp;quot; &amp;quot;assisted,&amp;quot; &amp;quot;enhanced,&amp;quot; and &amp;quot;restored&amp;quot; isn&apos;t a line at all; it&apos;s a gradient. That doesn&apos;t mean we shouldn&apos;t regulate AI‑involved media. It means &lt;strong&gt;we need to regulate AI with language and intent that actually matches reality and solves the real problems&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;There &lt;em&gt;are&lt;/em&gt; cases where labeling is essential, but most of it is context specific. If I am posting a picture of a conference talk I gave, I wouldn&apos;t feel right adding fake participants in the crowd, but I&apos;d often be fine with editing out someone who asked me to, depending on the reason for doing so. I might not feel the same way if the photograph was being published as part of a story in the news. However, there are some things that should probably always be disclosed:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Images of things that never happened&lt;/strong&gt; should be labeled as such.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Images containing people that don&apos;t exist&lt;/strong&gt; must be disclosed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Images where people or evidence is added&lt;/strong&gt; absolutely require clear disclosure, even if they are believed to be &apos;real&apos;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI‑assisted reconstructions&lt;/strong&gt;, such as those built from text descriptions after the fact, should be labeled in a way that allows viewers to understand what&apos;s real and what&apos;s assumed.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Those distinctions matter because they speak to truth, provenance, and the potential for harm, and they remain just as important whether AI is part of the process or not.&lt;/p&gt;
&lt;p&gt;But my three images of Thurman Munson? They&apos;re all the same moment; they differ only in the tools used to reveal it. In most contexts, there is no meaningful change made by these manipulations.&lt;/p&gt;
&lt;p&gt;There are already existing sets of rules we can lean on here. The National Press Photographers Association has a &lt;a href=&quot;https://web.archive.org/web/20260315211924/https://nppa.org/resources/code-ethics&quot;&gt;Code of Ethics&lt;/a&gt; for visual journalists that includes the following:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Editing should maintain the integrity of the photographic image&apos;s content and context. Do not manipulate images or add or alter sound in any way that can mislead viewers or misrepresent subjects.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I would ask you, &amp;quot;Does my manipulation of this image mislead viewers or misrepresent subjects?&amp;quot;&lt;/p&gt;
&lt;p&gt;This Code of Ethics also includes composition and subject matter rules such as:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resist being manipulated by staged photo opportunities&lt;/li&gt;
&lt;li&gt;Be complete and provide context when photographing or recording subjects&lt;/li&gt;
&lt;li&gt;While photographing subjects, do not intentionally contribute to, alter, or seek to alter or influence events&lt;/li&gt;
&lt;li&gt;Do not pay sources or subjects or reward them materially for information or participation&lt;/li&gt;
&lt;li&gt;Do not accept gifts, favors, or compensation from those who might seek to influence coverage&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of which suggests that the editing of images, the part that can be done using AI, is just a small part of the harm that can be done through visual means, albeit one that scales better than most.&lt;/p&gt;
&lt;h2&gt;Here&apos;s the part we can&apos;t ignore&lt;/h2&gt;
&lt;p&gt;AI, in some form, is nearly &lt;em&gt;always&lt;/em&gt; involved now. Not the headline‑grabbing generative models that synthesize faces or fabricate events, but the quiet, invisible systems inside scanners, cameras, phones, and photo apps, the ones nobody notices because they don&apos;t feel like AI. Processes like sharpening, noise reduction, auto‑contrast, white‑balance correction, lens‑distortion fixes and de‑mosaicing filters are all part of many of the image capture mechanisms we use every day. Other domains have similar tools used for autocorrect, predictive-text, grammar correction, spellcheck, voice-to-text, spam filtering and recommendations. These are all machine‑learning (ML) systems doing work behind the scenes.&lt;/p&gt;
&lt;p&gt;So the question can&apos;t be &amp;quot;Was AI used?&amp;quot; The questions must be more akin to &lt;strong&gt;&amp;quot;What kind of AI was used, how was it used, and to what effect?&amp;quot;&lt;/strong&gt;. These questions need to be answered in the full context of the situation, because the truth of this photo is simple: AI didn&apos;t create it; &lt;strong&gt;it actually happened&lt;/strong&gt;. The tools just helped me see it more clearly, but they can also help someone else see something that was never there. Outside of this one childhood snapshot, it&apos;s rarely even &lt;em&gt;that&lt;/em&gt; simple.&lt;/p&gt;
&lt;p&gt;Given the difficulty of categorizing these three versions of a childhood photo as &apos;&lt;em&gt;AI-generated&lt;/em&gt;&apos; or not, it is obvious that we can&apos;t build policy around such a binary definition. We need rules that focus on &lt;strong&gt;intent&lt;/strong&gt;, &lt;strong&gt;impact&lt;/strong&gt;, and &lt;strong&gt;what claims are being made&lt;/strong&gt;, not on whether a model was somewhere in the toolchain. In future posts, we will drill into more detail on how we can craft regulations that take these items into account.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/what-counts-as-aigenerated.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/8a03e594-887c-4412-ae36-23fa3d2cf0c2.html</guid>
  <pubDate>Sat, 28 Mar 2026 10:19:40 GMT</pubDate>
</item><item>
  <title>Introducing the Behavioral Layer</title>
  <description>&lt;p&gt;Modern systems increasingly receive free‑text input, either from humans or from language models. These inputs can be ambiguous, incomplete, or phrased in ways the domain layer cannot act on directly. They are not the predictable, schema‑bound shapes that a traditional Anti‑Corruption Layer (ACL) is designed to translate. They require interpretation before any downstream component can reason about them. This is the realm of the Behavioral Layer.&lt;/p&gt;
&lt;h2&gt;What the Behavioral Layer Does&lt;/h2&gt;
&lt;p&gt;The Behavioral Layer is responsible for taking unstructured or highly variable inputs, such as those produced by a person or a language model, and producing a clean, normalized, and predictable shape that the rest of the system can trust. It is the architectural boundary where the system interprets intent before any downstream components have to reason about structure.&lt;/p&gt;
&lt;p&gt;At a high level, the Behavioral Layer:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Interprets what behavior the sender is attempting to invoke&lt;/li&gt;
&lt;li&gt;Normalizes inconsistently presented or incomplete inputs&lt;/li&gt;
&lt;li&gt;Detects structural and behavioral anomalies in the message&lt;/li&gt;
&lt;li&gt;Enriches the data with derived or inferred attributes&lt;/li&gt;
&lt;li&gt;Produces a stable output object that downstream components can rely on&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Behavioral Layer is defined by its responsibilities, not by any specific technology. You can implement it with deterministic rules, heuristics, or fine-tuned models. The architecture stays the same regardless of the tools you choose.&lt;/p&gt;
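&lt;p&gt;To make the idea of a stable output object a little more concrete, here is a minimal sketch of what that shape might look like in C#. Every name here is hypothetical rather than part of any existing framework; the real contract is simply whatever your downstream components agree to rely on.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
using System.Collections.Generic;

// A minimal sketch of a stable Behavioral Output shape (all names are hypothetical).
public sealed record DetectedIntent(string Type, IReadOnlyDictionary&amp;lt;string, string&amp;gt; Attributes);

public sealed record BehavioralOutput(
    // What the sender appears to be trying to do, with any attributes that were extracted
    IReadOnlyList&amp;lt;DetectedIntent&amp;gt; Intents,
    // An overall confidence indicator, e.g. &amp;quot;high&amp;quot;, &amp;quot;medium&amp;quot;, or &amp;quot;low&amp;quot;
    string Confidence,
    // Cleaned values with aliases resolved and inferable gaps filled in
    IReadOnlyDictionary&amp;lt;string, string&amp;gt; NormalizedFields,
    // Structural or behavioral oddities detected along the way
    IReadOnlyList&amp;lt;string&amp;gt; Anomalies);
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;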
&lt;h2&gt;A Machine-to-Machine Example&lt;/h2&gt;
&lt;p&gt;To ground this in something concrete, consider a service that exposes an OpenAI‑compatible API for intent determination and routing. This service is designed to accept natural language inside a structured request, classify the intent, and direct the call to the correct downstream system. Even in a machine-to-machine scenario, the request still contains unstructured text because the caller may be a human, a script, or an upstream LLM.&lt;/p&gt;
&lt;p&gt;Here is an example of the kind of request this router might receive:&lt;/p&gt;
&lt;div class=&quot;lang-json editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
{
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;model&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;gpt-4o-mini&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;messages&amp;quot;&lt;/span&gt;: [
    {
      &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;role&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;system&amp;quot;&lt;/span&gt;,
      &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;content&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;You are a plan selection assistant.&amp;quot;&lt;/span&gt;
    },
    {
      &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;role&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;user&amp;quot;&lt;/span&gt;,
      &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;content&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;please switch the user to the premium plan with the extras&amp;quot;&lt;/span&gt;
    }
  ],
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;user&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;8821&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;source&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;recommendation-service&amp;quot;&lt;/span&gt;
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The outer structure is predictable, but the content is not. The router cannot forward this request until it determines what the caller is trying to do. The phrase &amp;quot;premium plan with the extras&amp;quot; is natural language, not an instruction the domain layer can act on. The router must identify the intent so it can send the request to the correct downstream service, which in this case is probably a plan or user service.&lt;/p&gt;
&lt;p&gt;A Behavioral Layer implementation might produce something like this:&lt;/p&gt;
&lt;div class=&quot;lang-json editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
{
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;userId&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;8821&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;source&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;recommendation-service&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;intent&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;changePlan&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;confidence&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;high&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;notes&amp;quot;&lt;/span&gt;: [
    {
      &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;message&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;The request refers to &amp;#39;premium plan with the extras&amp;#39;.&amp;quot;&lt;/span&gt;
    }
  ]
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The business logic within the router may take this input, determine which service is best suited to handle it, and route the original request to that service. The Behavioral Layer has taken a natural language request and expressed the sender&apos;s behavior in a structured form. It has identified what the caller is trying to do, surfaced any uncertainty, and produced a stable intent that the rest of the system can trust. Nothing about this output depends on domain rules or specific plan identifiers. The Behavioral Layer simply interprets the behavior contained in the text and turns it into a predictable shape that downstream components can build on. It has NOT concerned itself with mapping to the domain language, since this layer is not responsible for that. If additional mapping is required into the language of the domain, an anti-corruption or other mapping layer should be used to maintain the separation of concerns.&lt;/p&gt;
&lt;h2&gt;How It Works&lt;/h2&gt;
&lt;p&gt;The Behavioral Layer sits between the raw input and the ACL or domain layer. It receives whatever the outside world provides and applies a series of transformations that gradually reduce uncertainty.&lt;/p&gt;
&lt;p&gt;A typical flow looks like this:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Receive the raw input exactly as it arrived.&lt;/li&gt;
&lt;li&gt;Perform structural checks to understand what type of thing it might be.&lt;/li&gt;
&lt;li&gt;Apply behavioral checks to understand what the sender is trying to accomplish.&lt;/li&gt;
&lt;li&gt;Normalize fields, resolve aliases, and fill in missing but inferable information.&lt;/li&gt;
&lt;li&gt;Detect suspicious or incoherent combinations of attributes.&lt;/li&gt;
&lt;li&gt;Produce a Behavioral Output object that expresses the input in a clean, predictable shape.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Neither the ACL nor the domain layer ever sees the raw input. They only see the Behavioral Output, which keeps both layers small, deterministic, and easy to reason about.&lt;/p&gt;
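&lt;p&gt;As a rough illustration, that flow might look something like the sketch below, using the hypothetical BehavioralOutput shape from earlier. The helper methods are placeholders for whatever rules, heuristics, or models implement each step; the point is the shape of the pipeline, not the specific calls.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
// A minimal sketch of the Behavioral Layer pipeline (helper methods are hypothetical).
public BehavioralOutput Process(string rawInput)
{
    var received   = Receive(rawInput);            // 1. capture the input exactly as it arrived
    var structure  = CheckStructure(received);     // 2. what type of thing might this be?
    var behavior   = CheckBehavior(structure);     // 3. what is the sender trying to accomplish?
    var normalized = Normalize(behavior);          // 4. resolve aliases, fill in inferable gaps
    var anomalies  = DetectAnomalies(normalized);  // 5. flag suspicious or incoherent combinations
    return BuildOutput(normalized, anomalies);     // 6. produce the stable Behavioral Output
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;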
&lt;h2&gt;How It Differs From a Traditional ACL&lt;/h2&gt;
&lt;p&gt;A traditional Anti-Corruption Layer protects the domain from other systems. It translates external models into internal ones, isolates upstream changes, and ensures that foreign concepts do not leak into the domain.&lt;/p&gt;
&lt;p&gt;The Behavioral Layer protects the domain from ambiguous inputs. It resolves uncertainty, interprets intent, and produces a coherent behavioral shape before any translation or invariant enforcement occurs.&lt;/p&gt;
&lt;p&gt;You can think of the responsibilities like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Behavioral Layer: coherence&lt;/li&gt;
&lt;li&gt;ACL: translation and isolation&lt;/li&gt;
&lt;li&gt;Domain: correctness and invariants&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Behavioral Layer is not a variant of an ACL and not a replacement for one. It is a complementary layer that handles a different class of problems. The ACL expects structured, well-formed inputs. The Behavioral Layer exists precisely because real-world inputs often are not fully structured.&lt;/p&gt;
&lt;p&gt;If you are building a &amp;quot;modular monolith&amp;quot;, where all functionality is crammed into a single deployment unit, you can manage both sets of functionality (translation and behavioral) in a single place. However, you probably don&apos;t want to mash them together, so that they can be separated more completely if that becomes appropriate.&lt;/p&gt;
&lt;h2&gt;Why is it Called the &lt;strong&gt;Behavioral&lt;/strong&gt; Layer?&lt;/h2&gt;
&lt;p&gt;The name comes from the nature of the inputs it handles. At this boundary, the system is not reacting to a schema. It is reacting to behavior. A person behaves unpredictably when typing a request. A language model behaves unpredictably when generating a response. A third-party system behaves unpredictably when sending a payload that almost matches your expectations.&lt;/p&gt;
&lt;p&gt;The Behavioral Layer exists to interpret that behavior.&lt;/p&gt;
&lt;p&gt;It focuses on what the sender is trying to do, not how the sender structures the data. It resolves intent, ambiguity, and variability before any translation or invariant enforcement occurs. The name fits because it describes the responsibility: making sense of behavior so the rest of the system does not have to.&lt;/p&gt;
&lt;h2&gt;Implementation Options&lt;/h2&gt;
&lt;p&gt;You can build a Behavioral Layer using several strategies, depending on your constraints and the variability of your inputs.&lt;/p&gt;
&lt;h3&gt;Deterministic Rules&lt;/h3&gt;
&lt;p&gt;This is the simplest approach. You define explicit rules for classification, normalization, and enrichment. It works well when the input space is small and predictable. It may work in more complex spaces with the help of a rules-engine or similar logic framework.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pros: transparent, easy to test, easy to reason about&lt;/li&gt;
&lt;li&gt;Cons: brittle when inputs vary widely or evolve over time&lt;/li&gt;
&lt;/ul&gt;
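&lt;p&gt;A deterministic implementation of the intent classification from the router example might be as simple as the sketch below. The keyword rules are purely illustrative; real rules would come from your own domain.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
// A minimal, deterministic intent classifier (the keyword rules are illustrative only).
public static string ClassifyIntent(string message)
{
    var text = message.ToLowerInvariant();

    if (text.Contains(&amp;quot;plan&amp;quot;) &amp;amp;&amp;amp; (text.Contains(&amp;quot;switch&amp;quot;) || text.Contains(&amp;quot;change&amp;quot;) || text.Contains(&amp;quot;upgrade&amp;quot;)))
        return &amp;quot;changePlan&amp;quot;;

    if (text.Contains(&amp;quot;address&amp;quot;) &amp;amp;&amp;amp; (text.Contains(&amp;quot;change&amp;quot;) || text.Contains(&amp;quot;update&amp;quot;)))
        return &amp;quot;updateAddress&amp;quot;;

    return &amp;quot;unknown&amp;quot;;   // an explicit fallback keeps the output shape predictable
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;Rules like these are transparent and trivially testable, but every new phrasing a sender invents tends to require another explicit rule, which is exactly the brittleness noted above.&lt;/p&gt;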
&lt;h3&gt;Heuristics and Pattern Matching&lt;/h3&gt;
&lt;p&gt;This approach uses scoring, thresholds, and pattern recognition to handle more variability without committing to full machine learning.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pros: flexible, adaptable, still deterministic&lt;/li&gt;
&lt;li&gt;Cons: harder to maintain, can drift into complexity&lt;/li&gt;
&lt;/ul&gt;
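&lt;p&gt;A heuristic version of the same classifier might score each candidate intent by how many of its signal words appear and only accept the winner above a threshold. The signal lists and the 0.5 threshold below are illustrative assumptions, not recommendations.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
using System.Collections.Generic;

// A sketch of a scoring heuristic (the signal lists and threshold are illustrative).
public static string ClassifyByScore(string message)
{
    var text = message.ToLowerInvariant();
    var signals = new Dictionary&amp;lt;string, string[]&amp;gt;
    {
        [&amp;quot;changePlan&amp;quot;] = new[] { &amp;quot;plan&amp;quot;, &amp;quot;premium&amp;quot;, &amp;quot;upgrade&amp;quot;, &amp;quot;switch&amp;quot; },
        [&amp;quot;updateAddress&amp;quot;] = new[] { &amp;quot;address&amp;quot;, &amp;quot;move&amp;quot;, &amp;quot;relocate&amp;quot;, &amp;quot;update&amp;quot; }
    };

    var best = &amp;quot;unknown&amp;quot;;
    var bestScore = 0.0;
    foreach (var candidate in signals)
    {
        var hits = 0;
        foreach (var word in candidate.Value)
            if (text.Contains(word)) hits++;

        var score = (double)hits / candidate.Value.Length;
        if (score &amp;gt; bestScore) { best = candidate.Key; bestScore = score; }
    }

    return bestScore &amp;gt;= 0.5 ? best : &amp;quot;unknown&amp;quot;;
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;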
&lt;h3&gt;Fine-Tuned Language Models&lt;/h3&gt;
&lt;p&gt;A small, purpose-built model can classify intent, normalize fields, and map ambiguous inputs into structured forms with far more reliability than hand-written rules.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Pros: handles real-world variability, reduces rule complexity, improves resilience&lt;/li&gt;
&lt;li&gt;Cons: requires training data, monitoring, and versioning discipline&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The Behavioral Layer does not require a language model. LLMs and other probabilistic models simply make it easier to implement the layer when the input space becomes too variable for deterministic approaches.&lt;/p&gt;
&lt;h2&gt;Use Case 2: Human Input&lt;/h2&gt;
&lt;p&gt;The earlier example showed how a machine-to-machine request can contain natural language inside a structured API call. The same problem appears when a human interacts with the system. A user may type a request in their own words, combine multiple actions in a single message, or omit details that downstream components require. The Behavioral Layer handles this variability by interpreting what the user is trying to do and expressing that behavior in a predictable shape.&lt;/p&gt;
&lt;p&gt;Imagine a system that receives inbound support messages from users. The messages can arrive through email, chat, or a mobile app. Users may not follow a template. They may combine multiple requests in one message. They may use synonyms, shorthand, or incomplete phrasing.&lt;/p&gt;
&lt;p&gt;A raw message might look like:&lt;/p&gt;
&lt;p&gt;&amp;quot;Hey, can you change my home address to the new one on file and also switch my plan to the premium thing&amp;quot;&lt;/p&gt;
&lt;p&gt;The Behavioral Layer would:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Translate the sender information into discrete fields&lt;/li&gt;
&lt;li&gt;Detect that the message contains two distinct intents&lt;/li&gt;
&lt;li&gt;Normalize &amp;quot;premium thing&amp;quot; into a known plan identifier&lt;/li&gt;
&lt;li&gt;Extract the address reference and map it to the stored address record&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;As shown below, this layer might also interpret normalized data that it has access to. For example, if the list of plans is accessible to the Behavioral Layer, it might add an indication that &amp;quot;premium thing&amp;quot; is not an exact match to a known plan. This is one of the places, however, where some judgment is required because, depending on the circumstances, that functionality might be better left to an ACL or the Domain.&lt;/p&gt;
&lt;p&gt;The Behavioral Layer would consider the input above along with the email metadata and might produce an output object similar to the one shown below:&lt;/p&gt;
&lt;div class=&quot;lang-json editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
{
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;userIds&amp;quot;&lt;/span&gt;: {
    &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;email&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;sampleuser@cognitiveinheritance.com&amp;quot;&lt;/span&gt;,
    &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;eMailName&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;Sample User&amp;quot;&lt;/span&gt;,
    &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;dkimDomain&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;cognitiveinheritance.com&amp;quot;&lt;/span&gt;,
    &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;spfDomain&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;sendgrid.net&amp;quot;&lt;/span&gt;
  },
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;intents&amp;quot;&lt;/span&gt;: [
    { &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;type&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;updateAddress&amp;quot;&lt;/span&gt;, &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;addressId&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;home&amp;quot;&lt;/span&gt; },
    { &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;type&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;changePlan&amp;quot;&lt;/span&gt;, &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;planId&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;premiumPlan&amp;quot;&lt;/span&gt; }
  ],
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;confidence&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;high&amp;quot;&lt;/span&gt;,
  &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;anomalies&amp;quot;&lt;/span&gt;: [
    { &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;fieldName&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;planId&amp;quot;&lt;/span&gt;, &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;value&amp;quot;&lt;/span&gt;: &lt;span style=&quot;color:#A31515;&quot;&gt;&amp;quot;&amp;#39;premium thing&amp;#39; not an exact match to plan name&amp;quot;&lt;/span&gt; }
  ]
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;p&gt;The ACL or domain layer now has a clean, predictable structure to work with. It does not need to parse free‑form text or guess what the user meant. The Behavioral Layer has already done that work.&lt;/p&gt;
&lt;h2&gt;What Comes Next&lt;/h2&gt;
&lt;p&gt;This post introduces the Behavioral Layer as an architectural concept and distinguishes it from a traditional ACL. In the next article, we will look at how fine‑tuned language models can assist with the transformations inside the layer. We will walk through how to build small, purpose‑built models using Microsoft Foundry, how to train them on your domain, and how to integrate them into a reliability‑first architecture.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/introducing-the-behavioral-layer.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/18cf98fe-ad5e-43a5-91ee-55f0a90478c8.html</guid>
  <pubDate>Sat, 14 Mar 2026 17:13:00 GMT</pubDate>
</item><item>
  <title>Types of AI Models</title>
  <description>&lt;p&gt;It is a common misconception that to have an Artificial Intelligence you must have some form of machine learning. This belief has become so pervasive in recent years that many developers and business leaders assume that AI and ML are synonymous terms, or worse, that LLMs are the definition of AI. However, this couldn&apos;t be further from the truth.&lt;/p&gt;
&lt;p&gt;Artificial Intelligence is a broad field that encompasses a wide spectrum of computational approaches. While Machine Learning (ML) and Large Language Models (LLMs) are important subfields, AI also includes rule-based logic, search/optimization techniques, and hybrid approaches. AI is not synonymous with ML or LLM.&lt;/p&gt;
&lt;p&gt;Understanding the different types of AI models is crucial for several reasons:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Choosing the Right Tool&lt;/strong&gt;: Different problem domains require different approaches. A rules-based system might be more appropriate than a neural network for certain business logic scenarios.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explainability Requirements&lt;/strong&gt;: Some applications demand clear explanations of how decisions are made, which varies across AI model types.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Resource Constraints&lt;/strong&gt;: Different AI approaches have vastly different requirements for data, computational power, and development expertise.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Risk Management&lt;/strong&gt;: Understanding the strengths and limitations of each approach helps in making informed decisions about where and how to deploy AI systems.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;By exploring the full landscape of AI model types, we can make better architectural decisions and avoid the trap of applying machine learning solutions to problems that might be better solved with other AI approaches.&lt;/p&gt;
&lt;h4&gt;What is AI&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;An AI is a computational system that behaves rationally.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;In the context of AI, rational behavior means making decisions that are optimal or near-optimal given the system&apos;s goals, available information, and understanding of the problem domain. This simple definition captures the essence of what distinguishes artificial intelligence from conventional software.&lt;/p&gt;
&lt;p&gt;More comprehensively, an AI is a computational system that autonomously evaluates situations and makes decisions by attempting to optimize outcomes based on its model of the problem domain and available data, often while handling uncertainty and incomplete information.&lt;/p&gt;
&lt;p&gt;At its core, an artificial intelligence system is designed to make decisions autonomously. Unlike traditional software that simply executes predetermined instructions, an AI system evaluates situations and attempts to make the best possible decision based on two critical components: its understanding of the problem domain (the model) and the available information about the current situation (the data).&lt;/p&gt;
&lt;p&gt;This decision-making process is what distinguishes AI from simpler computational systems. The AI doesn&apos;t just process data--it interprets that data through the lens of its model to determine the most rational course of action. Furthermore, many AI systems go beyond just making decisions; they can also act on those decisions through automation, creating a complete cycle from data input to actionable output.&lt;/p&gt;
&lt;p&gt;The key difference between an AI and a decision support system (DSS) is that the DSS aggregates and presents data such that the user can make the best decision whereas the AI attempts to make the decision itself. This autonomous decision-making capability is what transforms a helpful tool into an intelligent agent.&lt;/p&gt;
&lt;h4&gt;The Categories of AI Models&lt;/h4&gt;
&lt;p&gt;I find it useful to categorize AI models into four families: Logical Models, Probabilistic/Learning Models, Optimization/Search Models, and Hybrid Models. Each category has distinct characteristics, typical use cases, and trade-offs in explainability and performance.&lt;/p&gt;
&lt;h5&gt;Logical Models&lt;/h5&gt;
&lt;p&gt;Logical AI models are perhaps the most familiar to traditional software developers because they operate using deterministic rules and conditional logic. These systems make decisions by following explicit, programmed instructions that can be reduced to if-then statements and boolean logic.&lt;/p&gt;
&lt;p&gt;This category includes both object-oriented programming approaches (which encompass most traditional software development) and rules engines. While it might seem counterintuitive to classify conventional programming as AI, these systems qualify as artificial intelligence when they autonomously make decisions based on their programmed logic and available data, rather than simply executing predetermined workflows.&lt;/p&gt;
&lt;p&gt;The key distinction is that logical AI systems evaluate conditions and make rational decisions within their domain, even if those decisions follow deterministic patterns. A sophisticated business rules engine that processes complex scenarios and determines appropriate actions is exhibiting rational behavior, even though its decision-making process is entirely transparent and predictable.&lt;/p&gt;
&lt;h6&gt;Features of Logical Models&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Results Explainable: Generally - Code is highly imperative&lt;/li&gt;
&lt;li&gt;Correctness Understood: Generally - Code is highly imperative&lt;/li&gt;
&lt;li&gt;Solution Discoverability: Low - Code is highly imperative&lt;/li&gt;
&lt;/ul&gt;
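&lt;p&gt;As a small illustration of this category, consider the kind of decision a rules engine or ordinary imperative code might make. The business rules below are entirely made up, but every path through them is explicit, which is why the result is so easy to explain and audit.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
// A minimal sketch of a logical (rules-based) decision; the rules themselves are hypothetical.
public static string DetermineShippingMethod(decimal orderTotal, bool isRushRequested, bool isHazardous)
{
    if (isHazardous) return &amp;quot;Ground-Certified&amp;quot;;   // a regulatory rule that always wins
    if (isRushRequested) return &amp;quot;Overnight&amp;quot;;
    if (orderTotal &amp;gt;= 100m) return &amp;quot;Two-Day&amp;quot;;      // an illustrative free-upgrade threshold
    return &amp;quot;Ground&amp;quot;;
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;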
&lt;h5&gt;Probabilistic/Learning Models&lt;/h5&gt;
&lt;p&gt;Probabilistic and learning models represent the category most people think of when they hear &amp;quot;artificial intelligence&amp;quot; today. These stochastic systems operate by learning patterns from data and making predictions based on statistical relationships rather than explicit rules. Unlike logical models, they don&apos;t follow predetermined decision trees but instead develop their own understanding of how to map inputs to outputs.&lt;/p&gt;
&lt;p&gt;What makes these models unique is their ability to handle uncertainty and incomplete information. They excel in domains where the relationships between variables are complex, non-linear, or not fully understood by human experts. Rather than requiring programmers to explicitly code every decision path, these systems discover patterns and relationships autonomously through exposure to training data.&lt;/p&gt;
&lt;p&gt;These models are most appropriate when you have large amounts of historical data, when the problem domain is too complex for rule-based approaches, or when you need the system to adapt and improve over time. They&apos;re particularly powerful for tasks like image recognition, natural language processing, fraud detection, and recommendation systems where traditional programming approaches would be impractical.&lt;/p&gt;
&lt;p&gt;However, this power comes with significant trade-offs. The decision-making process is often opaque—even to the system&apos;s creators—making it difficult to understand why a particular decision was made. Additionally, their correctness can only be evaluated statistically across many examples rather than being guaranteed for any individual case.&lt;/p&gt;
&lt;h6&gt;Examples of Probabilistic/Learning Models&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Neural/Bayesian Networks&lt;/li&gt;
&lt;li&gt;Genetic Algorithms&lt;/li&gt;
&lt;/ul&gt;
&lt;h6&gt;Features of Probabilistic/Learning Models&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Results Explainable: Rarely&lt;/li&gt;
&lt;li&gt;Correctness Understood: Somewhat - Unknown at design time, potentially known at runtime&lt;/li&gt;
&lt;li&gt;Solution Discoverability: High - Solutions may surprise the implementers&lt;/li&gt;
&lt;/ul&gt;
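&lt;p&gt;As a toy illustration of this family, the sketch below trains a single perceptron, one of the simplest neural units, from labeled examples rather than from explicit rules. The training data, learning rate, and epoch count would all come from your problem; the point is that the decision boundary is discovered from data, not programmed.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
// A toy perceptron trainer (illustrative only). The learned weights, not the programmer,
// encode the decision rule, which is one reason individual results are hard to explain.
public static double[] TrainPerceptron(double[][] inputs, int[] labels, int epochs)
{
    var weights = new double[inputs[0].Length + 1];   // the last element is the bias
    const double learningRate = 0.1;

    for (var epoch = 0; epoch &amp;lt; epochs; epoch++)
    {
        for (var i = 0; i &amp;lt; inputs.Length; i++)
        {
            var sum = weights[weights.Length - 1];
            for (var j = 0; j &amp;lt; inputs[i].Length; j++)
                sum += weights[j] * inputs[i][j];

            var predicted = sum &amp;gt;= 0 ? 1 : 0;
            var error = labels[i] - predicted;

            for (var j = 0; j &amp;lt; inputs[i].Length; j++)
                weights[j] += learningRate * error * inputs[i][j];
            weights[weights.Length - 1] += learningRate * error;
        }
    }

    return weights;
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;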
&lt;h5&gt;Optimization/Search Models&lt;/h5&gt;
&lt;p&gt;Optimization and search models represent a mathematical approach to artificial intelligence that focuses on finding the best possible solution within a defined solution space. These systems work by systematically exploring possible solutions and applying mathematical techniques to converge on optimal or near-optimal answers to well-defined problems.&lt;/p&gt;
&lt;p&gt;What makes these models unique is their foundation in mathematical optimization theory and their ability to guarantee certain properties about their solutions. Unlike probabilistic models that learn from data, optimization models work with explicit mathematical formulations of problems and constraints. They excel at finding provably optimal solutions when the problem can be properly formulated and the solution space is well-defined.&lt;/p&gt;
&lt;p&gt;These models are most appropriate for problems with clear objectives, well-understood constraints, and quantifiable outcomes. They shine in scenarios like resource allocation, scheduling, route planning, portfolio optimization, and supply chain management where you need to maximize or minimize specific metrics subject to known limitations. They&apos;re particularly valuable when you need to justify decisions with mathematical rigor or when regulatory requirements demand explainable optimization processes.&lt;/p&gt;
&lt;p&gt;The trade-off with optimization models is that they require problems to be formulated in specific mathematical ways, which can be limiting for complex real-world scenarios. Their solution discoverability is constrained by how well the problem is modeled and the algorithms chosen for implementation. However, when applicable, they often provide the most reliable and defensible solutions.&lt;/p&gt;
&lt;h6&gt;Examples&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Dynamic Programming&lt;/li&gt;
&lt;li&gt;Linear Programming&lt;/li&gt;
&lt;/ul&gt;
&lt;h6&gt;Features&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Results Explainable: Sometimes - dependent on implementation&lt;/li&gt;
&lt;li&gt;Correctness Understood: Somewhat - dependent on implementation&lt;/li&gt;
&lt;li&gt;Solution Discoverability: Limited - solutions will likely be limited by the implementations&lt;/li&gt;
&lt;/ul&gt;
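&lt;p&gt;As a small example of this family, the dynamic programming sketch below solves the classic coin-change problem: given a set of denominations, find the minimum number of coins that make a target amount. The formulation is explicit, and the answer is provably optimal for the problem as stated.&lt;/p&gt;
&lt;div class=&quot;lang-csharp editor-colors&quot;&gt;&lt;div style=&quot;color:Black;background-color:White;&quot;&gt;&lt;pre&gt;
// A dynamic programming sketch: the minimum number of coins needed to make an amount.
// Returns int.MaxValue when the amount cannot be made from the given denominations.
public static int MinimumCoins(int[] denominations, int amount)
{
    var best = new int[amount + 1];   // best[0] defaults to 0: zero coins make zero
    for (var value = 1; value &amp;lt;= amount; value++)
    {
        best[value] = int.MaxValue;
        foreach (var coin in denominations)
        {
            if (coin &amp;lt;= value &amp;amp;&amp;amp; best[value - coin] != int.MaxValue)
                best[value] = Math.Min(best[value], best[value - coin] + 1);
        }
    }
    return best[amount];
}
&lt;/pre&gt;&lt;/div&gt;
&lt;/div&gt;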
&lt;h5&gt;Hybrid Models&lt;/h5&gt;
&lt;p&gt;Hybrid AI models combine multiple AI approaches to leverage the strengths of different model types while mitigating their individual weaknesses. Rather than relying on a single technique, hybrid systems strategically integrate logical, probabilistic, and optimization approaches to solve complex problems that no single model type could handle effectively.&lt;/p&gt;
&lt;p&gt;What makes hybrid models particularly powerful is their ability to provide both optimal solutions and explainable reasoning. This addresses one of the key limitations identified by IBM Fellow Grady Booch regarding systems like AlphaGo: while they can make optimal decisions, they cannot explain why those decisions were made.&lt;/p&gt;
&lt;p&gt;Hybrid approaches can iteratively combine optimization engines with logical reasoning to create systems that not only find the best solutions but can also explain their decision-making process. For detailed examples of how this works in practice, see my previous articles on &lt;a href=&quot;https://www.cognitiveinheritance.com/Posts/AI-That-Can-Explain-Why.html&quot;&gt;AI That Can Explain Why&lt;/a&gt; and &lt;a href=&quot;https://www.cognitiveinheritance.com/Posts/An-Example-of-a-Hybrid-AI-Implementation.html&quot;&gt;An Example of a Hybrid AI Implementation&lt;/a&gt;, which demonstrate hybrid systems for employee scheduling and conference planning that provide both optimal solutions and clear explanations for why certain constraints couldn&apos;t be satisfied.&lt;/p&gt;
&lt;p&gt;This approach is most appropriate when you need both optimal solutions and the ability to explain decisions to stakeholders. It&apos;s particularly valuable in scenarios like resource allocation, scheduling, and assignment problems where users need to understand not just what the solution is, but why certain trade-offs were necessary.&lt;/p&gt;
&lt;h6&gt;Features of Hybrid Models&lt;/h6&gt;
&lt;ul&gt;
&lt;li&gt;Results Explainable: Often - Depends on the combination of techniques used&lt;/li&gt;
&lt;li&gt;Correctness Understood: Often - Combines the characteristics of constituent models&lt;/li&gt;
&lt;li&gt;Solution Discoverability: Moderate to High - Can surprise implementers while providing reasoning&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;Understanding the different types of AI models is essential for making informed architectural decisions and choosing the right approach for your specific problem domain. Each model type offers distinct advantages and trade-offs that make them suitable for different scenarios.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Logical Models&lt;/strong&gt; are ideal when you need transparent, explainable decision-making processes and have well-defined business rules. They&apos;re perfect for regulatory environments, business process automation, and scenarios where every decision must be auditable and justifiable.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Probabilistic/Learning Models&lt;/strong&gt; excel when dealing with complex patterns, large datasets, and problems where traditional programming approaches would be impractical. They&apos;re the go-to choice for image recognition, natural language processing, and scenarios where the system needs to adapt and improve over time.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Optimization/Search Models&lt;/strong&gt; are most valuable when you have clearly defined objectives, constraints, and need mathematically optimal solutions. They shine in resource allocation, scheduling, and planning problems where efficiency and optimality are paramount.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hybrid Models&lt;/strong&gt; combine the best of multiple approaches, providing both optimal solutions and explainable reasoning. They&apos;re particularly valuable in complex business scenarios where stakeholders need to understand not just what the solution is, but why certain trade-offs were necessary.&lt;/p&gt;
&lt;h5&gt;Feature Comparison&lt;/h5&gt;
&lt;table&gt;
&lt;tr&gt;
&lt;th&gt;Model Type&lt;/th&gt;
&lt;th&gt;Results Explainable&lt;/th&gt;
&lt;th&gt;Correctness Understood&lt;/th&gt;
&lt;th&gt;Solution Discoverability&lt;/th&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Logical&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Generally&lt;/td&gt;
&lt;td&gt;Generally&lt;/td&gt;
&lt;td&gt;Low&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Probabilistic/Learning&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Rarely&lt;/td&gt;
&lt;td&gt;Somewhat&lt;/td&gt;
&lt;td&gt;High&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Optimization/Search&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Sometimes&lt;/td&gt;
&lt;td&gt;Somewhat&lt;/td&gt;
&lt;td&gt;Limited&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;b&gt;Hybrid&lt;/b&gt;&lt;/td&gt;
&lt;td&gt;Often&lt;/td&gt;
&lt;td&gt;Often&lt;/td&gt;
&lt;td&gt;Moderate to High&lt;/td&gt;
&lt;/tr&gt;
&lt;/table&gt;
&lt;p&gt;It is important to remember that artificial intelligence is not synonymous with machine learning. By understanding the full spectrum of AI approaches available, you can select the most appropriate technique for your specific requirements, constraints, and stakeholder needs. Sometimes the best solution isn&apos;t the most sophisticated one—it&apos;s the one that best fits your problem domain and organizational context.&lt;/p&gt;
&lt;h5&gt;Glossary&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;AI: Artificial Intelligence, a broad family of computational techniques for solving problems and making decisions.&lt;/li&gt;
&lt;li&gt;ML: Machine Learning, a subset of AI focused on learning from data to improve performance over time.&lt;/li&gt;
&lt;li&gt;LLM: Large Language Model, a class of ML models specialized for natural language understanding and generation.&lt;/li&gt;
&lt;li&gt;DSS: Decision Support System, a traditional software system that supports decision making, distinct from autonomous AI.&lt;/li&gt;
&lt;li&gt;Explainability: The degree to which a system&apos;s decisions can be understood by humans.&lt;/li&gt;
&lt;/ul&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/types-of-ai-models.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/37f405ce-7cf9-4aa1-810c-9793f3a1acd7.html</guid>
  <pubDate>Thu, 06 Nov 2025 08:00:00 GMT</pubDate>
</item><item>
  <title>The Return of the Valley .NET User Groups</title>
  <description>&lt;p&gt;After a long pause, I’m excited to share some great news: &lt;strong&gt;the Valley of the Sun .NET user groups are officially restarting in 2026&lt;/strong&gt;! As one of the organizers — and one of the speakers for our first event — I couldn’t be more thrilled to help bring our community back together.&lt;/p&gt;
&lt;p&gt;We’ll be hosting &lt;strong&gt;quarterly meetups&lt;/strong&gt;, alternating between:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;NWVDNUG&lt;/strong&gt; (Northwest Valley .NET User Group)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;SEVDNUG&lt;/strong&gt; (Southeast Valley .NET User Group)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each event will be &lt;strong&gt;in-person at one location&lt;/strong&gt;, with a &lt;strong&gt;livestream option&lt;/strong&gt; for the other group — so no matter where you are, you’ll have a way to participate.&lt;/p&gt;
&lt;h3&gt;🚀 First Event: Tuesday, January 20, 2026 at ASU West Valley&lt;/h3&gt;
&lt;p&gt;To kick things off, &lt;strong&gt;Rob Richardson&lt;/strong&gt; and I will be presenting:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&quot;.NET Aspire Accelerator: Fast-Track to Cloud-Native Development&quot;&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This talk is a shortened version of the workshop Rob and I delivered in October 2025 in Porto, Portugal — tailored for our local community.&lt;/p&gt;
&lt;p&gt;We’ll be live at the &lt;strong&gt;Arizona State University (ASU) West Valley campus&lt;/strong&gt;, and the session will be streamed by the &lt;strong&gt;.NET Foundation’s&lt;/strong&gt; &lt;a href=&quot;https://www.meetup.com/dotnet-virtual-user-group/&quot;&gt;.NET Virtual User Group&lt;/a&gt;, making it accessible to developers &lt;strong&gt;across the Valley and around the world&lt;/strong&gt;.&lt;/p&gt;
&lt;h3&gt;🔄 What’s Next?&lt;/h3&gt;
&lt;p&gt;The follow-up event will be in the &lt;strong&gt;SE Valley around April&lt;/strong&gt;, continuing our quarterly rotation and hybrid format. We’re committed to making these meetups inclusive, energizing, and valuable for developers across the valley.&lt;/p&gt;
&lt;p&gt;Meetup listings for January will be posted soon — on both the &lt;a href=&quot;https://www.meetup.com/nwvdnug/&quot;&gt;NWVDNUG&lt;/a&gt; and &lt;a href=&quot;https://www.meetup.com/sevdnug/&quot;&gt;SEVDNUG&lt;/a&gt; pages — so keep an eye out and RSVP when they go live.&lt;/p&gt;
&lt;p&gt;Thanks for being part of this community. I can’t wait to see familiar faces and meet new ones as we reboot and reconnect in 2026.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/return-of-the-valley-dotnet-user-groups.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/eaf445c6-26bf-4c48-bf93-a2ad67849cbe.html</guid>
  <pubDate>Tue, 04 Nov 2025 19:00:00 GMT</pubDate>
</item><item>
  <title>When VS Code Shows the Wrong Source Control View - Resolving Duplicate Icons</title>
  <description>&lt;p&gt;Recently, I encountered a confusing issue with Visual Studio Code where the source control tab wasn&apos;t showing my modified files anymore. Git was correctly detecting changes since the &lt;strong&gt;git status&lt;/strong&gt; command correctly showed modifications, but those changes weren&apos;t appearing in VS Code&apos;s source control panel. Instead, I was seeing a graph view of my repository history.&lt;/p&gt;
&lt;h4&gt;The Investigation&lt;/h4&gt;
&lt;p&gt;I turned to Claude 3.7 Sonnet (via the Cline extension in VS Code) for help troubleshooting this issue. We started with some basic diagnostics:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;First, we verified Git was working correctly by viewing modified files in the terminal using &lt;strong&gt;git status&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;We checked VS Code&apos;s Git extensions and settings to see if anything was misconfigured&lt;/li&gt;
&lt;li&gt;Claude suggested trying the Ctrl+Shift+G keyboard shortcut, which immediately showed the correct view with my modified files&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This last step was the key insight - pressing Ctrl+Shift+G showed the standard Source Control view with a list of modified files (what I wanted), but clicking the Source Control icon in the Activity Bar showed a different view (the graph view).&lt;/p&gt;
&lt;h4&gt;The Solution&lt;/h4&gt;
&lt;p&gt;After some investigation, we discovered the root cause: I had two different source control icons in my Activity Bar:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;One at the top labeled &amp;quot;Source Control&amp;quot; that showed the graph view&lt;/li&gt;
&lt;li&gt;One at the bottom (off-screen, requiring scrolling) labeled &amp;quot;Source Control (Ctrl-Shift-G)&amp;quot; that showed the view of changed files&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The solution was simple:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Remove the unwanted icon (the top one showing the graph view)&lt;/li&gt;
&lt;li&gt;Move the correct icon (&amp;quot;Source Control (Ctrl-Shift-G)&amp;quot;) to a more visible position in the Activity Bar&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;After making these changes, clicking the Source Control icon in the Activity Bar now consistently shows my modified files, just like pressing Ctrl+Shift+G.&lt;/p&gt;
&lt;h4&gt;Why This Happens&lt;/h4&gt;
&lt;p&gt;VS Code allows multiple views with similar icons to coexist in the Activity Bar. This flexibility is powerful but can sometimes lead to confusion:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Extensions can add their own source control-related views&lt;/li&gt;
&lt;li&gt;These views might use similar terminology and iconography&lt;/li&gt;
&lt;li&gt;Without careful attention to the hover labels, it&apos;s easy to confuse which icon does what&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In my case, I had somehow ended up with duplicate Source Control icons in my Activity Bar, each showing different views of my repository.&lt;/p&gt;
&lt;h4&gt;Preventing Future Issues&lt;/h4&gt;
&lt;p&gt;To avoid similar confusion in the future, I will make sure that I:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Hover over icons in the Activity Bar to see their full labels&lt;/li&gt;
&lt;li&gt;Pay attention to keyboard shortcuts listed in the labels (like &amp;quot;Ctrl-Shift-G&amp;quot;)&lt;/li&gt;
&lt;li&gt;Right-click on the Activity Bar and review which views are enabled&lt;/li&gt;
&lt;li&gt;Remove the ones I use less frequently when I end up with multiple, similar icons.&lt;/li&gt;
&lt;/ul&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/when-vs-code-shows-the-wrong-source-control-view.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/8f7e6d5c-4b3a-2a1d-9c8b-7e6f5d4c3b2a.html</guid>
  <pubDate>Sun, 13 Apr 2025 07:00:00 GMT</pubDate>
</item><item>
  <title>Preserve Section 230 to Protect Free Speech and Competition</title>
  <description>&lt;blockquote&gt;
&lt;p&gt;An open letter to Senators Kelly and Gallego urging them to oppose any weakening of the protections found in Section 230 of the Communications Decency Act (CDA) of 1996.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;Dear Senator,&lt;/p&gt;
&lt;p&gt;I am reaching out to express my strong opposition to any modifications or repeal of Section 230 of the Communications Decency Act.&lt;/p&gt;
&lt;p&gt;I am a constituent and a professional with 40 years of experience in distributed systems development, including my work on some of the earliest Internet-based applications at Intel Corporation in Chandler.&lt;/p&gt;
&lt;p&gt;Section 230 is a foundational element of the Internet&apos;s legal framework and altering it could have profound negative impacts on both free speech and competition in the Internet services space. Here are my primary concerns:&lt;/p&gt;
&lt;h4&gt;Impact on Free Speech&lt;/h4&gt;
&lt;p&gt;Section 230 provides a crucial liability shield that enables platforms to host diverse content without fear of constant litigation. Repealing or modifying this section would lead to increased censorship as platforms become overly cautious in moderating content. This could stifle free expression and create a chilling effect, where administrators are forced to censor, or shut down operations altogether, out of fear that perfectly legal speech might lead to liabilities for the platform. The open dialogue and exchange of ideas that are core to our democratic principles would be severely compromised.&lt;/p&gt;
&lt;p&gt;In addition, modifying or even eliminating Section 230 wouldn&apos;t stop bad actors from spreading harmful content, as they are adept at exploiting loopholes and adapting to new platforms. A much better approach lies in addressing the behavior of the bad actors themselves, not transferring the responsibility onto Internet platform administrators. The issues that people seek to solve by modifying Section 230 simply would not be improved by this legislation.&lt;/p&gt;
&lt;h4&gt;Impact on Competition&lt;/h4&gt;
&lt;p&gt;The current protections encourage innovation and allow new entrants to compete in the Internet services space. Without these protections, smaller companies and startups would face significant barriers to entry due to the threat of costly litigation and the need to support a large staff of content moderators. This could lead to an even greater consolidation of power among a few large corporations, reducing competition and limiting consumer choice. Furthermore, these same increased operational costs could stifle innovation and slow the development of new technologies.&lt;/p&gt;
&lt;p&gt;As someone who has been deeply involved in the growth and evolution of Internet technologies, I believe that maintaining the integrity of Section 230 is essential for fostering a vibrant, competitive, and open Internet. I urge you to consider the potential ramifications of modifying this critical piece of legislation and to oppose any efforts that would undermine its foundational principles.&lt;/p&gt;
&lt;p&gt;Thank you for your attention to this important matter. I appreciate your service to our state and your consideration of my perspective. Please feel free to contact me if you wish to discuss this issue further.&lt;/p&gt;
&lt;p&gt;Sincerely,&lt;/p&gt;
&lt;p&gt;Barry Stahl&lt;br/&gt;
Software Engineer&lt;br/&gt;
Phoenix AZ&lt;br/&gt;
&lt;a href=&quot;https://CognitiveInheritance.com&quot;&gt;https://CognitiveInheritance.com&lt;/a&gt;&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/protect-section-230.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/536c1aea-080b-4ecf-95ae-73022ad48781.html</guid>
  <pubDate>Wed, 26 Mar 2025 11:00:00 GMT</pubDate>
</item><item>
  <title>Understanding the ID Entanglement Effect</title>
  <description>&lt;p&gt;Every developer has faced it: the temptation to make identifiers &amp;quot;smarter&amp;quot; by embedding information. A customer ID that includes their region, an order number containing the date, a product code that encodes its category - these patterns appear innocent at first, even helpful. But they hide a subtle trap I call the &amp;quot;&lt;strong&gt;ID Entanglement Effect&lt;/strong&gt;&amp;quot; - a cascade of complexity that emerges when identifiers become intertwined with business logic and mutable state.&lt;/p&gt;
&lt;p&gt;This effect manifests when we blur the line between identification and information, creating a web of dependencies that grows increasingly difficult to maintain. What starts as a convenient shortcut often evolves into a significant source of technical debt, affecting everything from system flexibility to data integrity.&lt;/p&gt;
&lt;h4&gt;Critical Characteristics&lt;/h4&gt;
&lt;h5&gt;Structural Dependency&lt;/h5&gt;
&lt;p&gt;Systems relying on a specific format for composite IDs become fragile. Any format change can disrupt functionality and complicate maintenance. For instance, if a system uses &amp;quot;DEPT-EMP-123&amp;quot; as an employee ID, changing the department code structure creates a difficult choice: either update all systems and databases that use this format (a risky and potentially expensive undertaking), or abandon the standard for new records while keeping old IDs in the legacy format. The latter option results in inconsistent IDs across the system where some follow the old standard and others follow the new one, effectively creating a partial, incomplete, and incorrect standard within the IDs themselves. This inconsistency further complicates maintenance and can lead to confusion and errors in data processing.&lt;/p&gt;
&lt;h5&gt;Data Parsing&lt;/h5&gt;
&lt;p&gt;When information is embedded in composite IDs, parsing them often appears to be the simplest solution - and it&apos;s a completely understandable choice when the data is readily available in the ID itself. Consider an order ID like &amp;quot;2024-01-NA-12345&amp;quot; containing year, region, and sequence number information. Using this embedded data seems more straightforward than querying additional fields or services. However, this parsing must be replicated across different applications and languages, increasing the risk of inconsistencies and errors. The only way to be sure we don&apos;t end up parsing these IDs, and in doing so bringing the &lt;strong&gt;ID Entanglement Effect&lt;/strong&gt; into play, is to avoid creating systems that embed business data in identifiers in the first place.&lt;/p&gt;
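&lt;p&gt;A minimal sketch in C# makes the contrast concrete (the ID format and type names here are illustrative only):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;public record Order(string Id, int Year, string Region, int Sequence);

public static class OrderRegionLookup
{
    // Anti-pattern: business data is recovered by parsing the composite ID.
    // Every consumer must duplicate, and keep in sync, this fragile logic,
    // which silently assumes the &amp;quot;2024-01-NA-12345&amp;quot; format never changes.
    public static string GetRegionByParsing(string orderId) =&amp;gt;
        orderId.Split(&apos;-&apos;)[2];

    // Preferred: the ID stays opaque and the region lives in its own field.
    public static string GetRegion(Order order) =&amp;gt; order.Region;
}
&lt;/code&gt;&lt;/pre&gt;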
&lt;h5&gt;Maintenance Complexity&lt;/h5&gt;
&lt;p&gt;Parsing logic embedded throughout the codebase increases complexity, making debugging and future development challenging. For example, if an order ID contains both a date and location code (like &amp;quot;20240129-PHX-1234&amp;quot;), every service that processes orders must implement and maintain the same parsing logic. When this logic needs to change, such as adding a new location format, developers must update and test the parsing code across multiple codebases, increasing the risk of inconsistencies.&lt;/p&gt;
&lt;h5&gt;Inflexibility&lt;/h5&gt;
&lt;p&gt;Composite IDs limit adaptability. Modifications can ripple through the system, complicating changes or scaling. For example, if a product ID includes a category code (like &amp;quot;TECH-LAPTOP-123&amp;quot;), adding new product categories or reorganizing the category hierarchy becomes a major undertaking. Similarly, if a customer ID includes a region code (&amp;quot;US-WEST-789&amp;quot;), business expansion to new regions or changes in regional organization can require extensive system updates.&lt;/p&gt;
&lt;h5&gt;Data Integrity Risks&lt;/h5&gt;
&lt;p&gt;Parsing composite IDs can lead to inconsistencies, especially in dynamic environments. Consider a system where we create product IDs by combining our supplier code with a sequence number (like &amp;quot;SUP123-WIDGET-456&amp;quot;). If the supplier&apos;s business is acquired and rebranded, or if the product&apos;s manufacturing moves to a different supplier, should all related IDs be updated? This creates significant challenges: either maintain increasingly inaccurate IDs, implement complex ID migration processes, or risk breaking existing references across the system.&lt;/p&gt;
&lt;p&gt;Note that using a manufacturer&apos;s actual part number (like &amp;quot;ACME-WIDGET-123&amp;quot;) as an opaque identifier is perfectly fine - the key is that we treat it as an unchanging reference and don&apos;t try to parse meaning from its structure. The ID Entanglement Effect occurs when we create our own composite IDs that encode business relationships or mutable state that we expect to parse and interpret later.&lt;/p&gt;
&lt;h5&gt;Security Vulnerabilities&lt;/h5&gt;
&lt;p&gt;Auto-incrementing integers, while simple, introduce significant security risks. Their predictable nature makes it easy for attackers to enumerate resources (like guessing user IDs to access profiles) or gather business intelligence (such as order volumes from sequential order numbers). They can also lead to race conditions in high-concurrency systems and make it difficult to merge data from different sources without ID conflicts.&lt;/p&gt;
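&lt;p&gt;A small illustration of the difference (the class names are mine): a sequential integer invites enumeration and leaks volume information, while a GUID-based identifier reveals nothing about neighboring records:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System;

// Predictable: /orders/1001, /orders/1002, ... can be walked by an attacker,
// and the latest value leaks total order volume.
public class SequentialOrder
{
    public int Id { get; set; } // 1001, 1002, 1003, ...
}

// Non-sequential: knowing one ID tells you nothing about any other.
public class OpaqueOrder
{
    public Guid Id { get; } = Guid.NewGuid();
}
&lt;/code&gt;&lt;/pre&gt;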
&lt;h4&gt;Long-Term Impact&lt;/h4&gt;
&lt;p&gt;The ID Entanglement Effect compounds over time, creating increasingly complex challenges:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;Technical Debt&lt;/em&gt;: As systems evolve, the cost of maintaining and updating composite ID logic grows exponentially&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Integration Barriers&lt;/em&gt;: New systems and third-party integrations must implement complex parsing logic&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Performance Overhead&lt;/em&gt;: Constant parsing and validation of composite IDs impacts system performance&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Error Propagation&lt;/em&gt;: Mistakes in ID parsing can cascade through multiple systems&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Documentation Burden&lt;/em&gt;: Teams must maintain detailed documentation about ID formats and parsing rules&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Prevention Strategies&lt;/h4&gt;
&lt;p&gt;To avoid the ID Entanglement Effect, consider these key strategies:&lt;/p&gt;
&lt;h5&gt;Use Clean, Stable Identifiers&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;Treat all identifiers, especially those from external systems, as opaque strings whose sole purpose is to establish equivalence through exact matching. This is crucial because:
&lt;ul&gt;
&lt;li&gt;It prevents accidental coupling to internal structures or business logic that may be embedded in the ID&lt;/li&gt;
&lt;li&gt;It ensures the system remains resilient to changes in ID format or structure&lt;/li&gt;
&lt;li&gt;It maintains compatibility with different ID generation schemes across systems&lt;/li&gt;
&lt;li&gt;It avoids assumptions about ID content that could break when integrating with new systems&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Generate unique identifiers that remain consistent over time&lt;/li&gt;
&lt;li&gt;Human-readable identifiers (like &amp;quot;ORDER-12345&amp;quot;) are perfectly acceptable&lt;/li&gt;
&lt;li&gt;Avoid encoding mutable data or business logic in the identifier&lt;/li&gt;
&lt;li&gt;Use non-sequential identifiers (like UUIDs) to prevent enumeration attacks&lt;/li&gt;
&lt;li&gt;Consider the security implications of identifier patterns&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;Maintain Clear Boundaries&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;Store business data in proper fields, not in the identifier&lt;/li&gt;
&lt;li&gt;Keep temporal data (dates, versions) in dedicated attributes&lt;/li&gt;
&lt;li&gt;Track status and metadata independently of the ID&lt;/li&gt;
&lt;/ul&gt;
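&lt;p&gt;Putting the strategies above together, a clean entity might look something like the sketch below (all type and property names are illustrative, not prescriptive):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System;

public enum CustomerStatus { Prospect, Active, Closed }

public class Customer
{
    // Opaque, stable, non-sequential identifier: compared, never parsed.
    public Guid Id { get; } = Guid.NewGuid();

    // Business data lives in dedicated, independently changeable fields.
    public string Region { get; set; } = string.Empty;
    public DateTimeOffset CreatedOn { get; set; } = DateTimeOffset.UtcNow;
    public CustomerStatus Status { get; set; } = CustomerStatus.Prospect;
}
&lt;/code&gt;&lt;/pre&gt;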
&lt;h5&gt;Design for Change&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;Assume business rules and categories will evolve&lt;/li&gt;
&lt;li&gt;Plan for system growth and new use cases&lt;/li&gt;
&lt;li&gt;Consider future integration requirements&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Best Practices&lt;/h4&gt;
&lt;p&gt;When designing identifier systems:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;em&gt;Keep IDs Clean&lt;/em&gt;: Use straightforward identifiers that don&apos;t encode mutable data&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Separate Concerns&lt;/em&gt;: Store business data, status, and metadata in dedicated fields&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Plan for Scale&lt;/em&gt;: Choose identifier formats that support future growth&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Consider Relations&lt;/em&gt;: Use proper database relationships instead of encoding hierarchies in IDs&lt;/li&gt;
&lt;li&gt;&lt;em&gt;Document Clearly&lt;/em&gt;: Maintain clear documentation about identifier generation and usage&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;The ID Entanglement Effect represents a significant challenge in system design, where the convenience of composite IDs leads to long-term maintenance and scalability issues. By understanding these risks and following best practices for identifier design, teams can create more maintainable and adaptable systems. Remember: while identifiers can be human-readable, they should never become entangled with business logic or mutable state - this separation is key to maintaining system flexibility and reliability over time.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/the-id-entanglement-effect.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/2d9c8f17-09c4-48a4-873c-4624cfd4fbd1.html</guid>
  <pubDate>Sat, 01 Feb 2025 07:00:00 GMT</pubDate>
</item><item>
  <title>Code Coverage - The Essential Tool That Must Never Be Measured</title>
  <description>&lt;h2&gt;TLDR: Code Coverage is the Wrong Target&lt;/h2&gt;
&lt;p&gt;&lt;strong&gt;Code coverage metrics HURT code quality&lt;/strong&gt;, especially when gating deployments, because they are a misleading target, prioritizing superficial benchmarks over meaningful use-case validation. A focus on achieving coverage percentages detracts from real quality assurance, as developers write tests that do exactly what the targets demand: satisfying coverage metrics rather than ensuring comprehensive use-case functionality.&lt;/p&gt;
&lt;p&gt;When we measure code coverage instead of use-case coverage, we also limit the ongoing value of Code Coverage tools as a way for developers to identify areas of concern within the code. If instead we implement the means to measure use-case coverage, perhaps using Cucumber/SpecFlow BDD tools, such metrics might become a valuable target for automation. Short of that, test coverage metrics and gates actually hurt quality rather than helping it.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Do Not&lt;/strong&gt; use code coverage as a metric, especially as a gate for software deployment.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Do&lt;/strong&gt; use BDD style tests to determine and measure the quality of software.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;What is Code Coverage?&lt;/h2&gt;
&lt;p&gt;Code coverage measures the extent to which the source code of a program has been executed during the testing process. It is a valuable tool for developers to identify gaps in unit tests and ensure that their code is thoroughly tested. An example of the output of the Code Coverage tools in Visual Studio Enterprise from my 2015 article &lt;a href=&quot;https://www.cognitiveinheritance.com/Posts/Remove-Any-Code-Your-Users-Dont-Care-About.html&quot;&gt;Remove Any Code Your Users Don&apos;t Care About&lt;/a&gt; can be seen below. In this example, the code path where the property setter was called with the same value the property already held was not tested, as indicated by the red highlighting, while all other blocks in this code snippet were exercised by the tests, as shown by the blue highlighting.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.cognitiveinheritance.com//Images/CodeCoverageDemoProperty_2.png&quot; alt=&quot;Code Coverage Results -- Standard Property Implementation&quot; /&gt;&lt;/p&gt;
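&lt;p&gt;For readers without the tooling in front of them, the kind of property being described might look like the sketch below (a representative example, not the actual code from that article). The early-return branch is the path that remains uncovered unless a test assigns the value the property already holds:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System.ComponentModel;

public class Widget : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            // This branch is only exercised by a test that assigns
            // the value the property already holds.
            if (_name == value)
                return;

            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}
&lt;/code&gt;&lt;/pre&gt;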
&lt;p&gt;When utilized during the development process, Code Coverage tools can:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Identify areas of the codebase that haven&apos;t been tested, allowing developers to write additional tests to ensure all parts of the application function as expected.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improve understanding of the tests by identifying what code is run during which tests.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify areas of misunderstanding, where the code is not behaving as expected, by visually exposing what code is executed during testing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focus testing efforts on critical or complex code paths that are missing coverage, ensuring that crucial parts of the application are robustly tested.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Identify extraneous code that is not executed during testing, allowing developers to remove unnecessary code and improve the maintainability of the application.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Maximize the value of Test-Driven Development (TDD) by providing immediate feedback on the quality of tests, including the ability for a developer to quickly see when they have skipped ahead in the process by creating untested paths.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;All of these serve to increase trust in our unit tests, allowing the developers the confidence to &amp;quot;refactor ruthlessly&amp;quot; when necessary to improve the maintainability and reliability of our applications. However, they also depend on one critical factor: that when an area shows in the tooling as &lt;em&gt;covered&lt;/em&gt;, the tests that cover it do a good job of guaranteeing that the needs of the users are met by that code. An area of code that is covered, but where the tests do not implement the use-cases that are important to the users, is not well-tested code. Unfortunately, this is exactly what happens when we use code coverage as a metric.&lt;/p&gt;
&lt;h2&gt;The Pitfalls of Coverage as a Metric&lt;/h2&gt;
&lt;p&gt;A common misunderstanding in our industry is that higher code coverage equates to greater software quality. This belief can lead to the use of code coverage as a metric in attempts to improve quality. Unfortunately, this well-intentioned miscalculation generally has the opposite effect: a reduction in code quality and test confidence.&lt;/p&gt;
&lt;h3&gt;Goodhart&apos;s Law&lt;/h3&gt;
&lt;p&gt;Goodhart&apos;s Law states that &amp;quot;When a measure becomes a target, it ceases to be a good measure.&amp;quot; We have seen this principle play out in many areas of society, including education (teaching to the test), healthcare (focus on throughput rather than patient outcomes), and social media (engagement over truth).&lt;/p&gt;
&lt;p&gt;This principle is particularly relevant when it comes to code coverage metrics. When code coverage is used as a metric, developers will do as the metrics demand and produce high coverage numbers. Usually this means writing one high-quality test for the &amp;quot;happy path&amp;quot; in each area of the code, since this creates the highest percentage of coverage in the shortest amount of time. It should be clear that these are often good, valuable tests, but they are not nearly the only tests that need to be written.&lt;/p&gt;
&lt;p&gt;Problems as outlined in Goodhart&apos;s Law occur because a metric is nearly always a proxy for the real goal. In the case of code coverage, the goal is to ensure that the software behaves as expected in all use-cases. The metric, however, is a measure of how many lines of code have been executed by the tests. This is unfortunately NOT a good proxy for the real goal, and is not likely to help our quality, especially in the long-run. Attempting to use Code Coverage in this way is akin to measuring developer productivity based on the number of lines of code they create -- it is simply a bad metric.&lt;/p&gt;
&lt;h2&gt;A Better Metric&lt;/h2&gt;
&lt;p&gt;If we want to determine the quality of our tests, we need to measure the coverage of our use-cases, not our code. This is more difficult to measure than code coverage, but it is a much better proxy for the real goal of testing. If we can measure how well our code satisfies the needs of the users, we can be much more confident that our tests are doing what they are supposed to do -- ensuring that the software behaves as expected in all cases.&lt;/p&gt;
&lt;p&gt;The best tools we have today to measure use-case coverage are Behavior Driven Development tools like &lt;a href=&quot;https://cucumber.io/&quot;&gt;Cucumber&lt;/a&gt;, for which the .NET implementation is called &lt;a href=&quot;https://specflow.org&quot;&gt;SpecFlow&lt;/a&gt;. These tools test how well our software meets the user&apos;s needs by helping us create tests that focus on how the users will utilize our software. This is a much better proxy for the real goal of testing, and is much more likely to help us achieve our quality goals.&lt;/p&gt;
&lt;p&gt;The formal language used to describe these use-cases is called &lt;strong&gt;Gherkin&lt;/strong&gt;, and uses a &lt;em&gt;Given-When-Then&lt;/em&gt; construction. An example of one such use-case test for a simple car search scenario might look like this:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.cognitiveinheritance.com/Images/Gherkin-CarSearch.png&quot; alt=&quot;Car Search Use-Case in Gherkin&quot; /&gt;&lt;/p&gt;
&lt;p&gt;These Gherkin scenarios, often created by analysts, are translated into executable tests using step definitions. Each Gherkin step (&lt;em&gt;Given&lt;/em&gt;, &lt;em&gt;When&lt;/em&gt;, &lt;em&gt;Then&lt;/em&gt;) corresponds to a method in a step definition file created by a developer, where annotations or attributes bind these steps to the code that performs the actions or checks described. This setup allows the BDD tool to execute the methods during test runs, directly interacting with the application and ensuring that its behavior aligns with defined requirements.&lt;/p&gt;
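&lt;p&gt;As a rough illustration, a SpecFlow binding class for a car search scenario might look something like the sketch below. The step text, the &lt;em&gt;CarSearchDriver&lt;/em&gt; helper, and the &lt;em&gt;Car&lt;/em&gt; type are hypothetical stand-ins for whatever the real system under test provides:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System;
using System.Collections.Generic;
using System.Linq;
using TechTalk.SpecFlow;

[Binding]
public class CarSearchSteps
{
    private readonly CarSearchDriver _driver = new CarSearchDriver(); // hypothetical test driver
    private IReadOnlyList&amp;lt;Car&amp;gt; _results = new List&amp;lt;Car&amp;gt;();

    [Given(&amp;quot;a customer is on the car search page&amp;quot;)]
    public void GivenACustomerIsOnTheCarSearchPage() =&amp;gt;
        _driver.OpenSearchPage();

    [When(&amp;quot;the customer searches for cars made by (.*)&amp;quot;)]
    public void WhenTheCustomerSearchesForCarsMadeBy(string make) =&amp;gt;
        _results = _driver.Search(make);

    [Then(&amp;quot;only cars made by (.*) are displayed&amp;quot;)]
    public void ThenOnlyCarsMadeByAreDisplayed(string make)
    {
        if (_results.Any(car =&amp;gt; car.Make != make))
            throw new Exception($&amp;quot;Expected only {make} cars in the results.&amp;quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;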
&lt;p&gt;Since these tests exercise the areas of the code that are important to the users, coverage metrics here are a much better proxy for the real goal of testing, because they are testing the use-cases that are important to the users. If an area of code is untested by BDD style tests, that code is either unnecessary or we are missing use-cases in our tests.&lt;/p&gt;
&lt;h2&gt;Empowering Developers: Code Coverage Tools, Visualization, and Use Case Coverage&lt;/h2&gt;
&lt;p&gt;One of the most powerful aspects of code coverage tools is their data visualization, allowing developers to assess which lines of code have been tested and which have not, right inside the code in the development environment. This visualization transcends the mere percentage or number of lines covered, adding significant value to the development process and enabling developers to make informed decisions about where to focus their testing efforts.&lt;/p&gt;
&lt;p&gt;By permitting developers to utilize code coverage tools and visualization without turning them into a metric, we can foster enhanced software quality and more comprehensive testing. By granting developers the freedom to use these tools and visualize their code coverage, they can better identify gaps in their testing and concentrate on covering the most critical use cases. If instead of worrying about how many lines of code are covered, we focus on what use-cases are covered, we create better software by ensuring that the most important aspects of the application are thoroughly tested and reliable.&lt;/p&gt;
&lt;h2&gt;Creating an Environment that Supports Quality Development Practices&lt;/h2&gt;
&lt;p&gt;Good unit tests that accurately expose failures in our code are critical for the long-term success of development teams. As a result, it is often tempting to jump on metrics like code coverage to encourage developers to &amp;quot;do the right thing&amp;quot; when building software. Unfortunately, this seemingly simple solution is almost always the wrong approach.&lt;/p&gt;
&lt;h3&gt;Encourage Good Testing Practices without Using Code Coverage Metrics&lt;/h3&gt;
&lt;p&gt;So how do we go about encouraging developers to build unit tests that are valuable and reliable without using code coverage metrics? The answer is that &lt;strong&gt;we don&apos;t&lt;/strong&gt;. A culture of quality development practices is built on trust, not metrics. We must trust our developers to do the right thing, and create an environment where they are empowered to do the job well rather than one that forces them to write tests to satisfy a metric.&lt;/p&gt;
&lt;p&gt;Developers want to excel at their jobs, and never want to create bugs. No matter how much of a &amp;quot;no blame&amp;quot; culture we have, or how much we encourage them to &amp;quot;move fast and break things&amp;quot;, developers will always take pride in their work and want to create quality software. Good tests that exercise the code in ways that are important to the users are a critical part of that culture of quality that we all want. We don&apos;t need to force developers to write these tests, we just need to give them the tools and the environment in which to do so.&lt;/p&gt;
&lt;p&gt;There are a number of ways we can identify when this culture is not yet in place. Be on the lookout for any of these signs:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Areas of code where, every time something needs to change, the developers first feel it necessary to write a few dozen tests so that they have the confidence to make the change, or where changes take longer and are more error-prone because developers can&apos;t be confident in the code they are modifying.&lt;/li&gt;
&lt;li&gt;Frequent bugs or failures in areas of the code that represent key user scenarios. This suggests that tests may have been written to create code coverage rather than to exercise the important use-cases.&lt;/li&gt;
&lt;li&gt;A developer whose code nobody else wants to touch because it rarely has tests that adequately exercise the important features.&lt;/li&gt;
&lt;li&gt;Regression failures where previous bugs are reintroduced, or exposed in new ways, because the early failures were not first covered by unit tests before fixing them.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The vast majority of developers want to do their work in an environment where they don&apos;t have to worry when asked to make changes to their teammates&apos; code because they know it is well tested. They also don&apos;t want to put their teammates in situations where they are likely to fail because they had to make a change when they didn&apos;t have the confidence to do so, or where that confidence was misplaced. Nobody wants to let a good team down. It is up to us to create an environment where that is possible.&lt;/p&gt;
&lt;h2&gt;Conclusion: Code Coverage is a Developer&apos;s Tool, Not a Metric&lt;/h2&gt;
&lt;p&gt;Code coverage is an invaluable tool for developers, but it should not be misused as a superficial metric. By shifting our focus from the number of code blocks covered to empowering developers with the right tools and environment, we can ensure software quality through proper use-case coverage. We must allow developers to utilize these valuable tools, without diluting their value by using them as metrics.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/code-coverage-must-never-be-used-as-a-metric.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/052711e4-2d3c-48cf-8959-8d964b1c05ab.html</guid>
  <pubDate>Sat, 14 Sep 2024 07:00:00 GMT</pubDate>
</item><item>
  <title>Objects with the Same Name in Different Bounded Contexts</title>
  <description>&lt;p&gt;Imagine you&apos;re working with a &lt;em&gt;Flight&lt;/em&gt; entity within an airline management system. This object exists in at least two (probably more) distinct execution spaces or &apos;bounded contexts&apos;: the &apos;passenger pre-purchase&apos; context, handled by the sales service, and the &apos;gate agent&apos; context, managed by the Gate service.&lt;/p&gt;
&lt;p&gt;In the &apos;passenger pre-purchase&apos; context, the &apos;Flight&apos; object might encapsulate attributes like ticket price and seat availability and have behaviors such as &apos;purchase&apos;. In contrast, the &apos;gate agent&apos; context might focus on details like gate number and boarding status, and have behaviors like &apos;check-in crew member&apos; and &apos;check-in passenger&apos;.&lt;/p&gt;
&lt;p&gt;Some questions often arise in this situation: Should we create a special translation between the flight entities in these two contexts? Should we include the &apos;Flight&apos; object in a Shared Kernel to avoid duplication, adhering to the DRY (Don&apos;t Repeat Yourself) principle?&lt;/p&gt;
&lt;p&gt;My default stance is to treat objects with the same name in different bounded contexts as distinct entities. I advocate for each context to have the autonomy to define and operate on its own objects, without the need for translation or linking. This approach aligns with the principle of low coupling, which suggests that components should be as independent as possible.&lt;/p&gt;
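&lt;p&gt;In code, that stance can be as simple as letting each context own its own type. A rough sketch (with illustrative names, and deliberately no mapping between the two) might look like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;namespace Airline.Sales
{
    // The Sales view of a flight: pricing, availability, and a purchase behavior.
    public class Flight
    {
        public string FlightNumber { get; set; } = string.Empty;
        public decimal TicketPrice { get; set; }
        public int SeatsAvailable { get; set; }

        public void Purchase() =&amp;gt; SeatsAvailable--;
    }
}

namespace Airline.Gate
{
    // The Gate view of the same concept: gate assignment and boarding.
    // No translation layer links it to Airline.Sales.Flight.
    public class Flight
    {
        public string FlightNumber { get; set; } = string.Empty;
        public string GateNumber { get; set; } = string.Empty;
        public int PassengersCheckedIn { get; private set; }

        public void CheckInPassenger() =&amp;gt; PassengersCheckedIn++;
    }
}
&lt;/code&gt;&lt;/pre&gt;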
&lt;p&gt;&lt;img src=&quot;https://www.cognitiveinheritance.com/Images/Airline%20Subsystems.png&quot; alt=&quot;Airline Subsystems&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In the simplified example shown in the graphic, both the Sales and Gate services need to know when a new flight is created so they can start capturing relevant information about that flight. There is nothing special about the relationship, however. The fact that the object has the same name, and in some ways represents an equivalent concept, is immaterial to those subsystems. The domain events are captured and acted on in the same way as they would be if the object did not have the same name.&lt;/p&gt;
&lt;p&gt;You can think about it as analogous to a relational database where there are two tables that have columns with the same names. The two columns may represent the same or similar concepts, but unless there are referential integrity rules in place to force them to be the same value, they are actually distinct and should be treated as such.&lt;/p&gt;
&lt;p&gt;I do recognize that there are likely to be situations where a Shared Kernel can be beneficial. If the &apos;Flight&apos; object has common attributes and behaviors that are stable and unlikely to change, including it in the Shared Kernel could reduce duplication without increasing coupling to an unacceptable degree, especially if there is only a single team developing and maintaining both contexts. I have found, however, that this is rarely the case, especially since, in many large and/or growing organizations, team construction and application ownership can easily change. Managing shared entities across multiple teams usually ends up with one of the teams having to wait for the other, hurting agility. I have found it very rare in my experience that the added complexity of an object in the Shared Kernel is worth the little bit of duplicated code that is removed, when that object is not viewed identically across the entire domain.&lt;/p&gt;
&lt;p&gt;Ultimately, the decision to link objects across bounded contexts or include them in a Shared Kernel should be based on a deep understanding of the domain and the specific requirements and constraints of the project. If it isn&apos;t clear that an entity is seen identically across the entirety of a domain, distinct views of that object should be represented separately inside their appropriate bounded contexts. If you are struggling with this type of question, I recommend &lt;a href=&quot;https://youtu.be/6DgGhQQbfDE&quot;&gt;Event Storming&lt;/a&gt; to help gain the needed understanding of the domain.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/objects-with-the-same-name-in-different-bounded-contexts.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/8a1260ce-2b01-401c-96e2-8954e8091f86.html</guid>
  <pubDate>Sun, 29 Oct 2023 07:00:00 GMT</pubDate>
</item><item>
  <title>The Depth of GPT Embeddings</title>
  <description>&lt;p&gt;I&apos;ve been trying to get a handle on the number of representations possible in a GPT vector and thought others might find this interesting as well. For the purposes of this discussion, a GPT vector is a 1536 dimensional structure that is unit-length, encoded using the &lt;strong&gt;text-embedding-ada-002&lt;/strong&gt; embedding model.&lt;/p&gt;
&lt;p&gt;We know that the number of theoretical representations is infinite, since there are an infinite number of possible values between 0 and 1, and thus an infinite number of values between -1 and +1. However, we are not working with truly infinite values since we need to be able to represent them in a computer. This means that we are limited to a finite number of decimal places. Thus, we may be able to get an approximation for the number of possible values by looking at the number of decimal places we can represent.&lt;/p&gt;
&lt;h3&gt;Calculating the number of possible states&lt;/h3&gt;
&lt;p&gt;I started by looking for a lower-bound for the value, and increasing the fidelity from there. We know that these embeddings, because they are unit-length, can take values from -1 to +1 in each dimension. If we assume temporarily that only integer values are used, we can say there are only 3 possible states for each of the 1536 dimensions of the vector (-1, 0, +1). This gives a base (B) of 3 and a digit count (D) of 1536, which can be supplied to the general equation for the number of possible values that can be represented:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;V = B&lt;sup&gt;D&lt;/sup&gt;&lt;/strong&gt; or &lt;strong&gt;V = 3&lt;sup&gt;1536&lt;/sup&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;The result of this calculation is roughly equivalent to 2&lt;sup&gt;2435&lt;/sup&gt; or 10&lt;sup&gt;733&lt;/sup&gt; or, if you prefer, a number of this form:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;10000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Already an insanely large number. For comparison, the number of atoms in the universe is roughly 10&lt;sup&gt;80&lt;/sup&gt;.&lt;/p&gt;
&lt;p&gt;We now know that we have at least 10&lt;sup&gt;733&lt;/sup&gt; possible states for each vector. But that is just using integer values. What happens if we start increasing the fidelity of our values? The next step is to assume that we can use values with a single decimal place. That is, the numbers in each dimension can take values such as &lt;strong&gt;0.1&lt;/strong&gt; and &lt;strong&gt;-0.5&lt;/strong&gt;. This increases the base in the above equation by a factor of 10, from 3 to 30. Our new values to plug in to the equation are:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;V = 30&lt;sup&gt;1536&lt;/sup&gt;&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This is roughly equivalent to &lt;strong&gt;2&lt;sup&gt;7537&lt;/sup&gt;&lt;/strong&gt; or &lt;strong&gt;10&lt;sup&gt;2269&lt;/sup&gt;&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;Another way of thinking about this is that representing every possible value of a vector that uses just one decimal place would require a data structure not of 32 or 64 bits, but of 7537 bits.&lt;/p&gt;
&lt;p&gt;We can continue this process for a few more decimal places, each time increasing the base by a factor of 10. The results can be found in the table below.&lt;/p&gt;
&lt;table border=1&gt;
&lt;tr&gt;&lt;th&gt;B&lt;/th&gt;&lt;th&gt;Example&lt;/th&gt;&lt;th&gt;Base-2&lt;/th&gt;&lt;th&gt;Base-10&lt;/th&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3&lt;/td&gt;&lt;td&gt;1&lt;/td&gt;&lt;td&gt;2435&lt;/td&gt;&lt;td&gt;733&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;30&lt;/td&gt;&lt;td&gt;0.1&lt;/td&gt;&lt;td&gt;7537&lt;/td&gt;&lt;td&gt;2269&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;300&lt;/td&gt;&lt;td&gt;0.01&lt;/td&gt;&lt;td&gt;12639&lt;/td&gt;&lt;td&gt;3805&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3000&lt;/td&gt;&lt;td&gt;0.001&lt;/td&gt;&lt;td&gt;17742&lt;/td&gt;&lt;td&gt;5341&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;30000&lt;/td&gt;&lt;td&gt;0.0001&lt;/td&gt;&lt;td&gt;22844&lt;/td&gt;&lt;td&gt;6877&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;300000&lt;/td&gt;&lt;td&gt;0.00001&lt;/td&gt;&lt;td&gt;27947&lt;/td&gt;&lt;td&gt;8413&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3000000&lt;/td&gt;&lt;td&gt;0.000001&lt;/td&gt;&lt;td&gt;33049&lt;/td&gt;&lt;td&gt;9949&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;30000000&lt;/td&gt;&lt;td&gt;0.0000001&lt;/td&gt;&lt;td&gt;38152&lt;/td&gt;&lt;td&gt;11485&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;300000000&lt;/td&gt;&lt;td&gt;0.00000001&lt;/td&gt;&lt;td&gt;43254&lt;/td&gt;&lt;td&gt;13021&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3000000000&lt;/td&gt;&lt;td&gt;0.000000001&lt;/td&gt;&lt;td&gt;48357&lt;/td&gt;&lt;td&gt;14557&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;30000000000&lt;/td&gt;&lt;td&gt;1E-10&lt;/td&gt;&lt;td&gt;53459&lt;/td&gt;&lt;td&gt;16093&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;3E+11&lt;/td&gt;&lt;td&gt;1E-11&lt;/td&gt;&lt;td&gt;58562&lt;/td&gt;&lt;td&gt;17629&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;
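&lt;p&gt;For anyone who wants to verify these figures, they can be reproduced with a few lines of C# by working with logarithms rather than computing the enormous values directly (the variable names are mine):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;using System;

// For each assumed precision, the exponents are D * log(B):
// bits = 1536 * log2(B), decimal digits = 1536 * log10(B).
const int dimensions = 1536;
for (int decimals = 0; decimals &amp;lt;= 11; decimals++)
{
    double b = 3 * Math.Pow(10, decimals);   // 3, 30, 300, ...
    double bits = dimensions * Math.Log2(b);
    double digits = dimensions * Math.Log10(b);
    Console.WriteLine($&amp;quot;B={b:0}  2^{Math.Round(bits)}  10^{Math.Round(digits)}&amp;quot;);
}
&lt;/code&gt;&lt;/pre&gt;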
&lt;p&gt;This means that if we assume 7 decimal digits of precision in our data structures, we can represent &lt;strong&gt;10&lt;sup&gt;11485&lt;/sup&gt;&lt;/strong&gt; distinct values in our vector.&lt;/p&gt;
&lt;p&gt;This number is so large that all the computers in the world, churning out millions of values per second for the entire history (start to finish) of the universe, would not even come close to being able to generate all of the possible values of a single vector.&lt;/p&gt;
&lt;h3&gt;What does all this mean?&lt;/h3&gt;
&lt;p&gt;Since we currently have no way of knowing how dense the representation of data inside the GPT models is, we can only guess at how many of these possible values actually represent ideas. However, this analysis gives us a reasonable proxy for how many the model can hold. If there is even a small fraction of this information encoded in these models, then it is nearly guaranteed that these models hold in them insights that have never before been identified by humans. We just need to figure out how to access these revelations.&lt;/p&gt;
&lt;p&gt;That is a discussion for another day.&lt;/p&gt;
</description>
  <link>https://www.cognitiveinheritance.com/Posts/depth-of-gpt-embeddings.html</link>
  <author>aa94bef8-9a1d-4486-8024-e38b3f45e8e4@bsstahl.com (Barry S. Stahl)</author>
  <guid>https://www.cognitiveinheritance.com/Permalinks/e47eddf9-fcda-4382-a7c8-e498026fbfee.html</guid>
  <pubDate>Tue, 03 Oct 2023 07:00:00 GMT</pubDate>
</item>
</channel>
</rss>