Friday, August 22, 2025

As Committees May Think!

Earlier, I wrote about the way in which a roadmap represented a discretized set of fundable investments (link), some of which were "active" while others were potential future investments.

Conway's Law, from the famous "How Do Committees Invent?", indicates that there's always a relationship between the organizational design and the system design. But there's a similar relationship with the way work is laid out - the system, the organization, and the work are all connected, and must be designed in an integrated fashion.

As new work gets prioritized (especially in an iterative, decentralized, evolutionary design organization), workers end up being split between "offensive formations" oriented around new initiatives (often working on new components of the system that don't yet exist) and "defensive formations" that correspond to already-developed aspects of the system.

Thinking further on the topic, it's interesting to me how many of the common visualizations end up being different representations of three fractal entities: an organization, a system, and a collection of work (who, what, and when). Many of these visualizations are monads or dyads, but I've never seen a compelling triad - probably because two-dimensional projections of a three-body system are necessarily reductive.

  • An organizational chart: Shows people and their reporting relationships, often struggling to represent the inherently cross-functional nature of leaf-level working teams in a more matrixed EPD organization.
  • A roadmap: Shows a list of projects, ordered in a notional representation of logical time but disconnected from wallclock time.
  • A Gantt chart: Shows a piece of a roadmap arranged as high-level work items in swimlanes with incremental milestones, lined up against absolute wallclock time.
  • A kanban board: Shows low-level work items arranged in columns by status (e.g. to do, in progress, done).
  • A team planning chart: Groups work items by person, showing their key priorities (ideally using a quadratic layout to show decreasing fidelity: 1 week, 1 week, 2 weeks, 1 month, 2 months for a full trimester view).
  • A system architecture diagram: Shows the relationships between different components of a large technical system.
  • A dataflow diagram: Shows how data flows between different elements of a technical system.
  • An information architecture diagram: Shows the relationship between different interface elements, often related to a site map but with more visual complexity.
  • A concept diagram: Shows the relationship between different concepts which show up across the UI + API in different components (both services and interface elements).
  • A responsibility matrix: Shows assignments of people to projects or system components, often with a role (responsible, accountable, consulted, informed, owner, DRI).
  • A technology tree: Shows a zoomed-out version of a roadmap.
Common hierarchies for each entity include:
  • Organization: Person, Team, Group, Business Unit
  • Work: Initiative, Project / Epic, Story / Task / Issue
  • System: Subcomponent, Component, Component Family?
But it's interesting that there really isn't a dynamic way to see the state of all three in a unified place. It's fascinating to me not because this is a complex system, but because it's fractal - and human program / product / project managers, engineers, systems engineers, and architects have to try to keep it all together in their heads.
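As a thought experiment, the triad could be modeled as three recursive trees plus the cross-entity links between them. A minimal Python sketch, with every name invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One level of a fractal hierarchy: a team, a project, or a component."""
    name: str
    kind: str                       # e.g. "Team", "Project", "Component"
    children: list["Node"] = field(default_factory=list)

@dataclass
class Link:
    """Cross-entity edge: who works on what, what implements which component."""
    source: Node
    target: Node
    role: str                       # e.g. "responsible", "implements"

@dataclass
class Triad:
    """The unified view: three trees plus the links between them."""
    organization: Node
    work: Node
    system: Node
    links: list[Link] = field(default_factory=list)

    def links_for(self, node: Node) -> list[Link]:
        return [l for l in self.links if l.source is node or l.target is node]

team = Node("Maps Team", "Team")
project = Node("Live Layers", "Project")
component = Node("Map Canvas", "Component")
triad = Triad(team, project, component,
              [Link(team, project, "responsible"),
               Link(project, component, "implements")])
print(len(triad.links_for(project)))  # → 2
```

A real version would also need time as a dimension - which is exactly why a static two-dimensional projection falls short.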

It's also fascinating that while code compiles into binaries, it does not compile into system or work diagrams, though obviously there are huge amounts of source code metadata that could be used to construct them.
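To make that concrete, here's a toy sketch (not any real build tool) that derives a module dependency graph - the seed of a system diagram - from source metadata using Python's standard `ast` module; the module names are made up:

```python
import ast

def import_edges(module_name: str, source: str) -> set[tuple[str, str]]:
    """Return (importer, imported) edges found in one module's source."""
    edges = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            for alias in node.names:
                edges.add((module_name, alias.name))
        elif isinstance(node, ast.ImportFrom) and node.module:
            edges.add((module_name, node.module))
    return edges

# The union of these edges across a codebase is a dependency "diagram";
# feeding them into Graphviz would render an architecture sketch.
src = "import json\nfrom billing import invoices\n"
print(sorted(import_edges("api.handlers", src)))
# → [('api.handlers', 'billing'), ('api.handlers', 'json')]
```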


Apps and Maps: Using the iOS Developer Ecosystem to Attack Google


The fundamental challenge in consumer mapping is that there is no perfect map - different users need different maps at different times.

The same problem existed with PCs and smartphones - Microsoft and Apple both created App Stores and development frameworks to provide users with the ability to customize their device.

Over the past few releases, Apple has been opening up the iOS ecosystem to allow individual apps to push data into central Apple-owned plugin points in order to support cross-app interoperability and improve the multi-app user experience. Features like Apple Wallet, Apple Health, and Live Activities all allow apps to deliver app-specific context into specific areas of the operating system and provide more seamless experiences.

So far, Apple Maps has mostly approached consumer mapping the way Google did: implementing a single lowest-common-denominator basemap and allowing applications to embed that basemap inside themselves. But as an ecosystem, there isn't a good way for apps to push application-specific content into Apple Maps. Supporting this would allow users to customize their own map dynamically and would eliminate the need for Apple as an organization to maintain a single perfect master map to meet the needs of all users - especially across different regions.

Simply by installing apps, users would get dynamic layers enabled on their maps so that specific places could get highlighted based on the application suite the user had chosen to install.

Imagine a world where a user could toggle between vector layers provided by Chase, American Express, Marriott, Hyatt, or Airbnb when choosing a hotel for a trip, or between Resy and OpenTable to see restaurants with open reservations. Rather than relying on the integrations that Google or Apple had developed centrally, Apple Maps could automatically populate with all of the layers corresponding to the apps the user had already installed - customizing your iPhone's map would be as simple as installing an app ("Share with Apple Health"). And rather than having to switch between a bunch of different apps when planning a trip, meeting up with a friend, or landing in a new location, app-specific context could be surfaced spatially, dynamically pushing more detailed information to users based on zoom level and live activities.

The core of this is two features: an extension of Live Activities called Live Layers which would allow an activity to represent moving objects + routes on the Apple Maps canvas, and a feature called App Layers for pushing POIs (or possibly basemaps) into the Map Canvas. The most extreme version would let users actually subscribe directly to basemaps, eliminating the need for Apple to maintain a central basemap and pushing everything into the Overture ecosystem - at this point, Apple Maps would simply provide scaffolding for spatial appmakers.
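None of these APIs exist today. Purely as a sketch of what an App Layer push could look like - every envelope field below is invented; only the GeoJSON inside is a real standard - here's a payload and a trivial zoom-filtered merge:

```python
# Hypothetical payload an app might push into a shared map canvas.
hotel_layer = {
    "app_id": "com.example.hotels",        # invented contributing-app identifier
    "layer_kind": "poi",                   # vs. "route" for a Live Layer
    "zoom_range": [10, 18],                # only render within these zoom levels
    "features": {
        "type": "FeatureCollection",
        "features": [{
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [-122.4194, 37.7749]},
            "properties": {"name": "Example Hotel", "open_rooms": 3},
        }],
    },
}

def features_at_zoom(layers, zoom):
    """Merge POI features from every installed app's layer visible at this zoom."""
    merged = []
    for layer in layers:
        lo, hi = layer["zoom_range"]
        if lo <= zoom <= hi:
            merged.extend(layer["features"]["features"])
    return merged

print(len(features_at_zoom([hotel_layer], 12)))  # → 1
```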

Imagine seeing your Uber car, DoorDash delivery and your husband’s shared Lyft ride all converging on a friend’s house for a birthday party! Imagine having your Lime scooter automatically populating with your hotel address, or using Handoff to send data directly from desktop / web to your iOS device for easy navigation.

Now imagine this as context for Apple Intelligence - allowing Siri to answer a whole host of critical "who, when, and where" questions based on the live information flowing into the on-device spatial intelligence engine. Imagine landing at a new airport with a tight connection - your United app could provide turn-by-turn Apple Maps directions based on level-of-detail (LOD) data dynamically injected from United's airport map. Or asking a HomePod when the food is arriving while you're preparing for a dinner party. Or using your Apple Watch to order a car to the airport while you're frantically packing for an international trip. The background context being injected into a central map is effectively identical to the user's mental context - key information about their life that is critical for answering their most important questions.

From a developer perspective, iOS would be able to provide gazetteer elements that sync across apps, GERS-focused advertising ("AdSpots" vs "AdWords") for spatial apps to bid on to drive app downloads, and a more app-friendly environment where incentives are better aligned (vs Google, where the goal is to own the end-to-end experience and take a large cut via referrals and ads). It could be a big way to get the industry on board with shifting users en masse off of Google Maps.

Wednesday, January 22, 2025

Meetings are the Dark Matter of Enterprise Cybernetics

We are going through a transition period - from human reasoning to machine reasoning; given the previous revolution from human computing to artificial computing, I guess you could say that the broad ~100-year arc is from human intelligence to artificial intelligence, and just call it an AI revolution [1].

As such, I've been thinking a lot about how to use AI in the context of an existing business to streamline internal operations.

And while pieces of the business feel tractable based on applying AI to existing systems of record or operational processes, automating large swaths of a business feels too hard. Too much reasoning is illegible to machines - because it happens during meetings.

Meetings are the dark matter of the enterprise - the vast majority of "context" about what's happening at a business is transmitted verbally, and needs to be represented digitally. So a key part of transforming the analog business into the digital business is making meetings legible to machines.

That's why I think Granola.ai is going to be the Killer App for the next generation of enterprise operating systems - I'm only testing it out right now in a personal capacity (it's currently quite limited from a security perspective and not enterprise-ready), but it's the first application I've used that really feels like a step-change for personal productivity. Chat apps are nice for search, and I think they will continue to be useful. But they still feel like work - the experience for most LLM chat apps is still relatively similar to a better search engine (you swivel chair to it, do stuff, then swivel back). Granola is the first app I've used that's non-zero sum with my time; it inhabits the same time as I do, and makes that time more productive. I can't explain it exactly, but using Granola feels like - oh yeah, this is going to be ubiquitous in 5 years. Maybe it doesn't win the category (will be a battle), but this category is going to be the first non-chat Category of LLM-powered apps.

The problem is that in its current form, it's just a tool. It's going to be picked up ubiquitously and let people do better meetings - what it's NOT going to do is automate away meetings or massively accelerate operational productivity.

That's where the Ontology comes in - because right now, Granola is just a generic application. The opportunity is to use the Ontology to turn it into a Platform. Today, Granola has a basic templating system with generic out-of-the-box templates for specific meeting types. By applying a decision-centric data model to it, you could map meeting types to object types, and then include functions and actions in the UI where it currently provides some generic out-of-the-box actions (send email, list action items).

So now you've created a virtuous cycle - you use Granola to capture meetings, those meetings become data about the actual reasoning and decision-making in the enterprise, and then the meetings can be automated, orchestrated, agentified...ontologized.


Today, there's a feature in Granola that creates "action items" - imagine if each of these action items corresponded to an Ontology Action Type. With the right ontology, the Action Items from a partnership kickoff call could all be invokable Ontology Actions - one to create a Jira ticket for reviewing API docs, one for initiating a Legal ticket to get a partnership set up, one for scheduling a kickoff meeting, and one that automagically created a Slack channel. And they don't need to be actions yet - just having the Meeting <> Action Item mapping for all meetings of a given type lets you begin mining the plaintext action items to create semantic action types (and an Action Type <> Prompt Hint mapping so that future Action Items could be translated into invokable Ontology Actions via semantic search).
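A crude sketch of that Action Item → Action Type mapping - a real system would match via embeddings / semantic search, and every registry entry here is invented:

```python
from typing import Optional

# Hypothetical Action Type registry mapping type names to prompt-hint keywords.
ACTION_TYPES = {
    "create_jira_ticket": ["jira", "ticket", "api docs"],
    "open_legal_request": ["legal", "partnership", "contract"],
    "schedule_meeting": ["schedule", "kickoff meeting", "calendar"],
    "create_slack_channel": ["slack", "channel"],
}

def classify_action_item(text: str) -> Optional[str]:
    """Map a plaintext action item to the best-matching Action Type, if any."""
    lowered = text.lower()
    scores = {name: sum(hint in lowered for hint in hints)
              for name, hints in ACTION_TYPES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(classify_action_item("File a Jira ticket to review the API docs"))
# → create_jira_ticket
```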

At this point, the UI would transform the text into a clickable bullet automatically orchestrating an entire system of action which could be kept in sync as more meetings occur and actions move through a Markov chain of semi-formalized state changes - "okay, you decided to transfer Phil to the Mobile team - let's kick off the process of talking to Phil, confirming the transfer, and then registering this."
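That "semi-formalized state change" flow could be sketched as a toy state machine - the states and transitions below are invented for the transfer example:

```python
# Invented states for the "transfer Phil to the Mobile team" flow.
TRANSITIONS = {
    "decided": {"discussed_with_employee"},
    "discussed_with_employee": {"confirmed", "declined"},
    "confirmed": {"registered"},
}

def advance(state: str, next_state: str) -> str:
    """Move the decision through the workflow, rejecting illegal jumps."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

state = "decided"
for step in ["discussed_with_employee", "confirmed", "registered"]:
    state = advance(state, step)
print(state)  # → registered
```

Each meeting transcript would supply the evidence for the next legal transition, keeping the formal state in sync with the verbal reality.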

To automate decisions, they need to be formalized - and given that most decisions happen in meetings, automating decision-making will require making meeting context accessible to machines. Over time, this will let us progressively titrate decision-making authority from the human to the machine, as humans shift from being decision-makers themselves to being the makers of decision-making machines.

[1] It's only tangentially related, but I do think that this Eric Schmidt talk was an interesting read about the steam to electricity transformation, which might be a good historical analogue to consider.