New Projects
A Hub for Synthetic/Generative Intelligence Collaboration
GEN-24 is a convergence point for the research, development, and application of emerging synthetic and generative intelligence methods, and a nexus for applying these tools to domains such as cognitive modeling, legal reasoning, and social systems.
Key Features:
Research & Development: Engage with cutting-edge tools and methodologies for cognitive modeling and AI ethics.
Cross-Domain Applications: Apply synthetic methods across legal, social, and technical fields.
Collaborative Projects: Participate in ongoing initiatives, such as Active InferAnts, to advance synthetic intelligence systems.
Get Involved:
Join GEN-24 to contribute your expertise, collaborate with professionals across domains, and help shape the future of synthetic intelligence. Visit the main hub and explore schema/examples to get started.
Potential Near Term Pilot Projects
Open Source Computational Science Kernel for Self-Aware and Self-Directed Agents
The scientific method is the single method of knowledge acquisition and validation that, across time and cultures, has most transformed the human condition and the natural world. It has succeeded because it rests on a set of principles that are independent of person, status, culture, and power, and because it is open, transparent, evidence-based, and self-correcting. It is also imperfect. It is never complete, but is a process of continuous conjecture, testing, and revision based upon evidence, prediction, and replication. Yet it is also a social construction, and hence subject to capture, suppression, and concentration. For that reason, the scientific method needs to be open, distributed, and autonomous in its funding and governance. By making the scientific process open, self-funding, self-governing, and computational, it can be scaled and embedded in an unlimited number of societal processes: media, governance, economics, culture, education, medicine, law, finance, and ecology. As such, it is not just an embedding of scientific processes within societal processes, but a cumulative building of collective understanding and expertise grounded in the first principles of physics, biology, neuroscience, and computation.
An important goal of the First Principles First initiative is to develop an open source "Computational Science Kernel" (CSK) that interoperates with a rapidly evolving network of multi-agent frameworks, LLMs, vector databases, and secure local Small Language Models (SLMs). From our current review, the most promising first-principles approach is based upon the Free Energy Principle (FEP), a theoretical framework describing how predictive systems self-organize into coherent, stable structures by minimizing a mathematical free energy function. Active Inference (AIF) is a computational Bayesian embodiment of the FEP that details how systems can plan future actions by minimizing free energy functions that incorporate information-seeking components. A goal here is to develop a domain- and scale-free version of the CSK and the FEP that could be embedded in a variety of application domains: finance, economics, law, governance, medicine, and media. The most mature and scalable computational implementation of AIF to date is the open-source module RxInfer, developed by BIASlab at Eindhoven University of Technology. (This module is being commercially developed by StanhopeAI and LazyDynamics.) One near-term goal is to increase the accessibility of the tool by simplifying model specification through a natural language interface, visual UX, and API tools that make RxInfer programmable with domain-specific data, models, and forms of action and intervention.

For example, in climate finance there is considerable uncertainty, and hence risk, a lack of trust, and limited fungibility around MRVs (Monitoring, Reporting, and Verification) because of high variability in methods and in the reliability of metrics. The FEP combined with the CSK could provide a highly scalable solution: scale- and domain-free MRVs that are commensurable and that intrinsically measure the degree of risk associated with MRV claims across different domains and scales. These models could be integrated with "digital twins" of a bioregion, whereby the resolution and accuracy of the model are enhanced through the minimization of MRV uncertainty. In other words, the digital twin evolves, through better modeling and learning, into a simulation of the geospatial region it is intended to impact.
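For reference, the free energy quantities referred to above can be written in their standard form from the active inference literature (this is the generic formulation, not a CSK-specific specification), where o denotes observations, s hidden states, q(s) the approximate posterior, and p(o|C) preferred (goal) outcomes:

$$
F[q] \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big] \;-\; \ln p(o)
$$

$$
G(\pi) \;=\; -\,\mathbb{E}_{q(o, s \mid \pi)}\big[\ln p(s \mid o, \pi) - \ln q(s \mid \pi)\big]
\;-\; \mathbb{E}_{q(o \mid \pi)}\big[\ln p(o \mid C)\big]
$$

The first expression is the variational free energy minimized in perception and learning; the second, the expected free energy of a policy, is what makes AIF planning information-seeking: policies are scored both by how much uncertainty they are expected to resolve (first term) and by how well they are expected to realize preferred outcomes (second term).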
Another example of a CSK application is jurisprudence. The scientific method and jurisprudence have much in common, both having been substantially shaped by the same man, Sir Francis Bacon. To date, the magnitude of costs and procedural barriers facing everyday citizens asserting and protecting their legitimate legal rights has effectively denied those rights as a public good. Much of the judicial system is mired in case and procedural backlogs and is woefully under-resourced, without public transparency or accountability. There are no scientific metrics by which judicial processes and actions can be evaluated in terms of either their effectiveness or their fairness, and the judicial process has been effectively gamed for decades by well-funded private and corporate interests to protect and serve their own interests. The explicit modeling and potential automation of evidence collection and admissibility processes, as well as the capacity to predict the likely success of different legal briefs under different scenarios, could dramatically expedite costly and protracted legal processes. It could also give litigants effective representation in the adjudication of their basic rights and thereby reduce the time and cost between the declaration of rights and their enforcement. A related area of legal application is the explication of the consistency, rigor, and neutrality of judicial reasoning, and the extent to which textual interpretation is consistent and bound to evidence and precedent.
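As a purely illustrative sketch of the kind of scenario-based prediction described above, the likelihood of a brief succeeding could be estimated and updated as evidence accumulates; the scenario names, case counts, and the simple Beta-Binomial model below are hypothetical placeholders, not a CSK component.

```python
# Purely illustrative: estimating the probability that a brief succeeds under
# different scenarios with a Beta-Binomial model. The scenario names and case
# counts are hypothetical placeholders, not a CSK component.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    wins: int    # comparable briefs that succeeded in this jurisdiction/scenario
    losses: int  # comparable briefs that failed

def posterior_success(s: Scenario, prior_a: float = 1.0, prior_b: float = 1.0):
    """Posterior mean and a rough 90% band for the success probability."""
    a, b = prior_a + s.wins, prior_b + s.losses
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))   # Beta posterior variance
    half = 1.645 * var ** 0.5                      # normal approximation
    return mean, max(0.0, mean - half), min(1.0, mean + half)

scenarios = [
    Scenario("summary judgment, jurisdiction A", wins=14, losses=26),
    Scenario("summary judgment, jurisdiction B", wins=9, losses=11),
]
for s in scenarios:
    mean, low, high = posterior_success(s)
    print(f"{s.name}: P(success) ~ {mean:.2f} (approx. 90% band {low:.2f}-{high:.2f})")
```

The point of the sketch is only that predictions carry explicit uncertainty bands, which is what allows them to be tested, revised, and audited in the way the paragraph above describes.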
Pilot CSK Business Agent for Generative Finance and Adaptive Business Modeling and Management
Traditional business modeling and finance begin with pro forma models of expected cash flows, revenue, and earnings, predicated on predicted market size, product fit, and variable and fixed costs. Such models are in effect hypotheses based upon available data and the predicted effectiveness of different policies, from marketing and sales, supply chain management, operations, and product design and manufacturing costs to human resources. Current business models are essentially spreadsheets of accounting ledgers, departmental budgets, and KPIs which, while capable of limited what-if scenarios, cannot model the dynamics of the nested interdependencies within or across organizational units, divisions, or departments. For example, how might policy or market changes affect the viability of product development, or how might finance and R&D influence one another up, down, and across organizational hierarchies?
In contrast, a generative business finance model is a self-aware agent adapted to a particular market niche, with specific investment hypotheses that are continuously tested, revised, and updated through a CSK to achieve a particular impact and risk-adjusted rate of return. The Free Energy metric in this case represents a risk-adjusted expectation of value created and recognized. What is distinctive about this approach is that it is NOT just about finding an equilibrium between a buyer and a seller, but rather about discovering new data, policies, and actions that generate new or increased forms of value (impacts) at reduced risk or uncertainty. Think of this as a continuously self-revising business model that discovers new forms of efficiencies, cash flow, product types, supply chains, buyers, and market and product intelligence.
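One way to read this "risk-adjusted expectation of value" is through the standard risk-plus-ambiguity rearrangement of the expected free energy introduced above; mapping the preferred outcomes p(o|C) to target cash flows or impact metrics is an illustrative interpretation on our part, not a fixed CSK specification:

$$
G(\pi) \;=\; \underbrace{D_{\mathrm{KL}}\big[\,q(o \mid \pi)\,\|\,p(o \mid C)\,\big]}_{\text{risk: divergence from preferred (target) outcomes}}
\;+\; \underbrace{\mathbb{E}_{q(s \mid \pi)}\big[\,\mathrm{H}\big[p(o \mid s)\big]\,\big]}_{\text{ambiguity: expected observation uncertainty}}
$$

Under this reading, a business policy scores well when its predicted financial outcomes stay close to the targets and when the data it generates is expected to be informative rather than ambiguous.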
In terms of the Active Inference method, the generative agent model uses the "Variational Free Energy" metric to achieve a Non-Equilibrium Steady State (NESS), where this metric represents the stability of the value of an asset over time. Since the CSK has a measure of the relative contribution of data, policies, and actions, it can provide an internal pricing mechanism that rewards the people and actions contributing most to the overall result while preserving the viability of the enterprise. This experiment could be applied to startup financing as well as to mature companies. The accuracy of the models is tied to the quality of internal and external time series data; while this has been an issue in the past, the volume of available operational and financial data from both sources has surged.
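A minimal sketch of what such a self-revising loop might look like is given below; the Gaussian belief model, policy names, and numbers are hypothetical placeholders, not the Active Inference machinery itself.

```python
# A minimal, hypothetical sketch of a self-revising business-model loop:
# beliefs about a cash-flow driver are updated from new observations, and
# candidate policies are scored by expected value minus an uncertainty (risk)
# penalty, loosely mirroring a free-energy-style objective.
import statistics

class CashFlowBelief:
    """Gaussian belief over monthly net cash flow (arbitrary units)."""
    def __init__(self, mean: float, std: float):
        self.mean, self.std = mean, std

    def update(self, observations: list, obs_std: float = 20.0) -> None:
        # Conjugate Gaussian update of the prior with observed cash flows.
        n = len(observations)
        obs_mean = statistics.fmean(observations)
        prior_prec = 1.0 / self.std ** 2
        obs_prec = n / obs_std ** 2
        post_prec = prior_prec + obs_prec
        self.mean = (prior_prec * self.mean + obs_prec * obs_mean) / post_prec
        self.std = post_prec ** -0.5

def score_policy(belief: CashFlowBelief, expected_lift: float,
                 added_uncertainty: float, risk_aversion: float = 1.5) -> float:
    """Higher is better: expected cash flow under the policy minus a risk penalty."""
    expected = belief.mean + expected_lift
    uncertainty = (belief.std ** 2 + added_uncertainty ** 2) ** 0.5
    return expected - risk_aversion * uncertainty

belief = CashFlowBelief(mean=100.0, std=40.0)
belief.update([85.0, 120.0, 110.0, 95.0])        # new monthly data revises the model

policies = {
    "expand sales team": (30.0, 25.0),            # (expected lift, added uncertainty)
    "renegotiate supply chain": (15.0, 5.0),
    "hold steady": (0.0, 0.0),
}
best = max(policies, key=lambda p: score_policy(belief, *policies[p]))
print("preferred policy:", best)
```

The point of the sketch is only the loop structure: new data tightens the belief, and policy selection trades expected value against residual uncertainty.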
Formation and Pilot Testing of a CSK Networked Public Goods Cooperative
As noted in the first pilot, there is a compelling argument for an edge network of CSK nodes where each node is dedicated to calculating the predicted risk and value of a particular belief or form of knowledge, be it an MRV about a carbon credit, a scientific or product claim, or the likelihood of a legal brief succeeding in a particular jurisdiction. In this case, the nodes create public goods by virtue of the fact that their results are not captive to a particular interest, but are based upon independent methods of verification and improvement and are open to the public. Likewise, the public can contribute to the development and improvement of the models through data, actions, and policies, and be fairly compensated for their relative contribution through the Shapley value algorithm developed by Nobel laureate Lloyd Shapley.
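A minimal sketch of Shapley-value-based compensation is shown below; the contributor roles and coalition values are hypothetical, and in practice the value function would come from the node's own verification and risk metrics.

```python
# Hypothetical sketch of Shapley-value compensation: each member's payout is
# their average marginal contribution to the value a coalition of contributors
# creates for a CSK node.
from itertools import permutations

# Hypothetical value (in tokens) created by each coalition of contributors.
value = {
    frozenset(): 0.0,
    frozenset({"data_provider"}): 40.0,
    frozenset({"modeler"}): 30.0,
    frozenset({"validator"}): 10.0,
    frozenset({"data_provider", "modeler"}): 90.0,
    frozenset({"data_provider", "validator"}): 55.0,
    frozenset({"modeler", "validator"}): 45.0,
    frozenset({"data_provider", "modeler", "validator"}): 120.0,
}

def shapley(players: list, v: dict) -> dict:
    """Exact Shapley values by averaging marginal contributions over orderings."""
    payouts = {p: 0.0 for p in players}
    orderings = list(permutations(players))
    for order in orderings:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            payouts[p] += v[with_p] - v[coalition]   # marginal contribution of p
            coalition = with_p
    return {p: total / len(orderings) for p, total in payouts.items()}

print(shapley(["data_provider", "modeler", "validator"], value))
```

With these hypothetical numbers the payouts sum exactly to the 120 tokens created by the full group, which is the efficiency property that makes the Shapley value attractive for fair compensation.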
The goal is for the network's growth to be spontaneous and self-funding, with all participants involved in creating and supporting the value of a node earning access tokens tied to the value of each CSK project node. As an algorithmic cooperative, members can vote to set up multiple CSKs to achieve different public good outcomes, each of which would have its own specific token. The overall mission of the network cooperative is to generate multiple CSK projects that achieve self-sustaining public goods outcomes and are interoperable through a decentralized token exchange using a "Stable Cooperative Token" (SCT).
Initial financing for the cooperative as a whole, and for specific CSK projects, can be achieved through a Debt Payable by Assets (DPA) instrument, which makes it possible to issue a convertible note that can be repaid through SCTs or through access tokens for specific cooperatives and projects. This approach allows for financing flexibility through both the aggregation and the concentration of risk.
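Since the mechanics of the DPA are not specified here, the sketch below is purely illustrative, assuming a simple rule in which principal plus accrued interest is repaid in SCT at a reference price; the rates, term, and token price are hypothetical.

```python
# Purely illustrative sketch of a Debt Payable by Assets (DPA) style note:
# the conversion rule, rates, and token prices below are hypothetical, since
# the instrument's actual mechanics are not specified in this document.

def dpa_repayment_in_tokens(principal: float, annual_rate: float, years: float,
                            sct_reference_price: float) -> float:
    """Tokens owed if the note is repaid in SCT rather than cash at maturity."""
    amount_due = principal * (1 + annual_rate) ** years   # simple annual compounding
    return amount_due / sct_reference_price

# Example: a 100,000-unit note at 8% for 2 years, repaid in SCT priced at 2.50.
tokens_owed = dpa_repayment_in_tokens(100_000, 0.08, 2, 2.50)
print(f"SCT owed at maturity: {tokens_owed:,.0f}")
```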