
Apr 23, 2025
Empowering Research Innovation
ExtensityAI platform combines neural and symbolic AI to accelerate research, developing a novel primality test in just 24 hours.

Marius-Constantin Dinu
Empowering Research Innovation: The SymbolicAI Framework and Extensity Research Services Platform
Introduction: Next-Generation Research Automation Through Neurosymbolic AI
Prime numbers have long fascinated mathematicians and computer scientists, serving as the foundation for modern cryptography and information security. Their unpredictable distribution and unique mathematical properties make them both intellectually captivating and practically essential. As CEO of ExtensityAI, I've observed how the discovery of new mathematical insights related to primality has historically required specialized knowledge, extensive training, and significant time investments. Yet, with the emergence of neurosymbolic AI systems, this paradigm is rapidly changing.
What if researchers could dramatically compress the time needed to explore complex mathematical domains and develop novel algorithms? At ExtensityAI, we've developed the SymbolicAI framework as our foundation, built the Symbia Engine on top of it, and are now launching our Extensity Research Services (ERS) SaaS Platform to enable precisely this kind of breakthrough. Our recent experiment in primality testing demonstrates how our technology stack can empower researchers to make significant contributions in fields outside their primary expertise. See the following graphic illustrating our stack. [Disclaimer: since our designer was busy this week, all GIFs in this post are AI generated, so they may have flaws and imperfections, but they are beautiful in their own way and funny, so enjoy the illustrations.]

The Architecture of SymbolicAI: Merging Neural and Symbolic Reasoning
The SymbolicAI framework represents a fundamentally different approach to AI system design. Unlike purely neural approaches that rely on statistical inference or traditional symbolic systems limited by rigid logic rules, our framework orchestrates a hybrid architecture that leverages the strengths of both paradigms while mitigating their individual weaknesses.
At its core, SymbolicAI treats large language models as semantic parsers capable of executing tasks based on both natural and formal language instructions. This positions language as the coordinating mechanism that enables seamless transitions between different reasoning modalities. By leveraging this language-centric approach, we create a system that can understand, interpret, and generate content across multiple domains and modalities.

Neurosymbolic Expressions: The Fundamental Building Blocks
The foundation of our framework consists of polymorphic, compositional, and self-referential operations that use so-called Symbol and Expression objects to process information. These expressions enable a multi-step generative process where each step builds upon the previous ones, creating a chain of operations that can transform complex concepts from initial ideas to concrete implementations.
Unlike traditional programming where operations are rigidly defined, expressions can adapt based on context and evolve through in-context learning or domain-specific requirements. This allows the system to handle multi-modal data and complex workflows while maintaining interpretability throughout the entire process.
A Symbol in our framework represents data with operations (or expressions) that can be performed on it. This abstraction enables us to embed symbolic reasoning within the generative process, ensuring that outputs adhere to predefined or context-adapted rules and constraints while leveraging the creative capabilities of neural networks.
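To make the Symbol abstraction concrete, the following is a minimal, self-contained Python sketch. It is illustrative pseudocode rather than the actual SymbolicAI API, and the `semantic_engine` helper is a hypothetical stand-in for the LLM backend that resolves an operation.

```python
# Illustrative sketch only; `semantic_engine` is a hypothetical stand-in
# for the LLM backend, not part of the actual SymbolicAI API.
class Symbol:
    """Wraps a value and exposes operations resolved by a neural engine."""

    def __init__(self, value):
        self.value = value

    def query(self, instruction: str) -> "Symbol":
        # Delegate the natural-language instruction plus the wrapped value
        # to the semantic engine, and wrap the result again so operations
        # compose into multi-step chains.
        return Symbol(semantic_engine(instruction, self.value))


def semantic_engine(instruction, value):
    # Placeholder for an LLM call; a real backend would prompt a model
    # with the instruction and the serialized value.
    return f"<result of '{instruction}' applied to {value!r}>"


# Each step builds on the previous one, forming a compositional chain.
idea = Symbol("eigenvalues of circulant matrices")
hypothesis = idea.query("propose a connection to primality testing")
print(hypothesis.value)
```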
Contracts and Guardrails: Ensuring Reliability and Correctness
One of the most significant challenges in generative AI involves ensuring semantic correctness. Traditional validation approaches focus on structural compliance but often miss deeper issues related to meaning and accuracy. Our framework addresses this through a system of contracts and guardrails inspired by software engineering principles.

Contracts in our system define preconditions that must be satisfied before an operation is executed, postconditions that guarantee properties of the output after the operation completes, and invariants that remain true throughout the execution lifecycle. Together, these mechanisms create what we call a "multi-step generative process with contracts," ensuring that each step in a complex workflow produces reliable results that can be trusted by subsequent operations. This does not mean, however, that expressions and contracts cannot be created dynamically at runtime based on the requirements or state of execution. In fact, the initial conditions of the input and the goals we define allow for the creation of dynamically adjustable generators and validators.
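As a rough illustration of this design-by-contract pattern, the sketch below wraps an operation with precondition and postcondition checks. The `contract` decorator and the trial-division routine are hypothetical examples written for this post, not the framework's actual contract interface.

```python
# A minimal design-by-contract sketch; `pre` and `post` are hypothetical
# helpers, not the framework's actual contract interface.
from functools import wraps

def contract(pre, post):
    """Wrap an operation with precondition and postcondition checks."""
    def decorator(fn):
        @wraps(fn)
        def wrapped(*args, **kwargs):
            assert pre(*args, **kwargs), "precondition violated"
            result = fn(*args, **kwargs)
            assert post(result), "postcondition violated"
            return result
        return wrapped
    return decorator

@contract(pre=lambda n: isinstance(n, int) and n > 2,
          post=lambda out: isinstance(out, bool))
def is_prime_candidate(n: int) -> bool:
    # Trial division stands in for a generated primality routine.
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

print(is_prime_candidate(97))  # True; inputs violating the contract raise
```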
Symbolic Engines, Memory and Knowledge Graphs: Grounding in Facts
To prevent hallucinations and maintain factual coherence, our Symbia Engine incorporates sophisticated memory and knowledge structures that continuously validate generated content. Our system measures the similarity between generated information and established facts, ensuring that outputs remain grounded in reality throughout the generation process.
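As a toy sketch of such a similarity check (the `embed` function below is a deliberately crude stand-in for the platform's embedding backend, which this post does not describe), a generated claim is accepted only if it lies close enough to an established fact:

```python
# Sketch of a similarity-based grounding check; `embed` is a toy
# character-frequency stand-in for a real embedding model.
import math

def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord('a')] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

FACT_STORE = ["circulant matrices are diagonalized by the Fourier matrix"]

def grounded(claim: str, threshold: float = 0.8) -> bool:
    # Accept a generated claim only if it is close to an established fact.
    return max(cosine(embed(claim), embed(f)) for f in FACT_STORE) >= threshold

print(grounded("the Fourier matrix diagonalizes circulant matrices"))
```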
Our implementation includes a vector-store memory system for retrieving semantically relevant context, knowledge graph modules that encode defined relationships, and hybrid solvers that can verify mathematical or logical identities. This combination enables the system to "think out loud," making its reasoning process transparent and anchoring its outputs in verifiable facts rather than probabilistic guesses. Building on top of this solid foundation, we developed our research services.

The Extensity Research Services Platform: Putting Theory Into Practice
Building on the SymbolicAI framework and Symbia Engine, our Extensity Research Services (ERS) SaaS Platform provides researchers with an intuitive interface to leverage our neurosymbolic technology stack. The platform addresses four key research challenges: ensuring content generation with consistency, relevance, quality, and distinction at scale; validating results based on trusted sources; gaining insights from data through research and analytics; and exploring results more effectively through research automation.
To demonstrate the capabilities of our platform, we undertook a challenge in primality testing: creating a scientific paper from scratch, from the ideation phase and related-work exploration, through experiment design and analysis of results, to summarizing all of it in a consistent scientific manuscript. The result is a deterministic test based on circulant matrix eigenvalue structures. Despite primality testing not being my personal area of expertise (my background is in AI research with a focus on reinforcement learning, domain adaptation, and large language models), our platform enabled remarkable progress in just 24 hours of experimentation.

Concept Exploration and Development through Computational Nodes
In our primality testing experiment, we implemented a computational workflow where different functional nodes operated in sequence and in parallel, exchanging information through our research platform. The conceptual exploration began with a research question and an ideation phase, followed by deeper literature ingestion, defined as nodes of operations (symbols and expressions) that processed and analyzed existing research on cyclotomic field theory and circulant matrices. These nodes identified relevant mathematical patterns and theoretical frameworks, creating a knowledge foundation for subsequent operations.
The identified concepts then flowed into hypothesis generation nodes that proposed potential relationships between matrix properties and primality testing. These nodes leveraged both neural components for creative connections and symbolic components for logical consistency, generating testable mathematical hypotheses about how eigenvalue structures might indicate primality in complex spaces.
As promising ideas emerged, verification nodes applied formal mathematical checks to ensure correctness, invoking specialized solvers when necessary to validate complex algebraic relationships. These nodes enforced the rigor essential to mathematical proof, confirming that the hypothesized relationship between circulant matrix properties and primality was mathematically sound.
Implementation nodes then transformed theoretical insights into executable code, incorporating best practices in numerical computing and algorithm design. These nodes addressed practical challenges like numerical stability and computational complexity, ensuring the theoretical insights would work reliably across different input ranges.
Throughout this process, evaluation nodes continuously assessed results by benchmarking against established primality tests, providing feedback to refine both the theoretical model and its implementation. This created an iterative research cycle where insights from each stage informed improvements to the others, accelerating convergence toward a novel and effective solution.
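To picture the orchestration just described, here is a hypothetical, runnable sketch of the node workflow; the node names, the shared state dictionary, and the purely sequential wiring are illustrative simplifications, not the platform's actual scheduler.

```python
# Hypothetical sketch of the node-based workflow described above; the
# nodes and their wiring are illustrative, not the actual scheduler.
from typing import Callable

Node = Callable[[dict], dict]

def literature_node(state):      # ingest and distill prior work
    state["concepts"] = ["cyclotomic fields", "circulant matrices"]
    return state

def hypothesis_node(state):      # propose testable relationships
    state["hypothesis"] = "minimal-polynomial factor count signals primality"
    return state

def verification_node(state):    # formal checks via symbolic solvers
    state["verified"] = True     # placeholder for a solver call
    return state

def implementation_node(state):  # turn the insight into code
    state["code"] = "def circulant_prime_test(n): ..."
    return state

def evaluation_node(state):      # benchmark and feed results back
    state["benchmarked"] = True
    return state

PIPELINE: list[Node] = [literature_node, hypothesis_node, verification_node,
                        implementation_node, evaluation_node]

state: dict = {"question": "Can circulant spectra detect primality?"}
for node in PIPELINE:    # nodes run in sequence here; the real platform
    state = node(state)  # can also fan them out in parallel
print(state)
```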

This multi-node computational workflow discovered a novel insight (at least for me): an integer n > 2 is prime if and only if the minimal polynomial of a specific circulant matrix (defined as W_n + W_n²) has exactly two irreducible factors over the rational field. This mathematical discovery connects cyclotomic fields with matrix algebra in a way that enables effective primality testing.
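Since the post does not spell out the construction of W_n, the sketch below assumes it denotes the n x n cyclic-shift (fundamental circulant) matrix; under that assumption, the stated criterion can be checked empirically with SymPy for small n. This is an independent sanity check, not the paper's implementation.

```python
# Empirical check of the stated criterion, assuming W_n is the n x n
# cyclic-shift (fundamental circulant) matrix.
import sympy as sp

def minpoly_factor_count(n: int) -> int:
    W = sp.Matrix(n, n, lambda i, j: 1 if (i - j) % n == 1 else 0)
    M = W + W**2
    x = sp.symbols('x')
    p = M.charpoly(x).as_expr()
    # Circulant matrices are normal, hence diagonalizable, so the minimal
    # polynomial is the squarefree part of the characteristic polynomial.
    m = sp.quo(p, sp.gcd(p, sp.diff(p, x)), x)
    return len(sp.factor_list(m, x)[1])

for n in range(3, 16):
    is_prime_by_criterion = (minpoly_factor_count(n) == 2)
    assert is_prime_by_criterion == sp.isprime(n), n
    print(n, "prime" if is_prime_by_criterion else "composite")
```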
Implementation and Validation Through Automated Workflows
The theoretical insight was immediately translated into executable code via dedicated workflows in our ERS Platform. The platform generated Python implementations for computing eigenvalues of circulant matrices, developed routines for analyzing minimal polynomial factorization patterns, created benchmarking tools to compare against established methods like Miller-Rabin and AKS, and generated visualizations that illuminate the theoretical connections.
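For context on these baselines, here is a textbook deterministic Miller-Rabin variant, valid for 64-bit integers with the fixed witness set below; it is shown as the kind of reference implementation such benchmarking compares against, not the code the platform generated.

```python
# Textbook deterministic Miller-Rabin for 64-bit inputs; a standard
# baseline, not the platform-generated benchmarking code.
def miller_rabin(n: int) -> bool:
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small_primes:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    # This witness set is known to be sufficient for all n < 2**64.
    for a in small_primes:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

print([n for n in range(2, 60) if miller_rabin(n)])
```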
The entire process—from initial concept exploration to working implementation and paper draft—was accomplished through a unified pipeline orchestrated by our technology stack, demonstrating how the ERS Platform accelerates research from concept to publication. A preview of the drafted paper is shown below, along with a reference to the generated code base for running and evaluating performance, plots, and more (see the link to the paper at the end of the post and the GitHub link to the repository).

Current Limitations and the Importance of Human Oversight
While our Extensity Research Services Platform represents a significant advancement in research automation, it is essential to acknowledge the inherent limitations of current AI systems and the critical role that human researchers continue to play. Our technology stack is designed not to replace scientists but to serve as a powerful accelerator for human-led research initiatives.
Technical Constraints of Current AI Models
Despite impressive capabilities, today's AI models face several fundamental challenges when applied to scientific research:
Reasoning Limitations: Current generative models and architectures do not truly "think" in the human sense but operate on statistical patterns recognized in training data. When tasked with complex logical reasoning, especially in mathematical contexts, they may propose solutions that superficially appear correct but contain subtle flaws in reasoning. Even with symbolic verification methods, semantic assessments, such as the initial set of assumptions or the contextual validity of an applied method, cannot be rigorously checked. This was particularly evident in our primality testing experiments, where proposed optimizations occasionally violated mathematical principles that weren't explicitly stated as constraints, or bypassed constraints due to semantic ambiguity.
Implementation Inconsistencies: We observed that when generating code implementations, the system would sometimes introduce optimization "hacks" that prioritized computational efficiency over algorithmic clarity or theoretical soundness. For example, when benchmarking the circulant matrix algorithm, we had to explicitly instruct the system to focus on measuring raw algorithmic performance rather than introducing caching mechanisms that would distort the comparison with other primality tests. Furthermore, code duplication can occur, since the system is not holistically aware of the existing code base.
Stylistic and Formatting Issues: The platform occasionally produces documents with inconsistent styling, particularly when rendering complex mathematical expressions or code blocks within the same document. These formatting anomalies, such as overlapping text in images resulting from generated code segments, require human review and correction to ensure clarity and professionalism in final outputs. However, we did not make any manual edits; we only used AI instructions to guide the operations.
Domain Knowledge Boundaries: While expansive, the knowledge encoded in our models has definite boundaries, particularly regarding cutting-edge research published after training cutoff dates or highly specialized subfields with limited literature. The system may occasionally present confident assertions about topics where its knowledge is incomplete or outdated. Domain experts are therefore required to assess the validity and completeness of results.
The Essential Human-in-the-Loop Paradigm
These limitations underscore why our approach emphasizes human-AI collaboration rather than full automation:
Verification and Validation: Human experts remain indispensable for evaluating the correctness of AI-generated research outputs. In our primality testing case study, fun as it was to explore, mathematical experts are required to review all theoretical connections proposed by the system before these results are published.
Contextual Understanding: Human researchers provide critical contextual judgment about which research directions are most promising or meaningful within the broader scientific landscape—context that AI systems cannot fully internalize through training or search alone.
Creative Direction: While AI can explore vast spaces of possibilities, human researchers still provide the creative vision and purpose that guides these explorations toward meaningful scientific contributions rather than technically interesting but practically limited directions.
Ethical and Societal Considerations: Humans must continue to evaluate the ethical implications and potential applications of research breakthroughs, aspects that extend beyond the computational capabilities of our current systems.
Embracing Complementary Strengths
The most effective research paradigm leverages the complementary strengths of human and artificial intelligence:
AI excels at rapid exploration of large solution spaces, pattern recognition across vast literature, and connecting concepts from different domains.
Human researchers excel at intuitive leaps, contextual judgment, rigorous verification, and determining the broader significance of findings.
Our Extensity Research Services Platform is designed specifically for this collaborative paradigm, with interfaces and workflows that maximize synergy between human insight and computational power. By acknowledging current limitations while building systems that augment human capabilities, we create an environment where scientific progress can occur at unprecedented speeds without sacrificing rigor or creativity.
A Foundation for Continuous Improvement
It's important to recognize that today's limitations represent the current state of technology—not permanent barriers. The challenges we've identified are precisely the areas where ongoing research and development efforts are focused:
Advances in reasoning capabilities through improved symbolic integration
Better preservation of logical consistency in complex workflows
Enhanced formatting and stylistic coherence in document and graphical generation
Expanding domain knowledge through specialized training and knowledge graph integration
Each iteration of our platform incorporates these improvements, progressively enhancing the system's ability to serve as a reliable and powerful research partner. The limitations we experience today will steadily diminish as neurosymbolic approaches continue to evolve.
The Future of Research Automation with Extensity Research Services
The implications of our platform extend far beyond primality testing. What we've demonstrated is a new paradigm for research acceleration that has the potential to transform scientific discovery across disciplines.
Traditional research workflows are fragmented across multiple tools and manual processes, creating friction points and opportunities for error. Our ERS Platform offers several transformative advantages:
Reproducibility and Transparency
Every step in our research process is logged with its inputs, outputs, and verification results. This creates an auditable trail that makes the entire workflow transparent and reproducible—essential qualities for scientific advancement.
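As a minimal sketch of what such step-level logging can look like (the record schema here is hypothetical, not the platform's actual audit format):

```python
# Hypothetical step-level audit trail; the record schema is illustrative.
import json
import time

AUDIT_LOG: list[dict] = []

def logged_step(name: str, fn, inputs: dict) -> dict:
    # Run a workflow step and record its inputs and outputs for replay.
    outputs = fn(inputs)
    AUDIT_LOG.append({
        "step": name,
        "timestamp": time.time(),
        "inputs": inputs,
        "outputs": outputs,
    })
    return outputs

result = logged_step("verify_hypothesis",
                     lambda d: {"n": d["n"], "verified": True},
                     {"n": 7})
print(json.dumps(AUDIT_LOG, indent=2))
```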
Accessibility and Collaboration
The platform enables domain experts to collaborate with AI systems without requiring deep cross-disciplinary training. A cryptographer need not be an expert in neural networks to leverage the system, just as an AI researcher need not be an expert in number theory to make contributions to the field. Moreover, expertise in any domain—whether one's own or an adjacent field—develops gradually over time through consistent engagement, without the steep learning curves that might otherwise deter collaboration. This natural progression of knowledge acquisition allows specialists to expand their capabilities incrementally while remaining focused on their core strengths.
Accelerated Discovery
Perhaps most significantly, the ERS Platform dramatically compresses the time required for exploration and experimentation. Hypotheses that might take weeks or months to explore manually can be investigated in hours or days, allowing researchers to iterate more rapidly and explore more possibilities.

Conclusion: Transforming Research with Extensity Research Services
The primality testing case study demonstrates how our complete technology stack—from the SymbolicAI framework to the Symbia Engine to the Extensity Research Services Platform—is not merely augmenting human capabilities but fundamentally transforming research methodologies. By combining the strengths of neural networks and symbolic reasoning within a coherent framework and delivering it through an intuitive platform, we're enabling breakthroughs that would be difficult to achieve through either approach alone.
As we launch the Extensity Research Services Platform, we envision research environments that continue to lower barriers to scientific discovery while maintaining rigorous standards of correctness and reproducibility. The combination of neural generative capabilities with symbolic verification creates systems that are both creative and trustworthy—a balance that has been difficult to achieve with traditional approaches.
The next revolution in research automation isn't about replacing human researchers—it's about amplifying their capabilities and expanding the frontiers of what's possible.

Marius-Constantin Dinu is the CEO and co-founder of ExtensityAI, leading the development of the SymbolicAI framework, Symbia Engine, and Extensity Research Services Platform for research automation and neurosymbolic AI applications.