Business Client needs AI Software Development
Contact person: Business Client
Location: Stockport, United Kingdom
Budget: Recommended by industry experts
Time to start: As soon as possible
Project description:
"Aeonic Labs is building a new class of infrastructure for governing autonomous AI agents.
We’re looking for a senior full-stack AI partner (individual or small team) to deliver an MVP that includes:
AgentGuard – a [login to view URL] dashboard for monitoring and controlling agents
T-LAIOR – a core LLM-based orchestrator and trust layer
Agent Builder – a backend function / framework to define, configure and run agents under governance
This is not a simple front-end job or a one-off LLM integration. We want someone who can think across:
UX ([login to view URL], dashboards)
Backend APIs & data model
LLM/SLM architecture (hosted + open models, RAG, tools)
Agent lifecycle (creation, configuration, monitoring, intervention)
We are open to a long-term relationship if the MVP goes well.
MVP Goals
By the end of this engagement, we want:
A working AgentGuard web app where we can:
See a list of agents, their status and risk score
Drill into an agent detail page with timelines and trust evaluations
Intervene: block, quarantine, retrain, reset tools/memory
A functioning T-LAIOR orchestration layer which:
Routes agent actions through a trust layer
Calls one or more SLMs (specialist evaluators) to assess risk, compliance, alignment
Returns structured trust results (e.g. ALLOW / WARN / BLOCK with explanation)
An initial Agent Builder that lets us:
Define a new agent (name, purpose, model, tools, guardrails)
Persist agent definitions
Run an agent task and stream telemetry + trust evaluations into AgentGuard
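The structured trust result named in the goals above could be sketched as follows. The decision vocabulary (ALLOW / WARN / BLOCK) comes from the brief; the field names and score range are illustrative assumptions, not a fixed spec:

```python
from dataclasses import dataclass, asdict

# Sketch of the structured trust result T-LAIOR returns per agent action.
# Decision values follow the brief; other fields are assumptions.
@dataclass
class TrustResult:
    agent_id: str
    action: str
    decision: str        # "ALLOW" | "WARN" | "BLOCK"
    risk_score: float    # assumed 0.0 (safe) .. 1.0 (high risk)
    explanation: str     # short human-readable rationale

result = TrustResult(
    agent_id="agent-001",
    action="send_email",
    decision="WARN",
    risk_score=0.42,
    explanation="Recipient domain is outside the allow-list.",
)
print(asdict(result)["decision"])  # WARN
```

Keeping the result a flat, serialisable record makes it easy to log, to render as a trust card in AgentGuard, and to extend with new evaluator fields later.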
High-Level Architecture (Target)
Front End – AgentGuard
Tech: [login to view URL] (app router), React, TypeScript, Tailwind (or similar), charting library
Key views:
Dashboard (all agents, alerts, aggregate risk)
Agent detail (status, actions, SLM trust cards, intervention panel)
Incidents / audit log
Guardrail / policy configuration
Backend – API Layer
REST/JSON or GraphQL API to support the UI:
/agents, /agents/{id}, /agents/{id}/actions
/trust-events, /incidents
/policies, /guardrails
Can be implemented with:
Node/TypeScript ([login to view URL] API routes, NestJS, or Express) or
Python (FastAPI) – as long as it’s clean and documented.
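The resource routes listed above can be sketched with a plain dispatcher over an in-memory store; in the real build this would be FastAPI or Node API routes backed by a database. Record fields and sample data are assumptions:

```python
# In-memory sketch of the REST resources above (stand-in for FastAPI/Express).
AGENTS = {
    "agent-001": {"id": "agent-001", "name": "invoice-bot",
                  "status": "active", "risk_score": 0.1},
}
TRUST_EVENTS: list[dict] = []

def get(path: str):
    """Resolve a GET path against the in-memory store."""
    if path == "/agents":
        return list(AGENTS.values())          # GET /agents
    if path.startswith("/agents/"):
        return AGENTS.get(path.split("/")[2]) # GET /agents/{id}
    if path == "/trust-events":
        return TRUST_EVENTS                   # GET /trust-events
    return None

print(get("/agents/agent-001")["name"])  # invoice-bot
```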
Orchestrator – T-LAIOR Core
A service that:
Receives agent intents / actions
Calls an LLM for planning/tool usage under guardrails
Calls SLMs to evaluate safety/compliance/fit-to-policy
Produces a final decision + explanation (ALLOW / REQUIRE_REVIEW / BLOCK)
We are open to:
Using hosted models (e.g. Vertex AI / Gemini or OpenAI) for the core LLM
Using RAG and/or small open-weight models for SLM evaluators
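The orchestrator flow above (receive an action, consult evaluators, emit a single decision) might reduce to a "most severe verdict wins" aggregation. The two evaluator functions below are placeholders standing in for real SLM calls; their rules are invented for illustration:

```python
# Sketch of the T-LAIOR decision flow: fan an action out to SLM evaluators,
# then reduce their verdicts to one decision (most severe wins).
def policy_slm(action: dict) -> str:
    # Placeholder rule: block destructive operations outright.
    return "BLOCK" if action.get("tool") == "delete_database" else "ALLOW"

def compliance_slm(action: dict) -> str:
    # Placeholder rule: escalate anything touching restricted data.
    return "REQUIRE_REVIEW" if action.get("data_class") == "restricted" else "ALLOW"

SEVERITY = {"ALLOW": 0, "REQUIRE_REVIEW": 1, "BLOCK": 2}

def decide(action: dict) -> str:
    """Final decision = most severe evaluator verdict."""
    verdicts = [policy_slm(action), compliance_slm(action)]
    return max(verdicts, key=SEVERITY.__getitem__)

print(decide({"tool": "send_email", "data_class": "restricted"}))  # REQUIRE_REVIEW
```

The aggregation policy itself (strictest-wins vs. weighted scoring) is a design decision worth agreeing in Milestone 1.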
SLM Layer – Specialist Evaluators
MVP can start with 1–2 SLMs, for example:
Policy / safety evaluator SLM – checks if an action violates high-level guardrails
Compliance SLM (placeholder) – checks if a proposed action touches “restricted” data/operations
These can be implemented as:
“Virtual SLMs” – strict prompts + RAG on a shared hosted LLM
Optionally a tiny open model in a container for deterministic classification
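A "virtual SLM" as described above is essentially a strict prompt on a shared hosted LLM that is forced to emit a machine-parseable verdict. A minimal sketch, where the prompt wording and the `call_llm` stub are assumptions (a real client for OpenAI / Vertex AI would replace the stub):

```python
import json

# Sketch of a "virtual SLM": strict prompt + JSON-only output contract.
EVALUATOR_PROMPT = """You are a policy evaluator. Given the agent action below,
respond with ONLY a JSON object:
{{"decision": "ALLOW" | "WARN" | "BLOCK", "score": <0.0-1.0>, "explanation": "<one sentence>"}}

Guardrails:
{guardrails}

Action:
{action}"""

def call_llm(prompt: str) -> str:
    # Stub standing in for a hosted-model call; returns a canned verdict.
    return '{"decision": "WARN", "score": 0.4, "explanation": "Action contacts an external domain."}'

def evaluate(action: str, guardrails: str) -> dict:
    raw = call_llm(EVALUATOR_PROMPT.format(guardrails=guardrails, action=action))
    return json.loads(raw)  # fail loudly if the model breaks the format

verdict = evaluate("send_email to unknown domain", "No external comms without review")
print(verdict["decision"])  # WARN
```

Parsing the output as strict JSON (rather than scraping free text) keeps virtual SLMs swappable for a containerised open model later.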
Agent Builder & Runtime
A simple agent definition model (in DB):
id, name, description, model, tools, guardrails, state
An agent runner that:
Takes a task + agent definition
Uses a framework (e.g. LangChain / LlamaIndex / custom lightweight orchestration)
Calls T-LAIOR for trust checks before executing tool actions
Emits telemetry (actions, tool calls, trust results) to be stored and visualised in AgentGuard
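The agent definition fields listed above and the runner's check-before-execute loop could be sketched like this. `trust_check` stands in for a call to T-LAIOR, and its only-allow-declared-tools rule is an invented placeholder:

```python
from dataclasses import dataclass

# Agent definition record, mirroring the fields listed above.
@dataclass
class AgentDefinition:
    id: str
    name: str
    description: str
    model: str
    tools: list[str]
    guardrails: list[str]
    state: str = "active"

def trust_check(agent: AgentDefinition, tool: str) -> str:
    # Stand-in for a T-LAIOR call; placeholder rule: only declared tools pass.
    return "ALLOW" if tool in agent.tools else "BLOCK"

def run_task(agent: AgentDefinition, planned_tools: list[str]) -> list[dict]:
    """Run planned tool calls, emitting telemetry for each trust check."""
    telemetry = []
    for tool in planned_tools:
        decision = trust_check(agent, tool)
        telemetry.append({"agent": agent.id, "tool": tool, "decision": decision})
        if decision == "BLOCK":
            break  # halt the agent on a blocked action
    return telemetry

agent = AgentDefinition("a1", "demo", "test agent", "gemini-pro",
                        tools=["search"], guardrails=["no external writes"])
print(run_task(agent, ["search", "delete_file"]))
```

The telemetry records emitted here are exactly what AgentGuard would ingest and visualise.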
Deliverables
AgentGuard Front-End ([login to view URL])
Dashboard, Agent list, Agent detail, Incidents/Logs, Settings/Guardrails
API integration with backend (mock first, then real)
Clear UX for:
Viewing risk and trust events
Triggering interventions (block, quarantine, retrain, reset, reactivate)
Backend API
Endpoints to:
List agents, get agent detail
Run an agent task
Trigger interventions
Fetch trust events and incident logs
Clean, documented code + OpenAPI/Swagger or similar if possible
T-LAIOR Orchestrator (MVP)
Service that:
Receives an agent action / plan
Calls 1 LLM (for reasoning) + 1–2 SLM evaluators
Returns structured trust results (JSON) and logs them
SLM Evaluator(s)
At least one working SLM:
Input: agent action context
Output: decision (ALLOW/WARN/BLOCK) + score + short explanation
Implementation can use hosted LLM or small model + RAG
Agent Builder / Runner
Minimal UI or API to define an agent
Ability to run a test task for an agent and see:
Planned actions
Trust checks
Final outcome
Data & Telemetry Storage
Basic schema for:
Agents
Actions / events
Trust evaluations
Incidents
Can use Postgres, Firestore, or similar – open to your recommendation
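The four entities above could start from a schema like the following, shown here via SQLite for a self-contained sketch (Postgres or Firestore would be the production choice, as noted). Column names are illustrative assumptions:

```python
import sqlite3

# Minimal sketch of the telemetry schema: agents, actions, trust
# evaluations, incidents. Column names are assumptions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE agents (
    id TEXT PRIMARY KEY, name TEXT, model TEXT, state TEXT
);
CREATE TABLE actions (
    id INTEGER PRIMARY KEY, agent_id TEXT REFERENCES agents(id),
    tool TEXT, created_at TEXT
);
CREATE TABLE trust_evaluations (
    id INTEGER PRIMARY KEY, action_id INTEGER REFERENCES actions(id),
    decision TEXT, risk_score REAL, explanation TEXT
);
CREATE TABLE incidents (
    id INTEGER PRIMARY KEY, agent_id TEXT REFERENCES agents(id),
    severity TEXT, summary TEXT
);
""")

conn.execute("INSERT INTO agents VALUES ('a1', 'demo', 'gemini-pro', 'active')")
print(conn.execute("SELECT name FROM agents").fetchone()[0])  # demo
```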
Documentation
How to run everything locally (and in a basic cloud setup)
Architectural overview
Extensibility notes (how to add more SLMs, more agents, etc.)
Suggested Timeline & Milestones
Total: 8–10 weeks (flexible if you justify an alternative)
Milestone 1 – Architecture & Design (1–2 weeks)
Finalise stack & architecture
API contracts & data models
Low-fidelity UI wireframes
Milestone 2 – Front-End Skeleton & Mock Integration (2 weeks)
[login to view URL] app scaffolded
Core pages: Dashboard + Agent Detail + Incidents
UI uses mock APIs / fake JSON
Milestone 3 – Backend APIs & Basic Agent Runner (2–3 weeks)
Backend service with live endpoints
Basic Agent definition + simple agent runner
Telemetry stored, visible in AgentGuard
Milestone 4 – T-LAIOR Orchestrator & SLM Evaluator (2–3 weeks)
LLM + SLM evaluator integration
Trust decisions flowing into telemetry & UI
Support for block/quarantine actions
Milestone 5 – Polish, Hardening & Handover (1–2 weeks)
UX refinements, bug fixes
Basic auth wiring (if in scope)
Final documentation & handover
Ideal Partner Profile
We’re looking for someone who has actually shipped AI-powered products, not just toy demos.
Must-have skills:
[login to view URL] / React / TypeScript
API design & implementation (Node or Python)
Working with LLMs (OpenAI, Vertex AI, or similar)
Experience with at least one agent or orchestration framework (LangChain, LlamaIndex, CrewAI, etc.)
Solid understanding of security, logging, and basic DevOps
Nice-to-have:
Experience with governance, risk, security, or compliance
GCP / Vertex AI
Experience building dashboards / control planes
What to Include in Your Bid
Short intro: who you are / your team
2–3 relevant projects (links or PDFs)
Your proposed tech stack (front + back + LLM side)
Your estimated timeline & cost for the MVP
Whether you’re open to ongoing work after the MVP" (client-provided description)