{
  "title_es": "Arquitecturas Resilientes — Ontología de Referencias Reales",
  "subtitle_es": "Entidades, estándares, frameworks y misiones reales con URLs verificadas, sintetizados en 11 facetas y conectados por relaciones explícitas.",
  "generated_date": "2026-04-28",
  "total_articles_curated": 131,
  "facets": [
    {
      "facet_id": "sat-constellation",
      "facet_label_es": "Constelaciones satelitales",
      "intro_es": "Esta faceta cubre las arquitecturas de constelaciones satelitales centradas en resiliencia: enlaces inter-satélite (ISL) ópticos y de radio, enrutamiento dinámico sobre topologías time-varying (CGR, on-demand LISL), payload regenerativo con procesado a bordo, autonomía distribuida y patrones militares de P-LEO. La arquitectura de constelación es el sustrato de resiliencia porque define malla, redundancia, particionamiento, latencia y la capacidad de degradarse grácilmente sin depender de todo el segmento terreno. Aquí se cruzan misiones operativas (Iridium NEXT, Starlink, Telesat Lightspeed), programas de referencia (DARPA Blackjack/Pit Boss, ION-DTN/PACE) y el cuerpo académico que formaliza CU-DU split, formation flying y enrutamiento bajo OISL dinámicos.",
      "subthemes": [
        {
          "id": "constellation-arch",
          "label_es": "Arquitecturas de constelación, payload regenerativo y autonomía militar"
        },
        {
          "id": "mesh-and-routing",
          "label_es": "Malla ISL, routing DTN y control distribuido"
        }
      ],
      "entities": [
        {
          "id": "iridium-next",
          "name": "Iridium NEXT",
          "type_es": "Constelación",
          "subtheme": "constellation-arch",
          "year": "n/d",
          "authority": "ESA eoPortal / Iridium Communications",
          "url": "https://www.eoportal.org/satellite-missions/iridium-next",
          "url_label": "eoPortal",
          "description_es": "Constelación LEO comercial de 66 satélites con 4 ISL en banda Ka (23 GHz, 2 intra-plano + 2 inter-plano) y procesador regenerativo a bordo (OBP) que conmuta dinámicamente cualquiera de las 252 portadoras L-band entre los 4 transpondedores ISL y 13 feeders. Es la única malla LEO operativa de larga vida con enrutamiento real entre satélites y soporta hosted payloads (Aireon ADS-B) entregados a tierra mediante una red terrestre MPLS de teleports.",
          "tags": [
            "Iridium-NEXT",
            "Ka-band-ISL",
            "mesh",
            "OBP",
            "hosted-payload",
            "MPLS",
            "Aireon",
            "ADS-B"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "telesat-lightspeed",
          "name": "Telesat Lightspeed",
          "type_es": "Constelación",
          "subtheme": "constellation-arch",
          "year": "n/d",
          "authority": "ESA eoPortal / Telesat",
          "url": "https://www.eoportal.org/satellite-missions/telesat-lightspeed",
          "url_label": "eoPortal",
          "description_es": "Constelación LEO de 298 satélites a 1.300 km en planos polares e inclinados con 4 OISL de 10 Gbps por satélite (Thales Alenia Space) que forman una malla óptica global con payload completamente regenerativo. Su mayor altitud reduce handovers terrestres y permite arrancar operaciones con una única estación terrena de anclaje (landing station), asumiendo 5-10 ms extra por salto óptico.",
          "tags": [
            "Telesat-Lightspeed",
            "OISL",
            "regenerative-payload",
            "polar-inclined",
            "single-landing-station",
            "Thales-Alenia"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "starlink-laser-isl",
          "name": "Laser Intersatellite Links in a Starlink Constellation",
          "type_es": "Paper",
          "subtheme": "mesh-and-routing",
          "year": 2021,
          "authority": "Inigo del Portillo, Bruce Cameron, Edward Crawley (MIT) — IEEE VTM",
          "url": "https://ieeexplore.ieee.org/document/9393372/",
          "url_label": "IEEE VTM",
          "description_es": "Trabajo seminal del MIT que clasifica los LISL de Starlink en tres clases topológicas: intra-plano (estables), inter-plano vecinos (estables) y crossing entre planos no-vecinos (temporales pero críticos para rutas de mínima latencia). Establece la distinción permanent-vs-temporary LISL y la base teórica del baseline Grid-Mesh+ con shortcuts oportunistas.",
          "tags": [
            "Starlink",
            "LISL",
            "laser-ISL",
            "intra-plane",
            "inter-plane",
            "crossing-LISL",
            "mega-constellation",
            "topology"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "on-demand-routing-leo",
          "name": "On-Demand Routing in LEO Mega-Constellations With Dynamic LISL",
          "type_es": "Paper",
          "subtheme": "mesh-and-routing",
          "year": 2024,
          "authority": "Bhattacharjee, Madoery et al. — IEEE TAES",
          "url": "https://arxiv.org/html/2406.01953v1",
          "url_label": "arXiv",
          "description_es": "Formaliza el enrutamiento bajo LISL dinámicos donde el setup-delay entra en la función de coste y propone tres heurísticas: ILPR (Dijkstra persistente), ALPR (latencia media con rutas disjuntas) e ISASR (estabilidad-actividad con filtrado por umbral). Validado sobre Starlink Phase I v2 (1.584 satélites, 24 planos orbitales, 550 km, rango LISL 1.500 km).",
          "tags": [
            "Starlink",
            "LISL",
            "on-demand-routing",
            "ILPR",
            "ALPR",
            "ISASR",
            "Dijkstra",
            "mega-constellation"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "darpa-blackjack-pit-boss",
          "name": "DARPA Blackjack & Pit Boss",
          "type_es": "Misión",
          "subtheme": "constellation-arch",
          "year": 2022,
          "authority": "ESA eoPortal / DARPA (Raytheon, Northrop Grumman, SEAKR, SSCI)",
          "url": "https://www.eoportal.org/ftp/satellite-missions/b/Blackjack_010222/Blackjack.html",
          "url_label": "eoPortal",
          "description_es": "Programa DARPA de referencia para Proliferated LEO militar (~20 small sats en órbitas de 310-808 mi con OISL) cuyo elemento de autonomía Pit Boss integra avionics box + edge processor con software de IA, criptografía empotrada y computación distribuida en cientos a miles de nodos. Define el patrón operator-on-the-loop con tasking, procesado y diseminación autónomos durante más de 24 horas.",
          "tags": [
            "DARPA",
            "Blackjack",
            "Pit-Boss",
            "P-LEO",
            "autonomy",
            "edge-compute",
            "OISL",
            "military",
            "SEAKR",
            "Raytheon",
            "SSCI"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dsin-survey",
          "name": "Distributed Satellite Information Networks (DSIN) Survey",
          "type_es": "Estudio",
          "subtheme": "mesh-and-routing",
          "year": 2024,
          "authority": "Survey arXiv 2412.12587",
          "url": "https://arxiv.org/html/2412.12587v1",
          "url_label": "arXiv",
          "description_es": "Survey que formaliza la transición de plataformas independientes a Cohesive Clustered Satellites (CCS), describe tres patrones de payload (transparente, regenerativo, CU-DU split heredado de O-RAN/3GPP TS 38.401), ocho opciones de functional split y cuatro estrategias de Reconfigurable Satellite Formation Flying. Cubre habilitadores como phased-array sincronizado, distributed MIMO, estimación de canal con LSTM y protocolos erasure-transfer.",
          "tags": [
            "DSIN",
            "CCS",
            "CU-DU-split",
            "formation-flying",
            "ISL",
            "distributed-MIMO",
            "federated-control"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cgr-ion-dtn",
          "name": "Contact Graph Routing (CGR) en NASA ION-DTN",
          "type_es": "Estándar",
          "subtheme": "mesh-and-routing",
          "year": 2012,
          "authority": "Scott Burleigh (NASA JPL) y equipo ION-DTN",
          "url": "https://ntrs.nasa.gov/citations/20120006508",
          "url_label": "NASA NTRS",
          "description_es": "Algoritmo de enrutamiento dinámico por bundle sobre topologías de contactos planificados. Usa earliest-arrival-time + Dijkstra y soporta multicast como árbol EAT; implementado en ION-DTN, el stack open-source de NASA/JPL, en operación continua en la ISS y base de la LunaNet Interoperability Specification (LNIS). PACE (2024) fue la primera misión NASA Class-B con telemetría operacional sobre DTN.",
          "tags": [
            "DTN",
            "CGR",
            "ION",
            "NASA",
            "JPL",
            "BundleProtocol-7",
            "Dijkstra",
            "scheduled-contact",
            "deep-space",
            "ISS"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "telesat-lightspeed",
          "type": "compite-con",
          "to": "iridium-next"
        },
        {
          "from": "on-demand-routing-leo",
          "type": "depende-de",
          "to": "starlink-laser-isl"
        },
        {
          "from": "darpa-blackjack-pit-boss",
          "type": "usa",
          "to": "starlink-laser-isl"
        },
        {
          "from": "telesat-lightspeed",
          "type": "ejemplo-de",
          "to": "dsin-survey"
        },
        {
          "from": "iridium-next",
          "type": "ejemplo-de",
          "to": "dsin-survey"
        },
        {
          "from": "darpa-blackjack-pit-boss",
          "type": "implementa",
          "to": "dsin-survey"
        },
        {
          "from": "telesat-lightspeed",
          "type": "evoluciona-de",
          "to": "iridium-next"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "solar-orbiter-safe-mode",
          "from_entity": "iridium-next",
          "rationale": "El OBP regenerativo de Iridium NEXT requiere FDIR a bordo para conmutar portadoras L-band/ISL/feeder ante fallos sin perder cobertura mesh."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "ccsds-sdls",
          "from_entity": "darpa-blackjack-pit-boss",
          "rationale": "Pit Boss integra criptografía y ciberseguridad como parte del avionics box; referencia clave para patrones de seguridad embebida en P-LEO militar."
        },
        {
          "to_facet": "edge-swarms",
          "to_entity": "nasa-starling-dsa",
          "from_entity": "darpa-blackjack-pit-boss",
          "rationale": "Operator-on-the-loop con tasking y diseminación distribuidos en cientos-miles de nodos es el patrón canónico de enjambre edge espacial autónomo."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "fedhap-fedisl-fedspace",
          "from_entity": "dsin-survey",
          "rationale": "DSIN cita estimación de canal basada en LSTM, vínculo directo con ML aplicado a operaciones de constelación."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "telesat-lightspeed",
          "rationale": "Lightspeed (4 OISL/sat, payload regenerativo, single-landing-station bootstrap) es el blueprint comercial para una plantilla de stack OISL-first orientada a resiliencia."
        }
      ],
      "super_category_id": "space-resilience"
    },
    {
      "facet_id": "sat-fault-tolerance",
      "facet_label_es": "Tolerancia a fallos satelital y FDIR",
      "intro_es": "FDIR (Detección, Aislamiento y Recuperación de Fallos) es la disciplina que mantiene operativos satélites y constelaciones frente a fallos de hardware, errores inducidos por radiación (SEU), anomalías de software y comandos adversarios. Combina mecanismos físicos (TMR en FPGA, lockstep, redundancia caliente) con lógica autónoma a bordo (modos seguros jerárquicos, contadores de persistencia, monitores de telemetría) y, cada vez más, con consenso distribuido entre satélites mediante enlaces ópticos. Esta faceta cataloga el patrón canónico de tres etapas Detect/Identify/Recover de ESA, frameworks abiertos (cFS, F´), misiones de referencia (Solar Orbiter, Iridium NEXT) y la frontera bizantina aplicada a control orbital.",
      "subthemes": [
        {
          "id": "fdir-onboard",
          "label_es": "FDIR a bordo y modos seguros jerárquicos"
        },
        {
          "id": "hw-redundancy",
          "label_es": "Redundancia hardware (TMR, FPGA radiation-hardened)"
        },
        {
          "id": "flight-sw-frameworks",
          "label_es": "Frameworks de software de vuelo para gestión de fallos"
        },
        {
          "id": "constellation-resilience",
          "label_es": "Resiliencia a nivel de constelación (mesh, repuestos, re-enrutamiento)"
        },
        {
          "id": "byzantine-distributed",
          "label_es": "Consenso bizantino y autonomía distribuida"
        }
      ],
      "entities": [
        {
          "id": "esa-fdir-three-level",
          "name": "ESA FDIR (SMART-FDIR / AFDIR)",
          "type_es": "Documentación",
          "subtheme": "fdir-onboard",
          "year": 2003,
          "authority": "ESA Software Engineering and Standardisation; Alenia Spazio; Astrium",
          "url": "https://www.esa.int/TEC/Software_engineering_and_standardisation/TEC4WBUXBQE_0.html",
          "url_label": "ESA TEC — FDIR",
          "description_es": "Referencia europea canónica que formaliza FDIR en tres niveles funcionales (Detección mediante series temporales y Razonamiento Inductivo Difuso; Identificación basada en modelos y Lógica Posibilista; Recuperación por reconfiguración lógica). Los estudios SMART-FDIR (validado contra GOCE) y AFDIR exploraron filtrado de Kalman, redes Bayesianas y tests de verosimilitud generalizada para FDIR a bordo basada en IA.",
          "tags": [
            "fdir",
            "esa",
            "ai",
            "bayesian-networks",
            "smart-fdir",
            "afdir"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "solar-orbiter-safe-mode",
          "name": "Solar Orbiter Hierarchical Safe Mode (SASM/WSM/NCM)",
          "type_es": "Misión",
          "subtheme": "fdir-onboard",
          "year": 2021,
          "authority": "ESA/NASA Solar Orbiter team; A&A",
          "url": "https://www.aanda.org/articles/aa/full_html/2021/02/aa38519-20/aa38519-20.html",
          "url_label": "A&A — Solar Orbiter mission design",
          "description_es": "Implementación operacional de FDIR jerárquica de cinco niveles con tres modos seguros: SASM (supervivencia con propulsores y sensor solar fino + IMU), WSM (apuntado solar con ruedas para conservar combustible) y NCM (nominal con star tracker). Incluye una last-chance configuration con todo equipo no esencial deshabilitado y restricción térmica de off-pointing máximo 6,5°.",
          "tags": [
            "esa",
            "solar-orbiter",
            "fdir",
            "safe-mode",
            "sasm",
            "wsm",
            "hierarchical-fdir"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "safe-mode-spacecraft",
          "name": "Safe mode in spacecraft (Hubble, MRO, Opportunity)",
          "type_es": "Patrón",
          "subtheme": "fdir-onboard",
          "year": 2026,
          "authority": "Wikipedia contributors",
          "url": "https://en.wikipedia.org/wiki/Safe_mode_in_spacecraft",
          "url_label": "Wikipedia — Safe mode",
          "description_es": "Patrón universal de operación degradada en naves no tripuladas: se desactivan los subsistemas no esenciales y solo permanecen activas gestión térmica, recepción de radio y control de actitud, priorizando la recuperación de orientación estable. Casos documentados: Hubble (giroscopio, oct 2018), New Horizons (2007/2015), Mars Reconnaissance Orbiter (8 incidentes) y Opportunity (tormenta de polvo, 2018).",
          "tags": [
            "safe-mode",
            "hubble",
            "opportunity",
            "mars-reconnaissance-orbiter",
            "attitude-control"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "tmr-fpga",
          "name": "Triple Modular Redundancy (TMR) en FPGAs radiation-tolerant",
          "type_es": "Patrón",
          "subtheme": "hw-redundancy",
          "year": 2026,
          "authority": "Wikipedia / Microchip / VORAGO / BYU; QML Class V",
          "url": "https://en.wikipedia.org/wiki/Triple_modular_redundancy",
          "url_label": "Wikipedia — Triple modular redundancy",
          "description_es": "Patrón base de tolerancia a SEU: triplica lógica crítica y vota por mayoría. Se distingue TMR duro (flip-flops con TMR integrado en FPGAs radiation-hardened-by-design, QML Class V como Microchip RTG4/RT PolarFire, Xilinx Versal AI Edge XQR, NanoXplore NG-MEDIUM/LARGE) y TMR suave (tres soft-cores idénticos con votadores), usado para up-screening de COTS.",
          "tags": [
            "tmr",
            "fpga",
            "seu",
            "voter",
            "radiation-hardened",
            "qml-class-v"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "fprime-fault-protection",
          "name": "F´ Fault Protection (Discussion #2536)",
          "type_es": "Framework",
          "subtheme": "flight-sw-frameworks",
          "year": 2024,
          "authority": "NASA JPL F´ contributors",
          "url": "https://github.com/nasa/fprime/discussions/2536",
          "url_label": "GitHub — fprime #2536",
          "description_es": "Diseño de protección de fallos del framework F´ (JPL, usado en Mars Helicopter Ingenuity y ASTERIA). Define tres puertos estándar — FaultAnnounce, FaultRespond y FaultResponseComp — con fault monitors que aplican contadores de persistencia para de-bouncing. Una tabla de mapeo conecta anuncios con respuestas usando lógica booleana AND/OR.",
          "tags": [
            "fprime",
            "fdir",
            "jpl",
            "fault-protection",
            "ports",
            "persistence-counts"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mega-constellation-resilience",
          "name": "Resilience of Mega-Satellite Constellations (arXiv 2509.06766)",
          "type_es": "Paper",
          "subtheme": "constellation-resilience",
          "year": 2025,
          "authority": "Guo, Xiong, Zhang, Li, Niyato, Yuen, Han",
          "url": "https://arxiv.org/abs/2509.06766",
          "url_label": "arXiv 2509.06766",
          "description_es": "Análisis cuantitativo de cómo los fallos de nodo degradan progresivamente la conectividad ISL en mega-constelaciones tipo Starlink/Kuiper. Demuestra que la topología orbital dinámica restaura parcialmente la conectividad, pero solo la integración con protocolos explícitos de re-enrutamiento desbloquea el potencial pleno de resiliencia bajo fallos a gran escala.",
          "tags": [
            "arxiv",
            "leo",
            "mega-constellation",
            "isl",
            "node-failure",
            "rerouting",
            "starlink",
            "kuiper"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pbft-oisl-thrust",
          "name": "PBFT-OISL Byzantine-Resilient Thrust Consensus",
          "type_es": "Paper",
          "subtheme": "byzantine-distributed",
          "year": 2026,
          "authority": "ScienceDirect S2590123026011485",
          "url": "https://www.sciencedirect.com/science/article/pii/S2590123026011485",
          "url_label": "ScienceDirect — PBFT-OISL",
          "description_es": "Primer uso práctico documentado de PBFT (Practical Byzantine Fault Tolerance) sobre enlaces ópticos inter-satélite (OISL) para control distribuido de empuje en constelaciones RGT-Walker. Tolera hasta 30% de nodos bizantinos (límite teórico f<n/3) con consenso >95%, errores de huella en tierra <10 m, 73% de ahorro de combustible vs control centralizado y escalabilidad a 100+ satélites.",
          "tags": [
            "byzantine",
            "pbft",
            "consensus",
            "oisl",
            "rgt-walker",
            "distributed-control",
            "thrust"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "solar-orbiter-safe-mode",
          "type": "implementa",
          "to": "esa-fdir-three-level"
        },
        {
          "from": "safe-mode-spacecraft",
          "type": "ejemplo-de",
          "to": "esa-fdir-three-level"
        },
        {
          "from": "tmr-fpga",
          "type": "depende-de",
          "to": "safe-mode-spacecraft"
        },
        {
          "from": "pbft-oisl-thrust",
          "type": "compite-con",
          "to": "tmr-fpga"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-constellation",
          "to_entity": "starlink-laser-isl",
          "from_entity": "mega-constellation-resilience",
          "rationale": "El estudio modela fallos de nodo y re-enrutamiento ISL en mega-constelaciones tipo Starlink/Kuiper."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "jpl-fprime",
          "from_entity": "fprime-fault-protection",
          "rationale": "La especificación de Fault Protection en F´ es el contrato FDIR del framework de software de vuelo de JPL."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "telemanom-jpl",
          "from_entity": "esa-fdir-three-level",
          "rationale": "SMART-FDIR/AFDIR introdujo redes Bayesianas y razonamiento difuso, base de la FDIR a bordo basada en ML."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "viasat-ka-sat",
          "from_entity": "pbft-oisl-thrust",
          "rationale": "PBFT sobre OISL impide que un satélite secuestrado o defectuoso desvíe la constelación."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "nygard-stability-patterns",
          "from_entity": "pbft-oisl-thrust",
          "rationale": "PBFT-OISL traslada al dominio orbital los principios de consenso bizantino de los sistemas distribuidos terrestres."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "nasa-cfs",
          "from_entity": "fprime-fault-protection",
          "rationale": "La especificación de Fault Protection de F´ complementa la gestión de fallos de cFS: ambos frameworks de vuelo de NASA ofrecen patrones FDIR reutilizables sobre arquitecturas de software de vuelo modulares."
        },
        {
          "to_facet": "sat-constellation",
          "to_entity": "iridium-next",
          "from_entity": "mega-constellation-resilience",
          "rationale": "El análisis de resiliencia de mega-constelaciones evoluciona a partir de la malla ISL operativa de Iridium NEXT, primer precedente de enrutamiento entre satélites a escala."
        },
        {
          "to_facet": "sat-constellation",
          "to_entity": "darpa-blackjack-pit-boss",
          "from_entity": "pbft-oisl-thrust",
          "rationale": "El consenso PBFT sobre OISL evoluciona el patrón de autonomía distribuida en cientos de nodos que DARPA Blackjack/Pit Boss estableció para P-LEO militar."
        },
        {
          "to_facet": "sat-constellation",
          "to_entity": "starlink-laser-isl",
          "from_entity": "pbft-oisl-thrust",
          "rationale": "PBFT-over-OISL paper depende del trabajo seminal Starlink LISL."
        }
      ],
      "super_category_id": "space-resilience"
    },
    {
      "facet_id": "space-grade-sw",
      "facet_label_es": "Software grado espacial",
      "intro_es": "El software grado espacial agrupa los frameworks de vuelo, estándares de proceso, RTOS y arquitecturas de referencia que dan determinismo, trazabilidad y tolerancia a fallos al cómputo embebido en satélites y sondas. Stacks como cFS de la NASA y F´ del JPL ofrecen capas reutilizables (cFE/OSAL/PSP en cFS; componentes-puertos-topologías en F´) que han volado en decenas de misiones desde LRO hasta Ingenuity. Sobre ellos, normas como ECSS-E-ST-40C y arquitecturas de referencia como SAVOIR/OSRA fijan ciclo de vida, criticalidad A/B/C/D y partición temporal/espacial.",
      "subthemes": [
        {
          "id": "frameworks-vuelo",
          "label_es": "Frameworks de software de vuelo"
        },
        {
          "id": "standards-and-reference-arch",
          "label_es": "Estándares de proceso y arquitecturas de referencia"
        },
        {
          "id": "rtos-and-deterministic-networks",
          "label_es": "RTOS, plataformas de ejecución y redes deterministas"
        }
      ],
      "entities": [
        {
          "id": "nasa-cfs",
          "name": "NASA core Flight System (cFS)",
          "type_es": "Framework",
          "subtheme": "frameworks-vuelo",
          "year": 2026,
          "authority": "NASA Goddard Space Flight Center",
          "url": "https://github.com/nasa/cFS",
          "url_label": "Repositorio cFS en GitHub",
          "description_es": "Framework de software de vuelo de facto de la NASA, estructurado en tres capas (cFE, OSAL, PSP) con un Software Bus publish/subscribe. La versión 7.0.0 'Draco' (enero 2026) se libera bajo Apache 2.0 y soporta RTEMS, VxWorks, Linux y POSIX; ha volado en LRO, GPM, Roman y más de 40 misiones.",
          "tags": [
            "cFS",
            "cFE",
            "OSAL",
            "PSP",
            "Apache-2.0",
            "publish-subscribe"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "jpl-fprime",
          "name": "JPL F´ (F Prime)",
          "type_es": "Framework",
          "subtheme": "frameworks-vuelo",
          "year": 2024,
          "authority": "NASA JPL",
          "url": "https://fprime.jpl.nasa.gov/overview/",
          "url_label": "Documentación oficial F´",
          "description_es": "Framework de software de vuelo del JPL basado en componentes, puertos tipados y topologías auto-codificadas con el lenguaje FPP. Voló en Ingenuity, CADRE Rover, Lunar Flashlight, NEA Scout, ASTERIA e ISS-RapidScat; orientado a CubeSats e instrumentos, soporta ejecución incluso sin RTOS bajo licencia Apache 2.0.",
          "tags": [
            "F-Prime",
            "FPP",
            "componentes",
            "autocoding",
            "Ingenuity",
            "Apache-2.0"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ecss-e-st-40c",
          "name": "ECSS-E-ST-40C Rev.1 Software Engineering",
          "type_es": "Estándar",
          "subtheme": "standards-and-reference-arch",
          "year": 2025,
          "authority": "European Cooperation for Space Standardization (ECSS) / ESA",
          "url": "https://ecss.nl/standard/ecss-e-st-40c-rev-1-software-30-april-2025/",
          "url_label": "Estándar ECSS-E-ST-40C Rev.1",
          "description_es": "Estándar europeo de ingeniería de software espacial que cubre todo el ciclo de vida (requisitos, diseño, V&V, operación) para 'product software' en segmento espacial, lanzador y tierra. Define hitos SRR/PDR/CDR/QR/AR/ORR y las cuatro categorías de criticalidad A/B/C/D que rigen la profundidad de análisis y verificación.",
          "tags": [
            "ECSS",
            "ciclo-de-vida",
            "criticalidad",
            "V&V",
            "tailoring"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "rtems-rtos",
          "name": "RTEMS Real-Time Executive for Multiprocessor Systems",
          "type_es": "Plataforma",
          "subtheme": "rtos-and-deterministic-networks",
          "year": 2025,
          "authority": "OAR Corporation / RTEMS Project",
          "url": "https://en.wikipedia.org/wiki/RTEMS",
          "url_label": "RTEMS — referencia técnica",
          "description_es": "RTOS abierto con interfaz POSIX, soporte para SPARC LEON/ERC32, PowerPC, ARM, RISC-V y x86; versión 6.2 publicada en diciembre de 2025. Es la base de cFS y F´ en arquitecturas LEON rad-hard y opera a bordo de Mars Reconnaissance Orbiter, Trace Gas Orbiter, Parker Solar Probe, BepiColombo y JUICE.",
          "tags": [
            "RTEMS",
            "RTOS",
            "POSIX",
            "LEON",
            "SPARC",
            "RISC-V"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "savoir-osra",
          "name": "SAVOIR / OSRA — On-board Software Reference Architecture",
          "type_es": "Especificación",
          "subtheme": "standards-and-reference-arch",
          "year": 2024,
          "authority": "SAVOIR Advisory Group / ESA ESTEC",
          "url": "https://savoir.estec.esa.int/SAVOIROutput.htm",
          "url_label": "SAVOIR Output (ESA ESTEC)",
          "description_es": "Iniciativa ESA-industria europea que define una arquitectura de referencia de aviónica con building blocks (OBC, RTU/DC) e interfaces estándar; su entregable software es OSRA, un meta-modelo de componentes con DSL textual sobre una Execution Platform (RTOS+BSP+SOIS+PUS) con partición temporal y espacial estilo ARINC-653.",
          "tags": [
            "SAVOIR",
            "OSRA",
            "ESA",
            "OBC",
            "TSP",
            "ARINC-653",
            "PUS-C"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ttethernet-as6802",
          "name": "TTEthernet (SAE AS6802)",
          "type_es": "Estándar",
          "subtheme": "rtos-and-deterministic-networks",
          "year": 2025,
          "authority": "SAE International / TTTech / NASA",
          "url": "https://en.wikipedia.org/wiki/TTEthernet",
          "url_label": "TTEthernet — referencia técnica",
          "description_es": "Ethernet determinista que combina tráfico time-triggered, rate-constrained y best-effort sobre un mismo cable, con sincronización de reloj tolerante a doble fallo. Es la red elegida para Orion, el European Service Module y la columna vertebral del Lunar Gateway; el ataque académico PCspooF demostró que su determinismo exige endurecimiento criptográfico.",
          "tags": [
            "TTEthernet",
            "AS6802",
            "time-triggered",
            "Orion",
            "Lunar-Gateway",
            "PCspooF"
          ],
          "reliability": "MEDIUM"
        }
      ],
      "relationships": [
        {
          "from": "nasa-cfs",
          "type": "usa",
          "to": "rtems-rtos"
        },
        {
          "from": "jpl-fprime",
          "type": "usa",
          "to": "rtems-rtos"
        },
        {
          "from": "savoir-osra",
          "type": "usa",
          "to": "rtems-rtos"
        },
        {
          "from": "savoir-osra",
          "type": "implementa",
          "to": "ecss-e-st-40c"
        },
        {
          "from": "nasa-cfs",
          "type": "compite-con",
          "to": "savoir-osra"
        },
        {
          "from": "jpl-fprime",
          "type": "complementa",
          "to": "nasa-cfs"
        },
        {
          "from": "ecss-e-st-40c",
          "type": "estandariza",
          "to": "jpl-fprime"
        },
        {
          "from": "ttethernet-as6802",
          "type": "complementa",
          "to": "savoir-osra"
        },
        {
          "from": "savoir-osra",
          "type": "define",
          "to": "rtems-rtos"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "fprime-fault-protection",
          "from_entity": "ttethernet-as6802",
          "rationale": "La sincronización de reloj con tolerancia a doble fallo y la redundancia triple de TTEthernet son patrones canónicos de tolerancia a fallos en aviónica tripulada."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "ccsds-sdls",
          "from_entity": "ttethernet-as6802",
          "rationale": "El ataque PCspoF (USENIX 2023) sobre Orion expone que el determinismo temporal requiere autenticación de tramas y filtrado en switches."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "loft-aws-unibap",
          "from_entity": "nasa-cfs",
          "rationale": "La actualización informal cFS 2.0 anunciada por NASA incorpora AI/ML, autonomía y robótica en órbita como apps sobre el Software Bus."
        },
        {
          "to_facet": "sat-constellation",
          "to_entity": "darpa-blackjack-pit-boss",
          "from_entity": "jpl-fprime",
          "rationale": "F´ es el framework de referencia para CubeSats e instrumentos pequeños y habilita constelaciones académicas/comerciales con autocodificación FPP."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "savoir-osra",
          "rationale": "El stack europeo 'RTEMS + LEON + PUS-C + OSRA' es una plantilla de arquitectura completa formalmente especificada por ESA."
        }
      ],
      "super_category_id": "space-resilience"
    },
    {
      "facet_id": "ml-sat-ops",
      "facet_label_es": "ML en operaciones satelitales",
      "intro_es": "Esta faceta agrupa la evidencia operativa y de investigación sobre el uso de aprendizaje automático en el ciclo de vida de misiones satelitales: inferencia a bordo en VPUs de bajo consumo, detección de anomalías en telemetría, planificación autónoma continua y aprendizaje federado entre constelaciones LEO. Combina misiones validadas en órbita (EO-1, Phi-Sat-2, Starling, YAM-5) con frameworks de referencia (Telemanom, ASPEN/CASPER) y líneas de investigación activas como FedHAP/FedISL/FedSpace. El patrón emergente: el cómputo a bordo es ya más barato que la bajada de píxeles, pero radiación, potencia y certificabilidad marcan el techo de lo desplegable.",
      "subthemes": [
        {
          "id": "onboard-inference-and-mlops",
          "label_es": "Inferencia a bordo y MLOps orbital"
        },
        {
          "id": "anomaly-detection-and-fl",
          "label_es": "Detección de anomalías y federated learning LEO"
        },
        {
          "id": "planificacion-autonoma",
          "label_es": "Planificación autónoma y autonomía distribuida"
        }
      ],
      "entities": [
        {
          "id": "nasa-dsa-starling",
          "name": "NASA Distributed Spacecraft Autonomy (DSA) sobre Starling",
          "type_es": "Misión",
          "subtheme": "planificacion-autonoma",
          "year": 2024,
          "authority": "NASA Ames Research Center",
          "url": "https://www.nasa.gov/centers-and-facilities/ames/what-is-nasas-distributed-spacecraft-autonomy/",
          "url_label": "NASA Ames — DSA / Starling",
          "description_es": "Primera demostración en órbita de operación autónoma distribuida sobre cuatro CubeSats Starling (lanzados en julio de 2023), con asignación de tareas por consenso, comunicación espacio-espacio y mantenimiento autónomo. Usa PLEXIL como ejecutor de planes y combina árboles de decisión y modelos matemáticos en lugar de redes profundas end-to-end.",
          "tags": [
            "distributed-autonomy",
            "swarm",
            "PLEXIL",
            "starling",
            "NASA-Ames"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "telemanom-jpl",
          "name": "Telemanom (NASA JPL)",
          "type_es": "Framework",
          "subtheme": "anomaly-detection-and-fl",
          "year": 2018,
          "authority": "NASA JPL — Hundman et al. (KDD 2018)",
          "url": "https://github.com/khundman/telemanom",
          "url_label": "GitHub khundman/telemanom",
          "description_es": "Framework de referencia para detección de anomalías en telemetría espacial basado en LSTMs por canal con umbralizado dinámico no paramétrico y no supervisado. Su mayor aporte es el dataset etiquetado SMAP/MSL (105 secuencias de anomalías reales en 82 canales), benchmark estándar del dominio con precision 87.5% y recall 80%.",
          "tags": [
            "LSTM",
            "anomaly-detection",
            "SMAP",
            "MSL",
            "benchmark"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "phi-sat-2",
          "name": "ESA Phi-Sat-2",
          "type_es": "Misión",
          "subtheme": "onboard-inference-and-mlops",
          "year": 2024,
          "authority": "European Space Agency (ESA)",
          "url": "https://www.esa.int/Applications/Observing_the_Earth/Phsat-2/New_satellite_demonstrates_the_power_of_AI_for_Earth_observation",
          "url_label": "ESA — Phi-Sat-2",
          "description_es": "CubeSat 6U de ESA (lanzado el 16 de agosto de 2024 a 510 km) que demuestra una plataforma multi-tenant de aplicaciones AI a bordo: detección de nubes, embarcaciones, incendios, anomalías marinas y compresión por IA, con apps de KP Labs, CGI, CEiiA, GEO-K, IRT Saint Exupéry y Thales Alenia Space. Dos apps fueron subidas y activadas tras el lanzamiento, validando despliegue continuo en órbita sobre hardware Ubotica/Movidius.",
          "tags": [
            "phi-sat-2",
            "ESA",
            "on-board-AI",
            "Movidius",
            "Ubotica",
            "EO"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "loft-aws-unibap",
          "name": "Loft Orbital + AWS + Unibap (YAM-5 / iX5-100)",
          "type_es": "Plataforma",
          "subtheme": "onboard-inference-and-mlops",
          "year": 2022,
          "authority": "AWS Public Sector + Loft Orbital + Unibap",
          "url": "https://aws.amazon.com/blogs/publicsector/aws-successfully-runs-aws-compute-machine-learning-services-orbiting-satellite-first-space-experiment/",
          "url_label": "AWS Blog — primera demo de cómputo y ML de AWS en órbita",
          "description_es": "Primera demostración comercial del stack AWS (IoT Greengrass + modelos compilados con SageMaker Neo) ejecutándose en órbita LEO sobre YAM-5/D-Orbit ION SCV-4 con el Unibap SpaceCloud iX5-100 (CPU x86-64, GPU AMD Radeon, FPGA SmartFusion2 y VPU Movidius Myriad X). Establece el patrón comercial 'condosat' multi-tenant donde clientes alquilan cómputo y sensores en la flota YAM.",
          "tags": [
            "edge-compute",
            "AWS-Greengrass",
            "Loft-Orbital",
            "Unibap",
            "Myriad-X",
            "condosat"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "fedhap-fedisl-fedspace",
          "name": "FedISL / FedHAP / FedSpace",
          "type_es": "Paper",
          "subtheme": "anomaly-detection-and-fl",
          "year": 2022,
          "authority": "Elmahallawy & Luo (Missouri S&T), Razmi et al., So et al.",
          "url": "https://arxiv.org/abs/2205.07216",
          "url_label": "arXiv 2205.07216 — FedHAP",
          "description_es": "Tres enfoques académicos para federated learning sobre constelaciones LEO: FedISL (servidor MEO sobre el ecuador, asunción poco realista), FedHAP (plataformas de gran altitud HAPS como agregadores jerárquicos, reduce el entrenamiento de días a horas) y FedSpace (FL semi-asíncrono que sube datos brutos parciales a tierra, comprometiendo privacidad). Sin despliegues operativos conocidos a 2026.",
          "tags": [
            "federated-learning",
            "LEO",
            "HAPS",
            "non-IID",
            "intermittent"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aspen-casper-eo1",
          "name": "ASPEN/CASPER + ASE sobre EO-1",
          "type_es": "Framework",
          "subtheme": "planificacion-autonoma",
          "year": 2003,
          "authority": "NASA JPL Artificial Intelligence Group — Chien, Sherwood et al.",
          "url": "https://ai.jpl.nasa.gov/public/projects/casper/",
          "url_label": "JPL AI Group — CASPER",
          "description_es": "ASPEN (modelador y resolutor) y CASPER (bucle continuo de iterative repair) son los frameworks canónicos de planificación autónoma de JPL. Operaron dentro del Autonomous Sciencecraft Experiment (ASE) sobre EO-1 desde 2003 hasta el fin de la misión en 2017, integrando clasificadores SVM sobre imágenes Hyperion para detectar volcanes, hielo e inundaciones — la prueba operativa de autonomía a bordo más larga de la historia.",
          "tags": [
            "ASPEN",
            "CASPER",
            "EO-1",
            "iterative-repair",
            "PLEXIL",
            "JPL"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "aspen-casper-eo1",
          "type": "comparte-lenguaje-con",
          "to": "nasa-dsa-starling"
        },
        {
          "from": "phi-sat-2",
          "type": "comparte-hardware-con",
          "to": "loft-aws-unibap"
        },
        {
          "from": "telemanom-jpl",
          "type": "complementa",
          "to": "aspen-casper-eo1"
        },
        {
          "from": "fedhap-fedisl-fedspace",
          "type": "se-despliega-sobre",
          "to": "loft-aws-unibap"
        },
        {
          "from": "nasa-dsa-starling",
          "type": "evoluciona-de",
          "to": "aspen-casper-eo1"
        },
        {
          "from": "phi-sat-2",
          "type": "valida-mlops-para",
          "to": "loft-aws-unibap"
        },
        {
          "from": "telemanom-jpl",
          "type": "se-ejecuta-a-bordo-en",
          "to": "phi-sat-2"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "edge-swarms",
          "to_entity": "nasa-starling-dsa",
          "from_entity": "nasa-dsa-starling",
          "rationale": "DSA/Starling es el caso de referencia validado en órbita del patrón de enjambre autónomo con coordinación por consenso."
        },
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "esa-fdir-three-level",
          "from_entity": "telemanom-jpl",
          "rationale": "Telemanom aporta el benchmark canónico SMAP/MSL y el patrón LSTM+umbralizado dinámico que la FDIR usa como detector temprano."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "nasa-cfs",
          "from_entity": "aspen-casper-eo1",
          "rationale": "ASPEN/CASPER consolida la separación deliberativo/ejecutivo y promueve PLEXIL como capa de ejecución determinista."
        },
        {
          "to_facet": "sat-constellation",
          "to_entity": "iridium-next",
          "from_entity": "fedhap-fedisl-fedspace",
          "rationale": "FedHAP/FedISL/FedSpace formalizan cómo entrenar modelos a través de una constelación LEO sin centralizar datos."
        },
        {
          "to_facet": "llmops",
          "to_entity": "agentic-rag",
          "from_entity": "aspen-casper-eo1",
          "rationale": "El esqueleto deliberativo/ejecutivo/funcional de ASPEN/CASPER+PLEXIL es el ancestro directo del patrón plan-execute-replán que LLMOps replica."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "loft-aws-unibap",
          "rationale": "El stack Greengrass + SageMaker Neo + sandbox multi-tenant sobre Myriad X define una plantilla edge-MLOps reutilizable."
        },
        {
          "to_facet": "edge-swarms",
          "to_entity": "swarmraft",
          "from_entity": "nasa-dsa-starling",
          "rationale": "NASA DSA Starling y SwarmRaft comparten arquitectura de consenso para autonomía distribuida."
        }
      ],
      "super_category_id": "space-resilience"
    },
    {
      "facet_id": "space-cybersec",
      "facet_label_es": "Ciberseguridad espacial",
      "intro_es": "La ciberseguridad espacial integra marcos de modelado de amenazas, estándares criptográficos de enlace, ciberseguridad de cadena de suministro y modelos de arquitectura de confianza cero adaptados al dominio orbital. Tras los incidentes Viasat KA-SAT (2022) y Dozor-Teleport (2023), el sector ha consolidado SPARTA como vocabulario de amenazas, NIST IR 8270 como guía de controles para satélites comerciales, y CCSDS SDLS como estándar L2 para autenticación y cifrado en TM/TC/USLP. La transición post-cuántica (FIPS 203/204/205) y la cadena de suministro (SBOM/FBOM/HBOM en Golden Dome) imponen decisiones arquitectónicas en momento de lanzamiento.",
      "subthemes": [
        {
          "id": "threats-and-incidents",
          "label_es": "Matrices de amenazas e incidentes de referencia"
        },
        {
          "id": "cryptography-link-and-pqc",
          "label_es": "Criptografía de enlace y post-cuántica"
        },
        {
          "id": "supply-chain-and-zero-trust",
          "label_es": "Cadena de suministro espacial y Zero Trust"
        }
      ],
      "entities": [
        {
          "id": "sparta-v3",
          "name": "SPARTA Framework v3.1/v3.2",
          "type_es": "Matriz-de-amenazas",
          "subtheme": "threats-and-incidents",
          "year": 2026,
          "authority": "The Aerospace Corporation",
          "url": "https://sparta.aerospace.org/",
          "url_label": "SPARTA Aerospace",
          "description_es": "Marco análogo a MITRE ATT&CK específico para sistemas espaciales con 9 categorías tácticas y ~85 técnicas, incluyendo PNT jamming/spoofing, payloads alojados y ataques cinéticos. La v3.1 (2025) incorpora mapeos a NIST/CNSSI 1253, crosswalk a MITRE EMB3D y dos técnicas nuevas (IA-0013 Compromise Host SV, DE-0012 Component Collusion); la v3.2 lanzó el 11 de marzo de 2026.",
          "tags": [
            "TTP",
            "ATT&CK",
            "Aerospace",
            "EMB3D",
            "STIX",
            "NIST"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ccsds-sdls",
          "name": "CCSDS SDLS & Extended Procedures + NASA CryptoLib",
          "type_es": "Estándar",
          "subtheme": "cryptography-link-and-pqc",
          "year": 2022,
          "authority": "CCSDS / NASA",
          "url": "https://ccsds.org/Pubs/355x0b2.pdf",
          "url_label": "CCSDS 355.0-B-2 Blue Book",
          "description_es": "Estándar Blue Book CCSDS 355.0-B-2 (jul 2022) que define seguridad L2 (autenticación, cifrado, AE) sobre Transfer Frames TM/TC/AOS/USLP, complementado por SDLS-EP (CCSDS 355.1-B-1, feb 2019) para KMS, SAMS y OTAR. Implementación de referencia: NASA CryptoLib v1.5.0 (ene 2026), C/Apache-2.0, integrada con cFS y validada por NASA IV&V e JPL AMMOS.",
          "tags": [
            "CCSDS",
            "SDLS",
            "USLP",
            "AES-GCM",
            "OTAR",
            "CryptoLib",
            "cFS"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nist-ir-8270",
          "name": "NIST IR 8270 — Cybersecurity for Commercial Satellite Operations",
          "type_es": "Documentación",
          "subtheme": "supply-chain-and-zero-trust",
          "year": 2023,
          "authority": "NIST",
          "url": "https://csrc.nist.gov/pubs/ir/8270/final",
          "url_label": "NIST IR 8270 Final",
          "description_es": "Primera guía formal de aplicación del NIST CSF al sector de satélites comerciales (jul 2023), con perfil concreto sobre small sensing satellite y mapeos a SP 800-53. Forma parte del ecosistema NIST espacial junto a IR 8323 (PNT), IR 8401 (ground segment) y SP 800-216 (vulnerability disclosure); referenciado por SPD-5 y CISA.",
          "tags": [
            "CSF",
            "SP 800-53",
            "SPD-5",
            "compliance",
            "small-sat"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pqc-space",
          "name": "Post-Quantum Cryptography for Space — FIPS 203/204/205 + HQC",
          "type_es": "Estándar",
          "subtheme": "cryptography-link-and-pqc",
          "year": 2024,
          "authority": "NIST",
          "url": "https://csrc.nist.gov/projects/post-quantum-cryptography",
          "url_label": "NIST PQC Project",
          "description_es": "NIST finalizó en agosto de 2024 FIPS 203 (ML-KEM/Kyber), FIPS 204 (ML-DSA/Dilithium) y FIPS 205 (SLH-DSA/SPHINCS+); HQC añadido en marzo de 2025. ML-KEM/ML-DSA muestran 0.31 μJ/op a 150 MHz en FPGAs space-grade; el roadmap NIST IR 8547 fija transición 2025-2040, exigiendo crypto-agility e hibridización para misiones de larga duración.",
          "tags": [
            "PQC",
            "ML-KEM",
            "ML-DSA",
            "SLH-DSA",
            "HQC",
            "FIPS 203",
            "HNDL"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "supply-chain-bom",
          "name": "Satellite Supply Chain Security — SBOM/FBOM/HBOM e ITAR",
          "type_es": "Framework",
          "subtheme": "supply-chain-and-zero-trust",
          "year": 2025,
          "authority": "DoD / CISA / Aerospace Corp",
          "url": "https://eclypsium.com/blog/sbom-federal-requirements-guidelines/",
          "url_label": "Eclypsium SBOM/FBOM",
          "description_es": "El programa DoD Golden Dome for America (2025) eleva SBOM, FBOM y HBOM a requisito contractual para defensa antimisiles, mientras SPARTA v3.1 formaliza la técnica DE-0012 Component Collusion. ITAR/EAR siguen condicionando flujos transfronterizos y crean tensión con la transparencia pública del SBOM, exigiendo modelos por niveles y CBOM para auditar la migración PQC.",
          "tags": [
            "SBOM",
            "FBOM",
            "HBOM",
            "CBOM",
            "ITAR",
            "Golden Dome",
            "supply-chain"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "viasat-ka-sat",
          "name": "Viasat KA-SAT Attack (24-feb-2022) — AcidRain Wiper",
          "type_es": "Estudio",
          "subtheme": "threats-and-incidents",
          "year": 2022,
          "authority": "EU/Five Eyes attribution; CCDCOE; SentinelLabs",
          "url": "https://en.wikipedia.org/wiki/Viasat_hack",
          "url_label": "Viasat hack — Wikipedia",
          "description_es": "El ataque GRU del 24-feb-2022 explotó CVE-2018-13379 en una VPN Fortinet sin parchear gestionada por Skylogic, pivotó al servidor de actualizaciones y desplegó el wiper AcidRain (ELF MIPS 32) que sobrescribió la flash de los módems Surfbeam2 en KA-SAT. Bricking de decenas de miles de módems en Ucrania con daño colateral en 5.800 turbinas Enercon; caso canónico de fallo arquitectónico ground-segment con firmware sin autenticación criptográfica fuerte.",
          "tags": [
            "AcidRain",
            "GRU",
            "Sandworm",
            "Surfbeam2",
            "Fortinet",
            "VPN",
            "ground-segment"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "zero-trust-space",
          "name": "Zero Trust Architecture for Space Systems — CISA 2024",
          "type_es": "Framework",
          "subtheme": "supply-chain-and-zero-trust",
          "year": 2024,
          "authority": "CISA",
          "url": "https://www.cisa.gov/sites/default/files/2024-06/Space%20Systems%20Security%20and%20Resilience%20Landscape%20-%20Zero%20Trust%20in%20the%20Space%20Environment%20(508).pdf",
          "url_label": "CISA Zero Trust in Space (jun 2024)",
          "description_es": "Informe CISA de junio 2024 que alinea el sector espacial con el mandato OMB de ZTA federal, exigiendo autenticación y mínimo privilegio en cada componente —satélite, estación terrena o panel cloud— y atestación mutua estándar. Identifica tres tecnologías emergentes (homomorphic encryption, DLT, QKD) y se complementa con investigación 2025 sobre ZT en ISL basada en Hyperelliptic Curve Cryptography (HECC) y señales orbitales.",
          "tags": [
            "zero-trust",
            "CISA",
            "ISL",
            "HECC",
            "FHE",
            "DLT",
            "OMB"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "viasat-ka-sat",
          "type": "demuestra-fallo-mitigado-por",
          "to": "ccsds-sdls"
        },
        {
          "from": "viasat-ka-sat",
          "type": "motiva-adopcion-de",
          "to": "zero-trust-space"
        },
        {
          "from": "sparta-v3",
          "type": "mapea-controles-a",
          "to": "nist-ir-8270"
        },
        {
          "from": "sparta-v3",
          "type": "incluye-tecnica",
          "to": "supply-chain-bom"
        },
        {
          "from": "ccsds-sdls",
          "type": "requiere-migracion-a",
          "to": "pqc-space"
        },
        {
          "from": "zero-trust-space",
          "type": "ancla-identidad-en",
          "to": "pqc-space"
        },
        {
          "from": "nist-ir-8270",
          "type": "complementa-modelado-con",
          "to": "sparta-v3"
        },
        {
          "from": "supply-chain-bom",
          "type": "exige-CBOM-para",
          "to": "pqc-space"
        },
        {
          "from": "viasat-ka-sat",
          "type": "ejemplifica-tactica-de",
          "to": "sparta-v3"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-constellation",
          "to_entity": "starlink-laser-isl",
          "from_entity": "ccsds-sdls",
          "rationale": "Las megaconstelaciones LEO comerciales (Starlink, OneWeb) usan crypto propietaria sin alineación pública con SDLS; tensión entre estandarización abierta y stacks cerrados."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "nasa-cfs",
          "from_entity": "ccsds-sdls",
          "rationale": "NASA CryptoLib es la implementación de referencia de SDLS-EP integrada con cFS, vinculando crypto de enlace con la pila de flight software open-source."
        },
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "solar-orbiter-safe-mode",
          "from_entity": "zero-trust-space",
          "rationale": "Las ventanas de pase LEO de 5-12 min y la latencia de auth en ZT pueden mistriggear FDIR; arquitecturas requieren SAs precomputadas."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "nygard-stability-patterns",
          "from_entity": "viasat-ka-sat",
          "rationale": "El daño colateral a 5.800 turbinas Enercon expuso interdependencia crítica y motivó la inclusión del sector espacial como alta criticidad en NIS2."
        },
        {
          "to_facet": "research-frontier",
          "to_entity": "fips-203-204-205",
          "from_entity": "zero-trust-space",
          "rationale": "Investigación 2025 propone protocolos ZT para ISL con Hyperelliptic Curve Cryptography y señales orbitales como identidad auxiliar."
        },
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "tmr-fpga",
          "from_entity": "pqc-space",
          "rationale": "PQC en FPGAs space-grade; ambos en hardware crítico."
        }
      ],
      "super_category_id": "space-resilience"
    },
    {
      "facet_id": "cross-domain-resilience",
      "facet_label_es": "Resiliencia interdominio",
      "intro_es": "Esta faceta reúne el vocabulario y los patrones canónicos de fiabilidad que se traducen entre nube, redes eléctricas, aeroespacial, salud y finanzas. Combina los patrones de estabilidad de Nygard, las publicaciones de la AWS Builders' Library, el paper de Dean y Barroso sobre tail latency, la práctica de ingeniería del caos de Netflix y Google, las sagas distribuidas, las políticas de error budget de Google SRE, los estándares de aseguramiento aerocomercial DO-178C/ARP4754A y la mensajería IEC 61850 GOOSE para microredes auto-curables. Funciona como faceta de cruce que conecta resiliencia espacial, energética, sanitaria y financiera mediante primitivas comunes.",
      "subthemes": [
        {
          "id": "stability-patterns-classic",
          "label_es": "Patrones de estabilidad clásicos"
        },
        {
          "id": "cloud-distributed-systems",
          "label_es": "Sistemas distribuidos cloud"
        },
        {
          "id": "cell-isolation",
          "label_es": "Aislamiento por celdas y shuffle sharding"
        },
        {
          "id": "chaos-engineering",
          "label_es": "Ingeniería del caos"
        },
        {
          "id": "grid-self-healing",
          "label_es": "Redes eléctricas auto-curables (GOOSE/IEC 61850)"
        },
        {
          "id": "assurance-frameworks",
          "label_es": "Marcos de aseguramiento y SRE"
        }
      ],
      "entities": [
        {
          "id": "nygard-stability-patterns",
          "name": "Stability Patterns (Nygard, Release It!)",
          "type_es": "Libro",
          "subtheme": "stability-patterns-classic",
          "year": 2018,
          "authority": "Michael T. Nygard / Pragmatic Bookshelf",
          "url": "https://github.com/csabapalfi/release-it/blob/master/3-stability_patterns.md",
          "url_label": "Resumen capítulo 3 (Release It! 2nd ed.)",
          "description_es": "Conjunto canónico de 8 patrones de estabilidad (Use Timeouts, Circuit Breaker, Bulkheads, Steady State, Fail Fast, Handshaking, Test Harness, Decoupling Middleware) y 11 antipatrones que constituyen la lingua franca de la arquitectura distribuida moderna. Base conceptual de Hystrix, Resilience4j y Polly.",
          "tags": [
            "circuit-breaker",
            "bulkhead",
            "fail-fast",
            "handshaking",
            "stability"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aws-timeouts-retries-jitter",
          "name": "Timeouts, Retries and Backoff with Jitter (AWS Builders' Library)",
          "type_es": "Documentación",
          "subtheme": "cloud-distributed-systems",
          "year": 2019,
          "authority": "Marc Brooker / Amazon Web Services",
          "url": "https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/",
          "url_label": "AWS Builders' Library",
          "description_es": "Texto canónico sobre timeouts derivados de p99.9, reintentos en una sola capa para evitar amplificación 3^N, y full jitter como fórmula por defecto del SDK de AWS. Recomienda token bucket frente a circuit breaker binario para suavizar la reducción de carga.",
          "tags": [
            "retry",
            "backoff",
            "jitter",
            "timeout",
            "idempotency"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aws-load-shedding",
          "name": "Using Load Shedding to Avoid Overload (AWS Builders' Library)",
          "type_es": "Documentación",
          "subtheme": "cloud-distributed-systems",
          "year": 2019,
          "authority": "David Yanacek / Amazon Web Services",
          "url": "https://aws.amazon.com/builders-library/using-load-shedding-to-avoid-overload/",
          "url_label": "AWS Builders' Library",
          "description_es": "Define goodput frente a throughput, propagación de deadline transversal, priorización jerárquica (health checks > finalizers > paginación > clientes en cuota) y defensas en capas. Recomienda spillover/fast-fail antes que surge queues y rechazo basado en edad.",
          "tags": [
            "load-shedding",
            "backpressure",
            "goodput",
            "deadline-propagation"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aws-cell-based-shuffle-sharding",
          "name": "Cell-Based Architecture & Shuffle Sharding (AWS)",
          "type_es": "Documentación",
          "subtheme": "cell-isolation",
          "year": 2022,
          "authority": "AWS Well-Architected / AWS Builders' Library",
          "url": "https://aws.amazon.com/builders-library/workload-isolation-using-shuffle-sharding/",
          "url_label": "AWS Builders' Library: shuffle sharding",
          "description_es": "Patrones canónicos de aislamiento de fallos: celdas autocontenidas que limitan blast radius (1/N) y shuffle sharding probabilístico (con N=100, k=4 la colisión total cae a ~1/4M). Usado internamente en Route 53 y S3 para servicios multi-tenant.",
          "tags": [
            "cell-based",
            "shuffle-sharding",
            "blast-radius",
            "multi-tenant"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dean-tail-at-scale",
          "name": "The Tail at Scale (Dean & Barroso)",
          "type_es": "Paper",
          "subtheme": "cloud-distributed-systems",
          "year": 2013,
          "authority": "Jeffrey Dean & Luiz André Barroso / Google / CACM",
          "url": "https://cseweb.ucsd.edu/classes/sp18/cse291-c/post/schedule/p74-dean.pdf",
          "url_label": "Communications of the ACM, feb 2013",
          "description_es": "Paper fundacional de la ingeniería de tail latency. Introduce hedged requests, tied requests y backup requests con cancelación cruzada; en BigTable redujo p99.9 de 1800 ms a 74 ms con solo 2% de carga adicional. Tambien propone micro-particionado y latency-induced probation.",
          "tags": [
            "tail-latency",
            "hedge-request",
            "tied-request",
            "p99-9"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "saga-pattern-azure",
          "name": "Saga Pattern, CQRS y Event Sourcing",
          "type_es": "Patrón",
          "subtheme": "cloud-distributed-systems",
          "year": 2025,
          "authority": "Microsoft Azure Architecture Center / Chris Richardson",
          "url": "https://learn.microsoft.com/en-us/azure/architecture/patterns/saga",
          "url_label": "Azure Architecture Center: Saga",
          "description_es": "Reemplazo canónico del two-phase commit en microservicios. Define transacciones compensables, transacción pivot (punto de no retorno) y transacciones reintentables idempotentes; admite orquestación (Step Functions, Temporal, Durable Functions) o coreografía (Kafka, EventBridge).",
          "tags": [
            "saga",
            "compensating-transaction",
            "pivot-transaction",
            "event-sourcing",
            "cqrs"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "chaos-engineering-principles",
          "name": "Principles of Chaos Engineering & Simian Army",
          "type_es": "Framework",
          "subtheme": "chaos-engineering",
          "year": 2017,
          "authority": "principlesofchaos.org / Netflix",
          "url": "https://principlesofchaos.org/",
          "url_label": "Principles of Chaos Engineering",
          "description_es": "Disciplina de experimentación en producción para ganar confianza ante condiciones turbulentas. Cinco principios avanzados (hipótesis sobre estado estable, variar eventos reales, ejecutar en producción, automatizar, minimizar blast radius). Linaje Chaos Monkey (2011), Simian Army, AWS FIS, Chaos Mesh, LitmusChaos, Gremlin.",
          "tags": [
            "chaos-engineering",
            "simian-army",
            "blast-radius",
            "fault-injection",
            "gameday"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "iec-61850-self-healing-microgrids",
          "name": "Microredes auto-curables con IEC 61850 GOOSE (F-FDIR)",
          "type_es": "Estándar",
          "subtheme": "grid-self-healing",
          "year": 2021,
          "authority": "MDPI Energies / Padullaparti et al. / IEC",
          "url": "https://www.mdpi.com/1996-1073/14/3/547",
          "url_label": "Energies 14(3) 547",
          "description_es": "Arquitectura de microred auto-curable que sustituye SCADA cliente-servidor por mensajería peer-to-peer GOOSE entre IEDs, logrando F-FDIR (deteccion, aislamiento y restauración rápidos). Combina microgrids anidados, OpenFMB sobre DDS, IEC 62351 y NERC CIP. GOOSE (1995-2003) anticipa Kafka y DDS.",
          "tags": [
            "iec-61850",
            "goose",
            "microgrid",
            "f-fdir",
            "openfmb",
            "self-healing"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "do178c-arp4754",
          "name": "DO-178C y ARP4754A — Aseguramiento aerocomercial",
          "type_es": "Estándar",
          "subtheme": "assurance-frameworks",
          "year": 2011,
          "authority": "RTCA / SAE / FAA / EASA",
          "url": "https://en.wikipedia.org/wiki/DO-178C",
          "url_label": "DO-178C (resumen)",
          "description_es": "Marco normativo para certificación de software aerocomercial: cinco niveles DAL (A-E), cobertura MC/DC obligatoria en DAL A, suplementos DO-330 (cualificación de herramientas), DO-331 (model-based), DO-332 (OO) y DO-333 (métodos formales). ARP4754A/B cubre desarrollo a nivel sistema; ARP4761 cubre safety assessment (FHA, FTA, FMEA).",
          "tags": [
            "do-178c",
            "arp4754",
            "dal",
            "mc-dc",
            "safety-critical",
            "certification"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "sre-error-budget-policy",
          "name": "Google SRE Error Budget Policy",
          "type_es": "Framework",
          "subtheme": "assurance-frameworks",
          "year": 2018,
          "authority": "Google SRE / O'Reilly",
          "url": "https://sre.google/workbook/error-budget-policy/",
          "url_label": "SRE Workbook: Error Budget Policy",
          "description_es": "Política canónica de Google SRE: ventana móvil de 4 semanas, halt de releases (excepto P0/seguridad) cuando se agota el budget, postmortem obligatorio si un incidente consume >20%, escalado al CTO en disputas. Convierte la fiabilidad en una tasa con slack explícito.",
          "tags": [
            "sre",
            "error-budget",
            "slo",
            "sli",
            "postmortem"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "nygard-stability-patterns",
          "type": "fundamenta",
          "to": "aws-timeouts-retries-jitter"
        },
        {
          "from": "nygard-stability-patterns",
          "type": "fundamenta",
          "to": "aws-load-shedding"
        },
        {
          "from": "aws-cell-based-shuffle-sharding",
          "type": "implementa-bulkhead-de",
          "to": "nygard-stability-patterns"
        },
        {
          "from": "dean-tail-at-scale",
          "type": "complementa",
          "to": "aws-timeouts-retries-jitter"
        },
        {
          "from": "saga-pattern-azure",
          "type": "reemplaza-2pc",
          "to": "nygard-stability-patterns"
        },
        {
          "from": "chaos-engineering-principles",
          "type": "operacionaliza",
          "to": "nygard-stability-patterns"
        },
        {
          "from": "chaos-engineering-principles",
          "type": "valida-en-produccion",
          "to": "aws-cell-based-shuffle-sharding"
        },
        {
          "from": "iec-61850-self-healing-microgrids",
          "type": "encarna",
          "to": "nygard-stability-patterns"
        },
        {
          "from": "sre-error-budget-policy",
          "type": "gobierna-cadencia-de",
          "to": "chaos-engineering-principles"
        },
        {
          "from": "do178c-arp4754",
          "type": "establece-rigor-paralelo-a",
          "to": "sre-error-budget-policy"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "esa-fdir-three-level",
          "from_entity": "iec-61850-self-healing-microgrids",
          "rationale": "F-FDIR de microredes con GOOSE multicast es el análogo terrestre del FDIR autónomo de constelaciones; ambos requieren detección, aislamiento y restauración con deadlines duros."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "ecss-e-st-40c",
          "from_entity": "do178c-arp4754",
          "rationale": "DO-178C en aviónica y ECSS-E-ST-40C en espacio comparten estructura de ciclo de vida y niveles de criticalidad."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "viasat-ka-sat",
          "from_entity": "iec-61850-self-healing-microgrids",
          "rationale": "El paper de Energies depende de NERC CIP-005/007/014 para perímetros electrónicos; bisagra normativa entre GOOSE y ciberseguridad de la red."
        },
        {
          "to_facet": "edge-swarms",
          "to_entity": "ros2-dds-sros2",
          "from_entity": "iec-61850-self-healing-microgrids",
          "rationale": "GOOSE multicast es el ancestro directo de DDS pub/sub usado en ROS 2; mismo patrón en distinto dominio."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "nygard-stability-patterns",
          "rationale": "Los 8 patrones de Nygard son la columna vertebral del pilar Reliability en cualquier framework Well-Architected."
        }
      ],
      "super_category_id": "cross-cutting-edge"
    },
    {
      "facet_id": "edge-swarms",
      "facet_label_es": "Enjambres edge y robótica distribuida",
      "intro_es": "Los enjambres edge y la robótica distribuida unifican primitivas de coordinación que ya operan en producción en flotas de robotaxis, drones de combate, mallas IoT domésticas y constelaciones LEO. La faceta captura el desplazamiento desde el modelo bent-pipe hacia computación orbital edge, junto con consenso tolerante a fallos bizantinos, middleware ROS 2/DDS y orquestación SDN multi-órbita. El patrón unificador es autonomía local primero, con un plano de control superior que reconfigura recursos a cadencia más lenta. KubeEdge/OpenYurt formaliza la división nube-edge equivalente a satélite-tierra, y Raft con peer-ranging traslada al cross-link óptico de Starling.",
      "subthemes": [
        {
          "id": "swarm-consensus",
          "label_es": "Consenso descentralizado y resiliencia bizantina"
        },
        {
          "id": "robot-middleware",
          "label_es": "Middleware robótico (ROS 2, DDS, SROS2)"
        },
        {
          "id": "edge-orchestration",
          "label_es": "Orquestación edge (Kubernetes ligero, federated learning)"
        },
        {
          "id": "orbital-edge",
          "label_es": "Computación orbital edge y autonomía de constelación"
        },
        {
          "id": "fleet-autonomy",
          "label_es": "Autonomía de flota y SDN multi-dominio"
        }
      ],
      "entities": [
        {
          "id": "k8s-edge-distros",
          "name": "Distribuciones Kubernetes ligeras para edge (K3s, K0s, KubeEdge, OpenYurt)",
          "type_es": "Estudio",
          "subtheme": "edge-orchestration",
          "year": 2025,
          "authority": "Yakubov & Hästbacka / arXiv",
          "url": "https://arxiv.org/abs/2504.03656",
          "url_label": "arXiv:2504.03656",
          "description_es": "Comparativa empírica de cinco distribuciones K8s edge sobre Intel NUC y Raspberry Pi, midiendo consumo, throughput, autonomía offline y cumplimiento de seguridad. K3s minimiza huella; KubeEdge y OpenYurt destacan en operación desconectada con DeviceTwin y YurtHub.",
          "tags": [
            "kubernetes",
            "edge",
            "k3s",
            "kubeedge",
            "openyurt",
            "cncf"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "swarmraft",
          "name": "SwarmRaft — Consenso Raft con peer-ranging para enjambres UAV",
          "type_es": "Paper",
          "subtheme": "swarm-consensus",
          "year": 2025,
          "authority": "Dev, Madhwal et al. / arXiv",
          "url": "https://arxiv.org/abs/2508.00622",
          "url_label": "arXiv:2508.00622",
          "description_es": "Adapta Raft con elección de líder, ranging punto a punto (UWB/visual) y verificación bizantina cruzada para coordinar enjambres UAV en entornos con GNSS denegado. Reconstruye estado de posición desde consenso de pares cuando GNSS falla.",
          "tags": [
            "raft",
            "consenso",
            "byzantine",
            "uav",
            "gnss-denied",
            "uwb"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nasa-starling-dsa",
          "name": "NASA Starling y Distributed Spacecraft Autonomy (DSA)",
          "type_es": "Misión",
          "subtheme": "orbital-edge",
          "year": 2024,
          "authority": "NASA Ames / Stanford SLAB",
          "url": "https://www.nasa.gov/centers-and-facilities/ames/what-is-nasas-distributed-spacecraft-autonomy/",
          "url_label": "NASA Ames — DSA",
          "description_es": "Primera demostración orbital de un enjambre satelital plenamente autónomo con cuatro CubeSats 6U, MANET cross-link, planificador distribuido sobre PLEXIL y navegación angular StarFOX. Validó decisión, planificación y ejecución colectivas sin tutela terrestre.",
          "tags": [
            "starling",
            "dsa",
            "plexil",
            "starfox",
            "manet",
            "cubesat"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "waymo-fleet",
          "name": "Waymo — Arquitectura de orquestación de flota robotaxi",
          "type_es": "Plataforma",
          "subtheme": "fleet-autonomy",
          "year": 2024,
          "authority": "Waymo / Alphabet",
          "url": "https://waymo.com/blog/2024/05/fleet-response/",
          "url_label": "Waymo Fleet Response",
          "description_es": "Stack de despacho con previsión ML de demanda 2-4h, batching, mantenimiento y Fleet Response humano-en-bucle vía Q&A asíncrono. Autonomía local primero: la nube nunca está en el bucle crítico de seguridad y los operadores remotos solo sugieren.",
          "tags": [
            "robotaxi",
            "fleet",
            "ml",
            "human-in-the-loop",
            "supervised-autonomy"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "flower-flare-fedml",
          "name": "Flower, NVIDIA FLARE y FedML — Frameworks de federated learning",
          "type_es": "Framework",
          "subtheme": "edge-orchestration",
          "year": 2024,
          "authority": "Flower Labs / NVIDIA / FedML",
          "url": "https://developer.nvidia.com/blog/supercharging-the-federated-learning-ecosystem-by-integrating-flower-and-nvidia-flare/",
          "url_label": "NVIDIA — Flower + FLARE",
          "description_es": "Tres frameworks FL representativos: Flower (estrategia/handler ágil), FLARE (workflow regulado con FHE CKKS) y FedML (modular API/core). La integración Flower-FLARE permite ejecutar apps de investigación dentro de runtime empresarial sin cambios de código.",
          "tags": [
            "federated-learning",
            "flower",
            "flare",
            "fedml",
            "fhe",
            "edge-ai"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "iot-mesh-standards",
          "name": "Matter 1.4, Thread 1.4, Zigbee 3.0 y LoRaWAN — Mallas IoT",
          "type_es": "Estándar",
          "subtheme": "swarm-consensus",
          "year": 2024,
          "authority": "Connectivity Standards Alliance / Thread Group",
          "url": "https://csa-iot.org/all-solutions/matter/",
          "url_label": "Matter — CSA",
          "description_es": "Cuatro estándares de malla IoT con compromisos distintos: Zigbee con coordinador (recuperación 0.36s), Thread 1.4 con elección de líder distribuida sobre IPv6/6LoWPAN, Matter como capa de aplicación multi-red y LoRaWAN star-of-stars de largo alcance.",
          "tags": [
            "matter",
            "thread",
            "zigbee",
            "lorawan",
            "mesh",
            "ipv6"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "orbital-edge-computing",
          "name": "Orbital Edge Computing y Cloud Ground Stations (AWS GS, Φ-sat-2, AI-eXpress)",
          "type_es": "Patrón",
          "subtheme": "orbital-edge",
          "year": 2024,
          "authority": "ESA Φ-lab / AWS / Chinese Journal of Aeronautics",
          "url": "https://www.sciencedirect.com/science/article/pii/S1000936124004709",
          "url_label": "ScienceDirect — OEC survey 2024",
          "description_es": "Transición desde bent-pipe hacia OEC: filtrado e inferencia onboard reducen 15 GB de imagen cruda a 0.75 GB útiles por pase. Incluye AWS Ground Station, retiro de Azure Orbital, Φ-sat-2 con acelerador AI y AI-eXpress (3CS) con blockchain de atestación.",
          "tags": [
            "oec",
            "ground-station",
            "phi-sat-2",
            "ai-express",
            "leo",
            "esa"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ros2-dds-sros2",
          "name": "ROS 2, DDS y SROS2 — Middleware multi-robot seguro",
          "type_es": "Framework",
          "subtheme": "robot-middleware",
          "year": 2024,
          "authority": "Open Robotics / OMG",
          "url": "https://design.ros2.org/articles/ros2_dds_security.html",
          "url_label": "design.ros2.org — DDS Security",
          "description_es": "ROS 2 reemplaza TCPROS por DDS pub/sub con perfiles QoS y descubrimiento descentralizado. DDS-Security define cinco SPIs (autenticación, control, criptografía, etiquetado, logging) y SROS2 envuelve el flujo con CAs, governance XML y permisos por tópico.",
          "tags": [
            "ros2",
            "dds",
            "sros2",
            "px4",
            "qos",
            "multirobot"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aalyria-spacetime",
          "name": "Aalyria Spacetime y lecciones de enjambres en Ucrania",
          "type_es": "Plataforma",
          "subtheme": "fleet-autonomy",
          "year": 2024,
          "authority": "Aalyria / DIU Hybrid Space Architecture",
          "url": "https://spacenews.com/space-startup-aalyria-demonstrates-satellite-mesh-network/",
          "url_label": "SpaceNews — Aalyria mesh",
          "description_es": "SDN temporo-espacial multi-órbita y multi-dominio que reconfigura topología y ruteo en segundos ante pérdida de satélite, jamming o cortes terrestres. Combina con lecciones del enjambre Swarmer (>100k misiones, 30% packet loss) que demuestran resiliencia por coordinación, no precisión.",
          "tags": [
            "sdn",
            "spacetime",
            "multi-orbit",
            "swarmer",
            "ew",
            "diu"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "swarmraft",
          "type": "extiende",
          "to": "ros2-dds-sros2"
        },
        {
          "from": "nasa-starling-dsa",
          "type": "aplica-consenso-similar-a",
          "to": "swarmraft"
        },
        {
          "from": "k8s-edge-distros",
          "type": "orquesta-cargas-en",
          "to": "orbital-edge-computing"
        },
        {
          "from": "ros2-dds-sros2",
          "type": "habilita-coordinacion-en",
          "to": "aalyria-spacetime"
        },
        {
          "from": "iot-mesh-standards",
          "type": "comparte-patron-mesh-con",
          "to": "nasa-starling-dsa"
        },
        {
          "from": "flower-flare-fedml",
          "type": "se-despliega-sobre",
          "to": "k8s-edge-distros"
        },
        {
          "from": "waymo-fleet",
          "type": "comparte-patron-supervised-autonomy-con",
          "to": "aalyria-spacetime"
        },
        {
          "from": "orbital-edge-computing",
          "type": "integra",
          "to": "nasa-starling-dsa"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-constellation",
          "to_entity": "iridium-next",
          "from_entity": "nasa-starling-dsa",
          "rationale": "Starling demuestra autonomía colectiva que sustenta la operación de mega-constelaciones LEO sin tutela terrestre por satélite."
        },
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "pbft-oisl-thrust",
          "from_entity": "swarmraft",
          "rationale": "El consenso Raft con cross-checking de ranging traslada como capa BFT sobre cross-links satelitales bajo spoofing EW."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "iec-61850-self-healing-microgrids",
          "from_entity": "aalyria-spacetime",
          "rationale": "Spacetime es el ejemplar canónico de orquestación SDN cruzando espacio, aire, marítimo y terrestre con reconfiguración en segundos."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "langgraph",
          "from_entity": "flower-flare-fedml",
          "rationale": "Flower Intelligence sirve LLMs locales+remotos siguiendo el patrón estrategia/handler de agentes LLM federados en el edge."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "k8s-edge-distros",
          "rationale": "K3s/KubeEdge/OpenYurt son los bloques canónicos de toda plantilla stack para nodos edge desconectados."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "nasa-cfs",
          "from_entity": "nasa-starling-dsa",
          "rationale": "DSA está construido sobre PLEXIL, lenguaje de plan-ejecución NASA usado como referencia space-grade."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "phi-sat-2",
          "from_entity": "orbital-edge-computing",
          "rationale": "Orbital edge computing literalmente nombra Phi-Sat-2 en su título; bridge faltante."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "viasat-ka-sat",
          "from_entity": "aalyria-spacetime",
          "rationale": "Aalyria Spacetime y Viasat KA-SAT operan en contextos militares Ukraine; bridge contextual."
        }
      ],
      "super_category_id": "cross-cutting-edge"
    },
    {
      "facet_id": "research-frontier",
      "facet_label_es": "Frontera de investigación",
      "intro_es": "La frontera de investigación 2024-2026 en arquitecturas resilientes converge en cinco vectores: consenso bizantino moderno (Shoal++, Autobahn) que persigue el Pareto latencia/throughput; verificación formal escalando con TLA+, P, Verus y Verdi sobre sistemas reales de AWS/Microsoft; computación neuromórfica para órbita (HTD, BrainStack) que multiplica por 4.5 el tiempo medio hasta fallo bajo SEU; criptografía post-cuántica estandarizada por NIST (FIPS 203/204/205, HQC) y FHE acelerada por hardware; y fiabilidad de sistemas de IA donde fallos cada 45 minutos en clústeres GPU obligan a rediseñar protocolos de coordinación. Foros como DSN, SOSP, NSDI, SIGCOMM, SmallSat y los seminarios Dagstuhl marcan los problemas abiertos.",
      "subthemes": [
        {
          "id": "consenso-bft-moderno",
          "label_es": "Consenso bizantino moderno y BFT en DAG"
        },
        {
          "id": "verificacion-formal",
          "label_es": "Métodos formales y seguridad de memoria"
        },
        {
          "id": "neuromorfico-espacio",
          "label_es": "Computación neuromórfica para espacio"
        },
        {
          "id": "pqc-fhe",
          "label_es": "Criptografía post-cuántica, FHE y redes cuánticas"
        },
        {
          "id": "fiabilidad-ia-sistemas",
          "label_es": "Fiabilidad de sistemas de IA y agentes LLM"
        },
        {
          "id": "frontera-espacial-autonomia",
          "label_es": "Autonomía resiliente en SmallSat y cislunar"
        }
      ],
      "entities": [
        {
          "id": "shoal-plus-plus",
          "name": "Shoal++",
          "type_es": "Paper",
          "subtheme": "consenso-bft-moderno",
          "year": 2025,
          "authority": "NSDI 2025",
          "url": "https://arxiv.org/html/2407.19863v3",
          "url_label": "Half a Century of BFT",
          "description_es": "Protocolo BFT basado en DAG presentado en NSDI 2025 que mantiene el throughput propio del DAG mientras reduce la latencia de commit a aproximadamente 4,5 retardos de mensaje en promedio.",
          "tags": [
            "BFT",
            "DAG",
            "NSDI"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "autobahn-bft",
          "name": "Autobahn",
          "type_es": "Paper",
          "subtheme": "consenso-bft-moderno",
          "year": 2024,
          "authority": "SOSP 2024 (Cornell/UC Berkeley)",
          "url": "https://arxiv.org/html/2407.19863v3",
          "url_label": "Surveys BFT 2024-2025",
          "description_es": "BFT seamless de alta velocidad presentado en SOSP 2024 que aborda el compromiso histórico entre throughput y latencia en consenso bizantino con sincronía parcial.",
          "tags": [
            "BFT",
            "SOSP"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tla-plus-amazon",
          "name": "TLA+ en Amazon Web Services",
          "type_es": "Estudio",
          "subtheme": "verificacion-formal",
          "year": 2024,
          "authority": "Lamport et al. (Microsoft Research)",
          "url": "https://lamport.azurewebsites.net/tla/formal-methods-amazon.pdf",
          "url_label": "Formal Methods at Amazon",
          "description_es": "Documento que reporta el uso industrial de TLA+ en S3, DynamoDB, EBS y Aurora, incluido el hallazgo de un bug BFT sutil cuya traza de error requirió 35 pasos.",
          "tags": [
            "TLA+",
            "AWS"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "p-language",
          "name": "Microsoft P",
          "type_es": "Herramienta",
          "subtheme": "verificacion-formal",
          "year": 2024,
          "authority": "Microsoft Research / p-org",
          "url": "https://github.com/p-org/P",
          "url_label": "p-org/P en GitHub",
          "description_es": "Lenguaje de máquinas de estado y eventos asíncronos que compila a C# verificable por model checking; usado en drivers USB 3.0 y servicios AWS (S3, EBS, DynamoDB, IoT).",
          "tags": [
            "P",
            "model-checking"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "verus-sosp-2024",
          "name": "Verus",
          "type_es": "Herramienta",
          "subtheme": "verificacion-formal",
          "year": 2024,
          "authority": "SOSP 2024 (MPI-SWS, CMU, MSR)",
          "url": "https://verus-lang.github.io/verus/guide/",
          "url_label": "Verus Language Guide",
          "description_es": "Verificador para software de sistemas en Rust que aprovecha el modelo de ownership y solvers SMT para probar propiedades sobre código productivo.",
          "tags": [
            "Rust",
            "SMT",
            "Verus"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "verdi-raft",
          "name": "Verdi",
          "type_es": "Framework",
          "subtheme": "verificacion-formal",
          "year": 2024,
          "authority": "UW PLSE",
          "url": "https://github.com/uwplse/verdi",
          "url_label": "uwplse/verdi",
          "description_es": "Framework en Coq para sistemas distribuidos formalmente verificados; produjo la primera prueba de seguridad de máquina de estado de Raft mediante Verified System Transformers.",
          "tags": [
            "Coq",
            "Raft"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cheriot-rtos",
          "name": "CHERIoT RTOS",
          "type_es": "Plataforma",
          "subtheme": "verificacion-formal",
          "year": 2025,
          "authority": "SOSP 2025 / Microsoft + lowRISC",
          "url": "https://cheriot.org/",
          "url_label": "CHERIoT Platform",
          "description_es": "RTOS con seguridad de memoria de grano fino que aprovecha las extensiones CHERI; candidato a sustituir FreeRTOS/RTEMS en misiones CubeSat.",
          "tags": [
            "CHERI",
            "RTOS"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cheri-alliance",
          "name": "CHERI Alliance",
          "type_es": "Estándar",
          "subtheme": "verificacion-formal",
          "year": 2024,
          "authority": "FreeBSD Foundation, Capabilities Limited, SCI, Codasip, lowRISC, Cambridge",
          "url": "https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_RISC_Instructions",
          "url_label": "CHERI Wikipedia",
          "description_es": "Alianza fundada en 2024 que agrupa a actores de hardware y software para estandarizar capacidades CHERI dirigidas a mitigar el ~70% de CVEs por errores de memoria.",
          "tags": [
            "CHERI",
            "consorcio"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "htd-neuromorfico",
          "name": "Hierarchical Temporal Defense (HTD)",
          "type_es": "Paper",
          "subtheme": "neuromorfico-espacio",
          "year": 2025,
          "authority": "Engineering Applications of AI 2025",
          "url": "https://www.sciencedirect.com/science/article/abs/pii/S0952197626010602",
          "url_label": "HTD 2025",
          "description_es": "Defensa probabilística en tres capas (codificación, dinámicas neuronales, plasticidad sináptica) frente a SEU que multiplica por 4,5 el tiempo medio hasta fallo en SNN bajo eventos solares simulados.",
          "tags": [
            "SNN",
            "SEU",
            "neuromorphic"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "neuromorphic-space-survey",
          "name": "Toward Transforming Space Exploration with Neuromorphic AI",
          "type_es": "Estudio",
          "subtheme": "neuromorfico-espacio",
          "year": 2025,
          "authority": "Engineering Applications of AI",
          "url": "https://dl.acm.org/doi/10.1016/j.engappai.2025.111055",
          "url_label": "Neuromorphic survey 2025",
          "description_es": "Survey de sustratos neuromórficos para misiones espaciales (Loihi 2, Akida, SpiNNaker 2, BrainScaleS-2, Tianjic) con análisis de restricciones radiación/vacío/térmicas.",
          "tags": [
            "survey",
            "neuromorphic"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "fips-203-204-205",
          "name": "NIST FIPS 203/204/205",
          "type_es": "Estándar",
          "subtheme": "pqc-fhe",
          "year": 2024,
          "authority": "NIST",
          "url": "https://www.nist.gov/news-events/news/2024/08/nist-releases-first-3-finalized-post-quantum-encryption-standards",
          "url_label": "NIST PQC standards",
          "description_es": "Primeros tres estándares PQC finalizados en agosto 2024: ML-KEM (Kyber), ML-DSA (Dilithium) y SLH-DSA (SPHINCS+); base de la migración cripto-ágil para satélites de larga vida.",
          "tags": [
            "NIST",
            "PQC"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "darpa-quanet",
          "name": "DARPA QuANET",
          "type_es": "Misión",
          "subtheme": "pqc-fhe",
          "year": 2025,
          "authority": "DARPA",
          "url": "https://www.darpa.mil/research/programs/quantum-augmented-network",
          "url_label": "DARPA QuANET",
          "description_es": "Programa Quantum-Augmented Networking que en menos de 10 meses demostró la primera red híbrida funcional con tráfico clásico y cuántico coexistiendo sin interrupción.",
          "tags": [
            "DARPA",
            "quantum"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mast-taxonomy",
          "name": "MAST y dataset MAD",
          "type_es": "Paper",
          "subtheme": "fiabilidad-ia-sistemas",
          "year": 2025,
          "authority": "Cemri, Pan, Yang et al.",
          "url": "https://arxiv.org/abs/2503.13657",
          "url_label": "arXiv:2503.13657",
          "description_es": "Primera taxonomía principiada de fallos en sistemas multi-agente LLM acompañada del dataset MAD con más de 1000 trazas anotadas de 7 frameworks MAS populares.",
          "tags": [
            "MAS",
            "LLM"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "openrca",
          "name": "OpenRCA",
          "type_es": "Framework",
          "subtheme": "fiabilidad-ia-sistemas",
          "year": 2025,
          "authority": "ICLR 2025",
          "url": "https://openreview.net/forum?id=M4qNIzQYpd",
          "url_label": "OpenRCA ICLR 2025",
          "description_es": "Benchmark de RCA basado en LLM con 335 fallos en 3 sistemas empresariales y más de 68 GB de telemetría; Claude 3.5 sólo resuelve el 11,34% de los casos.",
          "tags": [
            "RCA",
            "benchmark"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "stars-program",
          "name": "STARS — Safe Trusted Autonomy for Responsible Space",
          "type_es": "Misión",
          "subtheme": "frontera-espacial-autonomia",
          "year": 2025,
          "authority": "AIAA SciTech 2025 / arXiv 2501.05984",
          "url": "https://arxiv.org/html/2501.05984v1",
          "url_label": "STARS arXiv",
          "description_es": "Programa que combina control multi-satélite por RL con runtime assurance y un testbed flexible de equipo humano-autonomía para misiones críticas.",
          "tags": [
            "STARS",
            "AIAA"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dagstuhl-24182",
          "name": "Dagstuhl 24182 — Resilience and Antifragility of Autonomous Systems",
          "type_es": "Documentación",
          "subtheme": "frontera-espacial-autonomia",
          "year": 2024,
          "authority": "Dagstuhl Reports 14(4)",
          "url": "https://drops.dagstuhl.de/entities/document/10.4230/DagRep.14.4.142",
          "url_label": "DagRep 14.4.142",
          "description_es": "Seminario que armoniza definiciones de resiliencia/antifragilidad y discute métricas, V&V en runtime y transferencia entre dominios aeroespacial, automoción y salud.",
          "tags": [
            "Dagstuhl",
            "antifragilidad"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "shoal-plus-plus",
          "type": "evoluciona-de",
          "to": "autobahn-bft"
        },
        {
          "from": "autobahn-bft",
          "type": "compite-con",
          "to": "shoal-plus-plus"
        },
        {
          "from": "verus-sosp-2024",
          "type": "complementa",
          "to": "tla-plus-amazon"
        },
        {
          "from": "p-language",
          "type": "usado-en",
          "to": "tla-plus-amazon"
        },
        {
          "from": "cheriot-rtos",
          "type": "implementa",
          "to": "cheri-alliance"
        },
        {
          "from": "htd-neuromorfico",
          "type": "se-evalua-en",
          "to": "neuromorphic-space-survey"
        },
        {
          "from": "openrca",
          "type": "evalua",
          "to": "mast-taxonomy"
        },
        {
          "from": "verdi-raft",
          "type": "antecede",
          "to": "verus-sosp-2024"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "sat-fault-tolerance",
          "to_entity": "pbft-oisl-thrust",
          "from_entity": "shoal-plus-plus",
          "rationale": "Shoal++ y Autobahn ofrecen latencias de commit aplicables a control de actitud o enrutamiento OISL tolerante a intrusión."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "rtems-rtos",
          "from_entity": "cheriot-rtos",
          "rationale": "CHERIoT RTOS se posiciona como alternativa con seguridad de memoria por hardware frente a FreeRTOS/RTEMS."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "phi-sat-2",
          "from_entity": "htd-neuromorfico",
          "rationale": "HTD habilita inferencia SNN sub-vatio para star trackers, detección de anomalías y triaje de datos a bordo."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "pqc-space",
          "from_entity": "fips-203-204-205",
          "rationale": "La adopción de ML-KEM/ML-DSA/SLH-DSA condiciona la cripto-agilidad obligatoria para satélites de larga vida antes del CRQC."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "atam",
          "from_entity": "p-language",
          "rationale": "La pareja P + TLA+ ya aplicada en AWS y Microsoft constituye un patrón de plantilla de verificación reutilizable."
        },
        {
          "to_facet": "edge-swarms",
          "to_entity": "swarmraft",
          "from_entity": "verdi-raft",
          "rationale": "Consenso distribuido verificado formalmente (Raft) ↔ implementación práctica para drones."
        }
      ],
      "super_category_id": "meta-frontier"
    },
    {
      "facet_id": "llmops",
      "facet_label_es": "LLMOps clásico",
      "intro_es": "LLMOps clásico cubre el ciclo de servicio de modelos de lenguaje en producción durante 2026: motores de inferencia (vLLM, TensorRT-LLM, TGI, SGLang), observabilidad sobre OpenTelemetry GenAI, RAG productivo con recuperación híbrida y reranking, evaluación continua (RAGAS, DeepEval, TruLens) y gateways multi-proveedor (LiteLLM, Helicone, Portkey). La capa de optimización combina cuantización (FP8, NVFP4, INT4), gestión paginada de KV-cache, decodificación especulativa y batching continuo para reducir coste y latencia 5-20×. La gobernanza se ancla en NIST AI RMF + AI 600-1, EU AI Act y OWASP GenAI Top-10.",
      "subthemes": [
        {
          "id": "inferencia",
          "label_es": "Motores de inferencia y serving"
        },
        {
          "id": "observabilidad",
          "label_es": "Observabilidad y trazabilidad GenAI"
        },
        {
          "id": "evaluacion",
          "label_es": "Evaluación y red teaming"
        },
        {
          "id": "rag",
          "label_es": "RAG en producción"
        },
        {
          "id": "voice-agents",
          "label_es": "Agentes de voz y APIs realtime"
        },
        {
          "id": "document-ai",
          "label_es": "Document AI y parsing OCR"
        },
        {
          "id": "models-open-weights",
          "label_es": "Modelos open-weights (Llama, Qwen, DeepSeek, Mistral, Gemma, Phi, Hermes)"
        },
        {
          "id": "training-frameworks-and-patterns",
          "label_es": "Frameworks de fine-tuning, post-training y patrones (RLHF/DPO/GRPO/QLoRA…)"
        },
        {
          "id": "serving-infrastructure",
          "label_es": "Serving infrastructure (hosted, K8s LLM, aceleradores, optimización, gateways)"
        }
      ],
      "entities": [
        {
          "id": "vllm",
          "name": "vLLM",
          "type_es": "Plataforma",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "UC Berkeley Sky Computing Lab",
          "url": "https://docs.vllm.ai/en/latest/",
          "url_label": "Documentación oficial vLLM",
          "description_es": "Motor de inferencia LLM open-source de referencia. La refactorización vLLM v1 (enero 2025) unifica scheduler/KV manager/worker/sampler/API con near-zero CPU overhead y ~1.7× throughput vs v0; v0.11 (2025-2026) añade WideEP en GB200, BF16/NVFP4 fused MoE, integración nativa con LMCache, NIXL y Mooncake Transfer Engine para PD disaggregation, y vLLM Production Stack (Helm+CRDs) para despliegue K8s.",
          "tags": [
            "serving",
            "paged-attention",
            "oss"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "paged-attention-paper",
          "name": "Efficient Memory Management with PagedAttention (SOSP 2023)",
          "type_es": "Paper",
          "subtheme": "inferencia",
          "year": 2023,
          "authority": "Kwon et al. (UC Berkeley), SOSP 2023",
          "url": "https://arxiv.org/abs/2309.06180",
          "url_label": "arXiv 2309.06180",
          "description_es": "Artículo seminal de SOSP 2023 que introduce PagedAttention y motiva el diseño de vLLM. Establece la analogía con memoria virtual de SO para gestionar la KV-cache.",
          "tags": [
            "sosp",
            "seminal",
            "kv-cache"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tensorrt-llm",
          "name": "NVIDIA TensorRT-LLM",
          "type_es": "Plataforma",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "NVIDIA",
          "url": "https://github.com/NVIDIA/TensorRT-LLM",
          "url_label": "NVIDIA/TensorRT-LLM",
          "description_es": "Motor de inferencia LLM optimizado de NVIDIA. En 2026 está orquestado bajo NVIDIA Dynamo y consume FlashInfer + XGrammar como backends estándar; añade soporte Blackwell FP4 y MLA. Sustituye a Triton Inference Server en cargas LLM.",
          "tags": [
            "nvidia",
            "compilado",
            "baja-latencia"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tgi-v3",
          "name": "Hugging Face TGI v3",
          "type_es": "Plataforma",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "Hugging Face",
          "url": "https://github.com/huggingface/text-generation-inference",
          "url_label": "huggingface/text-generation-inference",
          "description_es": "Hugging Face Text Generation Inference v3. **Status mayo 2026: maintenance** — HF promueve principalmente vLLM como motor preferido y TGI v3 queda como opción interna del stack HF (Endpoints). Recomendado evaluar vLLM o SGLang en greenfield.",
          "tags": [
            "long-context",
            "huggingface",
            "prefill"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "sglang",
          "name": "SGLang",
          "type_es": "Plataforma",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "SGLang Team",
          "url": "https://github.com/sgl-project/sglang",
          "url_label": "sgl-project/sglang",
          "description_es": "Motor alternativo a vLLM con RadixAttention y front-end estructurado. En 2025-2026 soporta PD disaggregation, DeepEP/DeepGEMM/EPLB, large-scale Expert Parallelism (96 H100 LMSYS may-2025), HiCache con backend Mooncake Store y day-0 DeepSeek-V4 (abr-2026). Backend de NVIDIA Dynamo junto con vLLM y TensorRT-LLM.",
          "tags": [
            "structured-generation",
            "tool-calling"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "otel-genai",
          "name": "OpenTelemetry GenAI Semantic Conventions",
          "type_es": "Estándar",
          "subtheme": "observabilidad",
          "year": 2026,
          "authority": "OpenTelemetry / CNCF",
          "url": "https://opentelemetry.io/docs/specs/semconv/gen-ai/",
          "url_label": "Especificación OTel GenAI",
          "description_es": "Convenciones semánticas para trazas, métricas y eventos de aplicaciones GenAI. En estado Development en abril 2026, define atributos como gen_ai.system, gen_ai.usage.input_tokens y namespaces por proveedor (Anthropic, OpenAI, Bedrock, MCP).",
          "tags": [
            "otel",
            "semconv",
            "cncf",
            "agent-spans-extension"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "langfuse",
          "name": "Langfuse",
          "type_es": "Plataforma",
          "subtheme": "observabilidad",
          "year": 2026,
          "authority": "Langfuse GmbH",
          "url": "https://langfuse.com/",
          "url_label": "Langfuse",
          "description_es": "Plataforma open-source (MIT) de observabilidad LLM con paridad funcional self-host. Stack PostgreSQL + ClickHouse + Redis + S3, ingesta nativa OTel, gestión de prompts y eval scoring; 80+ integraciones.",
          "tags": [
            "oss",
            "self-host",
            "prompt-mgmt"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "arize-phoenix",
          "name": "Arize Phoenix",
          "type_es": "Plataforma",
          "subtheme": "observabilidad",
          "year": 2026,
          "authority": "Arize AI",
          "url": "https://phoenix.arize.com/",
          "url_label": "Arize Phoenix",
          "description_es": "Observabilidad GenAI source-available, OTel-native con SDK OpenInference y detección de drift mediante visualización de embeddings. Hereda capacidades de ML observability empresarial de Arize AX.",
          "tags": [
            "drift",
            "embeddings",
            "openinference"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "helicone",
          "name": "Helicone",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Helicone",
          "url": "https://www.helicone.ai/",
          "url_label": "Helicone",
          "description_es": "Proxy/gateway LLM open-source con balanceo PeakEWMA, caching in-memory + Redis y export OTel. **Status mayo 2026: maintenance — adquirida por Mintlify el 3-mar-2026**. El Helicone AI Gateway sigue operativo (Rust, semantic caching, failover, rate-limiting, 1M context Claude Sonnet 4/4.5 sobre Anthropic/Bedrock/Vertex). Para greenfield evaluar Langfuse/Braintrust/Comet Opik en su lugar.",
          "tags": [
            "proxy",
            "peakewma",
            "caching"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pydantic-logfire",
          "name": "Pydantic Logfire",
          "type_es": "Plataforma",
          "subtheme": "observabilidad",
          "year": 2024,
          "authority": "Pydantic (Samuel Colvin)",
          "url": "https://pydantic.dev/logfire",
          "url_label": "Pydantic Logfire",
          "description_es": "Plataforma de observabilidad full-stack construida nativamente sobre OpenTelemetry: unifica en el mismo árbol de spans las llamadas LLM, las queries SQL, las peticiones HTTP y la lógica de negocio. GA desde 1-oct-2024 (Serie A 12,5 M$ Sequoia). Instrumentación nativa para Pydantic AI, OpenAI, Anthropic, LangChain, LlamaIndex, LiteLLM, MCP y Claude Agent SDK; SQL Explorer con asistente NLP, live spans, Pydantic Evals integrado, MCP server, free tier 10M spans/mes y región de datos UE (SOC2 Type II + HIPAA + GDPR).",
          "tags": [
            "logfire",
            "pydantic",
            "otel",
            "full-stack",
            "live-spans",
            "sql-explorer",
            "mcp-server",
            "pydantic-ai",
            "claude-agent-sdk",
            "eu-region"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ragas",
          "name": "RAGAS",
          "type_es": "Suite-de-evaluación",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "Exploding Gradients",
          "url": "https://docs.ragas.io/",
          "url_label": "RAGAS docs",
          "description_es": "Framework de evaluación específico para RAG con métricas reference-free: faithfulness, answer relevance, context precision/recall y noise sensitivity. Indicado para baseline rápido en menos de un día.",
          "tags": [
            "rag",
            "faithfulness",
            "reference-free"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "deepeval",
          "name": "DeepEval",
          "type_es": "Suite-de-evaluación",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "Confident AI",
          "url": "https://github.com/confident-ai/deepeval",
          "url_label": "confident-ai/deepeval",
          "description_es": "Framework de evaluación amplio (40+ métricas) con G-Eval, ToolCorrectness y ConversationCompleteness; pytest-native para gating CI/CD. Cubre RAG, agentes, multimodal y MCP.",
          "tags": [
            "pytest",
            "g-eval",
            "agentes"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "trulens",
          "name": "TruLens",
          "type_es": "Suite-de-evaluación",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "TruEra / Snowflake",
          "url": "https://www.trulens.org/",
          "url_label": "TruLens",
          "description_es": "Combina evaluación y tracing OpenTelemetry a nivel de span, permitiendo atribuir fallos a pasos específicos en cadenas de herramientas y agentes complejos.",
          "tags": [
            "tracing",
            "span-level",
            "agentic"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "litellm",
          "name": "LiteLLM",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "BerriAI",
          "url": "https://docs.litellm.ai/docs/simple_proxy",
          "url_label": "LiteLLM proxy docs",
          "description_es": "Gateway OSS por defecto en 2026. Expone API formato OpenAI sobre 100+ proveedores con virtual keys, routing latency/cost-aware, caching Redis y 15+ integraciones de observabilidad.",
          "tags": [
            "oss",
            "virtual-keys",
            "multi-proveedor"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "portkey",
          "name": "Portkey",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Portkey",
          "url": "https://portkey.ai/",
          "url_label": "Portkey",
          "description_es": "Control plane y gateway LLM comercial con linaje completo de petición, guardrails de PII/política y routing region-aware. Orientado a gobernanza y auditoría multi-equipo.",
          "tags": [
            "enterprise",
            "gobernanza",
            "control-plane"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "speculative-decoding",
          "name": "Decodificación Especulativa (EAGLE/Medusa)",
          "type_es": "Patrón",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Comunidad de investigación",
          "url": "https://arxiv.org/abs/2401.15077",
          "url_label": "EAGLE arXiv 2401.15077",
          "description_es": "Patrón de aceleración EAGLE/Medusa. En 2026 las implementaciones EAGLE-3 y Medusa están integradas de serie en vLLM v1 y SGLang; ya no requieren stack externo: son una feature del propio engine.",
          "tags": [
            "draft-model",
            "eagle",
            "medusa"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "hybrid-rag",
          "name": "RAG Híbrido + Reranking",
          "type_es": "Patrón",
          "subtheme": "rag",
          "year": 2026,
          "authority": "Comunidad RAG 2026",
          "url": "https://www.elastic.co/search-labs/blog/hybrid-search-elasticsearch-rrf",
          "url_label": "Hybrid search RRF",
          "description_es": "Patrón de retrieval que combina dense + BM25 + reranker. **Stack mínimo aceptable mayo 2026**: vector + BM25 nativo en el DB (Qdrant, Milvus 2.5, Turbopuffer, Weaviate BlockMax WAND — ya no Elastic separado) + cross-encoder reranker (Cohere Rerank 3.5 o Voyage rerank-2.5) + Contextual Retrieval de Anthropic como pre-procesamiento de chunks. Reduce los fallos de retrieval hasta en un 67%; la fusión por defecto es Reciprocal Rank Fusion (RRF).",
          "tags": [
            "hybrid",
            "rrf",
            "reranker"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "qdrant",
          "name": "Qdrant",
          "type_es": "Plataforma",
          "subtheme": "rag",
          "year": 2026,
          "authority": "Qdrant",
          "url": "https://qdrant.tech/",
          "url_label": "Qdrant",
          "description_es": "Vector DB open-source de referencia. Qdrant 1.16+ añade tiered multitenancy, GPU-accelerated indexing, sparse vectors nativos con hybrid pipelines sin inferencia externa, ~12 ms p99 a 10M vectores. Roadmap 2026: 4-bit quantization, read-write segregation, block storage, replicas read-only.",
          "tags": [
            "vector-db",
            "hybrid-search",
            "self-host"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pgvector",
          "name": "pgvector / pgvectorscale",
          "type_es": "Herramienta",
          "subtheme": "rag",
          "year": 2026,
          "authority": "PostgreSQL community / Timescale",
          "url": "https://github.com/pgvector/pgvector",
          "url_label": "pgvector/pgvector",
          "description_es": "Extensión Postgres para vector search. En 2026 emparejada con **pgvectorscale** (Tiger Data, antes Timescale): StreamingDiskANN + Statistical Binary Quantization (SBQ) entrega 28× menor p95 y 16× más throughput vs Pinecone storage-optimized a 99% recall, 75% más barato self-hosted. Regla 2026: pgvector si ya usas Postgres, Qdrant si no, Milvus solo a billion+.",
          "tags": [
            "postgres",
            "hnsw",
            "extension"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nvidia-dynamo",
          "name": "NVIDIA Dynamo",
          "type_es": "Framework",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "NVIDIA (ai-dynamo)",
          "url": "https://github.com/ai-dynamo/dynamo",
          "url_label": "GitHub ai-dynamo/dynamo",
          "description_es": "Framework de inferencia distribuida a escala de datacenter, escrito en Rust, que orquesta vLLM, SGLang y TensorRT-LLM con desagregación prefill/decode, enrutado consciente de KV-cache, KV Block Manager jerárquico y autoscaling guiado por SLA. Anunciado GTC 2025, v1.0 GA marzo 2026 (1.0.2 el 23-abr-2026); sucesor de Triton para LLM workloads.",
          "tags": [
            "llm-serving",
            "kv-cache",
            "pd-disaggregation",
            "distributed-inference",
            "kv-router",
            "rust",
            "blackwell",
            "sla-autoscaling",
            "nixl",
            "reasoning-models"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mooncake",
          "name": "Mooncake (KVCache-centric Architecture)",
          "type_es": "Paper",
          "subtheme": "inferencia",
          "year": 2025,
          "authority": "Moonshot AI, Tsinghua University (FAST '25 Best Paper)",
          "url": "https://www.usenix.org/conference/fast25/presentation/qin",
          "url_label": "USENIX FAST '25",
          "description_es": "Arquitectura desagregada centrada en KVCache: separa clusters prefill y decode y agrega CPU/DRAM/SSD/NIC infrautilizadas en un pool global de KVCache. Sirve a Kimi (Moonshot AI) procesando >100B tokens/día. Best Paper FAST 2025; integrada en vLLM, SGLang, TensorRT-LLM y LMDeploy vía Transfer Engine y Mooncake Store.",
          "tags": [
            "kv-cache",
            "pd-disaggregation",
            "paper",
            "fast-2025",
            "scheduler",
            "kimi",
            "transfer-engine",
            "distributed-cache",
            "slo-aware",
            "best-paper"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "lmcache",
          "name": "LMCache",
          "type_es": "Herramienta",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "LMCache Lab (UChicago) + comunidad",
          "url": "https://github.com/LMCache/LMCache",
          "url_label": "GitHub LMCache/LMCache",
          "description_es": "Capa de KV-cache jerárquica (GPU/CPU/disco/S3, NIXL-compatible) que reutiliza KV de cualquier texto repetido (no solo prefijos) entre instancias de vLLM, SGLang y NVIDIA Dynamo. 3-10× menor TTFT y hasta 15× throughput en RAG y multi-turno; v0.4.4 publicada 22-abr-2026; integrada en Dynamo 1.0 desde marzo 2026.",
          "tags": [
            "kv-cache",
            "offloading",
            "prefix-cache",
            "vllm",
            "sglang",
            "dynamo",
            "nixl",
            "rag",
            "multi-tier-storage",
            "ttft"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "flashinfer",
          "name": "FlashInfer",
          "type_es": "Framework",
          "subtheme": "inferencia",
          "year": 2025,
          "authority": "U. Washington, CMU, NVIDIA (MLSys '25 Best Paper)",
          "url": "https://arxiv.org/abs/2501.01005",
          "url_label": "arXiv 2501.01005",
          "description_es": "Librería de kernels GPU para serving de LLMs con atención block-sparse/composable, plantilla de atención customizable vía JIT, scheduling load-balanced compatible con CUDAGraph y soporte FP8/FP4 + MLA + MoE. Best Paper MLSys 2025; backend por defecto en vLLM, SGLang, TensorRT-LLM, MLC-LLM y TGI.",
          "tags": [
            "kernels",
            "attention",
            "gpu",
            "kv-cache",
            "mlsys-2025",
            "jit",
            "fp8",
            "fp4",
            "mla",
            "moe"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aegaeon",
          "name": "Aegaeon",
          "type_es": "Paper",
          "subtheme": "inferencia",
          "year": 2025,
          "authority": "Peking University, Alibaba Cloud (SOSP '25)",
          "url": "https://dl.acm.org/doi/10.1145/3731569.3764815",
          "url_label": "ACM SOSP '25",
          "description_es": "Sistema multi-modelo que realiza auto-scaling a nivel de token para hacer pooling efectivo de GPUs entre LLMs concurrentes; mitiga HOL blocking y sirve hasta 7 modelos por GPU. Desplegado en beta en el marketplace de Alibaba Cloud, redujo de 1.192 a 213 GPUs (~82% de ahorro).",
          "tags": [
            "gpu-pooling",
            "multi-model-serving",
            "sosp-2025",
            "autoscaling",
            "token-level",
            "alibaba-cloud",
            "marketplace",
            "hol-blocking",
            "llm-marketplace",
            "paper"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aibrix",
          "name": "AIBrix",
          "type_es": "Plataforma",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "ByteDance + UMich/UIUC/UW/Google/DaoCloud (vLLM project)",
          "url": "https://github.com/vllm-project/aibrix",
          "url_label": "GitHub vllm-project/aibrix",
          "description_es": "Control plane Kubernetes-nativo para vLLM con gestión de LoRA de alta densidad, autoscaler específico para LLMs, reuse cross-engine de KV-cache, detección de fallos GPU y serving heterogéneo. White paper arXiv 2504.03648; v0.6 publicada marzo 2026.",
          "tags": [
            "kubernetes",
            "vllm",
            "control-plane",
            "lora",
            "autoscaler",
            "kv-cache",
            "bytedance",
            "heterogeneous-serving",
            "gpu-failure",
            "cost-optimization"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "xgrammar",
          "name": "XGrammar / XGrammar-2",
          "type_es": "Framework",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "MLC AI (CMU/UW) + comunidad",
          "url": "https://github.com/mlc-ai/xgrammar",
          "url_label": "GitHub mlc-ai/xgrammar",
          "description_es": "Motor de generación estructurada (JSON, regex, CFG) basado en constrained decoding con near-zero overhead (<40µs/token); backend por defecto de vLLM, SGLang, TensorRT-LLM, MLC-LLM, MAX y OpenVINO GenAI. XGrammar-2 (2026) añade TagDispatch para tareas agentivas dinámicas; v0.2.0 el 1-may-2026; hasta 80× más throughput vs Outlines/lm-format-enforcer.",
          "tags": [
            "structured-output",
            "constrained-decoding",
            "json",
            "grammar",
            "agentic",
            "vllm",
            "sglang",
            "tensorrt-llm",
            "jit",
            "default-backend"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "instructor-lib",
          "name": "Instructor (Jason Liu)",
          "type_es": "Herramienta",
          "subtheme": "inferencia",
          "year": 2026,
          "authority": "Jason Liu / community",
          "url": "https://python.useinstructor.com/",
          "url_label": "Instructor docs",
          "description_es": "Librería Python multi-provider (15+: OpenAI, Anthropic, Gemini, Cohere, Mistral, Ollama) para outputs estructurados con Pydantic + retries automáticos. Inspiró parcialmente la API JSON-Schema de OpenAI; default para desarrollo rápido multi-provider, complementa APIs nativas.",
          "tags": [
            "structured-output",
            "pydantic",
            "multi-provider",
            "library",
            "retries"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "braintrust",
          "name": "Braintrust",
          "type_es": "Plataforma",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "Braintrust Data, Inc.",
          "url": "https://www.braintrust.dev/",
          "url_label": "Braintrust",
          "description_es": "Plataforma comercial unificada de observabilidad, datasets y evals para LLM/agentes con CI quality gates: cualquier traza de producción se convierte en test case con un click y los scores bloquean PRs en GitHub. Incluye Loop (auto-mejora de prompts/scorers) y MCP server para IDE.",
          "tags": [
            "llm-eval",
            "observability",
            "ci-cd-quality-gates",
            "prompt-management",
            "llm-as-judge",
            "datasets",
            "mcp-server",
            "soc2",
            "enterprise"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "comet-opik",
          "name": "Comet Opik",
          "type_es": "Plataforma",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "Comet ML",
          "url": "https://github.com/comet-ml/opik",
          "url_label": "Opik OSS",
          "description_es": "Plataforma open-source de Comet para tracing, evaluación y monitorización de LLMs/RAG/agentes; procesa >40M trazas/día e incluye optimizador automático de prompts. En 2026 añade un plugin nativo para agent frameworks (opik-openclaw) y Claude Sonnet 4.6 como modelo por defecto.",
          "tags": [
            "open-source",
            "llm-observability",
            "tracing",
            "llm-as-judge",
            "prompt-optimization",
            "agent-monitoring",
            "comet"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "galileo-luna",
          "name": "Galileo Luna-2 (Evaluation Foundation Models)",
          "type_es": "Suite-de-evaluación",
          "subtheme": "evaluacion",
          "year": 2025,
          "authority": "Galileo (en adquisición por Cisco, abr 2026)",
          "url": "https://docs.galileo.ai/concepts/luna/luna",
          "url_label": "Luna-2 docs",
          "description_es": "Familia de evaluadores fine-tuned sobre Llama 3B/8B optimizados para juicio LLM en producción: ~152ms latencia y $0.02 por 1M tokens, con 0.95 accuracy en hallucination detection. Diseñados para correr en línea sobre cada respuesta sin penalizar coste.",
          "tags": [
            "llm-as-judge",
            "hallucination-detection",
            "evaluation-foundation-model",
            "low-latency",
            "agent-eval",
            "cisco"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "patronus-lynx",
          "name": "Patronus Lynx",
          "type_es": "Herramienta",
          "subtheme": "evaluacion",
          "year": 2024,
          "authority": "Patronus AI",
          "url": "https://www.patronus.ai/blog/lynx-state-of-the-art-open-source-hallucination-detection-model",
          "url_label": "Lynx OSS",
          "description_es": "Modelo open-source (8B/70B) especializado en detección de alucinaciones que supera a GPT-4o/Claude-3-Sonnet en HaluBench y PubMedQA, con razonamiento explícito sobre cada juicio. Disponible en HF y empaquetado por NVIDIA NIM.",
          "tags": [
            "hallucination-detection",
            "open-source",
            "evaluator-model",
            "halubench",
            "rag-eval",
            "patronus",
            "real-time"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "prometheus-2",
          "name": "Prometheus 2 (KAIST)",
          "type_es": "Herramienta",
          "subtheme": "evaluacion",
          "year": 2024,
          "authority": "KAIST + LG AI Research",
          "url": "https://github.com/prometheus-eval/prometheus-eval",
          "url_label": "prometheus-eval",
          "description_es": "Modelo evaluador open-source (7B y 8x7B) que soporta direct assessment y pairwise ranking con criterios definidos por usuario; 72-85% acuerdo con jueces humanos en MT-Bench, HHH y Auto-J. Referencia académica para LLM-as-judge reproducible.",
          "tags": [
            "llm-as-judge",
            "open-source",
            "evaluator-model",
            "kaist",
            "pairwise-ranking",
            "direct-assessment",
            "calibration"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "promptfoo",
          "name": "Promptfoo",
          "type_es": "Suite-de-evaluación",
          "subtheme": "evaluacion",
          "year": 2026,
          "authority": "Promptfoo, Inc. / OpenAI",
          "url": "https://github.com/promptfoo/promptfoo",
          "url_label": "promptfoo",
          "description_es": "CLI/librería MIT para evals declarativas y red-teaming de LLMs y agentes (prompt injection, jailbreaks, plugins adversarios) con integración CI/CD. Tras su adquisición por OpenAI sigue siendo OSS y la usan internamente OpenAI y Anthropic; estándar de facto.",
          "tags": [
            "eval",
            "red-team",
            "prompt-injection",
            "ci-cd",
            "open-source",
            "openai",
            "security",
            "declarative"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "hle-benchmark",
          "name": "Humanity's Last Exam (HLE)",
          "type_es": "Benchmark",
          "subtheme": "evaluacion",
          "year": 2025,
          "authority": "Center for AI Safety + Scale AI",
          "url": "https://agi.safe.ai/",
          "url_label": "HLE",
          "description_es": "Benchmark multimodal de 2.500 preguntas en la frontera del conocimiento humano (matemáticas, humanidades, ciencias) diseñado como último benchmark académico cerrado. En mayo 2026 los modelos frontier están en 44-58%, lejos de saturación; reemplaza a MMLU/MMLU-Pro saturados.",
          "tags": [
            "benchmark",
            "frontier-eval",
            "multimodal",
            "scale-ai",
            "cais",
            "knowledge",
            "no-saturation"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "anthropic-prompt-cache-1h",
          "name": "Anthropic Extended Prompt Cache (1h TTL)",
          "type_es": "Especificación",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Anthropic",
          "url": "https://platform.claude.com/docs/en/build-with-claude/prompt-caching",
          "url_label": "Anthropic prompt caching",
          "description_es": "API de prompt caching de Anthropic con dos TTLs: efímero 5m (write 1.25× input, read 0.10×) y extendido 1h GA (write 2.0× input, read 0.10×). Soportado en Claude Sonnet/Haiku/Opus 4.5+ y replicado en Bedrock y Vertex AI desde enero 2026; breakeven con 2 hits.",
          "tags": [
            "prompt-caching",
            "anthropic",
            "cost-control",
            "ttl",
            "claude",
            "bedrock",
            "vertex-ai",
            "latency"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "contextual-retrieval",
          "name": "Contextual Retrieval (Anthropic)",
          "type_es": "Patrón",
          "subtheme": "rag",
          "year": 2024,
          "authority": "Anthropic",
          "url": "https://www.anthropic.com/news/contextual-retrieval",
          "url_label": "Anthropic",
          "description_es": "Patrón que antepone a cada chunk un contexto explicativo (50-100 tokens) generado por LLM antes de calcular los embeddings y de indexar con BM25. Reduce los fallos de retrieval un 49% por sí solo y un 67% combinado con reranker. Económicamente viable gracias al prompt caching de Claude. Patrón estándar de la industria en 2026.",
          "tags": [
            "pattern",
            "rag",
            "chunking",
            "hybrid",
            "prompt-caching",
            "anthropic"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "graphrag",
          "name": "GraphRAG (Microsoft) + variantes (HippoRAG, PathRAG)",
          "type_es": "Patrón",
          "subtheme": "rag",
          "year": 2024,
          "authority": "Microsoft Research",
          "url": "https://github.com/microsoft/graphrag",
          "url_label": "microsoft/graphrag",
          "description_es": "Construye un knowledge graph del corpus más community summaries jerárquicas, y los usa para enriquecer el prompt en queries globales/multi-hop. Microsoft reporta 86% accuracy vs 32% del baseline RAG. HippoRAG (NeurIPS 2024) lo abarata 10-30× para multi-hop; PathRAG recorta el contexto un 44%.",
          "tags": [
            "pattern",
            "graph",
            "multi-hop",
            "knowledge-graph",
            "enterprise",
            "microsoft"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "agentic-rag",
          "name": "Agentic RAG (Self-RAG / CRAG / LlamaIndex AgentRetriever)",
          "type_es": "Patrón",
          "subtheme": "rag",
          "year": 2025,
          "authority": "LlamaIndex / LangGraph / academia",
          "url": "https://www.llamaindex.ai/blog/rag-is-dead-long-live-agentic-retrieval",
          "url_label": "LlamaIndex",
          "description_es": "Loop de razonamiento alrededor del retrieval: el agente decide si tiene info suficiente, evalúa relevancia (CRAG grading), reformula query y vuelve a buscar. Cubre Self-RAG, CRAG y patrones multi-tool en LangGraph. Default en stacks LlamaIndex/LangGraph para queries críticas.",
          "tags": [
            "pattern",
            "agent",
            "multi-step",
            "self-reflection",
            "langgraph",
            "llamaindex"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "milvus-2-5",
          "name": "Milvus 2.5 (Sparse-BM25 + hybrid)",
          "type_es": "Plataforma",
          "subtheme": "rag",
          "year": 2026,
          "authority": "Zilliz / LF AI&Data Foundation",
          "url": "https://milvus.io/blog/introduce-milvus-2-5-full-text-search-powerful-metadata-filtering-and-more.md",
          "url_label": "Milvus 2.5",
          "description_es": "Vector DB a escala de miles de millones de vectores con full-text Sparse-BM25 nativo (~30× más rápido que stacks separados), hybrid retrieval y metadata filtering avanzado. Líder a partir de >100M vectores en producción; elimina la necesidad de un Elastic/OpenSearch en paralelo.",
          "tags": [
            "vector-db",
            "billion-scale",
            "hybrid",
            "bm25",
            "distributed"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "lancedb",
          "name": "LanceDB (formato Lance)",
          "type_es": "Plataforma",
          "subtheme": "rag",
          "year": 2026,
          "authority": "LanceDB Inc. (YC)",
          "url": "https://lancedb.com/",
          "url_label": "lancedb.com",
          "description_es": "Lakehouse multimodal embedded y serverless construido sobre el formato columnar Lance. Une vector search, full-text search y SQL (DuckDB-native en 2026) en una sola tabla con datos multimodales (texto, imagen, audio, video). Cloud GA serverless con pricing usage-based.",
          "tags": [
            "vector-db",
            "multimodal",
            "lakehouse",
            "embedded",
            "serverless"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "turbopuffer",
          "name": "Turbopuffer",
          "type_es": "Plataforma",
          "subtheme": "rag",
          "year": 2026,
          "authority": "Turbopuffer Inc.",
          "url": "https://turbopuffer.com/",
          "url_label": "turbopuffer.com",
          "description_es": "Vector + BM25 + hybrid search serverless construido sobre object storage (S3). p50 sub-10ms en datos cacheados, 3.5T+ documentos en producción, 10M writes/s, 25k qps. ~10× más barato que las alternativas. Clientes destacados: Cursor, Notion, Linear.",
          "tags": [
            "vector-db",
            "serverless",
            "object-storage",
            "hybrid",
            "bm25"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cohere-embed-v4",
          "name": "Cohere Embed v4",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2025,
          "authority": "Cohere",
          "url": "https://docs.cohere.com/changelog/embed-multimodal-v4",
          "url_label": "Cohere Embed v4",
          "description_es": "Modelo de embeddings multimodal (texto + imagen unificados) con ventana 128k tokens (~200 páginas), Matryoshka (256/512/1024/1536 dims), MTEB ~65, líder en text-to-mixed-modality retrieval para PDFs corporativos. GA Cohere Platform, AWS Bedrock/SageMaker, Azure AI Foundry.",
          "tags": [
            "embedding",
            "multimodal",
            "enterprise",
            "matryoshka",
            "long-context",
            "cohere"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "voyage-3-large",
          "name": "Voyage-3-large (Voyage AI / MongoDB)",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2025,
          "authority": "Voyage AI (acquired by MongoDB)",
          "url": "https://blog.voyageai.com/2025/01/07/voyage-3-large/",
          "url_label": "Voyage blog",
          "description_es": "Embedding general-purpose con la mejor calidad de retrieval medida en 100 datasets (8 dominios incluyendo legal, finanzas, código). 32k context, dimensiones Matryoshka (256/512/1024/2048), int8/binary nativos. Supera a OpenAI v3-large por ~9.7%.",
          "tags": [
            "embedding",
            "retrieval",
            "code",
            "legal",
            "finance",
            "matryoshka",
            "mongodb"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "jina-embeddings-v4",
          "name": "Jina Embeddings v4",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2025,
          "authority": "Jina AI",
          "url": "https://jina.ai/models/jina-embeddings-v4/",
          "url_label": "Jina v4",
          "description_es": "Embedding universal multimodal y multilingüe construido sobre Qwen2.5-VL-3B. Soporta single-vector (2048-d truncable a 128) y multi-vector late-interaction (128-d/token estilo ColBERT). Tres LoRA adapters task-specific (retrieval, matching, code). Open weights.",
          "tags": [
            "embedding",
            "multimodal",
            "late-interaction",
            "multilingual",
            "open-weights",
            "jina"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "bge-m3",
          "name": "BGE-M3 (BAAI)",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2024,
          "authority": "BAAI (Beijing Academy of AI)",
          "url": "https://huggingface.co/BAAI/bge-m3",
          "url_label": "HF BGE-M3",
          "description_es": "Único embedding open-weights que produce simultáneamente representaciones dense, sparse (lexical) y multi-vector ColBERT-style desde un solo modelo. Soporta 100+ idiomas y hasta 8192 tokens. MTEB ~63; default OSS para hybrid retrieval self-hosted.",
          "tags": [
            "embedding",
            "open-weights",
            "multilingual",
            "hybrid",
            "long-context",
            "baai"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cohere-rerank-3-5",
          "name": "Cohere Rerank 3.5",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2024,
          "authority": "Cohere",
          "url": "https://docs.cohere.com/docs/rerank-overview",
          "url_label": "Cohere Rerank docs",
          "description_es": "Reranker cross-encoder multilingüe con 4096 tokens de contexto; soporta JSON y datos semi-estructurados. Latencia ~600ms, el más rápido del top-tier. Acompaña a Embed v4 en el stack Cohere.",
          "tags": [
            "reranker",
            "cross-encoder",
            "multilingual",
            "low-latency",
            "cohere"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "colpali",
          "name": "ColPali / ColQwen2 (visual document retrieval)",
          "type_es": "Modelo",
          "subtheme": "rag",
          "year": 2025,
          "authority": "Illuin Tech / academia (Faysse et al., ICLR 2025)",
          "url": "https://github.com/illuin-tech/colpali",
          "url_label": "GitHub ColPali",
          "description_es": "Familia de Vision-Language Models (PaliGemma, Qwen2-VL) que producen embeddings multi-vector por patch de página, indexados con late-interaction tipo ColBERT. Elimina OCR + layout parsing: el modelo mira la página directamente. ColQwen2 supera a ColPali en +5.3 nDCG@5.",
          "tags": [
            "retrieval",
            "multimodal",
            "late-interaction",
            "document-ai",
            "vlm",
            "iclr-2025"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "unsloth",
          "name": "Unsloth",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2026,
          "authority": "Unsloth AI (Daniel & Michael Han)",
          "url": "https://github.com/unslothai/unsloth",
          "url_label": "GitHub",
          "description_es": "Framework OSS de fine-tuning que acelera 2-5× el entrenamiento de LLMs (Llama, Qwen, Gemma, DeepSeek, Phi, gpt-oss) con ~80% menos VRAM mediante kernels Triton manuales, Flash Attention y soporte 4-bit/FP8. Edición 2026 añade FP8 GRPO en GPU consumer, MoE rápido (Qwen3-30B-A3B en 17.5 GB) y UI no-code.",
          "tags": [
            "fine-tuning",
            "peft",
            "lora",
            "qlora",
            "grpo",
            "triton",
            "single-gpu",
            "unsloth"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "axolotl",
          "name": "Axolotl",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "OpenAccess AI Collective",
          "url": "https://github.com/axolotl-ai-cloud/axolotl",
          "url_label": "GitHub",
          "description_es": "Framework declarativo (YAML) para fine-tuning de LLMs con soporte amplio: SFT, LoRA/QLoRA/DoRA, DPO/KTO/ORPO, GRPO, QAT, sequence parallelism para contexto largo, reward modeling. v0.8 (2025) añadió GRPO y RL pipelines completas. Opción industrial preferida fuera de single-GPU.",
          "tags": [
            "fine-tuning",
            "yaml",
            "multi-gpu",
            "dpo",
            "grpo",
            "qat"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llamafactory",
          "name": "LLaMA-Factory",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "hiyouga (ACL 2024)",
          "url": "https://github.com/hiyouga/LLaMA-Factory",
          "url_label": "GitHub",
          "description_es": "Framework unificado para fine-tuning de 100+ LLMs y VLMs con CLI, WebUI no-code y backend Unsloth opcional para 2-5× speedup. Soporta SFT, DPO, KTO, ORPO, PPO, GRPO, LoRA family, QAT y full-parameter. Camino click-to-finetune más popular en China.",
          "tags": [
            "fine-tuning",
            "webui",
            "no-code",
            "lora",
            "dpo",
            "grpo"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ms-swift",
          "name": "MS-Swift (ModelScope SWIFT)",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "Alibaba ModelScope (AAAI 2025)",
          "url": "https://github.com/modelscope/ms-swift",
          "url_label": "GitHub",
          "description_es": "Framework de fine-tuning a escala que cubre 600+ LLMs y 300+ MLLMs (Qwen3, DeepSeek-R1, Llama 4, GLM, InternVL) con CPT/SFT/DPO/GRPO/PPO, Megatron parallelism, soporte MoE y familia GRPO completa. Mejor para multimodal y MoE a gran escala.",
          "tags": [
            "fine-tuning",
            "multimodal",
            "megatron",
            "grpo",
            "moe",
            "mllm",
            "alibaba"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "trl",
          "name": "Hugging Face TRL",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2026,
          "authority": "Hugging Face",
          "url": "https://huggingface.co/docs/trl",
          "url_label": "Docs",
          "description_es": "Librería de referencia para post-training de LLMs: SFT, DPO, KTO, ORPO, IPO, GRPO, reward modeling, online DPO, PPO. Integra Liger Kernel para GRPO eficiente, FSDP/PEFT y entornos NeMo Gym para rollouts multi-turn. Núcleo común que casi todos los frameworks acaban llamando.",
          "tags": [
            "post-training",
            "dpo",
            "grpo",
            "sft",
            "rlhf",
            "library",
            "huggingface"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nemo-rl",
          "name": "NVIDIA NeMo RL",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "NVIDIA",
          "url": "https://github.com/NVIDIA-NeMo/RL",
          "url_label": "GitHub",
          "description_es": "Sucesor escalable de NeMo-Aligner para post-training a escala de cluster. Construido sobre Ray, Megatron Core y vLLM; integra Hugging Face e implementa GRPO, DAPO, DPO y RLHF multimodal con rollouts distribuidos. NVIDIA marcó deprecation de NeMo-Aligner en favor de NeMo RL en 2025.",
          "tags": [
            "rlhf",
            "grpo",
            "dapo",
            "megatron",
            "ray",
            "multimodal",
            "supersedes-nemo-aligner"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "openrlhf",
          "name": "OpenRLHF",
          "type_es": "Framework",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "OpenLLMAI (EMNLP 2025 Demo)",
          "url": "https://github.com/OpenRLHF/OpenRLHF",
          "url_label": "GitHub",
          "description_es": "Pipeline distribuido de RLHF de alto rendimiento basado en Ray + DeepSpeed ZeRO-3 + vLLM, optimizado para entrenar modelos 70B+ con PPO, GRPO, DPO y rejection sampling. Competencia directa de NeMo RL.",
          "tags": [
            "rlhf",
            "ppo",
            "grpo",
            "ray",
            "distributed",
            "70b"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "grpo",
          "name": "GRPO (Group Relative Policy Optimization)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "DeepSeek (DeepSeekMath, DeepSeek-R1)",
          "url": "https://arxiv.org/abs/2402.03300",
          "url_label": "DeepSeekMath paper",
          "description_es": "Algoritmo de RL que elimina el critic/value model de PPO y normaliza la ventaja relativa dentro de un grupo de respuestas muestreadas, recortando ~50% cómputo y memoria. Adoptado masivamente tras DeepSeek-R1 (Nature 2025) como método dominante para reasoning models con verifiable rewards.",
          "tags": [
            "rl",
            "reasoning",
            "ppo-alternative",
            "deepseek",
            "rlvr",
            "grpo"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dpo",
          "name": "DPO (Direct Preference Optimization)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2023,
          "authority": "Rafailov et al. (Stanford)",
          "url": "https://arxiv.org/abs/2305.18290",
          "url_label": "Paper",
          "description_es": "Optimización directa sobre pares de preferencias chosen/rejected sin reward model ni loop de RL. Reemplazó PPO/RLHF como default de alignment para la mayoría de modelos open-weight 2024-2026; estudios controlados muestran que SimPO/KTO/ORPO no la superan estadísticamente.",
          "tags": [
            "post-training",
            "alignment",
            "preference",
            "no-rl",
            "stanford"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dapo",
          "name": "DAPO (Decoupled Clip and Dynamic sAmpling Policy Optimization)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "ByteDance Seed + Tsinghua",
          "url": "https://dapo-sia.github.io/",
          "url_label": "Project page",
          "description_es": "Extensión de GRPO con cuatro técnicas (Clip-Higher, Dynamic Sampling, Token-Level Policy Gradient Loss, Overlong Reward Shaping) para estabilizar RL en cadenas de razonamiento largas. Alcanza 50 puntos en AIME 2024 con ~50% de los pasos de entrenamiento vs DeepSeek-R1-Zero-Qwen-32B; integrado en NeMo RL y OpenRLHF.",
          "tags": [
            "rl",
            "reasoning",
            "grpo-variant",
            "long-cot",
            "bytedance"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "rlvr",
          "name": "RLVR (Reinforcement Learning with Verifiable Rewards)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "AI2 / DeepSeek (popularizado vía Tülu 3 y R1)",
          "url": "https://arxiv.org/abs/2411.15124",
          "url_label": "Tülu 3 paper",
          "description_es": "Paradigma de post-training donde la señal de reward proviene de verificadores programáticos (math checkers, unit tests, formato) en lugar de preferencias humanas. Demostrado por DeepSeek-R1 como suficiente para inducir capacidades emergentes de razonamiento con RL puro.",
          "tags": [
            "rl",
            "reasoning",
            "verifiable",
            "no-human-feedback",
            "ai2",
            "deepseek"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "simpo",
          "name": "SimPO",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "Princeton + Meta",
          "url": "https://arxiv.org/abs/2405.14734",
          "url_label": "Paper",
          "description_es": "Variante reference-free de DPO que usa la log-probability promedio como reward implícito y añade un margen objetivo de reward, eliminando el reference model y reduciendo memoria. Estudios controlados de 2026 muestran resultados mixtos frente a DPO vainilla.",
          "tags": [
            "post-training",
            "preference",
            "reference-free"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "kto",
          "name": "KTO (Kahneman-Tversky Optimization)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "ContextualAI (Ethayarajh et al.)",
          "url": "https://arxiv.org/abs/2402.01306",
          "url_label": "Paper",
          "description_es": "Post-training desde feedback unario (thumbs-up/down) usando una utilidad prospect-theoretic, evitando comparaciones por pares. Encaja con datos de producción (logs binarios) en lugar de pairwise preferences caras.",
          "tags": [
            "post-training",
            "alignment",
            "unary-feedback",
            "kahneman-tversky"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "orpo",
          "name": "ORPO (Odds-Ratio Preference Optimization)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "KAIST (Hong et al.)",
          "url": "https://arxiv.org/abs/2403.07691",
          "url_label": "Paper",
          "description_es": "Método single-stage reference-free que integra preference learning en SFT añadiendo un odds-ratio penalty al NLL loss, fine-tuneando una sola vez. Combina SFT y alignment en un solo paso.",
          "tags": [
            "post-training",
            "single-stage",
            "reference-free",
            "kaist"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "qlora",
          "name": "QLoRA",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2023,
          "authority": "Dettmers et al. (UW)",
          "url": "https://arxiv.org/abs/2305.14314",
          "url_label": "Paper",
          "description_es": "Fine-tuning con LoRA sobre un base model cuantizado a 4-bit (NF4) usando double quantization y paged optimizers. Permitió fine-tunear modelos 65B en una sola GPU de 48 GB. Sigue siendo el preset por defecto de Unsloth/Axolotl/LLaMA-Factory.",
          "tags": [
            "peft",
            "lora",
            "4-bit",
            "quantization",
            "nf4"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "dora",
          "name": "DoRA (Weight-Decomposed Low-Rank Adaptation)",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2024,
          "authority": "NVlabs (ICML 2024 Oral)",
          "url": "https://github.com/NVlabs/DoRA",
          "url_label": "GitHub",
          "description_es": "Descompone el peso pre-entrenado en magnitud y dirección, aplicando LoRA solo a la dirección. Mejora capacidad y estabilidad sobre LoRA, mantiene calidad incluso a rank 8 y supera a LoRA rank 32 en muchos benchmarks. Sin overhead de inferencia. QDoRA emerge como nuevo default PEFT 2025.",
          "tags": [
            "peft",
            "lora-variant",
            "weight-decomposition",
            "icml-2024",
            "nvlabs"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "liger-kernel",
          "name": "Liger Kernel",
          "type_es": "Herramienta",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "LinkedIn",
          "url": "https://github.com/linkedin/Liger-Kernel",
          "url_label": "GitHub",
          "description_es": "Colección de kernels Triton (RMSNorm, RoPE, SwiGLU, FusedLinearCrossEntropy, GRPO loss) que aceleran ~20% el throughput y reducen ~60% el peak memory en training. Integrado en TRL para GRPO eficiente. Capa drop-in estándar; cualquier framework moderno la importa.",
          "tags": [
            "triton",
            "kernels",
            "training",
            "grpo",
            "memory",
            "linkedin"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "lorax",
          "name": "LoRAX (Predibase)",
          "type_es": "Plataforma",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "Predibase",
          "url": "https://github.com/predibase/lorax",
          "url_label": "GitHub",
          "description_es": "Servidor de inferencia multi-LoRA OSS que sirve miles de adapters fine-tuneados sobre un único base model en una GPU, con dynamic adapter loading, SGMV kernels y heterogeneous batching. Pionero del patrón multi-tenant LoRA.",
          "tags": [
            "serving",
            "multi-lora",
            "inference",
            "adapters",
            "sgmv"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "vllm-multi-lora",
          "name": "vLLM Multi-LoRA",
          "type_es": "Patrón",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "vLLM project",
          "url": "https://docs.vllm.ai/en/latest/features/lora/",
          "url_label": "Docs",
          "description_es": "Soporte nativo en vLLM para servir múltiples adapters LoRA sobre un mismo base model, integrando los kernels CUDA SGMV de Punica. Selección de adapter por request con overhead mínimo y compatibilidad con la API OpenAI-compatible. Estándar de facto en producción en 2026.",
          "tags": [
            "serving",
            "vllm",
            "lora",
            "sgmv",
            "punica"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llama-4",
          "name": "Llama 4 (Scout, Maverick)",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Meta AI",
          "url": "https://ai.meta.com/blog/llama-4-multimodal-intelligence/",
          "url_label": "Anuncio Meta",
          "description_es": "Familia open-weight nativamente multimodal con arquitectura MoE: Scout (~17B activos / 109B totales) y Maverick (~17B activos / 400B totales), liberados en abril 2025 bajo la Llama Community License. Behemoth (2T) NO liberado, pospuesto indefinidamente; posible cancelación.",
          "tags": [
            "open-weights",
            "moe",
            "multimodal",
            "meta",
            "llama-community-license"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "qwen3",
          "name": "Qwen 3",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Alibaba Qwen Team",
          "url": "https://github.com/QwenLM/Qwen3",
          "url_label": "GitHub",
          "description_es": "Familia open-weight Apache 2.0 liberada el 28-abr-2025 con dense (0.6B-32B) y MoE (30B-A3B, 235B-A22B). Soporta razonamiento híbrido (thinking on/off), 119 idiomas y 36T tokens de pre-training. Probablemente la familia open-weight más usada y mejor licenciada a mayo 2026.",
          "tags": [
            "open-weights",
            "apache-2",
            "hybrid-reasoning",
            "moe",
            "multilingual",
            "alibaba"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "deepseek-r1",
          "name": "DeepSeek R1 / V3.2",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "DeepSeek",
          "url": "https://huggingface.co/deepseek-ai/DeepSeek-V3.2",
          "url_label": "HF model card",
          "description_es": "Familia MoE open-weight (MIT) que catalizó el shift hacia RLVR+GRPO. R1 (ene 2025, Nature 2025) demostró razonamiento emergente con RL puro; V3.2 (1-dic-2025) introdujo Sparse Attention (DSA) para contexto largo barato. R2/V4 sin release pública verificada a mayo 2026.",
          "tags": [
            "open-weights",
            "mit",
            "moe",
            "reasoning",
            "sparse-attention",
            "grpo",
            "deepseek"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mistral-large-3",
          "name": "Mistral Large 3 / Medium 3",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Mistral AI",
          "url": "https://mistral.ai/news/mistral-3",
          "url_label": "Anuncio Mistral",
          "description_es": "Mistral Large 3 (2-dic-2025) es MoE granular con ~41B activos / 675B totales y contexto 256k. Medium 3 (mayo 2025) y Medium 3.5 (abril 2026, 128B dense, MIT modificada) cubren el segmento medio. Ministral 3 (3B/7B/14B) lanzado junto a Large 3.",
          "tags": [
            "open-weights",
            "moe",
            "mistral",
            "european"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "gemma-3",
          "name": "Gemma 3",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Google DeepMind",
          "url": "https://huggingface.co/blog/gemma3",
          "url_label": "Blog HF",
          "description_es": "Familia open-weight (1B/4B/12B/27B) liberada en marzo 2025; multimodal (vision via SigLIP en 4B+), 128k contexto, 140+ idiomas, variantes oficiales QAT 4-bit. El 27B alcanza ~1338 ELO en Chatbot Arena, líder entre open-weights tras DeepSeek-R1 en su release.",
          "tags": [
            "open-weights",
            "multimodal",
            "qat",
            "google",
            "gemma"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "phi-4-reasoning",
          "name": "Phi-4 / Phi-4-reasoning",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Microsoft Research",
          "url": "https://huggingface.co/microsoft/Phi-4-reasoning",
          "url_label": "HF model card",
          "description_es": "Phi-4 (14B dense, MIT) liberado a inicios de 2025 sobre 9.8T tokens curados+sintéticos. Phi-4-reasoning y Phi-4-reasoning-plus (mayo 2025) añaden RL para razonamiento y superan a modelos mucho mayores en MATH/MGSM. Phi-4-mini-reasoning (3.8B) para edge.",
          "tags": [
            "open-weights",
            "mit",
            "small-model",
            "reasoning",
            "synthetic-data",
            "microsoft"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "hermes-4",
          "name": "Hermes 4 (Nous Research)",
          "type_es": "Modelo",
          "subtheme": "models-open-weights",
          "year": 2025,
          "authority": "Nous Research",
          "url": "https://hermes4.nousresearch.com/",
          "url_label": "Sitio oficial",
          "description_es": "Familia open-weight (14B/70B/405B) basada en checkpoints Llama 3.1, liberada el 26-ago-2025. Introduce hybrid reasoning con tags <think>...</think>, ~5M muestras / 60B tokens de post-training. El 405B alcanza 96.3% en MATH-500 y top en RefusalBench (modo poco-censurado).",
          "tags": [
            "open-weights",
            "hybrid-reasoning",
            "uncensored",
            "community",
            "nous"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "distilabel",
          "name": "Distilabel",
          "type_es": "Herramienta",
          "subtheme": "training-frameworks-and-patterns",
          "year": 2025,
          "authority": "Argilla (Hugging Face)",
          "url": "https://github.com/argilla-io/distilabel",
          "url_label": "GitHub",
          "description_es": "Framework de pipelines para generación de datos sintéticos y AI feedback con LLMs, basado en métodos verificados de papers (UltraFeedback, Magpie, Self-Instruct). Conecta steps/tasks en un DAG; soporta multimodal y backends locales (mlx-lm). Estándar OSS para SFT/DPO sintéticos.",
          "tags": [
            "synthetic-data",
            "ai-feedback",
            "pipeline",
            "argilla",
            "huggingface"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "gpt-realtime",
          "name": "OpenAI Realtime API (gpt-realtime)",
          "type_es": "API",
          "subtheme": "voice-agents",
          "year": 2025,
          "authority": "OpenAI",
          "url": "https://openai.com/index/introducing-gpt-realtime/",
          "url_label": "Anuncio GA gpt-realtime",
          "description_es": "API speech-to-speech de un solo modelo que sustituye la cadena STT→LLM→TTS. GA 28-ago-2025 con gpt-realtime; soporta entrada de imagen, MCP remoto, llamadas SIP y nuevas voces (Cedar, Marin). Precio $32/$64 por 1M tokens audio in/out (-20% vs preview).",
          "tags": [
            "voice",
            "realtime",
            "speech-to-speech",
            "mcp",
            "sip",
            "openai"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "gemini-live-api",
          "name": "Gemini Live API (Native Audio)",
          "type_es": "API",
          "subtheme": "voice-agents",
          "year": 2025,
          "authority": "Google DeepMind",
          "url": "https://ai.google.dev/gemini-api/docs/live-api",
          "url_label": "Gemini Live docs",
          "description_es": "API realtime audio-a-audio sobre Gemini 2.5 Flash Native Audio (y 3.1 Flash Live preview). 30 voces HD en 24 idiomas, comprensión emocional, video/screen streaming continuo. Native audio preview sept 2025; GA Vertex dic 2025.",
          "tags": [
            "voice",
            "realtime",
            "multimodal",
            "gemini",
            "vertex"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "livekit-agents",
          "name": "LiveKit Agents",
          "type_es": "Framework",
          "subtheme": "voice-agents",
          "year": 2026,
          "authority": "LiveKit",
          "url": "https://github.com/livekit/agents",
          "url_label": "Repo",
          "description_es": "Framework Python OSS para agentes de voz/video tiempo real. AgentSession unifica STT/LLM/TTS como componentes intercambiables; turn-detection multilingüe (13 idiomas, <25 ms CPU); soporte MCP nativo. v1.0 abril 2025; v1.5 abril 2026; usado por OpenAI Realtime demo y Vapi.",
          "tags": [
            "voice",
            "agent-runtime",
            "webrtc",
            "mcp",
            "framework",
            "oss"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "vapi",
          "name": "Vapi",
          "type_es": "Plataforma",
          "subtheme": "voice-agents",
          "year": 2026,
          "authority": "Vapi Labs",
          "url": "https://vapi.ai/",
          "url_label": "vapi.ai",
          "description_es": "Plataforma developer-first para voice agents que abstrae STT/LLM/TTS/turn-taking detrás de configuración; permite swap de proveedores sin reescribir. Telefonía, web y SIP integrados. Plataforma top-3 en un mercado de $47B (2025).",
          "tags": [
            "voice-agent",
            "platform",
            "telephony",
            "multi-vendor"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "retell-ai",
          "name": "Retell AI",
          "type_es": "Plataforma",
          "subtheme": "voice-agents",
          "year": 2025,
          "authority": "Retell AI",
          "url": "https://www.retellai.com/",
          "url_label": "retellai.com",
          "description_es": "Plataforma voice-agent enfocada en telefonía carrier-grade: SIP trunks, inbound routing, warm handoff a humanos, turn-taking de baja latencia. Líder en voice agents para call centers 2026.",
          "tags": [
            "voice-agent",
            "telephony",
            "contact-center",
            "sip"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cartesia-sonic",
          "name": "Cartesia Sonic-3",
          "type_es": "Modelo",
          "subtheme": "voice-agents",
          "year": 2025,
          "authority": "Cartesia",
          "url": "https://cartesia.ai/sonic",
          "url_label": "Cartesia Sonic-3",
          "description_es": "Modelo TTS state-space realtime, TTFA ~40 ms (récord), risas y emoción nativas, 15 idiomas. Optimizado para conversación bidireccional con voice agents; default en muchos stacks por baja latencia.",
          "tags": [
            "tts",
            "realtime",
            "voice",
            "ssm",
            "cartesia"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "elevenlabs-v3",
          "name": "ElevenLabs v3",
          "type_es": "Modelo",
          "subtheme": "voice-agents",
          "year": 2025,
          "authority": "ElevenLabs",
          "url": "https://elevenlabs.io/",
          "url_label": "elevenlabs.io",
          "description_es": "Modelo TTS expresivo (v3 ~300 ms, Flash v2.5 ~75 ms), 70+ idiomas, voice cloning. Líder en calidad/expresividad para audiobooks y agents premium; competidor directo de Cartesia.",
          "tags": [
            "tts",
            "voice-cloning",
            "multilingual",
            "elevenlabs"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pipecat",
          "name": "Pipecat",
          "type_es": "Framework",
          "subtheme": "voice-agents",
          "year": 2026,
          "authority": "Daily.co (Pipecat AI)",
          "url": "https://www.pipecat.ai/",
          "url_label": "pipecat.ai",
          "description_es": "Framework OSS Python para agentes de voz/multimodal. 40+ servicios como plugins; Pipecat Subagents para sistemas multi-agente distribuidos con bus de mensajes compartido. NVIDIA lo distribuye en build.nvidia.com; alternativa vendor-neutral a LiveKit Agents.",
          "tags": [
            "voice",
            "multimodal",
            "framework",
            "open-source",
            "subagents",
            "daily"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mistral-ocr",
          "name": "Mistral OCR",
          "type_es": "API",
          "subtheme": "document-ai",
          "year": 2025,
          "authority": "Mistral AI",
          "url": "https://mistral.ai/news/mistral-ocr",
          "url_label": "Mistral OCR",
          "description_es": "API VLM-based OCR optimizada para velocidad y precisión multilingüe, output markdown-first orientado a RAG. OCR 3 (mistral-ocr-2512, dic 2025) a $2 por 1.000 páginas (-50% en Batch). Disrumpe el OCR clásico; estándar de facto para document understanding en RAG empresarial.",
          "tags": [
            "document-ai",
            "ocr",
            "vlm",
            "multimodal",
            "rag",
            "mistral"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llamaparse",
          "name": "LlamaParse v2 (LlamaCloud)",
          "type_es": "Herramienta",
          "subtheme": "document-ai",
          "year": 2025,
          "authority": "LlamaIndex",
          "url": "https://www.llamaindex.ai/blog/introducing-llamaparse-v2-simpler-better-cheaper",
          "url_label": "LlamaParse v2",
          "description_es": "Servicio de parsing agentic de documentos (PDF, PPTX, XLSX) con tiers: Fast / Cost-effective (~3 cred/pág, <0.4¢) / Agentic (10) / Agentic Plus (45). Modo Agentic alcanza ~85% accuracy en ParseBench, líder entre APIs comerciales.",
          "tags": [
            "document-ai",
            "parsing",
            "agentic",
            "llamacloud",
            "llamaindex"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "docling",
          "name": "Docling (IBM)",
          "type_es": "Framework",
          "subtheme": "document-ai",
          "year": 2026,
          "authority": "IBM Research / LF AI&Data → AAIF",
          "url": "https://github.com/docling-project/docling",
          "url_label": "docling-project/docling",
          "description_es": "Pipeline OSS para document parsing (PDF, DOCX, PPTX, HTML, imágenes) con layout/table extraction. v2.72 (feb 2026): 97.9% precisión en tablas con Granite-Docling-258M VLM single-pass. OpenShift Operator con Red Hat. Donado a Linux Foundation Agentic AI Foundation 2026.",
          "tags": [
            "document-ai",
            "ocr",
            "rag",
            "ibm",
            "vlm",
            "oss",
            "aaif"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cerebras-inference",
          "name": "Cerebras Inference",
          "type_es": "Servicio",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "Cerebras Systems",
          "url": "https://www.cerebras.ai/inference",
          "url_label": "Cerebras Inference",
          "description_es": "Servicio de inferencia hosted sobre WSE-3 (4T transistores, 900K cores, 44 GB SRAM on-chip, 21 PB/s). Llama 3.1 70B a 450 t/s, 405B a 969 t/s, gpt-oss-120B a ~3.000 t/s. Tiers Free/Developer/Enterprise; precios desde $0.10/M tokens. Inferencia más rápida del mercado para reasoning.",
          "tags": [
            "inference",
            "wafer-scale",
            "hardware",
            "hosted",
            "ultra-low-latency",
            "cerebras"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "groq-lpu",
          "name": "Groq (LPU + GroqCloud)",
          "type_es": "Hardware",
          "subtheme": "serving-infrastructure",
          "year": 2024,
          "authority": "Groq",
          "url": "https://groq.com/",
          "url_label": "groq.com",
          "description_es": "Language Processing Unit (LPU) determinista, ~300 t/s en Llama 70B, hasta 1.200 t/s en modelos ligeros. GroqCloud serverless + GroqRack on-prem. Llama 4 Scout $0.11/$0.34 por M tokens. Junto con Cerebras define la frontera de inferencia rápida y barata 2026.",
          "tags": [
            "hardware",
            "inference",
            "lpu",
            "low-latency",
            "hosted",
            "groq"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "modal",
          "name": "Modal Labs",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Modal Labs",
          "url": "https://modal.com/",
          "url_label": "modal.com",
          "description_es": "Plataforma serverless GPU Python-native con sub-second cold starts, autoscaling, billing por segundo (A10G ~$0.000306/s). 2025: GPU memory snapshots (alpha) capturan estado VRAM completo para arranque instantáneo de modelos. Referencia para inferencia variable, batch y voice agents.",
          "tags": [
            "serverless-gpu",
            "inference",
            "python-native",
            "batch",
            "modal"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "baseten",
          "name": "Baseten",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Baseten",
          "url": "https://www.baseten.co/",
          "url_label": "baseten.co",
          "description_es": "Plataforma enterprise de inferencia y observabilidad para modelos custom y compound AI systems; runtime Truss configurable; despliegue self-hosted/hybrid en VPC del cliente para compliance. Series E $300M @ $5B (feb 2026).",
          "tags": [
            "hosted-inference",
            "enterprise",
            "vpc",
            "observability",
            "truss",
            "baseten"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "kserve",
          "name": "KServe (CNCF)",
          "type_es": "Framework",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "Kubeflow / CNCF Incubating",
          "url": "https://kserve.github.io/website/",
          "url_label": "KServe",
          "description_es": "Plataforma estándar CNCF para serving multi-framework en Kubernetes. v0.16 introduce LLMInferenceService con APIs OpenAI-compatible, streaming e integración nativa con runtimes LLM. Se combina con llm-d como capa de scheduling.",
          "tags": [
            "k8s-operator",
            "serving",
            "cncf",
            "llm",
            "kserve"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llm-d",
          "name": "llm-d",
          "type_es": "Plataforma",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "Red Hat, Google Cloud, IBM Research, CoreWeave, NVIDIA (CNCF Sandbox)",
          "url": "https://github.com/llm-d/llm-d",
          "url_label": "GitHub",
          "description_es": "Pila Kubernetes-nativa de inferencia distribuida construida sobre vLLM, con desagregación prefill/decode, enrutado cache-aware, autoscaling scale-to-zero y soporte multi-acelerador (NVIDIA, AMD, Intel XPU, Google TPU). Lanzada mayo 2025; donada a CNCF como sandbox project en KubeCon EU 2026 (24-mar-2026); v0.6 abril 2026.",
          "tags": [
            "llm-serving",
            "kubernetes",
            "distributed-inference",
            "pd-disaggregation",
            "vllm",
            "multi-accelerator",
            "cncf",
            "kv-routing",
            "lora-routing",
            "scale-to-zero"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "kaito",
          "name": "KAITO (Kubernetes AI Toolchain Operator)",
          "type_es": "Framework",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "Microsoft / kaito-project",
          "url": "https://github.com/kaito-project/kaito",
          "url_label": "GitHub",
          "description_es": "Operator suite que automatiza inferencia, fine-tuning y RAG engine en clusters Kubernetes (originalmente AKS, ahora multi-cloud). Workspace y RAGEngine CRDs. Estándar Microsoft/AKS para LLM workloads en K8s.",
          "tags": [
            "k8s-operator",
            "llm",
            "fine-tuning",
            "rag",
            "microsoft",
            "aks"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "vllm-production-stack",
          "name": "vLLM Production Stack",
          "type_es": "Framework",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "vllm-project + LMCache",
          "url": "https://github.com/vllm-project/production-stack",
          "url_label": "Repo",
          "description_es": "Stack de referencia Kubernetes-native sobre vLLM con prefix-aware routing, KV-cache sharing (LMCache), observabilidad (TTFT, TBT, throughput) y autoscaling. Helm charts + CRDs (Router, LoRA, Autoscale). 3-10× menos latencia y 2-5× más throughput vs vLLM solo.",
          "tags": [
            "k8s",
            "vllm",
            "helm",
            "lmcache",
            "autoscaling"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nvidia-blackwell-b200",
          "name": "NVIDIA B200 / GB200 NVL72 (Blackwell)",
          "type_es": "Hardware",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "NVIDIA",
          "url": "https://nvidianews.nvidia.com/news/nvidia-blackwell-platform-arrives-to-power-a-new-era-of-computing",
          "url_label": "Blackwell launch",
          "description_es": "GPU Blackwell B200 y rack liquid-cooled GB200 NVL72; volume production 5-feb-2026. Backlog ~3.6M unidades agotado hasta mid-2026. DGX B300 en envío tras GTC 2026. Plataforma dominante de entrenamiento e inferencia en mayo 2026; sustituye H100/H200 en nuevos clusters.",
          "tags": [
            "hardware",
            "gpu",
            "training",
            "inference",
            "nvidia",
            "blackwell"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nvidia-vera-rubin",
          "name": "NVIDIA Vera Rubin Platform",
          "type_es": "Hardware",
          "subtheme": "serving-infrastructure",
          "year": 2026,
          "authority": "NVIDIA",
          "url": "https://nvidianews.nvidia.com/news/nvidia-vera-rubin-platform",
          "url_label": "Vera Rubin platform",
          "description_es": "Plataforma Vera CPU + Rubin GPU + NVLink 6 + ConnectX-9 + BlueField-4 + Spectrum-6. 7 chips, 1.3M componentes, 10× perf/W vs Grace Blackwell. Anunciada GTC 16-mar-2026; producción declarada datacenter H2 2026. Roadmap visible mayo 2026, condiciona compras 2026-2027.",
          "tags": [
            "hardware",
            "gpu",
            "roadmap",
            "nvidia",
            "agentic",
            "vera-rubin"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "amd-mi350",
          "name": "AMD Instinct MI350 (CDNA 4)",
          "type_es": "Hardware",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "AMD",
          "url": "https://www.amd.com/en/products/accelerators/instinct/mi350.html",
          "url_label": "AMD Instinct MI350",
          "description_es": "CDNA 4 en 3 nm, hasta 288 GB HBM3E, soporte FP4/FP6, hasta 35× perf en inferencia vs MI300. Línea anual: MI325X (Q4 2024) → MI350 (2025) → MI400 (2026 CDNA Next, rack-level). Alternativa real a NVIDIA H200/B200 con ROCm madurando.",
          "tags": [
            "hardware",
            "gpu",
            "amd",
            "training",
            "inference",
            "cdna-4",
            "fp4"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tpu-ironwood",
          "name": "Google TPU v7x Ironwood",
          "type_es": "Hardware",
          "subtheme": "serving-infrastructure",
          "year": 2025,
          "authority": "Google Cloud",
          "url": "https://cloud.google.com/tpu",
          "url_label": "Google Cloud TPU",
          "description_es": "TPU de séptima generación; 4× en training e inference vs Trillium (v6e). Anthropic comprometió cientos de miles de TPUs en 2026 escalando a ~1M para 2027; referencia de adopción TPU. GA noviembre 2025.",
          "tags": [
            "hardware",
            "tpu",
            "training",
            "inference",
            "google",
            "ironwood"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "vllm",
          "type": "implementa",
          "to": "paged-attention-paper"
        },
        {
          "from": "langfuse",
          "type": "adopta",
          "to": "otel-genai"
        },
        {
          "from": "arize-phoenix",
          "type": "adopta",
          "to": "otel-genai"
        },
        {
          "from": "helicone",
          "type": "compite-con",
          "to": "litellm"
        },
        {
          "from": "litellm",
          "type": "exporta-a",
          "to": "langfuse"
        },
        {
          "from": "hybrid-rag",
          "type": "se-evalua-con",
          "to": "ragas"
        },
        {
          "from": "qdrant",
          "type": "sustrato-de",
          "to": "hybrid-rag"
        },
        {
          "from": "speculative-decoding",
          "type": "acelera",
          "to": "vllm"
        },
        {
          "from": "deepeval",
          "type": "integra",
          "to": "ragas"
        },
        {
          "from": "pydantic-logfire",
          "type": "adopta",
          "to": "otel-genai"
        },
        {
          "from": "pydantic-logfire",
          "type": "compite-con",
          "to": "langfuse"
        },
        {
          "from": "pydantic-logfire",
          "type": "compite-con",
          "to": "arize-phoenix"
        },
        {
          "from": "pydantic-logfire",
          "type": "compite-con",
          "to": "helicone"
        },
        {
          "from": "nvidia-dynamo",
          "type": "orquesta",
          "to": "vllm"
        },
        {
          "from": "nvidia-dynamo",
          "type": "orquesta",
          "to": "tensorrt-llm"
        },
        {
          "from": "nvidia-dynamo",
          "type": "orquesta",
          "to": "sglang"
        },
        {
          "from": "nvidia-dynamo",
          "type": "consume",
          "to": "lmcache"
        },
        {
          "from": "lmcache",
          "type": "extiende",
          "to": "paged-attention-paper"
        },
        {
          "from": "lmcache",
          "type": "implementa",
          "to": "mooncake"
        },
        {
          "from": "vllm",
          "type": "implementa",
          "to": "mooncake"
        },
        {
          "from": "sglang",
          "type": "implementa",
          "to": "mooncake"
        },
        {
          "from": "tensorrt-llm",
          "type": "usa",
          "to": "flashinfer"
        },
        {
          "from": "vllm",
          "type": "usa",
          "to": "flashinfer"
        },
        {
          "from": "sglang",
          "type": "usa",
          "to": "flashinfer"
        },
        {
          "from": "vllm",
          "type": "usa",
          "to": "xgrammar"
        },
        {
          "from": "sglang",
          "type": "usa",
          "to": "xgrammar"
        },
        {
          "from": "tensorrt-llm",
          "type": "usa",
          "to": "xgrammar"
        },
        {
          "from": "aibrix",
          "type": "control-plane-de",
          "to": "vllm"
        },
        {
          "from": "llm-d",
          "type": "orquesta",
          "to": "vllm"
        },
        {
          "from": "llm-d",
          "type": "integra-con",
          "to": "kserve"
        },
        {
          "from": "kaito",
          "type": "complementa",
          "to": "kserve"
        },
        {
          "from": "vllm-production-stack",
          "type": "construido-sobre",
          "to": "vllm"
        },
        {
          "from": "vllm-production-stack",
          "type": "consume",
          "to": "lmcache"
        },
        {
          "from": "kserve",
          "type": "compite-con",
          "to": "kaito"
        },
        {
          "from": "braintrust",
          "type": "compite-con",
          "to": "langfuse"
        },
        {
          "from": "braintrust",
          "type": "compite-con",
          "to": "pydantic-logfire"
        },
        {
          "from": "comet-opik",
          "type": "compite-con",
          "to": "langfuse"
        },
        {
          "from": "galileo-luna",
          "type": "es-evaluator-foundation-model-junto-a",
          "to": "patronus-lynx"
        },
        {
          "from": "patronus-lynx",
          "type": "es-evaluator-foundation-model-junto-a",
          "to": "prometheus-2"
        },
        {
          "from": "promptfoo",
          "type": "complementa",
          "to": "deepeval"
        },
        {
          "from": "contextual-retrieval",
          "type": "depende-de",
          "to": "anthropic-prompt-cache-1h"
        },
        {
          "from": "contextual-retrieval",
          "type": "complementa",
          "to": "hybrid-rag"
        },
        {
          "from": "graphrag",
          "type": "complementa",
          "to": "hybrid-rag"
        },
        {
          "from": "agentic-rag",
          "type": "extiende",
          "to": "hybrid-rag"
        },
        {
          "from": "milvus-2-5",
          "type": "compite-con",
          "to": "qdrant"
        },
        {
          "from": "lancedb",
          "type": "compite-con",
          "to": "qdrant"
        },
        {
          "from": "turbopuffer",
          "type": "compite-con",
          "to": "qdrant"
        },
        {
          "from": "voyage-3-large",
          "type": "compite-con",
          "to": "cohere-embed-v4"
        },
        {
          "from": "jina-embeddings-v4",
          "type": "compite-con",
          "to": "cohere-embed-v4"
        },
        {
          "from": "bge-m3",
          "type": "es-default-oss-junto-a",
          "to": "jina-embeddings-v4"
        },
        {
          "from": "colpali",
          "type": "elimina-pipeline-de",
          "to": "mistral-ocr"
        },
        {
          "from": "cohere-rerank-3-5",
          "type": "complementa",
          "to": "cohere-embed-v4"
        },
        {
          "from": "mistral-ocr",
          "type": "compite-con",
          "to": "llamaparse"
        },
        {
          "from": "docling",
          "type": "compite-con",
          "to": "mistral-ocr"
        },
        {
          "from": "unsloth",
          "type": "acelera",
          "to": "trl"
        },
        {
          "from": "axolotl",
          "type": "usa",
          "to": "trl"
        },
        {
          "from": "llamafactory",
          "type": "usa-backend",
          "to": "unsloth"
        },
        {
          "from": "ms-swift",
          "type": "compite-con",
          "to": "axolotl"
        },
        {
          "from": "nemo-rl",
          "type": "supersedes-nemo-aligner",
          "to": "trl"
        },
        {
          "from": "trl",
          "type": "implementa",
          "to": "grpo"
        },
        {
          "from": "trl",
          "type": "implementa",
          "to": "dpo"
        },
        {
          "from": "dapo",
          "type": "extiende",
          "to": "grpo"
        },
        {
          "from": "rlvr",
          "type": "subyace-a",
          "to": "grpo"
        },
        {
          "from": "dora",
          "type": "supera-a",
          "to": "qlora"
        },
        {
          "from": "qwen3",
          "type": "post-trained-con",
          "to": "grpo"
        },
        {
          "from": "deepseek-r1",
          "type": "post-trained-con",
          "to": "grpo"
        },
        {
          "from": "deepseek-r1",
          "type": "post-trained-con",
          "to": "rlvr"
        },
        {
          "from": "phi-4-reasoning",
          "type": "demuestra",
          "to": "rlvr"
        },
        {
          "from": "hermes-4",
          "type": "post-trained-con",
          "to": "dpo"
        },
        {
          "from": "vllm-multi-lora",
          "type": "habilita-produccion-de",
          "to": "qlora"
        },
        {
          "from": "lorax",
          "type": "compite-con",
          "to": "vllm-multi-lora"
        },
        {
          "from": "liger-kernel",
          "type": "acelera",
          "to": "trl"
        },
        {
          "from": "livekit-agents",
          "type": "compite-con",
          "to": "pipecat"
        },
        {
          "from": "vapi",
          "type": "compite-con",
          "to": "retell-ai"
        },
        {
          "from": "gpt-realtime",
          "type": "compite-con",
          "to": "gemini-live-api"
        },
        {
          "from": "cartesia-sonic",
          "type": "compite-con",
          "to": "elevenlabs-v3"
        },
        {
          "from": "livekit-agents",
          "type": "consume",
          "to": "gpt-realtime"
        },
        {
          "from": "pipecat",
          "type": "consume",
          "to": "gpt-realtime"
        },
        {
          "from": "cerebras-inference",
          "type": "compite-con",
          "to": "groq-lpu"
        },
        {
          "from": "modal",
          "type": "compite-con",
          "to": "baseten"
        },
        {
          "from": "nvidia-vera-rubin",
          "type": "sucede-a",
          "to": "nvidia-blackwell-b200"
        },
        {
          "from": "amd-mi350",
          "type": "compite-con",
          "to": "nvidia-blackwell-b200"
        },
        {
          "from": "tpu-ironwood",
          "type": "compite-con",
          "to": "nvidia-blackwell-b200"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "agentic-llmops",
          "to_entity": "langgraph",
          "from_entity": "hybrid-rag",
          "rationale": "El patrón Agentic RAG se materializa principalmente sobre grafos de estado LangGraph, puente natural a la facet agéntica."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "nygard-stability-patterns",
          "from_entity": "litellm",
          "rationale": "Los gateways LLM aplican patrones clásicos de resiliencia (circuit breaker, bulkhead, hedged requests) reutilizados desde sistemas distribuidos generales."
        },
        {
          "to_facet": "research-frontier",
          "to_entity": "shoal-plus-plus",
          "from_entity": "speculative-decoding",
          "rationale": "EAGLE-2 y Medusa son investigación de frontera 2025-2026 que se filtra rápidamente a producción vía vLLM y TensorRT-LLM."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "hybrid-rag",
          "rationale": "El stack de referencia (vLLM + LiteLLM + Qdrant + Langfuse + RAGAS) es la plantilla canónica que enlaza con el catálogo de stack-templates."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "phi-sat-2",
          "from_entity": "nvidia-vera-rubin",
          "rationale": "Vera CPU + Rubin GPU + BlueField-4 son base de los racks space-grade que NVIDIA anunció (Space-1); convergencia compute frontera ↔ ML on-board."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "anthropic-agent-skills",
          "from_entity": "instructor-lib",
          "rationale": "Outputs estructurados (Instructor + APIs nativas + XGrammar) son el cimiento técnico que hace fiables las invocaciones de tools en los Skills."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "nygard-stability-patterns",
          "from_entity": "nvidia-dynamo",
          "rationale": "Dynamo y llm-d implementan migración de requests in-flight + planner SLA-driven autoscaling; concretan los patrones de bulkhead y backpressure de Nygard en serving LLM."
        },
        {
          "to_facet": "edge-swarms",
          "to_entity": "k8s-edge-distros",
          "from_entity": "llm-d",
          "rationale": "K8s LLM-native (KServe + llm-d) baja a edge; converge con K3s/K0s/KubeEdge/OpenYurt para inferencia LLM en orbital edge / robótica."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "cost-control",
          "from_entity": "anthropic-prompt-cache-1h",
          "rationale": "El cache extendido 1h reanchora el patrón cost-control: 2 hits ya amortizan el write 2.0×; cambia el diseño económico de loops agénticos largos."
        },
        {
          "to_facet": "research-frontier",
          "to_entity": "shoal-plus-plus",
          "from_entity": "mooncake",
          "rationale": "KVCache-centric scheduler representa cambio de paradigma scheduling-centric → storage-centric; tracking de frontier en sistemas de almacenamiento computacional."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "langgraph",
          "from_entity": "agentic-rag",
          "rationale": "Agentic RAG se materializa principalmente sobre LangGraph; bridge LLM↔AGT canónico."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "claude-agent-sdk",
          "from_entity": "pydantic-logfire",
          "rationale": "Logfire instrumenta nativamente Claude Agent SDK; bridge observability↔runtime."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "bifrost-gateway",
          "from_entity": "litellm",
          "rationale": "Gateways competidores (LiteLLM vs Bifrost MCP gateway)."
        }
      ],
      "super_category_id": "ai-ml-production"
    },
    {
      "facet_id": "agentic-llmops",
      "facet_label_es": "LLMOps agéntico",
      "intro_es": "El LLMOps agéntico extiende el LLMOps clásico de un span por inferencia a árboles de trazas con planificación, llamadas a tools, sub-LLMs y handoffs entre agentes, donde el fallo es probabilístico y no determinista. Los agentes pueden entrar en loops infinitos, borrar entornos productivos o envenenar memoria persistente, y sus modos de fallo se concentran en especificación (41,77%), coordinación inter-agente (36,94%) y verificación (21,30%) según la taxonomía MAST de Berkeley. La gobernanza es el cuello de botella: enforcement basado en prompts produce 26,67% de violaciones de política frente al 0% de la enforcement determinista a nivel infraestructura. La pila resiliente 2026 se construye sobre MCP como sustrato, OTel GenAI como semántica, LangGraph/Claude Agent SDK/OpenAI Agents SDK como orquestadores, y OWASP ASI Top 10 como contrato de seguridad.",
      "subthemes": [
        {
          "id": "protocolos-sdks-mcp",
          "label_es": "Protocolos, SDKs y ecosistema MCP (Claude/OpenAI Agents, MCP)"
        },
        {
          "id": "orquestacion-patrones",
          "label_es": "Orquestación y patrones multiagente"
        },
        {
          "id": "fallos-observabilidad",
          "label_es": "Modos de fallo, observabilidad, depuración y control de coste"
        },
        {
          "id": "evaluacion-evals",
          "label_es": "Evaluación y benchmarks agénticos"
        },
        {
          "id": "agent-memory",
          "label_es": "Memoria de agentes (Letta, mem0, Zep)"
        },
        {
          "id": "coding-agents",
          "label_es": "Coding agents y harness autónomos"
        }
      ],
      "entities": [
        {
          "id": "mcp",
          "name": "Model Context Protocol (MCP)",
          "type_es": "Protocolo",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2024,
          "authority": "Anthropic / Linux Foundation (Agentic AI Foundation)",
          "url": "https://modelcontextprotocol.io/",
          "url_label": "MCP Site",
          "description_es": "Protocolo abierto de Anthropic para conectar agentes con herramientas y datos. En 2026 cuenta con spec rev. 2025-11-25, Official MCP Registry (preview 8-sep-2025, API v0.1 freeze oct-2025) en registry.modelcontextprotocol.io, ~12k servidores entre Smithery (~7k) y PulseMCP (~11.8k); MCP Authorization Spec sobre OAuth 2.1+PKCE+RFC 8707; complementario con A2A Protocol (Linux Foundation) para agent-to-agent.",
          "tags": [
            "protocolo",
            "mcp",
            "tools",
            "linux-foundation"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "langgraph",
          "name": "LangGraph 1.0",
          "type_es": "Framework",
          "subtheme": "orquestacion-patrones",
          "year": 2025,
          "authority": "LangChain",
          "url": "https://www.langchain.com/langgraph",
          "url_label": "LangGraph (LangChain)",
          "description_es": "Framework de orquestación tipo grafo-máquina-de-estados con checkpointing durable, retries por nodo, HITL en breakpoints y time-travel debug. Versión 1.0 en octubre 2025; despliegues en Uber, JPMorgan, BlackRock, Cisco, LinkedIn y Klarna.",
          "tags": [
            "orquestacion",
            "durable-execution",
            "supervisor",
            "hitl"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "claude-agent-sdk",
          "name": "Claude Agent SDK",
          "type_es": "SDK",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2025,
          "authority": "Anthropic",
          "url": "https://docs.anthropic.com/en/api/agent-sdk/overview",
          "url_label": "Claude Agent SDK Docs",
          "description_es": "SDK oficial de Anthropic para construir agentes Claude (terminal, IDE, servidores). En 2026 se apoya en MCP para tools y en Anthropic Agent Skills (estándar abierto agentskills.io, 18-dic-2025) para capacidades reutilizables con hot-reload. Soporta PDF nativo (Files + 32 MB / 100 páginas).",
          "tags": [
            "sdk",
            "hooks",
            "subagents",
            "anthropic"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "openai-agents-sdk",
          "name": "OpenAI Agents SDK",
          "type_es": "SDK",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2025,
          "authority": "OpenAI",
          "url": "https://openai.github.io/openai-agents-python/",
          "url_label": "OpenAI Agents SDK",
          "description_es": "SDK oficial OpenAI para agentes con handoffs minimalistas. En 2026 adopta Anthropic Agent Skills (estándar abierto cross-vendor) y converge con OTel-GenAI agent semconv para observabilidad portable.",
          "tags": [
            "sdk",
            "handoffs",
            "guardrails",
            "openai"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "langsmith-time-travel",
          "name": "LangSmith Time-Travel Debug & Polly",
          "type_es": "Herramienta",
          "subtheme": "fallos-observabilidad",
          "year": 2025,
          "authority": "LangChain",
          "url": "https://blog.langchain.com/debugging-deep-agents-with-langsmith/",
          "url_label": "Debugging Deep Agents (Dic 2025)",
          "description_es": "Plataforma de debug con time-travel en Agent Studio (breakpoints en cualquier nodo), AI assistant Polly que analiza trazas y sugiere mejoras, y CLI LangSmith Fetch para extraer trazas/threads por id o tiempo.",
          "tags": [
            "debug",
            "time-travel",
            "langsmith",
            "polly"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "inspect-ai",
          "name": "Inspect AI",
          "type_es": "Framework",
          "subtheme": "evaluacion-evals",
          "year": 2024,
          "authority": "UK AI Safety Institute (AISI)",
          "url": "https://inspect.ai-safety-institute.org.uk/",
          "url_label": "Inspect AI",
          "description_es": "Framework open-source de referencia para evals multi-proveedor con sandbox K8s, datasets HuggingFace, model-graded scoring y GUI rica. Decoradores @task/@solver/@scorer y bridge nativo con SWE-bench.",
          "tags": [
            "evals",
            "framework",
            "aisi",
            "sandbox"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "agentbench",
          "name": "AgentBench",
          "type_es": "Paper",
          "subtheme": "evaluacion-evals",
          "year": 2024,
          "authority": "THUDM (Tsinghua) — ICLR 2024",
          "url": "https://arxiv.org/abs/2308.03688",
          "url_label": "arXiv:2308.03688",
          "description_es": "Benchmark multi-dimensional con 8 entornos distintos para evaluar capacidades de razonamiento y toma de decisiones de LLM-as-Agent. Documenta gap significativo entre LLMs comerciales y open-source <70B en razonamiento de largo plazo.",
          "tags": [
            "benchmark",
            "agentbench",
            "iclr",
            "tsinghua"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cost-control",
          "name": "Control de Coste y Budget Caps Agénticos",
          "type_es": "Patrón",
          "subtheme": "fallos-observabilidad",
          "year": 2026,
          "authority": "MindStudio / Portal26 / Helicone / Langfuse",
          "url": "https://docs.litellm.ai/docs/proxy/users",
          "url_label": "LiteLLM budget controls",
          "description_es": "Defensa en profundidad de 7 capas: max_tokens por llamada, recursion_limit, loop hash detector, compactación al 70%, prompt caching, gateway caps (LiteLLM, Helicone, Portal26), pre-execution estimation y atribución de coste por agente.",
          "tags": [
            "coste",
            "budget",
            "circuit-breaker",
            "gateway"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "microsoft-agent-framework",
          "name": "Microsoft Agent Framework (MAF) 1.0",
          "type_es": "Framework",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Microsoft (Semantic Kernel + AutoGen teams)",
          "url": "https://learn.microsoft.com/en-us/agent-framework/overview/",
          "url_label": "MAF docs",
          "description_es": "Sucesor unificado de Semantic Kernel y AutoGen, lanzado como 1.0 en abril 2026 para .NET y Python. Combina abstracciones simples multi-agente, gestión de estado tipada, workflows explícitos y orquestaciones empresariales (Magentic-One incluido) con soporte largo plazo. AutoGen v0.4 pasa a maintenance.",
          "tags": [
            "agent-framework",
            "microsoft",
            "multi-agent",
            "dotnet",
            "python",
            "workflow",
            "magentic-one",
            "enterprise",
            "supersedes-autogen-sk"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pydantic-ai",
          "name": "Pydantic AI",
          "type_es": "Framework",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Pydantic Services Inc. (Samuel Colvin et al.)",
          "url": "https://ai.pydantic.dev/",
          "url_label": "Pydantic AI docs",
          "description_es": "Framework Python de agentes con tipado estricto, agnóstico de modelo (OpenAI, Anthropic, Bedrock, Vertex), construido por el equipo de Pydantic. Integra de forma nativa con Logfire/OTel-GenAI; API estable 1.x desde finales 2025; v1.85 abril 2026. Opción FastAPI feeling para agentes Python tipados.",
          "tags": [
            "agent-framework",
            "python",
            "type-safe",
            "pydantic",
            "logfire",
            "otel-genai",
            "model-agnostic",
            "validation",
            "production"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mastra",
          "name": "Mastra",
          "type_es": "Framework",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Mastra (ex-Gatsby team, YC W25)",
          "url": "https://mastra.ai/",
          "url_label": "Mastra",
          "description_es": "Framework TypeScript/JavaScript para agentes y workflows con persistencia de estado, memoria, herramientas, evals y despliegue serverless. Lanzamiento 1.0 enero 2026; +22k estrellas, +300k descargas semanales y plataforma Mastra Cloud gestionada. Cubre el hueco TypeScript first-class.",
          "tags": [
            "agent-framework",
            "typescript",
            "javascript",
            "workflow",
            "memory",
            "evals",
            "mastra-cloud",
            "web-developer",
            "production"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "a2a-protocol",
          "name": "Agent2Agent Protocol (A2A) 1.2",
          "type_es": "Protocolo",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Linux Foundation Agentic AI Foundation (origen Google)",
          "url": "https://a2a-protocol.org/latest/",
          "url_label": "A2A protocol",
          "description_es": "Protocolo abierto de comunicación agente-a-agente. v1.0 a comienzos 2026 lo llevó a producción; v1.2 introdujo agent cards firmados criptográficamente. +150 organizaciones, despliegues productivos en Microsoft, AWS, Salesforce, SAP, ServiceNow. Complementario a MCP (tools) cubriendo interop entre agentes.",
          "tags": [
            "a2a",
            "agent-interop",
            "linux-foundation",
            "google",
            "agent-cards",
            "multi-agent",
            "protocol",
            "complementary-to-mcp"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "anthropic-agent-skills",
          "name": "Anthropic Agent Skills (open standard)",
          "type_es": "Estándar",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Anthropic",
          "url": "https://agentskills.io/home",
          "url_label": "agentskills.io",
          "description_es": "Estándar abierto publicado el 18-dic-2025 para empaquetar capacidades de agentes como carpetas con SKILL.md, scripts y recursos. Diseñado por progressive disclosure (decenas de tokens por skill hasta cargarse). Adoptado por Microsoft (VS Code, GitHub), Cursor, Goose, Amp, OpenCode y partners Atlassian/Figma/Stripe/Notion.",
          "tags": [
            "skills",
            "anthropic",
            "open-standard",
            "claude-code",
            "progressive-disclosure",
            "skill-md",
            "agent-capabilities",
            "cross-vendor"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mcp-registry",
          "name": "Official MCP Registry",
          "type_es": "Servicio",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Model Context Protocol project",
          "url": "https://registry.modelcontextprotocol.io/",
          "url_label": "MCP Registry",
          "description_es": "Registro oficial de servidores MCP, en preview desde el 8-sep-2025 con API freeze v0.1 desde octubre. Actúa como fuente de verdad federada: directorios como Smithery (~7k servers), PulseMCP (~11.8k) y mcp.so consumen y enriquecen sus datos.",
          "tags": [
            "mcp",
            "registry",
            "discovery",
            "smithery",
            "pulsemcp",
            "federation",
            "app-store",
            "official"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "fastmcp",
          "name": "FastMCP 3.0",
          "type_es": "Framework",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Jeremiah Lowin / Prefect",
          "url": "https://github.com/prefecthq/fastmcp",
          "url_label": "FastMCP",
          "description_es": "Framework Python pythónico para construir servidores y clientes MCP. v3.0 (enero 2026) añadió versionado de componentes, autorización, integración OpenTelemetry y Apps con UI interactiva. Algún derivado de FastMCP impulsa ~70% de los servidores MCP. FastMCP 1.0 ya está dentro del SDK oficial.",
          "tags": [
            "mcp",
            "fastmcp",
            "python",
            "mcp-server",
            "mcp-client",
            "otel",
            "prefect",
            "hot-reload",
            "decorator"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "bifrost-gateway",
          "name": "Bifrost (Maxim AI)",
          "type_es": "Plataforma",
          "subtheme": "protocolos-sdks-mcp",
          "year": 2026,
          "authority": "Maxim AI (maximhq)",
          "url": "https://github.com/maximhq/bifrost",
          "url_label": "Bifrost",
          "description_es": "Gateway LLM/MCP de alto rendimiento en Go: 11 µs overhead a 5.000 RPS, 1000+ modelos, MCP gateway integrado (STDIO/HTTP/SSE con OAuth 2.0), Agent Mode, semantic cache, governance enterprise. Categoría emergente MCP gateway: combina enrutado LLM con tool routing MCP.",
          "tags": [
            "llm-gateway",
            "mcp-gateway",
            "performance",
            "go",
            "semantic-cache",
            "oauth",
            "governance",
            "agent-mode",
            "open-source"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "letta",
          "name": "Letta (ex-MemGPT)",
          "type_es": "Plataforma",
          "subtheme": "agent-memory",
          "year": 2026,
          "authority": "Letta Inc. (UC Berkeley Sky Lab origin)",
          "url": "https://www.letta.com/",
          "url_label": "Letta",
          "description_es": "Plataforma de agentes con estado persistente y memoria jerárquica (core/archival/recall) heredada de MemGPT. Diciembre 2025 introdujo Context Repositories y Letta Code (coding agent líder en Terminal-Bench); enero 2026 la Conversations API para memoria compartida multi-usuario.",
          "tags": [
            "agent-memory",
            "stateful-agents",
            "memgpt",
            "context-repositories",
            "letta-code",
            "terminal-bench",
            "conversations-api",
            "open-source"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mem0",
          "name": "Mem0",
          "type_es": "Framework",
          "subtheme": "agent-memory",
          "year": 2026,
          "authority": "Mem0 (mem0ai)",
          "url": "https://mem0.ai/",
          "url_label": "Mem0",
          "description_es": "Capa de memoria universal para agentes que extrae, consolida y recupera hechos relevantes de conversaciones, con variante Mem0g basada en grafo. Reduce 90% tokens y 91% latencia p95 frente a contexto completo; integra 21 frameworks y 19 vector stores. Paper arXiv 2504.19413.",
          "tags": [
            "agent-memory",
            "memory-layer",
            "graph-memory",
            "vector-store",
            "mcp",
            "production",
            "open-source",
            "arxiv-2504-19413"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "zep-graphiti",
          "name": "Zep / Graphiti",
          "type_es": "Framework",
          "subtheme": "agent-memory",
          "year": 2026,
          "authority": "Zep AI (Preston Rasmussen et al.)",
          "url": "https://www.getzep.com/",
          "url_label": "Zep",
          "description_es": "Capa de memoria para agentes basada en Graphiti, un grafo de conocimiento temporal bi-temporal que modela la validez de los hechos en el tiempo. Supera a MemGPT en DMR (94.8% vs 93.4%) y mejora hasta 18.5% en LongMemEval reduciendo latencia 90%. arXiv 2501.13956.",
          "tags": [
            "agent-memory",
            "knowledge-graph",
            "temporal",
            "graphiti",
            "bi-temporal",
            "context-engineering",
            "longmemeval",
            "dmr-benchmark"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "openhands",
          "name": "OpenHands",
          "type_es": "Herramienta",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "All Hands AI (ex OpenDevin community)",
          "url": "https://openhands.dev/",
          "url_label": "OpenHands",
          "description_es": "Plataforma OSS de agentes de programación autónomos en sandbox Docker (escribe código, ejecuta terminal, navega, abre PRs). Resuelve >53% de SWE-bench Verified con Claude 4.5; en enero 2026 lanzó OpenHands Index, evaluación más amplia (greenfield, frontend, testing). Coding agent OSS de referencia self-hosted (rename de OpenDevin).",
          "tags": [
            "coding-agent",
            "open-source",
            "swe-bench",
            "sandbox",
            "openhands-index",
            "self-hosted",
            "opendevin-rename",
            "multi-model"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tau2-bench",
          "name": "τ²-Bench (tau2-bench)",
          "type_es": "Benchmark",
          "subtheme": "evaluacion-evals",
          "year": 2026,
          "authority": "Sierra Research",
          "url": "https://github.com/sierra-research/tau2-bench",
          "url_label": "tau2-bench",
          "description_es": "Evolución de τ-bench (Sierra) que evalúa agentes conversacionales en un entorno dual-control modelado como Dec-POMDP, donde tanto agente como usuario simulado usan herramientas en un entorno compartido (dominio Telecom). En 2026 añade τ-Voice para agentes full-duplex de voz.",
          "tags": [
            "agent-benchmark",
            "tool-use",
            "dec-pomdp",
            "telecom",
            "dual-control",
            "conversational",
            "sierra",
            "voice-agent"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "swe-bench-pro",
          "name": "SWE-bench Pro / Multilingual",
          "type_es": "Benchmark",
          "subtheme": "evaluacion-evals",
          "year": 2026,
          "authority": "SWE-bench team / Scale AI",
          "url": "https://www.swebench.com/multilingual-leaderboard.html",
          "url_label": "SWE-bench Pro",
          "description_es": "Sucesores de SWE-bench Verified después de que OpenAI dejara de reportar Verified por contaminación en frontier models: SWE-bench Pro (1.865 tareas, 41 repos, Python/Go/TS/JS) y SWE-bench Multilingual (300 tareas, 9 lenguajes). Benchmark estándar para coding agents en 2026.",
          "tags": [
            "coding-agent",
            "swe-bench",
            "multilingual",
            "contamination-resistant",
            "agent-benchmark",
            "scale-ai",
            "supersedes-swe-verified"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "terminal-bench-2",
          "name": "Terminal-Bench 2.0",
          "type_es": "Benchmark",
          "subtheme": "evaluacion-evals",
          "year": 2026,
          "authority": "Laude Institute (harbor-framework)",
          "url": "https://www.tbench.ai/",
          "url_label": "Terminal-Bench",
          "description_es": "Benchmark agéntico que evalúa capacidades en entornos de terminal mediante 89 tareas reales de SRE, ingeniería de software y procesado de datos. Líderes en mayo 2026: GPT-5.5 (82.0%), Claude Opus 4.7 Adaptive (69.4%), MiMo-V2.5-Pro (68.4%). Complementa SWE-bench Pro con tareas más realistas y agnósticas de lenguaje.",
          "tags": [
            "benchmark",
            "terminal",
            "agentic-coding",
            "swe",
            "sre",
            "leaderboard",
            "laude-institute",
            "evaluation"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "claude-code",
          "name": "Claude Code",
          "type_es": "Herramienta",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "Anthropic",
          "url": "https://www.anthropic.com/claude-code",
          "url_label": "Claude Code",
          "description_es": "CLI agéntico de Anthropic para coding, líder en SWE-bench Verified (~80.8% en mayo 2026). Construido sobre Claude Agent SDK + MCP + Anthropic Skills; soporta hooks, sub-agents y slash commands; integración nativa con repositorio Git y editor.",
          "tags": [
            "coding-agent",
            "cli",
            "anthropic",
            "swe-bench",
            "terminal",
            "claude"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cursor",
          "name": "Cursor",
          "type_es": "Plataforma",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "Cursor (Anysphere)",
          "url": "https://cursor.com/",
          "url_label": "Cursor",
          "description_es": "IDE-fork de VS Code con agente integrado nativamente. Más de 1M devs activos en mayo 2026, adopción en 64% de Fortune 500; líder de mercado en IDE-coding agéntico.",
          "tags": [
            "coding-agent",
            "ide",
            "vscode-fork",
            "cursor",
            "enterprise"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "aider",
          "name": "Aider",
          "type_es": "Herramienta",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "Paul Gauthier",
          "url": "https://aider.chat/",
          "url_label": "aider.chat",
          "description_es": "CLI OSS de pair-programming con LLMs; integración nativa con Git, soporte multi-backend (OpenAI, Anthropic, Bedrock, Ollama). Pionero del patrón repo-map + automatic commit.",
          "tags": [
            "coding-agent",
            "cli",
            "oss",
            "git-native",
            "aider"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cline",
          "name": "Cline",
          "type_es": "Herramienta",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "Cline (community)",
          "url": "https://cline.bot/",
          "url_label": "cline.bot",
          "description_es": "Extensión OSS de VS Code con agente autónomo (5M+ instalaciones en mayo 2026). Soporta modos Plan/Act y MCP nativo; alternativa gratis y extensible a Cursor.",
          "tags": [
            "coding-agent",
            "vscode-extension",
            "oss",
            "mcp",
            "cline"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "openai-codex-cli",
          "name": "OpenAI Codex CLI",
          "type_es": "Herramienta",
          "subtheme": "coding-agents",
          "year": 2026,
          "authority": "OpenAI",
          "url": "https://github.com/openai/codex",
          "url_label": "openai/codex repo",
          "description_es": "CLI agéntico OSS de OpenAI para coding; competencia directa de Claude Code. Soporta GPT-5/Codex models y patrón de approval+execution iterativo.",
          "tags": [
            "coding-agent",
            "cli",
            "openai",
            "codex",
            "oss"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "claude-agent-sdk",
          "type": "consume",
          "to": "mcp"
        },
        {
          "from": "openai-agents-sdk",
          "type": "compite-con",
          "to": "claude-agent-sdk"
        },
        {
          "from": "langsmith-time-travel",
          "type": "depura",
          "to": "langgraph"
        },
        {
          "from": "inspect-ai",
          "type": "evalua-con",
          "to": "agentbench"
        },
        {
          "from": "pydantic-ai",
          "type": "compite-con",
          "to": "claude-agent-sdk"
        },
        {
          "from": "mastra",
          "type": "complementa",
          "to": "langgraph"
        },
        {
          "from": "a2a-protocol",
          "type": "complementa",
          "to": "mcp"
        },
        {
          "from": "anthropic-agent-skills",
          "type": "complementa",
          "to": "mcp"
        },
        {
          "from": "anthropic-agent-skills",
          "type": "extiende",
          "to": "claude-agent-sdk"
        },
        {
          "from": "fastmcp",
          "type": "implementa-servidor-de",
          "to": "mcp"
        },
        {
          "from": "mcp-registry",
          "type": "indexa-servidores-de",
          "to": "mcp"
        },
        {
          "from": "bifrost-gateway",
          "type": "es-gateway-de",
          "to": "mcp"
        },
        {
          "from": "letta",
          "type": "compite-con",
          "to": "mem0"
        },
        {
          "from": "mem0",
          "type": "compite-con",
          "to": "zep-graphiti"
        },
        {
          "from": "openhands",
          "type": "evaluado-con",
          "to": "swe-bench-pro"
        },
        {
          "from": "openhands",
          "type": "evaluado-con",
          "to": "terminal-bench-2"
        },
        {
          "from": "terminal-bench-2",
          "type": "complementa",
          "to": "swe-bench-pro"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "mcp",
          "rationale": "Toda plantilla de stack agéntico 2026 debe ser MCP-native: tools como servidores MCP, agentes como clientes, gateway de política intermedio."
        },
        {
          "to_facet": "llmops",
          "to_entity": "pydantic-logfire",
          "from_entity": "pydantic-ai",
          "rationale": "PydanticAI + Logfire forman el stack canónico Python tipado; uno alimenta al otro nativamente vía OTel-GenAI."
        },
        {
          "to_facet": "llmops",
          "to_entity": "llm-d",
          "from_entity": "letta",
          "rationale": "Letta Code y agentes con memoria se sirven sobre llm-d cuando se distribuyen a escala; convergencia memoria persistente ↔ K8s LLM-native serving."
        },
        {
          "to_facet": "llmops",
          "to_entity": "anthropic-prompt-cache-1h",
          "from_entity": "letta",
          "rationale": "La memoria de agentes consume cache extendida 1h para mantener costes bajo control en loops largos; relación operativa directa."
        },
        {
          "to_facet": "research-frontier",
          "to_entity": "mast-taxonomy",
          "from_entity": "cost-control",
          "rationale": "Conversión automática tras dedup (refuerza)."
        },
        {
          "to_facet": "llmops",
          "to_entity": "otel-genai",
          "from_entity": "langgraph",
          "rationale": "Conversión automática tras dedup (emite-trazas-segun)."
        },
        {
          "to_facet": "llmops",
          "to_entity": "otel-genai",
          "from_entity": "pydantic-ai",
          "rationale": "Conversión automática tras dedup (se-observa-con)."
        },
        {
          "to_facet": "llmops",
          "to_entity": "helicone",
          "from_entity": "cost-control",
          "rationale": "cost-control referencia Helicone como herramienta nominal."
        },
        {
          "to_facet": "llmops",
          "to_entity": "litellm",
          "from_entity": "cost-control",
          "rationale": "cost-control referencia LiteLLM como herramienta nominal."
        },
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "phi-sat-2",
          "from_entity": "langgraph",
          "rationale": "Pendiente: orbital-edge-computing menciona phi-sat-2; añadir el link directo."
        }
      ],
      "super_category_id": "ai-ml-production"
    },
    {
      "facet_id": "stack-templates",
      "facet_label_es": "Plantillas de stack y decisión arquitectónica",
      "intro_es": "Esta faceta agrupa las plantillas, métodos y catálogos de referencia que convierten ideas arquitectónicas en decisiones trazables, evaluables y evolutivas. Combina registros de decisión (ADR/MADR/Y-Statement) con modelado visual (C4), evaluación de tradeoffs (ATAM, fitness functions), análisis de amenazas (TARA, NIST 800-160) y estrategia de stack (Wardley, Well-Architected). Su objetivo es ofrecer una columna vertebral común para razonar sobre arquitecturas resilientes en cualquier dominio.",
      "subthemes": [
        {
          "id": "registros-decision",
          "label_es": "Registros de decisión (ADR, MADR, Y-Statement)"
        },
        {
          "id": "modelado-visual",
          "label_es": "Modelado visual y diagramación arquitectónica"
        },
        {
          "id": "evaluacion-arquitectonica",
          "label_es": "Evaluación arquitectónica y fitness functions"
        },
        {
          "id": "analisis-amenazas",
          "label_es": "Análisis de amenazas y ciber-resiliencia"
        },
        {
          "id": "estrategia-stack",
          "label_es": "Estrategia de stack y frameworks Well-Architected"
        }
      ],
      "entities": [
        {
          "id": "adr-nygard",
          "name": "ADR (Nygard original)",
          "type_es": "Práctica",
          "subtheme": "registros-decision",
          "year": 2011,
          "authority": "Michael Nygard / Cognitect",
          "url": "https://www.cognitect.com/blog/2011/11/15/documenting-architecture-decisions",
          "url_label": "Documenting Architecture Decisions",
          "description_es": "Plantilla fundacional para registros de decisión arquitectónica con cinco secciones (Title, Status, Context, Decision, Consequences). Cada ADR es una respuesta inmutable y datada a por qué se eligió una opción, escrita como conversación con un futuro desarrollador.",
          "tags": [
            "adr",
            "rationale",
            "documentación-viva",
            "inmutable"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "madr-y-statement",
          "name": "MADR y Y-Statement",
          "type_es": "Plantilla",
          "subtheme": "registros-decision",
          "year": 2018,
          "authority": "adr.github.io / Olaf Zimmermann (SATURN 2012)",
          "url": "https://adr.github.io/",
          "url_label": "ADR Templates community",
          "description_es": "MADR (Markdown ADR) extiende Nygard con deciders, decision drivers, alternativas evaluadas y pros/cons; Y-Statement de Zimmermann condensa una decisión en una sola frase con seis ranuras. Cubren todo el espectro de ceremonia, desde una línea hasta una página.",
          "tags": [
            "madr",
            "y-statement",
            "markdown",
            "auditoría"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "c4-model",
          "name": "C4 Model",
          "type_es": "Modelo",
          "subtheme": "modelado-visual",
          "year": 2024,
          "authority": "Simon Brown",
          "url": "https://c4model.com/",
          "url_label": "c4model.com",
          "description_es": "Modelo de diagramación jerárquica en cuatro niveles (Context, Container, Component, Code) más diagramas suplementarios de despliegue, dinámica y system landscape. Es agnóstico de notación y herramienta y fuerza la elección explícita de tecnología por contenedor.",
          "tags": [
            "c4",
            "diagramas",
            "structurizr",
            "mermaid"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "well-architected",
          "name": "Well-Architected Frameworks (AWS, Azure, GCP)",
          "type_es": "Framework",
          "subtheme": "estrategia-stack",
          "year": 2025,
          "authority": "AWS / Microsoft Azure / Google Cloud",
          "url": "https://docs.aws.amazon.com/wellarchitected/latest/framework/the-pillars-of-the-framework.html",
          "url_label": "AWS Well-Architected Framework",
          "description_es": "Trío convergente de frameworks de los hyperscalers que organizan calidades arquitectónicas en pilares (Reliability, Security, Cost, Operational Excellence, Performance, Sustainability). Documentan tradeoffs explícitos por pilar y son transferibles a dominios no-cloud.",
          "tags": [
            "aws",
            "azure",
            "gcp",
            "pilares",
            "tradeoffs"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "atam",
          "name": "ATAM (Architecture Tradeoff Analysis Method)",
          "type_es": "Método",
          "subtheme": "evaluacion-arquitectonica",
          "year": 2000,
          "authority": "SEI / Carnegie Mellon University (Kazman, Klein, Clements)",
          "url": "https://www.sei.cmu.edu/library/architecture-tradeoff-analysis-method-collection/",
          "url_label": "SEI ATAM Collection",
          "description_es": "Método de evaluación arquitectónica en 9 pasos y 2 fases que identifica sensitivity points, tradeoff points, riesgos y no-riesgos a partir de un utility tree de escenarios priorizados. Variantes ligeras (ARID, CBAM) y mini-ATAM permiten aplicarlo en equipos pequeños.",
          "tags": [
            "atam",
            "utility-tree",
            "tradeoffs",
            "sei",
            "cbam"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "tara-iso21434",
          "name": "TARA (ISO/SAE 21434)",
          "type_es": "Estándar",
          "subtheme": "analisis-amenazas",
          "year": 2021,
          "authority": "ISO/SAE 21434:2021",
          "url": "https://www.iso.org/standard/70918.html",
          "url_label": "ISO/SAE 21434:2021",
          "description_es": "Procedimiento canónico de Threat Analysis and Risk Assessment con sub-actividades encadenadas (asset ID, threat scenario ID, impact rating SFOP, attack path analysis, attack feasibility, risk determination y treatment). Iterativo, produce un risk register y se extiende fuera de automoción.",
          "tags": [
            "tara",
            "iso-21434",
            "stride",
            "sfop",
            "risk-register"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "wardley-mapping",
          "name": "Wardley Mapping",
          "type_es": "Modelo",
          "subtheme": "estrategia-stack",
          "year": 2018,
          "authority": "Simon Wardley (Leading Edge Forum)",
          "url": "https://learnwardleymapping.com/",
          "url_label": "Learn Wardley Mapping",
          "description_es": "Mapa estratégico 2D (visibilidad × evolución: Genesis, Custom, Product, Commodity) que guía decisiones build-vs-buy-vs-rent. Combinado con doctrina y patrones climáticos identifica dónde diferenciar y dónde delegar a commodities.",
          "tags": [
            "wardley",
            "build-vs-buy",
            "evolución",
            "estrategia"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "fitness-functions",
          "name": "Evolutionary Architecture y Fitness Functions",
          "type_es": "Libro",
          "subtheme": "evaluacion-arquitectonica",
          "year": 2023,
          "authority": "Neal Ford, Rebecca Parsons, Patrick Kua (Thoughtworks / O'Reilly)",
          "url": "https://nealford.com/books/buildingevolutionaryarchitectures.html",
          "url_label": "Building Evolutionary Architectures",
          "description_es": "Disciplina de cambio incremental guiado por fitness functions: tests automatizados que protegen dimensiones arquitectónicas (modularidad, performance, seguridad, coste) en cada cambio de CI/CD. Convierte escenarios ATAM puntuales en validación continua.",
          "tags": [
            "fitness-functions",
            "evolutionary",
            "archunit",
            "ci-cd",
            "chaos"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nist-800-160-caf",
          "name": "NIST SP 800-160 y Microsoft CAF",
          "type_es": "Estándar",
          "subtheme": "analisis-amenazas",
          "year": 2022,
          "authority": "NIST / Microsoft Learn",
          "url": "https://csrc.nist.gov/pubs/sp/800/160/v1/r1/final",
          "url_label": "NIST SP 800-160 Vol. 1 Rev. 1",
          "description_es": "NIST 800-160 v1 codifica ingeniería de sistemas seguros confiables y v2 introduce los 4 goals (Anticipate, Withstand, Recover, Adapt), 8 objetivos y 14 técnicas de ciber-resiliencia. Microsoft CAF aporta 7 metodologías (Strategy, Plan, Ready, Adopt, Govern, Secure, Manage) y Azure Landing Zones como arquitectura de referencia.",
          "tags": [
            "nist-800-160",
            "caf",
            "landing-zone",
            "ciber-resiliencia",
            "iso-15288"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "madr-y-statement",
          "type": "extiende",
          "to": "adr-nygard"
        },
        {
          "from": "c4-model",
          "type": "ancla-decisiones-en",
          "to": "adr-nygard"
        },
        {
          "from": "atam",
          "type": "alimenta-escenarios-a",
          "to": "fitness-functions"
        },
        {
          "from": "fitness-functions",
          "type": "valida-continuamente",
          "to": "well-architected"
        },
        {
          "from": "tara-iso21434",
          "type": "produce-controles-para",
          "to": "nist-800-160-caf"
        },
        {
          "from": "wardley-mapping",
          "type": "informa-pilar-cost-de",
          "to": "well-architected"
        },
        {
          "from": "atam",
          "type": "evalua-tradeoffs-de",
          "to": "well-architected"
        },
        {
          "from": "nist-800-160-caf",
          "type": "cataloga-tecnicas-para",
          "to": "tara-iso21434"
        },
        {
          "from": "c4-model",
          "type": "soporta-utility-tree-de",
          "to": "atam"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "space-cybersec",
          "to_entity": "sparta-v3",
          "from_entity": "tara-iso21434",
          "rationale": "TARA se reutiliza en sistemas espaciales con la matriz SPARTA y CCSDS Green Books como rúbrica de attack feasibility específica."
        },
        {
          "to_facet": "llmops",
          "to_entity": "vllm",
          "from_entity": "wardley-mapping",
          "rationale": "Un mapa Wardley sitúa vLLM y Triton como Product y Bedrock/Vertex como Utility, justificando build-vs-rent del stack de inferencia LLM."
        },
        {
          "to_facet": "ai-trust-safety",
          "to_entity": "owasp-asi",
          "from_entity": "tara-iso21434",
          "rationale": "El proceso TARA aplicado a agentes hereda taxonomías de MITRE ATLAS y OWASP Top 10 LLM (prompt injection, training-data poisoning, supply chain MCP)."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "chaos-engineering-principles",
          "from_entity": "fitness-functions",
          "rationale": "Los experimentos de chaos engineering son fitness functions de resiliencia que codifican el manifiesto Principles of Chaos como aserciones automatizadas."
        },
        {
          "to_facet": "space-grade-sw",
          "to_entity": "ecss-e-st-40c",
          "from_entity": "nist-800-160-caf",
          "rationale": "NIST SP 800-160 (construido sobre ISO/IEC/IEEE 15288) comparte base de ingeniería de sistemas con el ciclo de vida software de ECSS-E-ST-40C, lo que permite trasladar sus técnicas de ciber-resiliencia y los controles NIST 800-53 Rev.5 al software espacial."
        }
      ],
      "super_category_id": "meta-frontier"
    },
    {
      "facet_id": "ai-trust-safety",
      "facet_label_es": "Confianza, seguridad y gobernanza IA",
      "super_category_id": "ai-ml-production",
      "intro_es": "Faceta dedicada al triángulo de confianza para sistemas IA en producción: (1) marcos normativos y estándares — NIST AI RMF + perfil GenAI 600-1, EU AI Act y Code of Practice GPAI (activación high-risk 2-ago-2026), ISO/IEC 42001 e ISO/IEC 23894, OWASP GenAI Top-10/AISVS/AIBOM y MITRE ATLAS v5; (2) guardrails de runtime para LLMs y agentes — Llama Guard 3/4, Llama Prompt Guard 2, ShieldGemma 2, NeMo Guardrails, Guardrails AI, Lakera Guard, Azure Prompt Shields, Cisco AI Defense; (3) red-team y adversarial testing — Garak, PyRIT, DeepTeam — más controles agent-specific (LlamaFirewall, MCP Tool Poisoning, MCP OAuth 2.1+PKCE+RFC 8707) y process governance (Microsoft Agent Governance Toolkit, OWASP ASI). \n\nQué pertenece aquí: cualquier control, framework o herramienta cuya función primaria sea trust/safety/governance/red-team de modelos o agentes. Qué NO pertenece: serving infrastructure, RAG, fine-tuning, observabilidad puramente operacional (esos quedan en `llmops`); orquestación o memoria agéntica (en `agentic-llmops`). \n\nEsta faceta materializa la promoción a primera clase de la trust-safety stack tras la expansión LLMOps de mayo 2026, alineando la estructura del atlas con el cliff regulatorio (EU AI Act high-risk, NIST AI 600-1 v1.0, ISO/IEC 42001 adoption).",
      "subthemes": [
        {
          "id": "standards-frameworks",
          "label_es": "Estándares y marcos normativos (NIST, EU AI Act, ISO, OWASP, MITRE)"
        },
        {
          "id": "runtime-guardrails",
          "label_es": "Guardrails de runtime para LLM y agentes"
        },
        {
          "id": "red-team-tooling",
          "label_es": "Red-team y adversarial testing"
        },
        {
          "id": "agent-security-specific",
          "label_es": "Seguridad específica de agentes (LlamaFirewall, MCP attacks, MCP authz)"
        },
        {
          "id": "governance-process",
          "label_es": "Gobernanza de proceso y operating models"
        }
      ],
      "entities": [
        {
          "id": "nist-ai-600-1",
          "name": "NIST AI 600-1 (Generative AI Profile)",
          "type_es": "Especificación",
          "subtheme": "standards-frameworks",
          "year": 2024,
          "authority": "NIST",
          "url": "https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf",
          "url_label": "NIST AI 600-1 (PDF)",
          "description_es": "Perfil GenAI del NIST AI RMF con 12 categorías de riesgo (CBRN, confabulación, sesgo, privacidad, integridad de información, autonomía, etc.). Recomienda red teaming adversarial como control.",
          "tags": [
            "nist",
            "rmf",
            "genai-profile"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "eu-ai-act",
          "name": "EU AI Act",
          "type_es": "Estándar",
          "subtheme": "standards-frameworks",
          "year": 2025,
          "authority": "Comisión Europea",
          "url": "https://artificialintelligenceact.eu/",
          "url_label": "EU AI Act portal",
          "description_es": "Reglamento europeo con tiers de riesgo (prohibido/alto/limitado/mínimo) y obligaciones específicas para GPAI. Calendario: prohibiciones feb 2025, GPAI ago 2025, alto riesgo ago 2026.",
          "tags": [
            "eu",
            "regulacion",
            "gpai"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "owasp-genai-top10",
          "name": "OWASP GenAI Top-10",
          "type_es": "Estándar",
          "subtheme": "standards-frameworks",
          "year": 2025,
          "authority": "OWASP",
          "url": "https://genai.owasp.org/llm-top-10/",
          "url_label": "OWASP GenAI Top 10",
          "description_es": "Catálogo de las 10 vulnerabilidades principales en aplicaciones LLM: prompt injection, insecure output handling, data poisoning, DoS de modelo, supply chain, sensitive disclosure, plugin design, agencia excesiva, sobreconfianza y robo de modelo.",
          "tags": [
            "owasp",
            "top-10",
            "seguridad"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "garak",
          "name": "Garak",
          "type_es": "Herramienta",
          "subtheme": "red-team-tooling",
          "year": 2026,
          "authority": "NVIDIA",
          "url": "https://github.com/NVIDIA/garak",
          "url_label": "NVIDIA/garak",
          "description_es": "Scanner de vulnerabilidades LLM. **Transferido oficialmente a org NVIDIA** (github.com/NVIDIA/garak). Versiones 0.13/0.14 (2025-2026) añaden soporte para sistemas agénticos. Junto con PyRIT y DeepTeam forma el trío OSS de red-team para LLMs/agentes.",
          "tags": [
            "red-team",
            "cli",
            "probes"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llama-guard-3",
          "name": "Llama Guard 3",
          "type_es": "Herramienta",
          "subtheme": "runtime-guardrails",
          "year": 2024,
          "authority": "Meta",
          "url": "https://huggingface.co/meta-llama/Llama-Guard-3-8B",
          "url_label": "Llama-Guard-3-8B",
          "description_es": "Llama Guard 3 (Meta PurpleLlama) — clasificador de seguridad LLM. **Status mayo 2026: superseded por Llama Guard 4 (12B multimodal nativo, 30-abr-2025)**. Mantener como referencia histórica; nuevos despliegues deben migrar a Llama Guard 4 + Llama Prompt Guard 2.",
          "tags": [
            "guardrail",
            "clasificador",
            "output",
            "superseded-by-llama-guard-4"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "nemo-guardrails",
          "name": "NeMo Guardrails",
          "type_es": "Framework",
          "subtheme": "runtime-guardrails",
          "year": 2026,
          "authority": "NVIDIA",
          "url": "https://github.com/NVIDIA-NeMo/Guardrails",
          "url_label": "NVIDIA-NeMo/Guardrails",
          "description_es": "Framework de guardrails de NVIDIA. Repo movido a github.com/NVIDIA-NeMo/Guardrails en 2025; añade BotThinking events para guardrails sobre reasoning traces, soporte LangChain 1.x, Python 3.13, compatibilidad con Nemotron y DeepSeek-r1. Disponible como NVIDIA NIM microservices (content safety, topic control, jailbreak detection).",
          "tags": [
            "colang",
            "rails",
            "nvidia"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llama-guard-4",
          "name": "Llama Guard 4 (12B, multimodal)",
          "type_es": "Herramienta",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Meta (PurpleLlama)",
          "url": "https://huggingface.co/meta-llama/Llama-Guard-4-12B",
          "url_label": "Model card HF",
          "description_es": "Clasificador de seguridad nativamente multimodal de 12B, podado de Llama 4 Scout y entrenado contra MLCommons taxonomy. Filtra entradas y respuestas (texto + múltiples imágenes); reemplaza Llama Guard 3 8B y 11B-vision como guardrail unificado. Release 30-abr-2025.",
          "tags": [
            "guardrail",
            "content-moderation",
            "multimodal",
            "mlcommons",
            "input-output-filter",
            "supersedes-llama-guard-3"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llama-prompt-guard-2",
          "name": "Llama Prompt Guard 2 (86M / 22M)",
          "type_es": "Herramienta",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Meta (PurpleLlama)",
          "url": "https://huggingface.co/meta-llama/Llama-Prompt-Guard-2-86M",
          "url_label": "Model card HF",
          "description_es": "Clasificadores BERT-style especializados en detección de prompt injection y jailbreaks directos. Variante 22M reduce latencia/coste 75%; multilingüe (ES/EN/FR/DE/HI/IT/PT/TH); fix de tokenización contra ataques adversariales con Unicode.",
          "tags": [
            "prompt-injection",
            "jailbreak-detection",
            "classifier",
            "multilingual",
            "low-latency"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "shieldgemma-2",
          "name": "ShieldGemma 2 (4B multimodal)",
          "type_es": "Herramienta",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Google DeepMind",
          "url": "https://deepmind.google/models/gemma/shieldgemma-2/",
          "url_label": "DeepMind",
          "description_es": "Clasificador de seguridad de imágenes (4B) construido sobre Gemma 3, recomendado como filtro de entrada para VLMs o como filtro de salida en sistemas de generación de imágenes. Sucesor de ShieldGemma (texto, sobre Gemma 2).",
          "tags": [
            "guardrail",
            "multimodal",
            "image-safety",
            "content-moderation",
            "gemma-3"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "guardrails-ai",
          "name": "Guardrails AI (.rail spec)",
          "type_es": "Framework",
          "subtheme": "runtime-guardrails",
          "year": 2026,
          "authority": "Guardrails AI Inc. (open-source)",
          "url": "https://github.com/guardrails-ai/guardrails",
          "url_label": "Repo GitHub",
          "description_es": "Framework Python con formato .rail (XML) para especificar validadores y acciones correctivas sobre salidas LLM. En febrero 2025 lanzó Guardrails Index, primer benchmark público que compara 24 guardrails en 6 categorías (latencia + accuracy). Hub de validadores comunitarios.",
          "tags": [
            "guardrail-framework",
            "validation",
            "structured-output",
            "benchmark",
            "rail-spec",
            "oss"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "lakera-guard",
          "name": "Lakera Guard",
          "type_es": "Plataforma",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Lakera AI",
          "url": "https://www.lakera.ai/lakera-guard",
          "url_label": "Lakera Guard",
          "description_es": "Servicio comercial real-time de protección LLM contra prompt injection, jailbreaks y data leakage. Latencia <50ms, 100+ idiomas; threat intel alimentada por Gandalf (~100k ataques nuevos/día). SOC2/GDPR/NIST; clientes Dropbox, Fortune 500.",
          "tags": [
            "managed-guardrail",
            "prompt-injection",
            "threat-intel",
            "multilingual",
            "enterprise"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "azure-prompt-shields",
          "name": "Azure AI Content Safety — Prompt Shields (con Spotlighting)",
          "type_es": "Plataforma",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Microsoft Azure",
          "url": "https://learn.microsoft.com/en-us/azure/ai-services/content-safety/concepts/jailbreak-detection",
          "url_label": "Azure docs",
          "description_es": "API gestionada que detecta ataques directos (jailbreak) e indirectos (XPIA - cross-prompt injection en documentos/correos). En Build 2025 añadió Spotlighting para distinguir input confiable vs no confiable. Integrado con Microsoft Defender en Azure AI Foundry.",
          "tags": [
            "prompt-injection",
            "xpia",
            "managed-service",
            "content-safety",
            "azure",
            "spotlighting"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "cisco-ai-defense",
          "name": "Cisco AI Defense (ex-Robust Intelligence)",
          "type_es": "Plataforma",
          "subtheme": "runtime-guardrails",
          "year": 2025,
          "authority": "Cisco (post-adquisición Robust Intelligence, oct 2024)",
          "url": "https://www.cisco.com/site/us/en/products/security/ai-defense/index.html",
          "url_label": "Cisco AI Defense",
          "description_es": "Suite empresarial con tres pilares: AI Supply Chain Risk Management (escaneo modelos, repos, MCP servers), Validation con red-teaming algorítmico (heredado Robust Intelligence) y Runtime Protection con guardrails alimentados por Cisco Talos. Primer AI Firewall comercial.",
          "tags": [
            "ai-firewall",
            "supply-chain",
            "red-team",
            "runtime-protection",
            "mcp-scanning",
            "enterprise"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "pyrit",
          "name": "PyRIT — Python Risk Identification Tool",
          "type_es": "Herramienta",
          "subtheme": "red-team-tooling",
          "year": 2026,
          "authority": "Microsoft (AI Red Team)",
          "url": "https://github.com/microsoft/PyRIT",
          "url_label": "Repo oficial",
          "description_es": "Framework abierto de red-team automatizado para sistemas GenAI. Soporta targets OpenAI, Azure, Anthropic, Google, HuggingFace, endpoints HTTP/WebSocket, web apps via Playwright. v0.13.0 17-abr-2026; complementa Garak/DeepTeam.",
          "tags": [
            "red-team",
            "automation",
            "security-testing",
            "multi-target",
            "microsoft"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "deepteam",
          "name": "DeepTeam",
          "type_es": "Framework",
          "subtheme": "red-team-tooling",
          "year": 2025,
          "authority": "Confident AI",
          "url": "https://www.trydeepteam.com/",
          "url_label": "DeepTeam docs",
          "description_es": "Framework OSS de LLM red-teaming construido por autores de DeepEval. Detecta 40+ vulnerabilidades (bias, misinfo, PII leakage, harmful content), simula 10+ tipos de ataque y mapea a OWASP LLM Top-10, NIST AI RMF y MITRE ATLAS. Incluye 7 guardrails de producción.",
          "tags": [
            "red-team",
            "llm-testing",
            "owasp-llm",
            "nist-rmf",
            "mitre-atlas",
            "guardrails"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "iso-iec-42001",
          "name": "ISO/IEC 42001:2023 — AI Management Systems",
          "type_es": "Estándar",
          "subtheme": "standards-frameworks",
          "year": 2023,
          "authority": "ISO/IEC JTC 1/SC 42",
          "url": "https://www.iso.org/standard/42001",
          "url_label": "ISO 42001",
          "description_es": "Primer estándar internacional certificable de Sistema de Gestión de IA (AIMS). Define políticas, roles, procesos y controles organizativos para desplegar IA de forma responsable. Estructura tipo ISO 27001 (Annex A con controles). Gartner: 83% Fortune 500 lo exigirán a vendors hacia 2027.",
          "tags": [
            "management-system",
            "certification",
            "governance",
            "aims",
            "procurement-gate",
            "iso"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "iso-iec-23894",
          "name": "ISO/IEC 23894:2023 — AI Risk Management Guidance",
          "type_es": "Estándar",
          "subtheme": "standards-frameworks",
          "year": 2023,
          "authority": "ISO/IEC JTC 1/SC 42",
          "url": "https://www.iso.org/standard/77304.html",
          "url_label": "ISO 23894",
          "description_es": "Guía de gestión de riesgos específica para IA, alineada con ISO 31000:2018. Complementa ISO/IEC 42001 aportando el cómo del risk management (identificación, evaluación, tratamiento de riesgos AI). No certificable por sí misma.",
          "tags": [
            "risk-management",
            "guidance",
            "iso-31000-aligned",
            "iso"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mitre-atlas-v5",
          "name": "MITRE ATLAS v5.4 (feb 2026)",
          "type_es": "Framework",
          "subtheme": "standards-frameworks",
          "year": 2026,
          "authority": "MITRE",
          "url": "https://atlas.mitre.org/",
          "url_label": "ATLAS",
          "description_es": "Matriz tipo ATT&CK de tácticas, técnicas y mitigaciones para sistemas IA. v5.4 (feb 2026) añade técnicas de agentes como 'Publish Poisoned AI Agent Tool' y 'Escape to Host'. Total: 16 tácticas, 84 técnicas, 56 sub-técnicas, 32 mitigaciones, 42 case studies; pivota del enfoque model-centric a la capa de ejecución de agentes.",
          "tags": [
            "threat-modeling",
            "attack-style",
            "agent-security",
            "tool-poisoning",
            "mitre"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "owasp-aisvs",
          "name": "OWASP AISVS — AI Security Verification Standard v1.0",
          "type_es": "Estándar",
          "subtheme": "standards-frameworks",
          "year": 2025,
          "authority": "OWASP Foundation",
          "url": "https://owasp.org/www-project-artificial-intelligence-security-verification-standard-aisvs-docs/",
          "url_label": "OWASP AISVS",
          "description_es": "Checklist verificable y testeable de requisitos de seguridad para sistemas con IA, modelado sobre OWASP ASVS. v1.0 cubre validación input usuario, supply chain de modelos, controles ML clásicos y arquitecturas LLM. Diseñado para auditoría/penetration testing.",
          "tags": [
            "verification-standard",
            "checklist",
            "owasp",
            "audit",
            "ai-security"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "owasp-aibom",
          "name": "OWASP AIBOM Project",
          "type_es": "Especificación",
          "subtheme": "standards-frameworks",
          "year": 2025,
          "authority": "OWASP GenAI Security Project",
          "url": "https://owasp.org/www-project-aibom/",
          "url_label": "OWASP AIBOM",
          "description_es": "Estandariza el AI Bill of Materials cubriendo modelos, datasets, código, hardware, data processing y governance. Genera output en CycloneDX 1.7 (oct 2025, ECMA-424 2nd ed) alineado con SPDX 3.0. Incluye AIBOM Generator OSS para HF; auditores EU AI Act ya lo piden.",
          "tags": [
            "aibom",
            "supply-chain",
            "transparency",
            "cyclonedx",
            "spdx",
            "ml-bom"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "eu-ai-act-gpai-cop",
          "name": "EU AI Act — GPAI Code of Practice",
          "type_es": "Especificación",
          "subtheme": "standards-frameworks",
          "year": 2025,
          "authority": "Comisión Europea + AI Board",
          "url": "https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai",
          "url_label": "GPAI CoP",
          "description_es": "Código de práctica voluntario para proveedores de modelos GPAI, elaborado por ~1.000 stakeholders. Tres capítulos: Transparencia, Copyright, Safety & Security. La CE lo reconoce como herramienta válida para demostrar cumplimiento de los Artículos 53 y 55. Vigente desde 2-ago-2025.",
          "tags": [
            "eu-ai-act",
            "gpai",
            "voluntary-code",
            "compliance-evidence"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "owasp-asi",
          "name": "OWASP Top 10 for Agentic Applications (ASI)",
          "type_es": "Estándar",
          "subtheme": "governance-process",
          "year": 2025,
          "authority": "OWASP Gen AI Security Project",
          "url": "https://genai.owasp.org/2025/12/09/owasp-top-10-for-agentic-applications-the-benchmark-for-agentic-security-in-the-age-of-autonomous-ai/",
          "url_label": "OWASP ASI Top 10 (Dic 2025)",
          "description_es": "Estándar publicado el 9 de diciembre de 2025 con 10 riesgos ASI01-ASI10: Goal Manipulation, Tool Misuse, Identity, Privilege Escalation, Supply Chain, Memory Poisoning, Inter-Agent Comms, Cascading Failures, Trust Exploitation, Rogue Agents.",
          "tags": [
            "seguridad",
            "owasp",
            "asi",
            "gobernanza"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "ms-agent-governance-toolkit",
          "name": "Microsoft Agent Governance Toolkit",
          "type_es": "Plataforma",
          "subtheme": "governance-process",
          "year": 2025,
          "authority": "Microsoft",
          "url": "https://github.com/microsoft/agent-governance-toolkit",
          "url_label": "GitHub microsoft/agent-governance-toolkit",
          "description_es": "Toolkit open-source que cubre los 10 riesgos ASI mediante enforcement determinista a nivel de aplicación. Logra un 0,00% de violaciones de política, frente al 26,67% del enforcement basado en prompts, en pruebas de red-team.",
          "tags": [
            "governance",
            "policy",
            "microsoft",
            "determinista"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "llamafirewall",
          "name": "LlamaFirewall",
          "type_es": "Framework",
          "subtheme": "agent-security-specific",
          "year": 2025,
          "authority": "Meta AI",
          "url": "https://meta-llama.github.io/PurpleLlama/LlamaFirewall/",
          "url_label": "Documentación oficial",
          "description_es": "Framework OSS de guardrails para agentes que combina PromptGuard 2 (jailbreak), AlignmentCheck (auditoría de chain-of-thought contra goal hijacking) y CodeShield (análisis estático online del código generado). Reduce los ataques exitosos del 17,6% al 1,7% en benchmarks de Meta. Primer framework de guardrails público específico para agentes.",
          "tags": [
            "agent-security",
            "guardrail-framework",
            "prompt-injection",
            "alignment",
            "code-security",
            "meta",
            "oss"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mcp-tool-poisoning",
          "name": "MCP Tool Poisoning (clase de ataque)",
          "type_es": "Patrón",
          "subtheme": "agent-security-specific",
          "year": 2025,
          "authority": "Invariant Labs / academia (MCPTox AAAI)",
          "url": "https://invariantlabs.ai/blog/mcp-security-notification-tool-poisoning-attacks",
          "url_label": "Invariant Labs",
          "description_es": "Forma específica de prompt injection indirecto donde instrucciones maliciosas se incrustan en metadata de herramientas MCP (descripción, parámetros, schema), no en input de usuario. El cliente MCP pasa la metadata sin validación al contexto del LLM, que la interpreta como instrucción. Demostrado por Invariant exfiltrando historial WhatsApp completo. Recogido en MITRE ATLAS v5.4 como 'Publish Poisoned AI Agent Tool'.",
          "tags": [
            "mcp",
            "indirect-prompt-injection",
            "supply-chain",
            "attack-pattern",
            "agentic",
            "mitre-atlas"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "mcp-oauth-authz",
          "name": "MCP Authorization Specification (OAuth 2.1 + PKCE + RFC 8707)",
          "type_es": "Especificación",
          "subtheme": "agent-security-specific",
          "year": 2026,
          "authority": "Anthropic / MCP Working Group",
          "url": "https://modelcontextprotocol.io/specification/draft/basic/authorization",
          "url_label": "MCP authorization spec",
          "description_es": "Modelo de autorización para servidores MCP basado en OAuth 2.1, PKCE obligatorio y resource indicators (RFC 8707). Desacopla la lógica de control de acceso del servidor MCP delegando en un authorization server confiable. Anuncia recursos vía Protected Resource Metadata (PRM). Pieza clave para la adopción enterprise de MCP.",
          "tags": [
            "oauth-2.1",
            "pkce",
            "mcp",
            "agent-identity",
            "authorization",
            "rfc-8707"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [
        {
          "from": "mitre-atlas-v5",
          "type": "referencia",
          "to": "owasp-asi"
        },
        {
          "from": "llama-guard-3",
          "type": "implementa-control-de",
          "to": "owasp-genai-top10"
        },
        {
          "from": "nemo-guardrails",
          "type": "implementa-control-de",
          "to": "owasp-genai-top10"
        },
        {
          "from": "iso-iec-23894",
          "type": "complementa",
          "to": "iso-iec-42001"
        },
        {
          "from": "owasp-aisvs",
          "type": "complementa",
          "to": "owasp-genai-top10"
        },
        {
          "from": "owasp-aibom",
          "type": "implementa",
          "to": "iso-iec-42001"
        },
        {
          "from": "eu-ai-act-gpai-cop",
          "type": "implementa",
          "to": "eu-ai-act"
        },
        {
          "from": "llama-guard-4",
          "type": "supersede",
          "to": "llama-guard-3"
        },
        {
          "from": "pyrit",
          "type": "complementa",
          "to": "garak"
        },
        {
          "from": "deepteam",
          "type": "complementa",
          "to": "garak"
        },
        {
          "from": "guardrails-ai",
          "type": "compite-con",
          "to": "nemo-guardrails"
        },
        {
          "from": "lakera-guard",
          "type": "compite-con",
          "to": "azure-prompt-shields"
        },
        {
          "from": "ms-agent-governance-toolkit",
          "type": "mitiga",
          "to": "owasp-asi"
        },
        {
          "from": "llamafirewall",
          "type": "implementa-control-de",
          "to": "owasp-asi"
        },
        {
          "from": "mcp-oauth-authz",
          "type": "mitiga",
          "to": "mcp-tool-poisoning"
        }
      ],
      "cross_facet_links": [
        {
          "to_facet": "ml-sat-ops",
          "to_entity": "telemanom-jpl",
          "from_entity": "nist-ai-600-1",
          "rationale": "NIST AI 600-1 exige model cards y eval baselines, controles compartidos con MLOps tradicional."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "supply-chain-bom",
          "from_entity": "owasp-aibom",
          "rationale": "AIBOM (CycloneDX 1.7 + SPDX 3.0) sigue exactamente el patrón del SBOM/FBOM/HBOM espacial; misma capa de transparencia supply-chain aplicada a artefactos AI."
        },
        {
          "to_facet": "stack-templates",
          "to_entity": "well-architected",
          "from_entity": "iso-iec-42001",
          "rationale": "ISO 42001 es el estándar de management system AI análogo a ISO 27001; debe entrar en los stack templates como gate de procurement enterprise."
        },
        {
          "to_facet": "llmops",
          "to_entity": "vllm",
          "from_entity": "garak",
          "rationale": "garak red-teamea endpoints de serving LLM como vLLM; la evaluación adversarial del endpoint desplegado conecta el tooling de seguridad con la capa de operación."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "ccsds-sdls",
          "from_entity": "owasp-asi",
          "rationale": "ASI03/ASI07 (identidad y comms inter-agente firmadas) replican CCSDS Space Data Link Security: autenticación de comandos como defensa no negociable."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "ccsds-sdls",
          "from_entity": "mcp-oauth-authz",
          "rationale": "OAuth 2.1 + PKCE + RFC 8707 para MCP busca trust entre peers no-humanos sobre canales no confiables, idéntico problema que CCSDS SDLS resolvió para enlaces espaciales."
        },
        {
          "to_facet": "cross-domain-resilience",
          "to_entity": "do178c-arp4754",
          "from_entity": "llamafirewall",
          "rationale": "AlignmentCheck de LlamaFirewall (auditoría chain-of-thought) tiene paralelo conceptual con runtime assurance / monitor architectures de DO-178C/ARP4754A."
        },
        {
          "to_facet": "agentic-llmops",
          "to_entity": "mcp",
          "from_entity": "mcp-tool-poisoning",
          "rationale": "Tool Poisoning es una clase de amenaza directa contra MCP: explota la metadata de herramientas (descripción, parámetros, schema) como vector de prompt injection indirecto."
        },
        {
          "to_facet": "space-cybersec",
          "to_entity": "sparta-v3",
          "from_entity": "mitre-atlas-v5",
          "rationale": "ATLAS v5.4 y SPARTA v3.2 son matrices threat-modeling análogas adaptadas a dominios distintos."
        }
      ]
    },
    {
      "facet_id": "gitops-cd",
      "facet_label_es": "GitOps y entrega continua",
      "super_category_id": "cross-cutting-edge",
      "intro_es": "Faceta dedicada a GitOps declarativo y entrega continua progresiva sobre Kubernetes. Cubre los 4 principios CNCF de OpenGitOps (declarative + versioned/immutable + pulled automatically + continuously reconciled), la suite Argo graduada en CNCF en diciembre 2022 (Argo CD, Argo Rollouts, ApplicationSet, App-of-Apps, Sync Waves) y plataformas de promoción multi-stage (Kargo de Akuity).\n\nEl núcleo del modelo: Git como única fuente de verdad, controladores que reconcilian estado declarado contra clusters K8s, y patrones de progresión (canary, blue-green, AnalysisRuns con métricas) que sustituyen los rollouts imperativos. App-of-Apps y ApplicationSet escalan la operación a contextos multi-cluster/multi-tenant; Sync Waves ordena dependencias temporales; Kargo orquesta promoción de Freight entre Stages (dev→staging→prod) con gates programables.\n\nEsta faceta complementa a `cross-domain-resilience` (donde viven los patrones de estabilidad clásicos: timeouts, retries, circuit breakers, sagas) aportando la capa operacional de cómo se entrega ese sistema resiliente. La graduación CNCF de Argo (diciembre 2022) y la consolidación 2024-2026 de Kargo como estándar de promoción justifican promover GitOps de subtheme a faceta de primer nivel.",
      "subthemes": [
        {
          "id": "gitops-principles",
          "label_es": "Principios y especificación GitOps (CNCF)"
        },
        {
          "id": "argo-suite",
          "label_es": "Suite Argo (CD, Rollouts, ApplicationSet, App-of-Apps, Sync Waves)"
        },
        {
          "id": "progressive-delivery-platforms",
          "label_es": "Plataformas de promoción y gobernanza GitOps"
        }
      ],
      "entities": [
        {
          "id": "argo-cd",
          "name": "Argo CD — Declarative GitOps CD for Kubernetes",
          "type_es": "Plataforma",
          "subtheme": "argo-suite",
          "year": 2024,
          "authority": "Argo Project / CNCF",
          "url": "https://argo-cd.readthedocs.io/en/stable/",
          "url_label": "Argo CD Docs",
          "description_es": "Controlador GitOps graduado en CNCF que reconcilia continuamente el estado declarado en Git con clústeres Kubernetes mediante un bucle pull, soportando OOTB Helm, Kustomize, Jsonnet y plain manifests con RBAC y SSO integrados.",
          "tags": [
            "gitops",
            "kubernetes",
            "cncf",
            "ci-cd",
            "declarative",
            "control-plane"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "opengitops-principles",
          "name": "OpenGitOps Principles v1.0.0",
          "type_es": "Especificación",
          "subtheme": "gitops-principles",
          "year": 2022,
          "authority": "CNCF App Delivery TAG — GitOps Working Group",
          "url": "https://opengitops.dev/",
          "url_label": "OpenGitOps",
          "description_es": "Especificación canónica del GitOps Working Group de la CNCF que define cuatro principios neutros respecto al proveedor: estado declarativo, versionado e inmutable, obtención automática mediante pull y reconciliación continua frente a la deriva.",
          "tags": [
            "opengitops",
            "cncf",
            "principles",
            "framework",
            "declarative",
            "inmutable"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "argo-rollouts",
          "name": "Argo Rollouts — Progressive Delivery Controller",
          "type_es": "Herramienta",
          "subtheme": "argo-suite",
          "year": 2024,
          "authority": "Argo Project / CNCF",
          "url": "https://argo-rollouts.readthedocs.io/en/stable/",
          "url_label": "Argo Rollouts Docs",
          "description_es": "Controlador Kubernetes que extiende los Deployments con estrategias blue/green y canary, gestionando AnalysisRuns sobre métricas de Prometheus, Datadog o webhooks para promoción y rollback automatizados.",
          "tags": [
            "argo-rollouts",
            "canary",
            "blue-green",
            "progressive-delivery",
            "analysis",
            "rollback"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "argo-app-of-apps",
          "name": "App of Apps — Argo CD Cluster Bootstrapping Pattern",
          "type_es": "Patrón",
          "subtheme": "argo-suite",
          "year": 2024,
          "authority": "Argo Project",
          "url": "https://argo-cd.readthedocs.io/en/stable/operator-manual/cluster-bootstrapping/",
          "url_label": "Argo CD Docs — Cluster Bootstrapping",
          "description_es": "Patrón jerárquico donde una Application raíz despliega Applications hijas que a su vez gestionan cargas de trabajo, permitiendo bootstrap declarativo completo de un clúster desde un único repositorio Git.",
          "tags": [
            "app-of-apps",
            "bootstrap",
            "patron",
            "hierarchy",
            "kubernetes"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "argo-applicationset",
          "name": "ApplicationSet Controller — Argo CD",
          "type_es": "Herramienta",
          "subtheme": "argo-suite",
          "year": 2024,
          "authority": "Argo Project",
          "url": "https://argo-cd.readthedocs.io/en/stable/operator-manual/applicationset/",
          "url_label": "Argo CD Docs — ApplicationSet",
          "description_es": "Controlador que genera Applications de Argo CD a escala mediante generadores List, Cluster, Git, Matrix y Pull Request, habilitando despliegues multi-clúster y multi-tenant con plantillas parametrizadas.",
          "tags": [
            "applicationset",
            "multi-cluster",
            "multi-tenant",
            "generators",
            "scaling"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "argo-sync-waves",
          "name": "Argo CD Sync Waves & Phases",
          "type_es": "Patrón",
          "subtheme": "argo-suite",
          "year": 2024,
          "authority": "Argo Project",
          "url": "https://argo-cd.readthedocs.io/en/stable/user-guide/sync-waves/",
          "url_label": "Argo CD Docs — Sync Waves",
          "description_es": "Mecanismo de ordenamiento por fases (PreSync, Sync, PostSync) y olas (anotación argocd.argoproj.io/sync-wave) que controla dependencias entre recursos durante la reconciliación, esencial para migraciones de esquema y bootstrapping.",
          "tags": [
            "sync-waves",
            "hooks",
            "ordering",
            "dependencias",
            "patron"
          ],
          "reliability": "HIGH"
        },
        {
          "id": "kargo-akuity",
          "name": "Kargo — Multi-Stage Promotion para Argo CD",
          "type_es": "Herramienta",
          "subtheme": "progressive-delivery-platforms",
          "year": 2024,
          "authority": "Akuity",
          "url": "https://kargo.akuity.io/",
          "url_label": "Kargo Docs",
          "description_es": "Plataforma open-source que orquesta la promoción de Freight (artefactos versionados) entre Stages dev/staging/prod sobre Argo CD modelando Warehouses y políticas, resolviendo el hueco de promoción multi-entorno en GitOps puro.",
          "tags": [
            "kargo",
            "akuity",
            "promotion",
            "multi-stage",
            "gitops"
          ],
          "reliability": "MEDIUM"
        },
        {
          "id": "argo-cncf-graduation",
          "name": "Argo Graduates from the CNCF (Dec 2022)",
          "type_es": "Documentación",
          "subtheme": "progressive-delivery-platforms",
          "year": 2022,
          "authority": "Cloud Native Computing Foundation",
          "url": "https://www.cncf.io/announcements/2022/12/06/argo-graduates-from-the-cloud-native-computing-foundation-incubator/",
          "url_label": "CNCF Announcement",
          "description_es": "Anuncio oficial de graduación del proyecto Argo (Argo CD, Workflows, Rollouts, Events) en la CNCF, hito que certifica madurez de gobernanza, neutralidad y adopción en producción a escala empresarial.",
          "tags": [
            "cncf",
            "graduation",
            "argo",
            "milestone",
            "governance"
          ],
          "reliability": "HIGH"
        }
      ],
      "relationships": [],
      "cross_facet_links": []
    }
  ],
  "super_categories": [
    {
      "id": "space-resilience",
      "label_es": "Espacio y resiliencia",
      "intro_es": "Cluster espacial del atlas: constelaciones LEO, tolerancia a fallos a bordo, software flight-grade, ML orbital y ciberseguridad espacial. Cinco facetas que comparten el contexto físico (radiación, latencia, ventanas de contacto) y un vocabulario común (FDIR, SDLS, ECSS, CCSDS) que las distingue del resto del corpus.",
      "facet_ids": [
        "sat-constellation",
        "sat-fault-tolerance",
        "space-grade-sw",
        "ml-sat-ops",
        "space-cybersec"
      ]
    },
    {
      "id": "ai-ml-production",
      "label_es": "Producción AI/ML",
      "intro_es": "Super-cluster AI: stack LLMOps de producción (serving, RAG, evaluación, fine-tuning), stack agéntico (MCP, A2A, orquestación, coding agents) y la nueva faceta de confianza/seguridad/gobernanza IA (NIST AI RMF, EU AI Act, OWASP GenAI, runtime guardrails y red-team tooling). Es el cluster más denso del atlas tras la expansión de mayo 2026.",
      "facet_ids": [
        "llmops",
        "agentic-llmops",
        "ai-trust-safety"
      ]
    },
    {
      "id": "cross-cutting-edge",
      "label_es": "Cross-cutting y edge",
      "intro_es": "Patrones transversales (estabilidad cloud, chaos engineering, assurance frameworks), edge orquestado y enjambres físicos/orbitales, y la nueva faceta de GitOps + entrega continua (Argo, Kargo, OpenGitOps). Cluster integrador que conecta SPACE con AI vía primitivas de resiliencia comunes.",
      "facet_ids": [
        "cross-domain-resilience",
        "edge-swarms",
        "gitops-cd"
      ]
    },
    {
      "id": "meta-frontier",
      "label_es": "Meta y frontera",
      "intro_es": "Plantillas de decisión arquitectónica (ADR, C4, ATAM, STRIDE) y frontera de investigación (BFT moderno, verificación formal, neuromórfico, PQC/FHE, fiabilidad IA). El meta-nivel del atlas: cómo se decide y hacia dónde mira la disciplina.",
      "facet_ids": [
        "stack-templates",
        "research-frontier"
      ]
    }
  ]
}