mirror of https://github.com/thatmattlove/hyperglass.git synced 2026-04-17 21:38:27 +00:00

feat(structured): release structured feature set (squash merge)

Summary:
- Add structured traceroute support with comprehensive IP enrichment (ASN/org/RDNS).
- Improve MikroTik traceroute cleaning and aggregation; collapse repeated tables into a single representative table.
- Enhance traceroute logging for visibility and add traceroute-specific cleaning helpers.
- Add/adjust IP enrichment plugins and BGP/traceroute enrichment integrations.
- UI updates for traceroute output and path visualization; update docs and configuration for structured output.

This commit squashes changes from 'structured-dev' into a single release commit.
This commit is contained in:
Wilhelm Schonfeldt 2025-09-30 16:46:01 +02:00
parent 0398966062
commit 9db9849a59
No known key found for this signature in database
GPG key ID: 9A15BF796D5C3F1E
27 changed files with 3748 additions and 933 deletions

.gitignore (4 changes)

@@ -21,6 +21,9 @@ __pycache__/
.python-version
.venv
# Local virtualenv freeze for this workspace
venv-requirements.txt
# MyPy
.mypy_cache
@@ -84,3 +87,4 @@ docs/_build/
# PyBuilder
target/
assets/
venv-requirements.txt

README.md (140 changes)

@@ -1,133 +1,3 @@
### Install https://hyperglass.dev/installation/docker
mkdir -p /etc/hyperglass/svg
cd /opt
git clone https://github.com/CarlosSuporteISP/hyperglass_structured.git --depth=1
mv hyperglass_structured hyperglass
cd /opt/hyperglass
### https://hyperglass.dev/configuration/overview
### https://hyperglass.dev/configuration/config - after copying, edit the files in /etc/hyperglass with your own information, or add options following the official docs
cp /opt/hyperglass/.samples/sample_config /etc/hyperglass/config.yaml
cp /opt/hyperglass/.samples/sample_terms-and-conditions /etc/hyperglass/terms-and-conditions.md
### https://hyperglass.dev/configuration/devices - after copying, edit the files in /etc/hyperglass with your own information, or add options following the official docs
cp /opt/hyperglass/.samples/sample_devices2 /etc/hyperglass/devices.yaml
### https://hyperglass.dev/configuration/directives - copy the sample that matches your platform
### (each copy overwrites /etc/hyperglass/directives.yaml, so run only the one you need)
cp /opt/hyperglass/.samples/sample_directives_huawei /etc/hyperglass/directives.yaml
# cp /opt/hyperglass/.samples/sample_directives_juniper /etc/hyperglass/directives.yaml
# cp /opt/hyperglass/.samples/sample_directives_mikrotik /etc/hyperglass/directives.yaml
### Environment Variables https://hyperglass.dev/installation/environment-variables
cp /opt/hyperglass/.samples/sample_hyperglass /etc/hyperglass/hyperglass.env
###
You should also add deny rules for your own AS prefixes if you don't want others to look them up through your hyperglass instance.
The directives file has a field for this that is normally commented out. It applies to devices such as Huawei or MikroTik, but currently the default directives are still used: in my testing, placing the rules in the configuration folder (/etc/hyperglass/...) did not work. If that is fixed later, everything can live in the directives file under /etc/hyperglass; for now, using the defaults is fine.
An ENTRYPOINT in the Dockerfile could apply this change at build time when the service starts, but I don't have time right now to implement that.
Edit /opt/hyperglass/hyperglass/defaults/directives/huawei.py or /opt/hyperglass_structured/hyperglass/defaults/directives/mikrotik.py.
The snippet there, which ships commented out, should be modified to something like this:
# DENY RULE FOR AS PREFIX - IPv4
RuleWithIPv4(
condition="172.16.0.0/22",
ge="22",
le="32",
action="deny",
command="",
),
# DENY RULE FOR AS PREFIX - IPv6
RuleWithIPv6(
condition="fd00:2::/32",
ge="32",
le="128",
action="deny",
command="",
),
MikroTik RouterOS v6:
command="ip route print detail without-paging where {target} in dst-address bgp and dst-address !=0.0.0.0/0",
command="ipv6 route print detail without-paging where {target} in dst-address bgp and dst-address !=::/0",
MikroTik RouterOS v7:
command="routing route print detail without-paging where {target} in dst-address bgp and dst-address !=0.0.0.0/0",
command="routing route print detail without-paging where {target} in dst-address bgp and dst-address !=::/0",
###
### Optional: Quickstart
cd /opt/hyperglass
docker compose up
### Create a systemd service
cp /opt/hyperglass/.samples/hyperglass-docker.service /etc/hyperglass/hyperglass.service
ln -s /etc/hyperglass/hyperglass.service /etc/systemd/system/hyperglass.service
systemctl daemon-reload
systemctl enable hyperglass
systemctl start hyperglass
###
Acknowledgments:
To thatmatt, for this incredible project that I really like (nothing against other Looking Glass projects): https://github.com/thatmattlove/hyperglass
To remontti, for the tips on Telegram, his attention, and his fork https://github.com/remontti/hyperglass/tree/main, https://blog.remontti.com.br/7201, which is now quite dated (Node 14, no Docker); that is why I decided to build on the official version.
To the user \邪萬教教我/ @Yukaphoenix572 好呆: a message of his in the Telegram group opened my mind to the solution while I was searching through old conversations.
To issue https://github.com/thatmattlove/hyperglass/issues/318, for the fix to queries that also weren't working on MikroTik (for those who use Claro).
And of course, last but not least: to AIs. Apologies to those who dislike "vibe coding," but they help a lot. I used many of the six main AIs on the market, but only Manus truly managed to help me, contributing about 45% of the development, testing, adjustments, and descriptions.
Total development time was over three weeks to get everything adjusted. I know I'm not that great at development, but I'm studying and improving; as I always say, in life and professionally there is always something to learn, and we never know everything.
I also fixed the official Huawei plugin, which wasn't working.
The issue was the format in which the prefix was passed to the device: Huawei expects 192.0.2.0 24 (with a space), but the official plugin sent 192.0.2.0/24 (with a slash).
The fix adapts the query to the format Huawei accepts.
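The format conversion behind this fix can be sketched in a few lines; this is an illustrative sketch, not the plugin's actual code, and `to_huawei_prefix` is a hypothetical name:

```python
import ipaddress

def to_huawei_prefix(cidr: str) -> str:
    """Convert CIDR notation ('192.0.2.0/24') to the space-separated
    'address mask-length' form that Huawei VRP expects in queries."""
    network = ipaddress.ip_network(cidr, strict=False)
    return f"{network.network_address} {network.prefixlen}"

print(to_huawei_prefix("192.0.2.0/24"))  # 192.0.2.0 24
```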
###
<div align="center">
<br/>
<img src="https://res.cloudinary.com/hyperglass/image/upload/v1593916013/logo-light.svg" width=300></img>
@@ -157,6 +27,8 @@ hyperglass is intended to make implementing a looking glass too easy not to do,
## Features
- BGP Route, BGP Community, BGP AS Path, Ping, & Traceroute, or [add your own commands](https://hyperglass.dev/configuration/directives).
- **Structured data output** with rich metadata for supported platforms
- **Enhanced traceroute** with ASN information, organization names, country codes, and IXP detection
- Full IPv6 support
- Customizable everything: features, theme, UI/API text, error messages, commands
- Built-in support for:
@@ -173,12 +45,17 @@ hyperglass is intended to make implementing a looking glass too easy not to do,
- OpenBGPD
- TNSR
- VyOS
- **Structured BGP Route support** for: Arista EOS, FRRouting, Huawei VRP, Juniper Junos, MikroTik RouterOS
- **Structured Traceroute support** for: Arista EOS, FRRouting, Huawei VRP, Juniper Junos, MikroTik RouterOS/SwitchOS
- Configurable support for any other [supported platform](https://hyperglass.dev/platforms)
- Optionally access devices via an SSH proxy/jump server
- Access-list/prefix-list style query control to whitelist or blacklist query targets
- REST API with automatic, configurable OpenAPI documentation
- Modern, responsive UI built on [ReactJS](https://reactjs.org/), with [NextJS](https://nextjs.org/) & [Chakra UI](https://chakra-ui.com/), written in [TypeScript](https://www.typescriptlang.org/)
- **AS path visualization** with interactive flow charts showing organization names
- **Offline IP enrichment** using BGP.tools bulk data and PeeringDB for maximum performance
- Query multiple devices simultaneously
- **Concurrent processing** with non-blocking operations for improved performance
- Browser-based DNS-over-HTTPS resolution of FQDN queries
*To request support for a specific platform, please [submit a Github Issue](https://github.com/thatmattlove/hyperglass/issues/new) with the **feature** label.*
@@ -206,5 +83,8 @@ hyperglass is built entirely on open-source software. Here are some of the aweso
- [Litestar](https://litestar.dev)
- [Pydantic](https://docs.pydantic.dev/latest/)
- [Chakra UI](https://chakra-ui.com/)
- [React Flow](https://reactflow.dev/) - AS path visualization
- [BGP.tools](https://bgp.tools/) - IP enrichment data
- [PeeringDB](https://peeringdb.com/) - Network organization and IXP data
[![GitHub](https://img.shields.io/github/license/thatmattlove/hyperglass?color=330036&style=for-the-badge)](https://github.com/thatmattlove/hyperglass/blob/main/LICENSE)


@@ -21,14 +21,20 @@ For external validation, hyperglass supports two backends:
Additionally, hyperglass provides the ability to control which BGP communities are shown to the end user.
For devices with structured traceroute support (Arista EOS, FRRouting, Huawei VRP, Juniper Junos, and MikroTik RouterOS), hyperglass can enhance the output with IP enrichment data including ASN information, organization names, country codes, and IXP detection using offline data from BGP.tools and PeeringDB.
| Parameter | Type | Default Value | Description |
| :----------------------------- | :-------------- | :------------ | :------------------------------------------------------------------------------------------------------------------------------------- |
| :-------------------------------- | :-------------- | :------------ | :------------------------------------------------------------------------------------------------------------------------------------- |
| `structured.rpki.mode` | String | router | Use `router` to use the router's view of the RPKI state, or `external` to use an external validation service. |
| `structured.rpki.backend` | String | cloudflare | When using `external` mode, choose `cloudflare` or `routinator` as the validation backend. |
| `structured.rpki.rpki_server_url` | String | | When using `routinator` backend, specify the base URL of your Routinator server (e.g., `http://rpki.example.com:3323`). |
| `structured.communities.mode` | String | deny | Use `deny` to deny any communities listed, `permit` to _only_ permit communities listed, or `name` to append friendly names. |
| `structured.communities.items` | List of Strings | | List of communities to match (used by `deny` and `permit` modes). |
| `structured.communities.names` | Dict | | Dictionary mapping BGP community codes to friendly names (used by `name` mode). |
| `structured.ip_enrichment.cache_timeout` | Integer | 86400 | Cache timeout in seconds for IP enrichment data (minimum 24 hours/86400 seconds). |
| `structured.ip_enrichment.enrich_traceroute`| Boolean | true | When `structured:` is present, enable IP enrichment of traceroute hops (ASN, org, IXP). This must be true for enrichment to run. |
| `structured.enable_for_traceroute`| Boolean | (when structured present) true | When `structured:` is present this controls whether the structured traceroute table output is shown. Set to false to force raw router output. |
| `structured.enable_for_bgp_route`| Boolean | (when structured present) true | When `structured:` is present this controls whether the structured BGP route table output is shown. Set to false to force raw router output. |
### RPKI Examples
@@ -104,3 +110,115 @@ structured:
"65000:1102": "Upstream B Location 1"
"65000:2000": "IXP Any"
```
### IP Enrichment Examples
<Callout type="info" emoji="">
**IP Enrichment Requirements**
IP enrichment is currently supported for traceroute outputs on supported platforms.
The system uses offline data from BGP.tools (1.3M+ CIDR entries) and PeeringDB for maximum performance and reliability.
</Callout>
#### Enable IP Enrichment for Traceroute
```yaml filename="config.yaml" copy {2-4}
structured:
# Ensure `structured:` exists to enable structured output. By default the
# structured table output is enabled when this block is present. To disable
# the structured traceroute table, set `structured.enable_for_traceroute: false`.
ip_enrichment:
enrich_traceroute: true
```
#### Enable IP Enrichment with Custom Cache Timeout
```yaml filename="config.yaml" copy {2-5}
structured:
ip_enrichment:
enrich_traceroute: true
cache_timeout: 172800 # 48 hours
```
#### Enforce the Minimum Cache Timeout
```yaml filename="config.yaml" copy {2-4}
structured:
ip_enrichment:
enrich_traceroute: true
cache_timeout: 86400 # 24 hours (minimum)
```
<Callout type="warning" emoji="⚠️">
**Performance Considerations**
- Initial cache loading may take 30-60 seconds on first startup
- Data is cached locally using pickle format for ultra-fast subsequent loads
- Cache files are stored in `/etc/hyperglass/ip_enrichment/`
- Minimum cache timeout is 24 hours (86400 seconds) to prevent excessive API usage
</Callout>
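The pickle-backed caching described in this callout can be sketched roughly as follows. This is an illustrative sketch under stated assumptions, not hyperglass's actual implementation; `load_cache` and `save_cache` are hypothetical names, and only the 24-hour minimum comes from the docs above:

```python
import os
import pickle
import time

MIN_TIMEOUT = 86400  # 24 hours, the documented minimum cache timeout

def load_cache(path: str, timeout: int = MIN_TIMEOUT):
    """Return the cached object if the pickle file is fresh enough, else None."""
    timeout = max(timeout, MIN_TIMEOUT)  # enforce the documented minimum
    if not os.path.exists(path):
        return None
    if time.time() - os.path.getmtime(path) > timeout:
        return None  # stale: the caller should re-download the bulk data
    with open(path, "rb") as f:
        return pickle.load(f)

def save_cache(path: str, obj) -> None:
    """Persist the object with pickle for fast subsequent loads."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "wb") as f:
        pickle.dump(obj, f)
```

In a real deployment `path` would point at a file under `/etc/hyperglass/ip_enrichment/`.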
### Structured Traceroute Configuration
<Callout type="info" emoji="">
**Structured Traceroute Support**
Structured traceroute with rich metadata is available for:
- **Arista EOS**: Parses Unix-style traceroute output with hostname, multiple RTT support, and MPLS labels
- **FRRouting**: Parses Unix-style traceroute output with load balancing and multi-path support
- **Huawei VRP**: Parses Unix-style traceroute output
- **Juniper Junos**: Parses traceroute output with MPLS labels, multipath, and partial timeouts
- **MikroTik RouterOS/SwitchOS**: Parses multi-table format with statistics
When IP enrichment is enabled, traceroute hops are enhanced with ASN numbers, organization names, country codes, prefixes, and IXP detection.
</Callout>
#### Complete Structured Traceroute Setup
```yaml filename="config.yaml" copy {2-12}
structured:
rpki:
mode: external
backend: routinator
rpki_server_url: "https://rpki.example.com"
communities:
mode: name
names:
"65000:1000": "Transit Routes"
"65000:2000": "Peer Routes"
ip_enrichment:
enrich_traceroute: true
cache_timeout: 86400
```
#### Structured Traceroute with Cloudflare RPKI
```yaml filename="config.yaml" copy {2-9}
structured:
rpki:
mode: external
backend: cloudflare
ip_enrichment:
enrich_traceroute: true
```
#### Minimal Structured Traceroute (No IP Enrichment)
```yaml filename="config.yaml" copy {2-4}
structured:
ip_enrichment:
enrich_traceroute: false # Traceroute will show basic hop info without ASN/org data
```
<Callout type="warning" emoji="⚠️">
**IP Enrichment Dependency**
Without IP enrichment enabled:
- Traceroute hops will only show IP addresses and RTT values
- No ASN, organization names, or country information will be displayed
- AS path visualization will be limited or unavailable
- IXP detection will not function
For the full structured traceroute experience with rich metadata, `ip_enrichment.enrich_traceroute: true` is required.
</Callout>


@@ -15,8 +15,17 @@ from hyperglass.constants import __version__
from hyperglass.exceptions import HyperglassError
# Local
from .events import check_redis, init_ip_enrichment
from .routes import info, query, device, devices, queries
from .events import check_redis
from .routes import (
info,
query,
device,
devices,
queries,
ip_enrichment_status,
ip_enrichment_refresh,
aspath_enrich,
)
from .middleware import COMPRESSION_CONFIG, create_cors_config
from .error_handlers import app_handler, http_handler, default_handler, validation_handler
@@ -42,6 +51,9 @@ HANDLERS = [
queries,
info,
query,
ip_enrichment_status,
ip_enrichment_refresh,
aspath_enrich,
]
if not STATE.settings.disable_ui:
@@ -64,7 +76,7 @@ app = Litestar(
ValidationException: validation_handler,
Exception: default_handler,
},
on_startup=[check_redis, init_ip_enrichment],
on_startup=[check_redis],
debug=STATE.settings.debug,
cors_config=create_cors_config(state=STATE),
compression_config=COMPRESSION_CONFIG,


@@ -10,7 +10,7 @@ from litestar import Litestar
from hyperglass.state import use_state
from hyperglass.log import log
__all__ = ("check_redis", "init_ip_enrichment")
__all__ = ("check_redis",)
async def check_redis(_: Litestar) -> t.NoReturn:
@@ -19,25 +19,6 @@ async def check_redis(_: Litestar) -> t.NoReturn:
cache.check()
async def init_ip_enrichment(_: Litestar) -> None:
"""Initialize IP enrichment data at startup."""
try:
params = use_state("params")
if not params.structured.ip_enrichment.enabled:
log.debug("IP enrichment disabled, skipping initialization")
return
except Exception as e:
log.debug(f"Could not check IP enrichment config: {e}")
return
try:
from hyperglass.external.ip_enrichment import _service
log.info("Initializing IP enrichment data at startup...")
success = await _service.ensure_data_loaded()
if success:
log.info("IP enrichment data loaded successfully at startup")
else:
log.warning("Failed to load IP enrichment data at startup")
except Exception as e:
log.error(f"Error initializing IP enrichment data: {e}")
# init_ip_enrichment removed: startup refresh is intentionally disabled and
# IP enrichment data is loaded on-demand when required. Keeping a no-op
# startup hook adds no value and may cause confusion.


@@ -60,6 +60,27 @@ __all__ = (
)
@post("/api/aspath/enrich")
async def aspath_enrich(data: dict) -> dict:
"""Enrich a list of ASNs with organization names on demand.
Expected JSON payload: { "as_path": [123, 456, ...] }
"""
try:
as_path = data.get("as_path", []) if isinstance(data, dict) else []
if not as_path:
return {"success": False, "error": "No as_path provided"}
# Convert to strings and call the existing bulk lookup
from hyperglass.external.ip_enrichment import lookup_asns_bulk
asn_strings = [str(a) for a in as_path]
results = await lookup_asns_bulk(asn_strings)
return {"success": True, "asn_organizations": results}
except Exception as e:
return {"success": False, "error": str(e)}
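A client can exercise this endpoint with a payload of the shape shown in the docstring. The helpers below are hypothetical, illustrating only the request and response shapes implied by the handler above:

```python
import json

def build_aspath_payload(as_path) -> str:
    """Serialize the JSON body expected by POST /api/aspath/enrich."""
    return json.dumps({"as_path": [int(a) for a in as_path]})

def parse_aspath_response(body: str) -> dict:
    """Extract the ASN-to-organization map, raising on an error response."""
    data = json.loads(body)
    if not data.get("success"):
        raise RuntimeError(data.get("error", "unknown error"))
    return data["asn_organizations"]
```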
@get("/api/devices/{id:str}", dependencies={"devices": Provide(get_devices)})
async def device(devices: Devices, id: str) -> APIDevice:
"""Retrieve a device by ID."""
@@ -163,6 +184,39 @@ async def query(_state: HyperglassState, request: Request, data: Query) -> Query
structured=data.device.structured_output or False,
)
else:
# Best-effort: if IP enrichment is enabled, schedule a
# non-blocking background refresh so the service can
# update PeeringDB caches without relying on the client.
try:
from hyperglass.state import use_state
params = use_state("params")
if (
getattr(params, "structured", None)
and params.structured.ip_enrichment.enrich_traceroute
and getattr(params.structured, "enable_for_traceroute", None)
is not False
):
try:
from hyperglass.external.ip_enrichment import (
refresh_ip_enrichment_data,
)
async def _bg_refresh():
try:
await refresh_ip_enrichment_data(force=False)
except Exception as e:
_log.debug("Background IP enrichment refresh failed: {}", e)
# Schedule background refresh and don't await it.
asyncio.create_task(_bg_refresh())
except Exception:
# If import or scheduling fails, proceed without refresh
pass
except Exception:
# If we can't access params, skip background refresh
pass
# Pass request to execution module
output = await execute(data)
@@ -183,7 +237,27 @@ async def query(_state: HyperglassState, request: Request, data: Query) -> Query
else:
raw_output = str(output)
# Only cache successful results
# Detect semantically-empty structured outputs and avoid caching them.
# Examples:
# - BGPRouteTable: {'count': 0, 'routes': []}
# - TracerouteResult: {'hops': []}
skip_cache_empty = False
try:
if json_output and isinstance(raw_output, dict):
# BGP route table empty
if "count" in raw_output and "routes" in raw_output:
if raw_output.get("count", 0) == 0 or not raw_output.get("routes"):
skip_cache_empty = True
# Traceroute result empty
if "hops" in raw_output and (not raw_output.get("hops")):
skip_cache_empty = True
except Exception:
# If any unexpected shape is encountered, don't skip caching by
# accident — fall back to normal behavior.
skip_cache_empty = False
if not skip_cache_empty:
# Only cache successful, non-empty results
await loop.run_in_executor(
None, partial(cache.set_map_item, cache_key, "output", raw_output)
)
@@ -191,10 +265,15 @@ async def query(_state: HyperglassState, request: Request, data: Query) -> Query
None, partial(cache.set_map_item, cache_key, "timestamp", timestamp)
)
await loop.run_in_executor(
None, partial(cache.expire, cache_key, expire_in=_state.params.cache.timeout)
None,
partial(cache.expire, cache_key, expire_in=_state.params.cache.timeout),
)
_log.bind(cache_timeout=_state.params.cache.timeout).debug("Response cached")
else:
_log.bind(cache_key=cache_key).warning(
"Structured output was empty (e.g. 0 routes / 0 hops) - skipping cache to allow immediate retry"
)
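The empty-output detection above reduces to a small predicate; a standalone sketch mirroring the conditions in the handler (`is_empty_structured` is an illustrative name):

```python
def is_empty_structured(raw) -> bool:
    """True for semantically-empty structured outputs that should not be cached,
    e.g. BGPRouteTable {'count': 0, 'routes': []} or TracerouteResult {'hops': []}."""
    if not isinstance(raw, dict):
        return False  # raw text output is cached normally
    if "count" in raw and "routes" in raw:
        if raw.get("count", 0) == 0 or not raw.get("routes"):
            return True
    if "hops" in raw and not raw.get("hops"):
        return True
    return False
```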
runtime = int(round(elapsedtime, 0))
@@ -263,6 +342,21 @@ async def ip_enrichment_refresh(force: bool = False) -> dict:
try:
from hyperglass.external.ip_enrichment import refresh_ip_enrichment_data
# If enrichment is disabled in config, return a clear message
try:
from hyperglass.state import use_state
params = use_state("params")
if (
not getattr(params, "structured", None)
or not params.structured.ip_enrichment.enrich_traceroute
or getattr(params.structured, "enable_for_traceroute", None) is False
):
return {"success": False, "message": "IP enrichment for traceroute is not enabled"}
except Exception:
# If config can't be read, proceed with refresh call and let it decide
pass
success = await refresh_ip_enrichment_data(force=force)
return {
"success": success,


@@ -13,25 +13,28 @@ async def enrich_output_with_ip_enrichment(output: OutputDataModel) -> OutputDat
"""Enrich output data with IP enrichment information."""
params = use_state("params")
# Check if IP enrichment is enabled in configuration
if not params.structured.ip_enrichment.enabled:
log.debug("IP enrichment disabled in configuration, skipping")
# If structured block isn't present or traceroute enrichment explicitly disabled,
# skip enrichment entirely.
if (
not getattr(params, "structured", None)
or not params.structured.ip_enrichment.enrich_traceroute
or getattr(params.structured, "enable_for_traceroute", None) is False
):
log.debug("IP enrichment for traceroute disabled or structured config missing, skipping")
return output
_log = log.bind(enrichment="ip_enrichment")
_log.debug("Starting IP enrichment")
try:
if isinstance(output, BGPRouteTable):
if params.structured.ip_enrichment.enrich_next_hop:
_log.debug("Enriching BGP route table with next-hop information")
await output.enrich_with_ip_enrichment()
_log.info(f"Enriched {len(output.routes)} BGP routes with next-hop data")
else:
_log.debug("Next-hop enrichment disabled, skipping BGP enrichment")
elif isinstance(output, TracerouteResult):
if params.structured.ip_enrichment.enrich_traceroute:
if isinstance(output, TracerouteResult):
# Only enrich traceroute results when structured config exists,
# per-feature top-level flag isn't False, and ip_enrichment is enabled.
if (
getattr(params, "structured", None)
and params.structured.ip_enrichment.enrich_traceroute
and getattr(params.structured, "enable_for_traceroute", None) is not False
):
_log.debug("Enriching traceroute hops with ASN information")
await output.enrich_with_ip_enrichment()

File diff suppressed because it is too large.


@@ -32,6 +32,7 @@ if node_major < MIN_NODE_VERSION:
from .util import cpu_count
from .state import use_state
from .settings import Settings
import os
LOG_LEVEL = logging.INFO if Settings.debug is False else logging.DEBUG
logging.basicConfig(handlers=[LibInterceptHandler()], level=0, force=True)
@@ -155,6 +156,16 @@ def run(workers: int = None):
_workers = workers
if workers is None:
# Allow environment override (useful for Docker Compose):
# HYPERGLASS_WORKERS=n
env_workers = os.getenv("HYPERGLASS_WORKERS")
if env_workers:
try:
_workers = max(1, int(env_workers))
except Exception:
# Fall back to defaults on parse error
_workers = 1 if Settings.debug else cpu_count(2)
else:
if Settings.debug:
_workers = 1
else:
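The worker-count resolution above can be captured as a pure function for clarity; `resolve_workers` is an illustrative re-statement, not part of hyperglass:

```python
import os

def resolve_workers(debug: bool, cpu_default: int, env: dict = os.environ) -> int:
    """Resolve worker count: HYPERGLASS_WORKERS overrides, clamped to >= 1;
    otherwise 1 in debug mode, else the CPU-based default."""
    raw = env.get("HYPERGLASS_WORKERS")
    if raw:
        try:
            return max(1, int(raw))
        except ValueError:
            pass  # fall through to the defaults on a parse error
    return 1 if debug else cpu_default
```

With Docker Compose this allows e.g. `HYPERGLASS_WORKERS=4` in the service environment.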


@@ -117,7 +117,49 @@ class Query(BaseModel):
@property
def device(self) -> Device:
"""Get this query's device object by query_location."""
return self._state.devices[self.query_location]
# Return a proxy around the device so we can override
# structured_output per-request without mutating global state.
device = self._state.devices[self.query_location]
# Determine effective structured_output based on global params
try:
params = use_state("params")
except Exception:
params = None
# Decide which top-level structured enable flag to consult
feature_flag_name = None
if getattr(self, "query_type", None) == "traceroute":
feature_flag_name = "enable_for_traceroute"
elif getattr(self, "query_type", None) in ("bgp_route", "bgp_routestr"):
feature_flag_name = "enable_for_bgp_route"
effective_structured = bool(getattr(device, "structured_output", False))
if params is None or not getattr(params, "structured", None):
# Global structured block absent => structured disabled
effective_structured = False
else:
# If structured is present, default is enabled; allow opt-out
if feature_flag_name is not None:
if getattr(params.structured, feature_flag_name, None) is False:
effective_structured = False
class _DeviceProxy:
"""Tiny proxy object that delegates to the real device but
overrides structured_output."""
def __init__(self, real, structured_value: bool) -> None:
self._real = real
self.structured_output = structured_value
def __getattr__(self, name: str):
return getattr(self._real, name)
def __repr__(self) -> str: # pragma: no cover - trivial
return repr(self._real)
return _DeviceProxy(device, effective_structured)
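The proxy pattern above can be demonstrated in isolation; `FakeDevice` is a stand-in for illustration only:

```python
class DeviceProxy:
    """Minimal re-creation of the per-request proxy idea: override one
    attribute without mutating the shared device object."""

    def __init__(self, real, structured_value: bool) -> None:
        self._real = real
        self.structured_output = structured_value

    def __getattr__(self, name: str):
        # Only invoked for attributes not found on the proxy itself, so
        # structured_output is served locally and everything else delegates.
        return getattr(self._real, name)

class FakeDevice:
    name = "edge1"
    structured_output = True

proxy = DeviceProxy(FakeDevice(), False)
print(proxy.name, proxy.structured_output)  # edge1 False
```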
@field_validator("query_location")
def validate_query_location(cls, value):


@@ -87,7 +87,7 @@ class Params(ParamsPublic, HyperglassModel):
docs: Docs = Docs()
logging: Logging = Logging()
messages: Messages = Messages()
structured: Structured = Structured()
structured: t.Optional[Structured] = None
web: Web = Web()
def __init__(self, **kw: t.Any) -> None:


@@ -39,12 +39,14 @@ class StructuredRpki(HyperglassModel):
class StructuredIpEnrichment(HyperglassModel):
"""Control IP enrichment for structured data responses."""
"""Control IP enrichment for structured data responses.
Two tri-state flags are provided to allow the presence of a `structured:`
config block to imply the features are enabled, while still allowing users
to explicitly disable them.
"""
enabled: bool = False
cache_timeout: int = 86400 # 24 hours in seconds (minimum)
enrich_next_hop: bool = False
enrich_traceroute: bool = True
@field_validator("cache_timeout")
def validate_cache_timeout(cls, value: int) -> int:
@@ -53,6 +55,14 @@ class StructuredIpEnrichment(HyperglassModel):
return 86400
return value
enrich_traceroute: bool = True
"""Enable ASN/org/IP enrichment for traceroute hops.
This option remains under `structured.ip_enrichment` per-user request and
must be True (in addition to top-level structured presence and
`structured.enable_for_traceroute` not being False) for enrichment to run.
"""
class Structured(HyperglassModel):
"""Control structured data responses."""
@@ -60,3 +70,10 @@ class Structured(HyperglassModel):
communities: StructuredCommunities = StructuredCommunities()
rpki: StructuredRpki = StructuredRpki()
ip_enrichment: StructuredIpEnrichment = StructuredIpEnrichment()
# Top-level structured enable/disable flags. If `structured:` is present in
# the user's config and these are not set (None), the structured table
# output is considered enabled by default. Setting them to False disables
# the structured table output even when a `structured:` block exists.
enable_for_traceroute: t.Optional[bool] = None
enable_for_bgp_route: t.Optional[bool] = None
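The tri-state semantics of these flags can be summarized in a small helper; `structured_enabled` is illustrative, not part of the codebase:

```python
from typing import Optional

def structured_enabled(structured_present: bool,
                       flag: Optional[bool],
                       device_supports: bool) -> bool:
    """Effective enablement for a structured feature, per the rules above."""
    if not structured_present:
        return False          # no `structured:` block -> disabled
    if flag is False:
        return False          # explicit opt-out
    return device_supports    # None or True -> enabled if the device supports it
```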


@@ -5,7 +5,7 @@ import typing as t
from ipaddress import ip_address, AddressValueError
# Third Party
from pydantic import field_validator
from pydantic import field_validator, computed_field
# Project
from hyperglass.external.ip_enrichment import TargetDetail
@@ -58,6 +58,7 @@ class TracerouteHop(HyperglassModel):
"""Get the IP address for display purposes (may be truncated)."""
return self.display_ip or self.ip_address
@computed_field
@property
def avg_rtt(self) -> t.Optional[float]:
"""Calculate average RTT from available measurements."""
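The `avg_rtt` computed field averages a hop's available RTT samples; a standalone sketch of that logic (assumed behavior, since the body is elided in this hunk):

```python
from typing import List, Optional

def avg_rtt(rtts: List[Optional[float]]) -> Optional[float]:
    """Average of the available RTT measurements; None when every probe timed out."""
    samples = [r for r in rtts if r is not None]
    if not samples:
        return None
    return round(sum(samples) / len(samples), 2)
```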


@@ -341,19 +341,47 @@ class MikrotikTracerouteTable(MikrotikBase):
"""
_log = log.bind(parser="MikrotikTracerouteTable")
# DEBUG: Log the raw input
_log.debug(f"=== RAW MIKROTIK TRACEROUTE INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Raw text length: {len(text)} characters")
_log.debug(f"Raw text:\n{repr(text)}")
_log.debug(f"=== END RAW INPUT ===")
# Minimal input summary to avoid excessive logs while keeping context
_log.debug(
"Parsing MikroTik traceroute",
target=target,
source=source,
lines=len(text.splitlines()),
)
# Try to extract target from the traceroute command in the output
# Look for patterns like: "tool traceroute src-address=192.168.1.1 timeout=1 duration=30 count=3 8.8.8.8"
lines = text.split("\n")
extracted_target = target # Default to passed target
for line in lines[:10]: # Check first 10 lines for command
line = line.strip()
if line.startswith("tool traceroute") or "traceroute" in line:
# Extract target from command line - it's typically the last argument
parts = line.split()
for part in reversed(parts):
# Skip parameters with = signs and common flags
if (
"=" not in part
and not part.startswith("-")
and not part.startswith("[")
and part
not in ["tool", "traceroute", "src-address", "timeout", "duration", "count"]
):
# This looks like a target (IP or hostname)
if len(part) > 3: # Reasonable minimum length
extracted_target = part
break
break
# Use extracted target if found, otherwise keep the passed target
if extracted_target != target:
_log.info(
f"Updated target from '{target}' to '{extracted_target}' based on command output"
)
target = extracted_target
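The extraction loop above can be distilled into a small function; `extract_target` is an illustrative re-statement of the same heuristics:

```python
def extract_target(command_line: str, fallback: str) -> str:
    """Pull the traceroute target out of a MikroTik command echo (simplified).
    The target is typically the last bare argument: no '=', no flags, not a keyword."""
    skip = {"tool", "traceroute", "src-address", "timeout", "duration", "count"}
    for part in reversed(command_line.split()):
        if "=" in part or part.startswith(("-", "[")) or part in skip:
            continue
        if len(part) > 3:  # reasonable minimum length for an IP or hostname
            return part
    return fallback

line = "tool traceroute src-address=192.168.1.1 timeout=1 duration=30 count=3 8.8.8.8"
print(extract_target(line, "unknown"))  # 8.8.8.8
```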
lines = text.strip().split("\n")
_log.debug(f"Split into {len(lines)} lines")
# DEBUG: Log each line with line numbers
for i, line in enumerate(lines):
_log.debug(f"Line {i:2d}: {repr(line)}")
# Find all table starts - handle both formats:
# Format 1: "Columns: ADDRESS, LOSS, SENT..." (newer format with hop numbers)
@@ -367,7 +395,6 @@
and not line.strip().startswith(("1", "2", "3", "4", "5", "6", "7", "8", "9"))
):
table_starts.append(i)
_log.debug(f"Found table start at line {i}: {repr(line)}")
if not table_starts:
_log.warning("No traceroute table headers found in output")
@@ -376,14 +403,15 @@
# Take the LAST table (newest/final results)
last_table_start = table_starts[-1]
_log.debug(
f"Found {len(table_starts)} tables, using the last one starting at line {last_table_start}"
"Found traceroute tables",
tables_found=len(table_starts),
last_table_start=last_table_start,
)
# Determine format by checking the header line
header_line = lines[last_table_start].strip()
is_columnar_format = "Columns:" in header_line
_log.debug(f"Header line: {repr(header_line)}")
_log.debug(f"Is columnar format: {is_columnar_format}")
_log.debug("Header determined", header=header_line, columnar=is_columnar_format)
# Parse only the last table
hops = []
@@ -398,7 +426,6 @@
# Skip empty lines
if not line:
_log.debug(f"Line {i}: EMPTY - skipping")
continue
# Skip the column header lines
@@ -408,16 +435,14 @@
or line.startswith("#")
):
in_data_section = True
_log.debug(f"Line {i}: HEADER - entering data section: {repr(line)}")
continue
# Skip paging prompts
if "-- [Q quit|C-z pause]" in line:
_log.debug(f"Line {i}: PAGING PROMPT - breaking: {repr(line)}")
break # End of this table
if in_data_section and line:
_log.debug(f"Line {i}: PROCESSING DATA LINE: {repr(line)}")
# Process data line
try:
# Define helper function for RTT parsing
def parse_rtt(rtt_str: str) -> t.Optional[float]:
@ -439,7 +464,6 @@ class MikrotikTracerouteTable(MikrotikBase):
):
# This is a timeout/continuation hop
parts = line.split()
_log.debug(f"Line {i}: Timeout/continuation line, parts: {parts}")
if len(parts) >= 2 and parts[0].endswith("%"):
ip_address = None
@ -471,15 +495,13 @@ class MikrotikTracerouteTable(MikrotikBase):
)
hops.append(hop)
current_hop_number += 1
_log.debug(f"Line {i}: Created timeout hop {hop.hop_number}")
continue
if is_columnar_format:
# New format: "1 10.0.0.41 0% 1 0.5ms 0.5 0.5 0.5 0"
parts = line.split()
_log.debug(f"Line {i}: Columnar format, parts: {parts}")
if len(parts) < 3:
_log.debug(f"Line {i}: Too few parts ({len(parts)}), skipping")
continue
hop_number = int(parts[0])
@ -504,15 +526,14 @@ class MikrotikTracerouteTable(MikrotikBase):
best_rtt_str = "timeout"
worst_rtt_str = "timeout"
else:
_log.debug(f"Line {i}: Doesn't match columnar patterns, skipping")
continue
else:
# Old format: "196.60.8.198 0% 1 17.1ms 17.1 17.1 17.1 0"
# We need to deduplicate by taking the LAST occurrence of each IP
parts = line.split()
_log.debug(f"Line {i}: Old format, parts: {parts}")
if len(parts) < 6:
_log.debug(f"Line {i}: Too few parts ({len(parts)}), skipping")
continue
ip_address = parts[0] if not parts[0].endswith("%") else None
@ -520,7 +541,9 @@ class MikrotikTracerouteTable(MikrotikBase):
# Check for truncated IPv6 addresses
if ip_address and (ip_address.endswith("...") or ip_address.endswith("..")):
_log.warning(
f"Line {i}: Truncated IP address detected: {ip_address} - setting to None"
"Truncated IP address detected, setting to None",
line=i,
ip=ip_address,
)
ip_address = None
@ -548,7 +571,7 @@ class MikrotikTracerouteTable(MikrotikBase):
# Convert timing values
def parse_rtt(rtt_str: str) -> t.Optional[float]:
if rtt_str in ("timeout", "-", "0ms"):
if rtt_str in ("timeout", "-", "0ms", "*"):
return None
# Remove 'ms' suffix and convert to float
rtt_clean = re.sub(r"ms$", "", rtt_str)
@ -579,19 +602,17 @@ class MikrotikTracerouteTable(MikrotikBase):
)
hops.append(hop_obj)
_log.debug(
f"Line {i}: Created hop {final_hop_number}: {ip_address} - {loss_pct}% - {sent_count} sent"
)
except (ValueError, IndexError) as e:
_log.debug(f"Failed to parse line '{line}': {e}")
_log.debug("Failed to parse traceroute data line", line=line, error=str(e))
continue
_log.debug(f"Before deduplication: {len(hops)} hops")
# Snapshot before deduplication
orig_hop_count = len(hops)
# For old format, we need to deduplicate by IP and take only final stats
if not is_columnar_format and hops:
_log.debug(f"Old format detected - deduplicating {len(hops)} total entries")
_log.debug("Old format detected - deduplicating entries", total_entries=len(hops))
# Group by IP address and take the HIGHEST SENT count (final stats)
ip_to_final_hop = {}
@ -610,16 +631,11 @@ class MikrotikTracerouteTable(MikrotikBase):
if ip_key not in hop_order:
hop_order.append(ip_key)
ip_to_max_sent[ip_key] = 0
_log.debug(f"New IP discovered: {ip_key}")
# Keep hop with highest SENT count (most recent/final stats)
if hop.sent_count and hop.sent_count >= ip_to_max_sent[ip_key]:
ip_to_max_sent[ip_key] = hop.sent_count
ip_to_final_hop[ip_key] = hop
_log.debug(f"Updated {ip_key}: SENT={hop.sent_count} (final stats)")
_log.debug(f"IP order: {hop_order}")
_log.debug(f"Final IP stats: {[(ip, ip_to_max_sent[ip]) for ip in hop_order]}")
# Rebuild hops list with final stats and correct hop numbers
final_hops = []
@ -627,26 +643,59 @@ class MikrotikTracerouteTable(MikrotikBase):
final_hop = ip_to_final_hop[ip_key]
final_hop.hop_number = i # Correct hop numbering
final_hops.append(final_hop)
_log.debug(
f"Final hop {i}: {ip_key} - Loss: {final_hop.loss_pct}% - Sent: {final_hop.sent_count}"
)
hops = final_hops
_log.debug(f"Deduplication complete: {len(hops)} unique hops with final stats")
_log.debug(f"After processing: {len(hops)} final hops")
for hop in hops:
_log.debug(
f"Final hop {hop.hop_number}: {hop.ip_address} - {hop.loss_pct}% loss - {hop.sent_count} sent"
"Deduplication complete",
before=orig_hop_count,
after=len(hops),
)
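The old-format deduplication above boils down to: group rows by IP, keep the row with the highest SENT count (the final statistics), and preserve first-seen order for hop numbering. A minimal standalone sketch of that idea, with `(ip, sent)` tuples standing in for the real hop objects:

```python
def dedupe_final_stats(entries):
    """Keep the entry with the highest SENT count per IP, in first-seen order.

    `entries` is a list of (ip, sent) tuples representing repeated
    traceroute rows; returns (hop_number, ip, sent) tuples renumbered 1..N.
    """
    order = []   # first-seen IP order
    final = {}   # ip -> (ip, sent) with the highest sent seen so far
    for ip, sent in entries:
        if ip not in final:
            order.append(ip)
            final[ip] = (ip, sent)
        elif sent >= final[ip][1]:
            final[ip] = (ip, sent)
    # Renumber hops 1..N in first-seen order
    return [(n, *final[ip]) for n, ip in enumerate(order, 1)]
```

The `>=` comparison mirrors the plugin's preference for the later (most recent) occurrence when SENT counts tie.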
# Filter excessive timeout hops ONLY at the end (no more valid hops after)
# Find the last hop with a valid IP address
last_valid_hop_index = -1
for i, hop in enumerate(hops):
if hop.ip_address is not None and hop.loss_pct < 100:
last_valid_hop_index = i
filtered_hops = []
trailing_timeouts = 0
for i, hop in enumerate(hops):
if i > last_valid_hop_index and hop.ip_address is None and hop.loss_pct == 100:
# This is a trailing timeout hop (after the last valid hop)
trailing_timeouts += 1
if trailing_timeouts <= 3: # Only keep first 3 trailing timeouts
filtered_hops.append(hop)
else:
# drop extra trailing timeouts
continue
else:
# This is either a valid hop or a timeout hop with valid hops after it
filtered_hops.append(hop)
# Renumber the filtered hops
for i, hop in enumerate(filtered_hops, 1):
hop.hop_number = i
hops = filtered_hops
if last_valid_hop_index >= 0:
_log.debug(
"Filtered trailing timeouts",
last_valid_index=last_valid_hop_index,
trailing_timeouts_removed=max(0, orig_hop_count - len(hops)),
)
result = MikrotikTracerouteTable(target=target, source=source, hops=hops)
_log.info(f"Parsed {len(hops)} hops from MikroTik traceroute final table")
_log.info("Parsed traceroute final table", hops=len(hops))
return result
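The trailing-timeout filter above (keep at most three timeout hops after the last responding hop, then renumber) can be expressed compactly. A sketch, assuming hops are plain dicts with `ip` (None for timeouts) and `loss` keys rather than the actual hop model:

```python
def trim_trailing_timeouts(hops, keep=3):
    """Drop all but the first `keep` timeout hops after the last responding hop."""
    last_valid = -1
    for i, hop in enumerate(hops):
        if hop["ip"] is not None and hop["loss"] < 100:
            last_valid = i
    out, trailing = [], 0
    for i, hop in enumerate(hops):
        if i > last_valid and hop["ip"] is None and hop["loss"] == 100:
            trailing += 1
            if trailing > keep:
                continue  # drop extra trailing timeouts
        out.append(hop)
    # Renumber the surviving hops 1..N
    for n, hop in enumerate(out, 1):
        hop["n"] = n
    return out
```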
def traceroute_result(self):
"""Convert to TracerouteResult format."""
from hyperglass.models.data.traceroute import TracerouteResult, TracerouteHop
from hyperglass.log import log
_log = log.bind(parser="MikrotikTracerouteTable")
converted_hops = []
for hop in self.hops:
@ -659,21 +708,22 @@ class MikrotikTracerouteTable(MikrotikBase):
display_ip = hop.ip_address
ip_address = None
converted_hops.append(
TracerouteHop(
created_hop = TracerouteHop(
hop_number=hop.hop_number,
ip_address=ip_address, # None for truncated IPs
display_ip=display_ip, # Truncated IP for display
hostname=hop.hostname,
rtt1=hop.best_rtt,
rtt2=hop.avg_rtt,
rtt3=hop.worst_rtt,
# MikroTik-specific statistics
# Set RTT values to ensure avg_rtt property returns MikroTik's AVG value
# Since avg_rtt = (rtt1 + rtt2 + rtt3) / 3, we set all to the MikroTik AVG
rtt1=hop.avg_rtt, # Set to AVG so computed average is correct
rtt2=hop.avg_rtt, # Set to AVG so computed average is correct
rtt3=hop.avg_rtt, # Set to AVG so computed average is correct
# MikroTik-specific statistics (preserve original values)
loss_pct=hop.loss_pct,
sent_count=hop.sent_count,
last_rtt=hop.last_rtt,
best_rtt=hop.best_rtt,
worst_rtt=hop.worst_rtt,
last_rtt=hop.last_rtt, # Preserve LAST value
best_rtt=hop.best_rtt, # Preserve BEST value
worst_rtt=hop.worst_rtt, # Preserve WORST value
# BGP enrichment fields will be populated by enrichment plugin
# For truncated IPs, these will remain None/empty
asn=None,
@ -683,7 +733,8 @@ class MikrotikTracerouteTable(MikrotikBase):
rir=None,
allocated=None,
)
)
converted_hops.append(created_hop)
return TracerouteResult(
target=self.target,

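A note on the RTT substitution above: because the hop model's computed average is `(rtt1 + rtt2 + rtt3) / 3`, writing MikroTik's AVG into all three slots reproduces it, modulo floating-point rounding. A quick check (the `computed_avg` helper is illustrative, not the actual model property):

```python
import math

def computed_avg(rtt1, rtt2, rtt3):
    # Mirrors the avg_rtt computation described in the comments above
    return (rtt1 + rtt2 + rtt3) / 3

mikrotik_avg = 17.3
# Exact float equality can be off by one ulp, hence isclose
assert math.isclose(computed_avg(mikrotik_avg, mikrotik_avg, mikrotik_avg), mikrotik_avg)
```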

@ -14,6 +14,9 @@ from .traceroute_ip_enrichment import ZTracerouteIpEnrichment
from .bgp_route_ip_enrichment import ZBgpRouteIpEnrichment
from .trace_route_mikrotik import TraceroutePluginMikrotik
from .trace_route_huawei import TraceroutePluginHuawei
from .trace_route_arista import TraceroutePluginArista
from .trace_route_frr import TraceroutePluginFrr
from .trace_route_juniper import TraceroutePluginJuniper
__all__ = (
"BGPRoutePluginArista",
@ -28,5 +31,8 @@ __all__ = (
"ZBgpRouteIpEnrichment",
"TraceroutePluginMikrotik",
"TraceroutePluginHuawei",
"TraceroutePluginArista",
"TraceroutePluginFrr",
"TraceroutePluginJuniper",
"RemoveCommand",
)


@ -1,7 +1,6 @@
"""IP enrichment for structured BGP route data - show path functionality."""
# Standard Library
import asyncio
import typing as t
# Third Party
@ -18,7 +17,6 @@ if t.TYPE_CHECKING:
class ZBgpRouteIpEnrichment(OutputPlugin):
"""Enrich structured BGP route output with IP enrichment for next-hop ASN/organization data."""
_hyperglass_builtin: bool = PrivateAttr(True)
platforms: t.Sequence[str] = (
@ -35,80 +33,11 @@ class ZBgpRouteIpEnrichment(OutputPlugin):
directives: t.Sequence[str] = ("bgp_route", "bgp_community")
common: bool = True
async def _enrich_async(self, output: BGPRouteTable, enrich_next_hop: bool = True) -> None:
"""Async helper to enrich BGP route data."""
_log = log.bind(plugin=self.__class__.__name__)
if enrich_next_hop:
try:
# First enrich with next-hop IP information (if enabled)
await output.enrich_with_ip_enrichment()
_log.debug("BGP next-hop IP enrichment completed")
except Exception as e:
_log.error(f"BGP next-hop IP enrichment failed: {e}")
else:
_log.debug("BGP next-hop IP enrichment skipped (disabled in config)")
try:
# Always enrich AS path ASNs with organization names
await output.enrich_as_path_organizations()
_log.debug("BGP AS path organization enrichment completed")
except Exception as e:
_log.error(f"BGP AS path organization enrichment failed: {e}")
def process(self, *, output: "OutputDataModel", query: "Query") -> "OutputDataModel":
"""Enrich structured BGP route data with next-hop IP enrichment information."""
if not isinstance(output, BGPRouteTable):
return output
_log = log.bind(plugin=self.__class__.__name__)
_log.warning(f"🔍 BGP ROUTE PLUGIN STARTED - Processing {len(output.routes)} BGP routes")
# Check if IP enrichment is enabled in config
enrich_next_hop = True
try:
from hyperglass.state import use_state
params = use_state("params")
if not params.structured.ip_enrichment.enabled:
_log.debug("IP enrichment disabled in configuration")
return output
# Check next-hop enrichment setting but don't exit - we still want ASN org enrichment
enrich_next_hop = params.structured.ip_enrichment.enrich_next_hop
if not enrich_next_hop:
_log.debug(
"Next-hop enrichment disabled in configuration - will skip next-hop lookup but continue with ASN organization enrichment"
)
except Exception as e:
_log.debug(f"Could not check IP enrichment config: {e}")
# Use the built-in enrichment method from BGPRouteTable
try:
# Run async enrichment in sync context
loop = None
try:
loop = asyncio.get_event_loop()
if loop.is_running():
# If we're already in an event loop, create a new task
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor() as executor:
future = executor.submit(
asyncio.run, self._enrich_async(output, enrich_next_hop)
)
future.result()
else:
loop.run_until_complete(self._enrich_async(output, enrich_next_hop))
except RuntimeError:
# No event loop, create one
asyncio.run(self._enrich_async(output, enrich_next_hop))
_log.warning(
f"🔍 BGP ROUTE PLUGIN COMPLETED - ASN organizations: {len(output.asn_organizations)}"
)
except Exception as e:
_log.error(f"BGP route IP enrichment failed: {e}")
_log.debug("Completed enrichment for BGP routes")
return output
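The event-loop juggling in `process` above (run the coroutine directly when no loop is running, otherwise hand it to a worker thread) is a common pattern for calling async enrichment from a synchronous plugin hook. A condensed, generic sketch:

```python
import asyncio
import concurrent.futures

def run_coro_blocking(coro):
    """Run `coro` to completion from synchronous code.

    If no event loop is running in this thread, asyncio.run() is used
    directly. If a loop is already running (e.g. inside an async
    framework), the coroutine is executed via asyncio.run() in a worker
    thread to avoid 'event loop is already running' errors.
    """
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        return asyncio.run(coro)
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```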


@ -41,78 +41,132 @@ class MikrotikGarbageOutput(OutputPlugin):
return ""
lines = raw_output.splitlines()
cleaned_lines = []
found_header = False
data_lines = []
# Remove command echoes and paging, keep only header markers and data lines
# We'll split the output into discrete tables (each table begins at a header)
tables: t.List[t.List[str]] = []
current_table: t.List[str] = []
header_line: t.Optional[str] = None
for line in lines:
stripped = line.strip()
# Skip empty lines
if not stripped:
continue
# Skip interactive paging prompts
if "-- [Q quit|C-z pause]" in stripped or "-- [Q quit|D dump|C-z pause]" in stripped:
# Skip empty lines and interactive paging prompts
if not stripped or "-- [Q quit|C-z pause]" in stripped or "-- [Q quit|D dump|C-z pause]" in stripped:
continue
# Skip command echo lines
if "tool traceroute" in stripped:
continue
# Look for the header line (ADDRESS LOSS SENT LAST AVG BEST WORST)
# If this is a header line, start a new table
if "ADDRESS" in stripped and "LOSS" in stripped and "SENT" in stripped:
if not found_header:
cleaned_lines.append(line)
found_header = True
header_line = line
# If we were collecting a table, push it
if current_table:
tables.append(current_table)
current_table = []
# Start collecting after header
continue
# After finding header, collect all data lines
if found_header and stripped:
data_lines.append(line)
# Collect data lines (will be associated with the most recent header)
if header_line is not None:
current_table.append(line)
# Process data lines to aggregate trailing timeouts
if data_lines:
processed_lines = []
trailing_timeout_count = 0
# Push the last collected table if any
if current_table:
tables.append(current_table)
# Work backwards to count trailing timeouts
for i in range(len(data_lines) - 1, -1, -1):
line = data_lines[i]
if (
"100%" in line.strip()
and "timeout" in line.strip()
and not line.strip().startswith(
("1", "2", "3", "4", "5", "6", "7", "8", "9", "0")
)
):
# This is a timeout line (no IP address at start)
trailing_timeout_count += 1
# If we didn't find any header/data, return cleaned minimal output
if not tables:
# Fallback to previous behavior: remove prompts and flags
filtered_lines: t.List[str] = []
in_flags_section = False
for line in lines:
stripped_line = line.strip()
if stripped_line.startswith("@") and stripped_line.endswith("] >"):
continue
if "[Q quit|D dump|C-z pause]" in stripped_line:
continue
if stripped_line.startswith("Flags:"):
in_flags_section = True
continue
if in_flags_section:
if "=" in stripped_line:
in_flags_section = False
else:
continue
filtered_lines.append(line)
return "\n".join(filtered_lines)
# Aggregate tables by hop index. For each hop position, pick the row with the
# highest SENT count. If SENT ties, prefer non-timeout rows and the later table.
processed_lines: t.List[str] = []
# Regex to extract LOSS% and SENT count following it: e.g. '0% 3'
sent_re = re.compile(r"(\d+)%\s+(\d+)\b")
max_rows = max(len(tbl) for tbl in tables)
for i in range(max_rows):
best_row = None
best_sent = -1
best_is_timeout = True
best_table_index = -1
for ti, table in enumerate(tables):
if i >= len(table):
continue
row = table[i]
m = sent_re.search(row)
if m:
try:
sent = int(m.group(2))
except Exception:
sent = 0
else:
sent = 0
is_timeout = "timeout" in row.lower() or ("100%" in row and "timeout" in row.lower())
# Prefer higher SENT, then prefer non-timeout, then later table (higher ti)
pick = False
if sent > best_sent:
pick = True
elif sent == best_sent:
if best_is_timeout and not is_timeout:
pick = True
elif (best_is_timeout == is_timeout) and ti > best_table_index:
pick = True
if pick:
best_row = row
best_sent = sent
best_is_timeout = is_timeout
best_table_index = ti
if best_row is not None:
processed_lines.append(best_row)
# Collapse excessive trailing timeouts into an aggregation line
trailing_timeouts = 0
for line in reversed(processed_lines):
if "timeout" in line.lower() or ((m := sent_re.search(line)) and m.group(1) == "100"):
trailing_timeouts += 1
else:
# Found a non-timeout line, stop counting
break
# Add non-trailing lines as-is
non_trailing_count = len(data_lines) - trailing_timeout_count
processed_lines.extend(data_lines[:non_trailing_count])
if trailing_timeouts > 3:
non_trailing = len(processed_lines) - trailing_timeouts
# Keep first 2 of trailing timeouts and aggregate the rest
aggregated = processed_lines[:non_trailing] + processed_lines[non_trailing:non_trailing + 2]
remaining = trailing_timeouts - 2
aggregated.append(f" ... ({remaining} more timeout hops)")
processed_lines = aggregated
# Handle trailing timeouts
if trailing_timeout_count > 0:
if trailing_timeout_count <= 3:
# If 3 or fewer trailing timeouts, show them all
processed_lines.extend(data_lines[non_trailing_count:])
else:
# If more than 3 trailing timeouts, show first 2 and aggregate the rest
processed_lines.extend(data_lines[non_trailing_count : non_trailing_count + 2])
remaining_timeouts = trailing_timeout_count - 2
# Add an aggregation line
processed_lines.append(
f" ... ({remaining_timeouts} more timeout hops)"
)
cleaned_lines.extend(processed_lines)
return "\n".join(cleaned_lines)
# Prepend header line if we have one
header_to_use = header_line or "ADDRESS LOSS SENT LAST AVG BEST WORST STD-DEV STATUS"
cleaned = [header_to_use] + processed_lines
return "\n".join(cleaned)
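The aggregation above collapses repeated traceroute tables into one representative table: for each hop position, take the row with the highest SENT count, breaking ties in favor of non-timeout rows and then later tables. A standalone sketch of that selection logic:

```python
import re

# Captures LOSS% and the SENT count that follows it, e.g. "0% 3"
SENT_RE = re.compile(r"(\d+)%\s+(\d+)\b")

def pick_final_rows(tables):
    """For each hop position across repeated tables, keep the best row.

    Preference order: higher SENT, then non-timeout, then later table.
    """
    picked = []
    for i in range(max(len(tbl) for tbl in tables)):
        best, best_key = None, (-1, False, -1)
        for ti, tbl in enumerate(tables):
            if i >= len(tbl):
                continue
            row = tbl[i]
            m = SENT_RE.search(row)
            sent = int(m.group(2)) if m else 0
            # Tuple comparison encodes the preference order directly
            key = (sent, "timeout" not in row.lower(), ti)
            if key > best_key:
                best, best_key = row, key
        if best is not None:
            picked.append(best)
    return picked
```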
def process(self, *, output: OutputType, query: "Query") -> Series[str]:
"""
@ -185,5 +239,12 @@ class MikrotikGarbageOutput(OutputPlugin):
cleaned_output = "\n".join(filtered_lines)
cleaned_outputs.append(cleaned_output)
log.debug(f"MikrotikGarbageOutput cleaned {len(output)} output blocks.")
# Minimal debug logging: log number of cleaned blocks and if any aggregation occurred
if len(output) > 0:
log.debug(f"MikrotikGarbageOutput processed {len(output)} output blocks.")
# If any aggregation line was added, log that event
for cleaned in cleaned_outputs:
if "... (" in cleaned:
log.debug("Aggregated excessive trailing timeout hops in traceroute output.")
break
return tuple(cleaned_outputs)


@ -0,0 +1,657 @@
"""Parse Arista traceroute output to structured data."""
# Standard Library
import re
import typing as t
# Third Party
from pydantic import PrivateAttr
# Project
from hyperglass.log import log
from hyperglass.exceptions.private import ParsingError
from hyperglass.models.data.traceroute import TracerouteResult, TracerouteHop
from hyperglass.state import use_state
# Local
from .._output import OutputPlugin
if t.TYPE_CHECKING:
from hyperglass.models.data import OutputDataModel
from hyperglass.models.api.query import Query
from .._output import OutputType
def _normalize_output(output: t.Union[str, t.Sequence[str]]) -> t.List[str]:
"""Ensure the output is a list of strings."""
if isinstance(output, str):
return [output]
return list(output)
def parse_arista_traceroute(
output: t.Union[str, t.Sequence[str]], target: str, source: str
) -> "OutputDataModel":
"""Parse an Arista traceroute text response."""
result = None
out_list = _normalize_output(output)
_log = log.bind(plugin=TraceroutePluginArista.__name__)
combined_output = "\n".join(out_list)
# DEBUG: Log the raw output we're about to parse
_log.debug("=== ARISTA TRACEROUTE PLUGIN RAW INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Output pieces: {len(out_list)}")
_log.debug(f"Combined output length: {len(combined_output)}")
_log.debug(f"First 500 chars: {repr(combined_output[:500])}")
_log.debug("=== END PLUGIN RAW INPUT ===")
try:
result = AristaTracerouteTable.parse_text(combined_output, target, source)
except Exception as exc:
_log.error(f"Failed to parse Arista traceroute: {exc}")
raise ParsingError(f"Failed to parse Arista traceroute output: {exc}") from exc
_log.debug("=== FINAL STRUCTURED TRACEROUTE RESULT ===")
_log.debug(f"Successfully parsed {len(result.hops)} traceroute hops")
_log.debug(f"Target: {target}, Source: {source}")
for hop in result.hops:
_log.debug(f"Hop {hop.hop_number}: {hop.ip_address or '*'} - RTT: {hop.rtt1 or 'timeout'}")
_log.debug(f"Raw output length: {len(combined_output)} characters")
_log.debug("=== END STRUCTURED RESULT ===")
return result
class AristaTracerouteTable(TracerouteResult):
"""Arista traceroute table parser."""
@classmethod
def parse_text(cls, text: str, target: str, source: str) -> TracerouteResult:
"""Parse Arista traceroute text output into structured data."""
_log = log.bind(parser="AristaTracerouteTable")
_log.debug("=== RAW ARISTA TRACEROUTE INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Raw text length: {len(text)} characters")
_log.debug(f"Raw text:\n{repr(text)}")
_log.debug("=== END RAW INPUT ===")
hops = []
lines = text.strip().split("\n")
_log.debug(f"Split into {len(lines)} lines")
# Pattern for normal hop: " 1 er03-ter.jhb.as37739.net (102.209.241.6) 0.285 ms 0.177 ms 0.137 ms"
# Also handles IPv6: " 1 2001:43f8:6d0::10:3 (2001:43f8:6d0::10:3) 19.460 ms 19.416 ms 19.353 ms"
hop_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for MPLS hop with labels: " 2 41.78.188.48 (41.78.188.48) <MPLS:L=116443,E=0,S=1,T=1> 1653.906 ms"
mpls_hop_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+<MPLS:[^>]+>\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(.+?)\s+\(([^)]+)\)\s+<MPLS:[^>]+>\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for complex multipath with mixed timeouts and IPs:
# "10 ae22.cr11-lon2.ip6.gtt.net (2001:668:0:3:ffff:1:0:3471) 201.963 ms be8443.ccr41.lon13.atlas.cogentco.com (2001:550:0:1000::9a36:3859) 184.724 ms *"
complex_multipath_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms(?:\s+\*|\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for partial timeout multipath: " 8 * * 2c0f:fa90:0:8::5 (2c0f:fa90:0:8::5) 179.449 ms"
partial_timeout_pattern = re.compile(
r"^\s*(\d+)\s+\*\s+\*\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms"
)
# Pattern for mixed timeout start: " 9 ae22.cr11-lon2.ip6.gtt.net (2001:668:0:3:ffff:1:0:3471) 201.979 ms * 2c0f:fa90:0:8::5 (2c0f:fa90:0:8::5) 179.438 ms"
mixed_timeout_start_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms\s+\*\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms"
)
# Pattern for triple multipath IPv6: "30 2001:41d0:0:50::b:66 (2001:41d0:0:50::b:66) 442.036 ms 2402:1f00:8201:586:: (2402:1f00:8201:586::) 456.999 ms 2001:41d0:0:50::b:66 (2001:41d0:0:50::b:66) 441.399 ms"
triple_multipath_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms"
)
# Pattern for multiple IPs in one hop (load balancing):
# " 2 po204.asw02.jnb1.tfbnw.net (2620:0:1cff:dead:beef::5316) 0.249 ms 0.234 ms po204.asw04.jnb1.tfbnw.net (2620:0:1cff:dead:beef::5524) 0.244 ms"
multi_hop_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms"
)
# Pattern for timeout hop: " 6 * * *"
timeout_pattern = re.compile(r"^\s*(\d+)\s+\*\s*\*\s*\*")
# Pattern for single IP without hostname: "12 72.251.0.8 (72.251.0.8) 421.861 ms 421.788 ms 419.821 ms"
ip_only_pattern = re.compile(
r"^\s*(\d+)\s+([0-9a-fA-F:.]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
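For reference, `hop_pattern` above can be exercised in isolation on the sample line from its comment; it captures hop number, hostname, IP, and up to three RTTs:

```python
import re

# Same pattern as hop_pattern above, split across lines for readability
hop_pattern = re.compile(
    r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)(?:\s+<[^>]+>)?\s+(\d+(?:\.\d+)?)\s*ms"
    r"(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)

line = " 1 er03-ter.jhb.as37739.net (102.209.241.6) 0.285 ms 0.177 ms 0.137 ms"
m = hop_pattern.match(line)
```

The lazy `(.+?)` stops at the first whitespace-then-parenthesis, so the hostname and the parenthesized IP are captured separately even when the "hostname" is itself an IPv6 address.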
for i, line in enumerate(lines):
line = line.strip()
_log.debug(f"Line {i:2d}: {repr(line)}")
if not line:
continue
# Skip header lines
if (
"traceroute to" in line.lower()
or "hops max" in line.lower()
or "byte packets" in line.lower()
):
_log.debug(f"Line {i:2d}: SKIPPING HEADER")
continue
# Try to match timeout hop first
timeout_match = timeout_pattern.match(line)
if timeout_match:
hop_number = int(timeout_match.group(1))
_log.debug(f"Line {i:2d}: TIMEOUT HOP - {hop_number}: * * *")
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=None,
display_ip=None,
hostname=None,
rtt1=None,
rtt2=None,
rtt3=None,
sent_count=3, # Arista sends 3 pings per hop
last_rtt=None,
best_rtt=None,
worst_rtt=None,
loss_pct=100, # 100% loss for timeout
# BGP enrichment fields (all None for timeout)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match partial timeout: " 8 * * 2c0f:fa90:0:8::5 (2c0f:fa90:0:8::5) 179.449 ms"
partial_timeout_match = partial_timeout_pattern.match(line)
if partial_timeout_match:
hop_number = int(partial_timeout_match.group(1))
hostname = partial_timeout_match.group(2).strip()
ip_address = partial_timeout_match.group(3)
rtt1 = float(partial_timeout_match.group(4))
_log.debug(
f"Line {i:2d}: PARTIAL TIMEOUT - {hop_number}: * * {hostname} ({ip_address}) {rtt1}ms"
)
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=None,
rtt3=None,
sent_count=3,
last_rtt=rtt1,
best_rtt=rtt1,
worst_rtt=rtt1,
loss_pct=66, # 2 out of 3 packets lost
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match triple multipath IPv6
triple_multipath_match = triple_multipath_pattern.match(line)
if triple_multipath_match:
hop_number = int(triple_multipath_match.group(1))
hostname1 = triple_multipath_match.group(2).strip()
ip1 = triple_multipath_match.group(3)
rtt1 = float(triple_multipath_match.group(4))
hostname2 = triple_multipath_match.group(5).strip()
ip2 = triple_multipath_match.group(6)
rtt2 = float(triple_multipath_match.group(7))
hostname3 = triple_multipath_match.group(8).strip()
ip3 = triple_multipath_match.group(9)
rtt3 = float(triple_multipath_match.group(10))
_log.debug(
f"Line {i:2d}: TRIPLE MULTIPATH - {hop_number}: {hostname1}/{hostname2}/{hostname3}"
)
display_hostname = f"{hostname1} / {hostname2} / {hostname3}"
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=3,
last_rtt=rtt3,
best_rtt=min(rtt1, rtt2, rtt3),
worst_rtt=max(rtt1, rtt2, rtt3),
loss_pct=0, # No loss if we got responses
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match complex multipath with mixed timeouts
complex_multipath_match = complex_multipath_pattern.match(line)
if complex_multipath_match:
hop_number = int(complex_multipath_match.group(1))
hostname1 = complex_multipath_match.group(2).strip()
ip1 = complex_multipath_match.group(3)
rtt1 = float(complex_multipath_match.group(4))
hostname2 = complex_multipath_match.group(5).strip()
ip2 = complex_multipath_match.group(6)
rtt2 = float(complex_multipath_match.group(7))
# Check for third IP or timeout
rtt3 = None
hostname3 = None
has_third = complex_multipath_match.group(8) is not None
if has_third:
hostname3 = complex_multipath_match.group(8).strip()
rtt3 = float(complex_multipath_match.group(10))
_log.debug(
f"Line {i:2d}: COMPLEX MULTIPATH - {hop_number}: {hostname1}/{hostname2}{('/' + hostname3) if hostname3 else ''}"
)
display_hostname = f"{hostname1} / {hostname2}"
if hostname3:
display_hostname += f" / {hostname3}"
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=int((3 - len(rtts)) / 3 * 100),
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match mixed timeout with start response
mixed_timeout_start_match = mixed_timeout_start_pattern.match(line)
if mixed_timeout_start_match:
hop_number = int(mixed_timeout_start_match.group(1))
hostname1 = mixed_timeout_start_match.group(2).strip()
ip1 = mixed_timeout_start_match.group(3)
rtt1 = float(mixed_timeout_start_match.group(4))
hostname2 = mixed_timeout_start_match.group(5).strip()
ip2 = mixed_timeout_start_match.group(6)
rtt2 = float(mixed_timeout_start_match.group(7))
_log.debug(
f"Line {i:2d}: MIXED TIMEOUT START - {hop_number}: {hostname1} * {hostname2}"
)
display_hostname = f"{hostname1} / * / {hostname2}"
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=None, # Middle packet timed out
rtt3=rtt2,
sent_count=3,
last_rtt=rtt2,
best_rtt=min(rtt1, rtt2),
worst_rtt=max(rtt1, rtt2),
loss_pct=33, # 1 out of 3 packets lost
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match MPLS hop
mpls_hop_match = mpls_hop_pattern.match(line)
if mpls_hop_match:
hop_number = int(mpls_hop_match.group(1))
hostname1 = mpls_hop_match.group(2).strip()
ip1 = mpls_hop_match.group(3)
rtt1 = float(mpls_hop_match.group(4))
rtt2 = float(mpls_hop_match.group(5)) if mpls_hop_match.group(5) else None
# Check for second MPLS hop in same line
hostname2 = None
ip2 = None
rtt3 = None
if mpls_hop_match.group(6): # Second hostname exists
hostname2 = mpls_hop_match.group(6).strip()
ip2 = mpls_hop_match.group(7)
rtt3 = float(mpls_hop_match.group(8))
_log.debug(
f"Line {i:2d}: MPLS HOP - {hop_number}: {hostname1} (MPLS){(' + ' + hostname2) if hostname2 else ''}"
)
display_hostname = hostname1
if hostname2:
display_hostname += f" / {hostname2}"
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname if display_hostname != ip1 else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got responses
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match multi-hop line (load balancing)
multi_match = multi_hop_pattern.match(line)
if multi_match:
hop_number = int(multi_match.group(1))
hostname1 = multi_match.group(2).strip()
ip1 = multi_match.group(3)
rtt1 = float(multi_match.group(4))
rtt2 = float(multi_match.group(5)) if multi_match.group(5) else None
hostname2 = multi_match.group(6).strip()
ip2 = multi_match.group(7)
rtt3 = float(multi_match.group(8))
_log.debug(
f"Line {i:2d}: MULTI HOP - {hop_number}: {hostname1} ({ip1}) and {hostname2} ({ip2})"
)
# For multi-hop, we'll create one hop with the first IP and include the second in display
display_hostname = f"{hostname1} / {hostname2}"
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=3,
last_rtt=rtt3 if rtt3 else (rtt2 if rtt2 else rtt1),
best_rtt=min(x for x in [rtt1, rtt2, rtt3] if x is not None),
worst_rtt=max(x for x in [rtt1, rtt2, rtt3] if x is not None),
loss_pct=0, # No loss if we got responses
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match normal hop with hostname
hop_match = hop_pattern.match(line)
if hop_match:
hop_number = int(hop_match.group(1))
hostname = hop_match.group(2).strip()
ip_address = hop_match.group(3)
rtt1 = float(hop_match.group(4))
rtt2 = float(hop_match.group(5)) if hop_match.group(5) else None
rtt3 = float(hop_match.group(6)) if hop_match.group(6) else None
_log.debug(
f"Line {i:2d}: NORMAL HOP - {hop_number}: {hostname} ({ip_address}) RTTs: {rtt1}, {rtt2}, {rtt3}"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got a response
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match IP-only hop (no hostname)
ip_match = ip_only_pattern.match(line)
if ip_match:
hop_number = int(ip_match.group(1))
ip_display = ip_match.group(2).strip() # The IP shown before parentheses
ip_address = ip_match.group(3) # The IP in parentheses
rtt1 = float(ip_match.group(4))
rtt2 = float(ip_match.group(5)) if ip_match.group(5) else None
rtt3 = float(ip_match.group(6)) if ip_match.group(6) else None
_log.debug(
f"Line {i:2d}: IP-ONLY HOP - {hop_number}: {ip_address} RTTs: {rtt1}, {rtt2}, {rtt3}"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=None, # No hostname for IP-only hops
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got a response
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
_log.debug(f"Line {i:2d}: UNMATCHED - skipping")
_log.debug(f"Before cleanup: {len(hops)} hops")
# Clean up consecutive timeout hops at the end
# Keep only the first few timeouts, remove excessive trailing timeouts
if len(hops) > 5:
# Find the last non-timeout hop
last_real_hop = -1
for i in range(len(hops) - 1, -1, -1):
if not hops[i].is_timeout:
last_real_hop = i
break
if last_real_hop >= 0:
# Keep at most 3 timeout hops after the last real hop
max_timeouts = 3
timeout_count = 0
cleaned_hops = hops[: last_real_hop + 1] # Keep all hops up to last real hop
for hop in hops[last_real_hop + 1 :]:
if hop.is_timeout:
timeout_count += 1
if timeout_count <= max_timeouts:
cleaned_hops.append(hop)
else:
_log.debug(f"Removing excessive timeout hop {hop.hop_number}")
else:
# If we find another real hop after timeouts, keep it
cleaned_hops.append(hop)
timeout_count = 0
hops = cleaned_hops
_log.debug(f"After cleanup: {len(hops)} hops")
for hop in hops:
if hop.is_timeout:
_log.debug(f"Final hop {hop.hop_number}: * (timeout)")
else:
_log.debug(
f"Final hop {hop.hop_number}: {hop.ip_address} ({hop.hostname or 'no-hostname'}) - RTTs: {hop.rtt1}/{hop.rtt2}/{hop.rtt3}"
)
_log.info(f"Parsed {len(hops)} hops from Arista traceroute")
# Extract packet size and max hops from header if available
max_hops = 30 # Common traceroute default
packet_size = 60 # Common IPv4 traceroute default
for line in text.split("\n"):
if "hops max" in line and "byte packets" in line:
# Example: "traceroute to 177.72.245.178 (177.72.245.178), 30 hops max, 60 byte packets"
parts = line.split()
for i, part in enumerate(parts):
if part == "hops":
try:
max_hops = int(parts[i - 1])
except (ValueError, IndexError):
pass
elif part == "byte":
try:
packet_size = int(parts[i - 1])
except (ValueError, IndexError):
pass
break
return TracerouteResult(
target=target,
source=source,
hops=hops,
max_hops=max_hops,
packet_size=packet_size,
raw_output=text,
asn_organizations={},
)
class TraceroutePluginArista(OutputPlugin):
"""Parse Arista traceroute output."""
_hyperglass_builtin: bool = PrivateAttr(True)
platforms: t.Sequence[str] = ("arista_eos",)
directives: t.Sequence[str] = ("__hyperglass_arista_eos_traceroute__",)
common: bool = False
def process(self, output: "OutputType", query: "Query") -> "OutputType":
"""Process Arista traceroute output."""
# Extract target and source with fallbacks
target = str(query.query_target) if query.query_target else "unknown"
source = "unknown"
if hasattr(query, "device") and query.device:
source = getattr(query.device, "display_name", None) or getattr(
query.device, "name", "unknown"
)
device = getattr(query, "device", None)
if device is not None:
if not getattr(device, "structured_output", False):
return output
try:
_params = use_state("params")
except Exception:
_params = None
if (
_params
and getattr(_params, "structured", None)
and getattr(_params.structured, "enable_for_traceroute", None) is False
):
return output
else:
try:
params = use_state("params")
except Exception:
params = None
if not (params and getattr(params, "structured", None)):
return output
if getattr(params.structured, "enable_for_traceroute", None) is False:
return output
return parse_arista_traceroute(
output=output,
target=target,
source=source,
)
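The "normal hop" regex these parsers share can be exercised in isolation. A minimal, self-contained sketch (the pattern is copied from the parser source above; the sample line is illustrative):

```python
import re

# The "normal hop" pattern shared by these parsers, run against a
# representative traceroute output line.
HOP_PATTERN = re.compile(
    r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms"
    r"(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)

line = " 1 bdr2.std.douala-ix.net (196.49.84.34) 0.520 ms 0.451 ms 0.418 ms"
match = HOP_PATTERN.match(line)
hop_number = int(match.group(1))
hostname = match.group(2).strip()        # lazy (.+?) stops at the "(ip)" group
ip_address = match.group(3)              # IP in parentheses is authoritative
rtts = [float(g) for g in match.groups()[3:] if g is not None]
print(hop_number, hostname, ip_address, rtts)
# 1 bdr2.std.douala-ix.net 196.49.84.34 [0.52, 0.451, 0.418]
```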


@ -0,0 +1,552 @@
"""Parse FRR traceroute output to structured data."""
# Standard Library
import re
import typing as t
# Third Party
from pydantic import PrivateAttr
# Project
from hyperglass.log import log
from hyperglass.exceptions.private import ParsingError
from hyperglass.models.data.traceroute import TracerouteResult, TracerouteHop
from hyperglass.state import use_state
# Local
from .._output import OutputPlugin
if t.TYPE_CHECKING:
from hyperglass.models.data import OutputDataModel
from hyperglass.models.api.query import Query
from .._output import OutputType
def _normalize_output(output: t.Union[str, t.Sequence[str]]) -> t.List[str]:
"""Ensure the output is a list of strings."""
if isinstance(output, str):
return [output]
return list(output)
def parse_frr_traceroute(
output: t.Union[str, t.Sequence[str]], target: str, source: str
) -> "OutputDataModel":
"""Parse an FRR traceroute text response."""
result = None
out_list = _normalize_output(output)
_log = log.bind(plugin=TraceroutePluginFrr.__name__)
combined_output = "\n".join(out_list)
# DEBUG: Log the raw output we're about to parse
_log.debug("=== FRR TRACEROUTE PLUGIN RAW INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Output pieces: {len(out_list)}")
_log.debug(f"Combined output length: {len(combined_output)}")
_log.debug(f"First 500 chars: {repr(combined_output[:500])}")
_log.debug("=== END PLUGIN RAW INPUT ===")
try:
result = FrrTracerouteTable.parse_text(combined_output, target, source)
except Exception as exc:
_log.error(f"Failed to parse FRR traceroute: {exc}")
raise ParsingError(f"Failed to parse FRR traceroute output: {exc}") from exc
_log.debug("=== FINAL STRUCTURED TRACEROUTE RESULT ===")
_log.debug(f"Successfully parsed {len(result.hops)} traceroute hops")
_log.debug(f"Target: {target}, Source: {source}")
for hop in result.hops:
_log.debug(f"Hop {hop.hop_number}: {hop.ip_address or '*'} - RTT: {hop.rtt1 or 'timeout'}")
_log.debug(f"Raw output length: {len(combined_output)} characters")
_log.debug("=== END STRUCTURED RESULT ===")
return result
class FrrTracerouteTable(TracerouteResult):
"""FRR traceroute table parser."""
@classmethod
def parse_text(cls, text: str, target: str, source: str) -> TracerouteResult:
"""Parse FRR traceroute text output into structured data."""
_log = log.bind(parser="FrrTracerouteTable")
_log.debug("=== RAW FRR TRACEROUTE INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Raw text length: {len(text)} characters")
_log.debug(f"Raw text:\n{repr(text)}")
_log.debug("=== END RAW INPUT ===")
hops = []
lines = text.strip().split("\n")
_log.debug(f"Split into {len(lines)} lines")
# Pattern for normal hop: " 1 bdr2.std.douala-ix.net (196.49.84.34) 0.520 ms 0.451 ms 0.418 ms"
hop_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for timeout hop: " 3 * * *"
timeout_pattern = re.compile(r"^\s*(\d+)\s+\*\s*\*\s*\*")
# Pattern for partial timeout: " 7 port-channel4.core4.mrs1.he.net (184.105.81.30) 132.624 ms 132.589 ms *"
partial_timeout_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?\s+\*"
)
# Pattern for IP-only hop: "15 72.251.0.8 (72.251.0.8) 360.370 ms 352.170 ms 354.132 ms"
ip_only_pattern = re.compile(
r"^\s*(\d+)\s+([0-9a-fA-F:.]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Complex multi-IP patterns for load balancing scenarios
# Pattern 1: "18 * 2001:41d0:0:50::7:1009 (2001:41d0:0:50::7:1009) 353.548 ms 351.516 ms"
partial_multi_pattern = re.compile(
r"^\s*(\d+)\s+\*\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern 2: "12 2001:41d0:aaaa:100::3 (2001:41d0:aaaa:100::3) 274.418 ms 2001:41d0:aaaa:100::5 (2001:41d0:aaaa:100::5) 269.972 ms 282.653 ms"
dual_ip_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern 3: More complex multi-IP lines (3 or more IPs)
# "19 2001:41d0:0:50::3:211b (2001:41d0:0:50::3:211b) 351.213 ms 2001:41d0:0:50::7:100f (2001:41d0:0:50::7:100f) 351.090 ms 2001:41d0:0:50::7:100b (2001:41d0:0:50::7:100b) 351.282 ms"
multi_ip_pattern = re.compile(
r"^\s*(\d+)\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms\s+(.+?)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms"
)
for i, line in enumerate(lines):
line = line.strip()
_log.debug(f"Line {i:2d}: {repr(line)}")
if not line:
continue
# Skip header lines
if (
"traceroute to" in line.lower()
or "hops max" in line.lower()
or "byte packets" in line.lower()
):
_log.debug(f"Line {i:2d}: SKIPPING HEADER")
continue
# Try to match timeout hop first
timeout_match = timeout_pattern.match(line)
if timeout_match:
hop_number = int(timeout_match.group(1))
_log.debug(f"Line {i:2d}: TIMEOUT HOP - {hop_number}: * * *")
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=None,
display_ip=None,
hostname=None,
rtt1=None,
rtt2=None,
rtt3=None,
sent_count=3, # FRR sends 3 probes per hop
last_rtt=None,
best_rtt=None,
worst_rtt=None,
loss_pct=100, # 100% loss for timeout
# BGP enrichment fields (all None for timeout)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match multi-IP pattern (3 IPs)
multi_match = multi_ip_pattern.match(line)
if multi_match:
hop_number = int(multi_match.group(1))
hostname1 = multi_match.group(2).strip()
ip1 = multi_match.group(3)
rtt1 = float(multi_match.group(4))
hostname2 = multi_match.group(5).strip()
ip2 = multi_match.group(6)
rtt2 = float(multi_match.group(7))
hostname3 = multi_match.group(8).strip()
ip3 = multi_match.group(9)
rtt3 = float(multi_match.group(10))
_log.debug(f"Line {i:2d}: MULTI-IP HOP (3 IPs) - {hop_number}: {ip1}, {ip2}, {ip3}")
# Use the first IP as primary, combine hostnames
display_hostname = f"{hostname1} / {hostname2} / {hostname3}"
if hostname1 == ip1:
display_hostname = None # All IP-only
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=3,
last_rtt=rtt3,
best_rtt=min(rtt1, rtt2, rtt3),
worst_rtt=max(rtt1, rtt2, rtt3),
loss_pct=0, # No loss if we got responses
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match dual-IP pattern
dual_match = dual_ip_pattern.match(line)
if dual_match:
hop_number = int(dual_match.group(1))
hostname1 = dual_match.group(2).strip()
ip1 = dual_match.group(3)
rtt1 = float(dual_match.group(4))
hostname2 = dual_match.group(5).strip()
ip2 = dual_match.group(6)
rtt2 = float(dual_match.group(7))
rtt3 = float(dual_match.group(8)) if dual_match.group(8) else None
_log.debug(f"Line {i:2d}: DUAL-IP HOP - {hop_number}: {ip1} and {ip2}")
# Use the first IP as primary, combine hostnames
display_hostname = f"{hostname1} / {hostname2}"
if hostname1 == ip1:
display_hostname = None # Both IP-only
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got responses
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match partial multi pattern (* hostname)
partial_multi_match = partial_multi_pattern.match(line)
if partial_multi_match:
hop_number = int(partial_multi_match.group(1))
hostname = partial_multi_match.group(2).strip()
ip_address = partial_multi_match.group(3)
rtt1 = float(partial_multi_match.group(4))
rtt2 = float(partial_multi_match.group(5)) if partial_multi_match.group(5) else None
_log.debug(
f"Line {i:2d}: PARTIAL-MULTI HOP - {hop_number}: * {hostname} ({ip_address})"
)
rtts = [x for x in [rtt1, rtt2] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=None,
sent_count=3, # Still sent 3, but one timed out
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=33.33, # 1 out of 3 packets lost
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match partial timeout (hostname with one *)
partial_timeout_match = partial_timeout_pattern.match(line)
if partial_timeout_match:
hop_number = int(partial_timeout_match.group(1))
hostname = partial_timeout_match.group(2).strip()
ip_address = partial_timeout_match.group(3)
rtt1 = float(partial_timeout_match.group(4))
rtt2 = (
float(partial_timeout_match.group(5))
if partial_timeout_match.group(5)
else None
)
_log.debug(
f"Line {i:2d}: PARTIAL-TIMEOUT HOP - {hop_number}: {hostname} ({ip_address}) with timeout"
)
rtts = [x for x in [rtt1, rtt2] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=None,
sent_count=3, # Still sent 3, but one timed out
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=33.33, # 1 out of 3 packets lost
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match normal hop with hostname
hop_match = hop_pattern.match(line)
if hop_match:
hop_number = int(hop_match.group(1))
hostname = hop_match.group(2).strip()
ip_address = hop_match.group(3)
rtt1 = float(hop_match.group(4))
rtt2 = float(hop_match.group(5)) if hop_match.group(5) else None
rtt3 = float(hop_match.group(6)) if hop_match.group(6) else None
_log.debug(
f"Line {i:2d}: NORMAL HOP - {hop_number}: {hostname} ({ip_address}) RTTs: {rtt1}, {rtt2}, {rtt3}"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got a response
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
# Try to match IP-only hop (no hostname)
ip_match = ip_only_pattern.match(line)
if ip_match:
hop_number = int(ip_match.group(1))
ip_display = ip_match.group(2).strip() # The IP shown before parentheses
ip_address = ip_match.group(3) # The IP in parentheses
rtt1 = float(ip_match.group(4))
rtt2 = float(ip_match.group(5)) if ip_match.group(5) else None
rtt3 = float(ip_match.group(6)) if ip_match.group(6) else None
_log.debug(
f"Line {i:2d}: IP-ONLY HOP - {hop_number}: {ip_address} RTTs: {rtt1}, {rtt2}, {rtt3}"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=None, # No hostname for IP-only hops
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got a response
# BGP enrichment fields (will be populated by enrichment plugin)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
continue
_log.debug(f"Line {i:2d}: UNMATCHED - skipping")
_log.debug(f"Before cleanup: {len(hops)} hops")
# Clean up consecutive timeout hops at the end
# Keep only the first few timeouts, remove excessive trailing timeouts
if len(hops) > 5:
# Find the last non-timeout hop
last_real_hop = -1
for i in range(len(hops) - 1, -1, -1):
if not hops[i].is_timeout:
last_real_hop = i
break
if last_real_hop >= 0:
# Keep at most 3 timeout hops after the last real hop
max_timeouts = 3
timeout_count = 0
cleaned_hops = hops[: last_real_hop + 1] # Keep all hops up to last real hop
for hop in hops[last_real_hop + 1 :]:
if hop.is_timeout:
timeout_count += 1
if timeout_count <= max_timeouts:
cleaned_hops.append(hop)
else:
_log.debug(f"Removing excessive timeout hop {hop.hop_number}")
else:
# If we find another real hop after timeouts, keep it
cleaned_hops.append(hop)
timeout_count = 0
hops = cleaned_hops
_log.debug(f"After cleanup: {len(hops)} hops")
for hop in hops:
if hop.is_timeout:
_log.debug(f"Final hop {hop.hop_number}: * (timeout)")
else:
_log.debug(
f"Final hop {hop.hop_number}: {hop.ip_address} ({hop.hostname or 'no-hostname'}) - RTTs: {hop.rtt1}/{hop.rtt2}/{hop.rtt3}"
)
_log.info(f"Parsed {len(hops)} hops from FRR traceroute")
# Extract packet size and max hops from header if available
max_hops = 30 # Common traceroute default
packet_size = 60 # Common IPv4 traceroute default
for line in text.split("\n"):
if "hops max" in line and "byte packets" in line:
# Example: "traceroute to syd.proof.ovh.net (51.161.209.134), 30 hops max, 60 byte packets"
parts = line.split()
for i, part in enumerate(parts):
if part == "hops":
try:
max_hops = int(parts[i - 1])
except (ValueError, IndexError):
pass
elif part == "byte":
try:
packet_size = int(parts[i - 1])
except (ValueError, IndexError):
pass
break
return TracerouteResult(
target=target,
source=source,
hops=hops,
max_hops=max_hops,
packet_size=packet_size,
raw_output=text,
asn_organizations={},
)
class TraceroutePluginFrr(OutputPlugin):
"""Parse FRR traceroute output."""
_hyperglass_builtin: bool = PrivateAttr(True)
platforms: t.Sequence[str] = ("frr",)
directives: t.Sequence[str] = ("__hyperglass_frr_traceroute__",)
common: bool = False
def process(self, output: "OutputType", query: "Query") -> "OutputType":
"""Process FRR traceroute output."""
# Extract target and source with fallbacks
target = str(query.query_target) if query.query_target else "unknown"
source = "unknown"
if hasattr(query, "device") and query.device:
source = getattr(query.device, "display_name", None) or getattr(
query.device, "name", "unknown"
)
device = getattr(query, "device", None)
if device is not None:
if not getattr(device, "structured_output", False):
return output
try:
_params = use_state("params")
except Exception:
_params = None
if (
_params
and getattr(_params, "structured", None)
and getattr(_params.structured, "enable_for_traceroute", None) is False
):
return output
else:
try:
params = use_state("params")
except Exception:
params = None
if not (params and getattr(params, "structured", None)):
return output
if getattr(params.structured, "enable_for_traceroute", None) is False:
return output
return parse_frr_traceroute(
output=output,
target=target,
source=source,
)
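The trailing-timeout cleanup each parser applies can be sketched standalone. This is an illustrative reduction of the logic above: hops are modeled as `(hop_number, is_timeout)` tuples rather than `TracerouteHop` objects, and the `len(hops) > 5` guard the plugins use is omitted.

```python
# After the last responsive hop, keep at most three trailing timeout hops.
def trim_trailing_timeouts(hops, max_timeouts=3):
    last_real = max(
        (i for i, (_, is_timeout) in enumerate(hops) if not is_timeout),
        default=-1,
    )
    if last_real < 0:  # no responsive hop at all; leave the list untouched
        return hops
    cleaned = hops[: last_real + 1]  # everything up to the last real hop
    # Every hop past last_real is a timeout by construction.
    for count, hop in enumerate(hops[last_real + 1 :], start=1):
        if count <= max_timeouts:
            cleaned.append(hop)
    return cleaned

hops = [(1, False), (2, False), (3, True), (4, True), (5, True), (6, True), (7, True)]
print(trim_trailing_timeouts(hops))
# [(1, False), (2, False), (3, True), (4, True), (5, True)]
```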


@ -11,6 +11,7 @@ from pydantic import PrivateAttr
from hyperglass.log import log
from hyperglass.exceptions.private import ParsingError
from hyperglass.models.data.traceroute import TracerouteResult, TracerouteHop
from hyperglass.state import use_state
# Local
from .._output import OutputPlugin
@ -246,6 +247,30 @@ class TraceroutePluginHuawei(OutputPlugin):
query.device, "name", "unknown"
)
device = getattr(query, "device", None)
if device is not None:
if not getattr(device, "structured_output", False):
return output
try:
_params = use_state("params")
except Exception:
_params = None
if (
_params
and getattr(_params, "structured", None)
and getattr(_params.structured, "enable_for_traceroute", None) is False
):
return output
else:
try:
params = use_state("params")
except Exception:
params = None
if not (params and getattr(params, "structured", None)):
return output
if getattr(params.structured, "enable_for_traceroute", None) is False:
return output
return parse_huawei_traceroute(
output=output,
target=target,

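The header-field extraction loop shared by these parsers (pulling `max_hops` and `packet_size` out of the traceroute banner) can also be exercised standalone; the banner line below is a representative sample.

```python
# Token-scanning header extraction, as used by the plugins: the value
# precedes the keyword ("30 hops max", "60 byte packets").
header = "traceroute to 51.161.209.134 (51.161.209.134), 30 hops max, 60 byte packets"
max_hops, packet_size = 30, 60  # same fallback defaults as the plugins

parts = header.split()
for i, part in enumerate(parts):
    if part == "hops":
        try:
            max_hops = int(parts[i - 1])  # token before "hops"
        except (ValueError, IndexError):
            pass
    elif part == "byte":
        try:
            packet_size = int(parts[i - 1])  # token before "byte"
        except (ValueError, IndexError):
            pass

print(max_hops, packet_size)  # 30 60
```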

@ -0,0 +1,573 @@
"""Parse Juniper traceroute output to structured data."""
# Standard Library
import re
import typing as t
# Third Party
from pydantic import PrivateAttr
# Project
from hyperglass.log import log
from hyperglass.exceptions.private import ParsingError
from hyperglass.models.data.traceroute import TracerouteResult, TracerouteHop
from hyperglass.state import use_state
# Local
from .._output import OutputPlugin
if t.TYPE_CHECKING:
from hyperglass.models.data import OutputDataModel
from hyperglass.models.api.query import Query
from .._output import OutputType
def _normalize_output(output: t.Union[str, t.Sequence[str]]) -> t.List[str]:
"""Ensure the output is a list of strings."""
if isinstance(output, str):
return [output]
return list(output)
def parse_juniper_traceroute(
output: t.Union[str, t.Sequence[str]], target: str, source: str
) -> "OutputDataModel":
"""Parse a Juniper traceroute text response."""
result = None
out_list = _normalize_output(output)
_log = log.bind(plugin=TraceroutePluginJuniper.__name__)
combined_output = "\n".join(out_list)
# DEBUG: Log the raw output we're about to parse
_log.debug("=== JUNIPER TRACEROUTE PLUGIN RAW INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Output pieces: {len(out_list)}")
_log.debug(f"Combined output length: {len(combined_output)}")
_log.debug(f"First 500 chars: {repr(combined_output[:500])}")
_log.debug("=== END PLUGIN RAW INPUT ===")
try:
result = JuniperTracerouteTable.parse_text(combined_output, target, source)
except Exception as exc:
_log.error(f"Failed to parse Juniper traceroute: {exc}")
raise ParsingError(f"Failed to parse Juniper traceroute output: {exc}") from exc
_log.debug("=== FINAL STRUCTURED TRACEROUTE RESULT ===")
_log.debug(f"Successfully parsed {len(result.hops)} traceroute hops")
_log.debug(f"Target: {target}, Source: {source}")
for hop in result.hops:
_log.debug(f"Hop {hop.hop_number}: {hop.ip_address or '*'} - RTT: {hop.rtt1 or 'timeout'}")
_log.debug(f"Raw output length: {len(combined_output)} characters")
_log.debug("=== END STRUCTURED RESULT ===")
return result
class JuniperTracerouteTable(TracerouteResult):
"""Juniper traceroute table parser."""
@classmethod
def parse_text(cls, text: str, target: str, source: str) -> TracerouteResult:
"""Parse Juniper traceroute text output into structured data."""
_log = log.bind(parser="JuniperTracerouteTable")
_log.debug("=== RAW JUNIPER TRACEROUTE INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Raw text length: {len(text)} characters")
_log.debug(f"Raw text:\n{repr(text)}")
_log.debug("=== END RAW INPUT ===")
hops = []
lines = text.strip().split("\n")
_log.debug(f"Split into {len(lines)} lines")
# Pattern for normal hop: " 1 102.218.156.197 (102.218.156.197) 0.928 ms 0.968 ms 0.677 ms"
hop_pattern = re.compile(
r"^\s*(\d+)\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for timeout with IP: " 6 * 130.117.15.146 (130.117.15.146) 162.503 ms 162.773 ms"
timeout_with_ip_pattern = re.compile(
r"^\s*(\d+)\s+\*\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for mixed timeout and IP: " 7 80.231.196.36 (80.231.196.36) 328.264 ms 328.938 ms *"
mixed_timeout_pattern = re.compile(
r"^\s*(\d+)\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?\s+\*"
)
# Pattern for multipath: " 3 197.157.77.179 (197.157.77.179) 169.860 ms 41.78.188.48 (41.78.188.48) 185.519 ms 1006.603 ms"
multipath_pattern = re.compile(
r"^\s*(\d+)\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for IPv6 multipath: "25 2001:41d0:0:50::7:100b (2001:41d0:0:50::7:100b) 460.762 ms 2001:41d0:0:50::7:1009 (2001:41d0:0:50::7:1009) 464.993 ms 2001:41d0:0:50::7:100f (2001:41d0:0:50::7:100f) 464.366 ms"
ipv6_multipath_pattern = re.compile(
r"^\s*(\d+)\s+([a-fA-F0-9:]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms\s+([a-fA-F0-9:]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms(?:\s+([a-fA-F0-9:]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms)?"
)
# Pattern for complete timeout: " 1 * * *"
timeout_pattern = re.compile(r"^\s*(\d+)\s+\*\s*\*\s*\*")
# Pattern for partial timeout at end: "10 * * 2001:978:3::12e (2001:978:3::12e) 200.936 ms"
partial_timeout_pattern = re.compile(
r"^\s*(\d+)\s+\*\s+\*\s+([^\s]+)\s+\(([^)]+)\)\s+(\d+(?:\.\d+)?)\s*ms"
)
i = 0
while i < len(lines):
line = lines[i].strip()
_log.debug(f"Line {i:2d}: {repr(line)}")
if not line:
i += 1
continue
# Skip header lines
if (
"traceroute to" in line.lower()
or "traceroute6 to" in line.lower()
or "hops max" in line.lower()
or "byte packets" in line.lower()
):
_log.debug(f"Line {i:2d}: SKIPPING HEADER")
i += 1
continue
# Skip MPLS label lines
if "MPLS Label=" in line:
_log.debug(f"Line {i:2d}: SKIPPING MPLS LABEL")
i += 1
continue
# Try to match complete timeout hop first
timeout_match = timeout_pattern.match(line)
if timeout_match:
hop_number = int(timeout_match.group(1))
_log.debug(f"Line {i:2d}: TIMEOUT HOP - {hop_number}: * * *")
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=None,
display_ip=None,
hostname=None,
rtt1=None,
rtt2=None,
rtt3=None,
sent_count=3,
last_rtt=None,
best_rtt=None,
worst_rtt=None,
loss_pct=100, # 100% loss for timeout
# BGP enrichment fields (all None for timeout)
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match partial timeout: "10 * * 2001:978:3::12e (2001:978:3::12e) 200.936 ms"
partial_timeout_match = partial_timeout_pattern.match(line)
if partial_timeout_match:
hop_number = int(partial_timeout_match.group(1))
ip_address = partial_timeout_match.group(3)
hostname = partial_timeout_match.group(2).strip()
rtt1 = float(partial_timeout_match.group(4))
_log.debug(
f"Line {i:2d}: PARTIAL TIMEOUT HOP - {hop_number}: * * {hostname} ({ip_address}) {rtt1}ms"
)
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=None,
rtt3=None,
sent_count=3,
last_rtt=rtt1,
best_rtt=rtt1,
worst_rtt=rtt1,
loss_pct=66.67, # 2 out of 3 packets lost
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match IPv6 multipath
ipv6_multipath_match = ipv6_multipath_pattern.match(line)
if ipv6_multipath_match:
hop_number = int(ipv6_multipath_match.group(1))
ip1 = ipv6_multipath_match.group(3)
hostname1 = ipv6_multipath_match.group(2).strip()
rtt1 = float(ipv6_multipath_match.group(4))
ip2 = ipv6_multipath_match.group(6)
hostname2 = ipv6_multipath_match.group(5).strip()
rtt2 = float(ipv6_multipath_match.group(7))
rtt3 = None
if ipv6_multipath_match.group(10): # Third IP/RTT pair
rtt3 = float(ipv6_multipath_match.group(10))
_log.debug(
f"Line {i:2d}: IPv6 MULTIPATH HOP - {hop_number}: {hostname1}/{hostname2} ({ip1}/{ip2})"
)
display_hostname = f"{hostname1} / {hostname2}"
if ipv6_multipath_match.group(8): # Third hostname
hostname3 = ipv6_multipath_match.group(8).strip()
display_hostname += f" / {hostname3}"
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got responses
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match multipath IPv4
multipath_match = multipath_pattern.match(line)
if multipath_match:
hop_number = int(multipath_match.group(1))
hostname1 = multipath_match.group(2).strip()
ip1 = multipath_match.group(3)
rtt1 = float(multipath_match.group(4))
hostname2 = multipath_match.group(5).strip()
ip2 = multipath_match.group(6)
rtt2 = float(multipath_match.group(7))
rtt3 = float(multipath_match.group(8)) if multipath_match.group(8) else None
_log.debug(
f"Line {i:2d}: MULTIPATH HOP - {hop_number}: {hostname1}/{hostname2} ({ip1}/{ip2})"
)
display_hostname = f"{hostname1} / {hostname2}"
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip1,
display_ip=None,
hostname=display_hostname,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got responses
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match timeout with IP: " 6 * 130.117.15.146 (130.117.15.146) 162.503 ms 162.773 ms"
timeout_with_ip_match = timeout_with_ip_pattern.match(line)
if timeout_with_ip_match:
hop_number = int(timeout_with_ip_match.group(1))
hostname = timeout_with_ip_match.group(2).strip()
ip_address = timeout_with_ip_match.group(3)
rtt1 = float(timeout_with_ip_match.group(4))
rtt2 = (
float(timeout_with_ip_match.group(5))
if timeout_with_ip_match.group(5)
else None
)
rtt3 = (
float(timeout_with_ip_match.group(6))
if timeout_with_ip_match.group(6)
else None
)
_log.debug(
f"Line {i:2d}: TIMEOUT WITH IP - {hop_number}: * {hostname} ({ip_address})"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
loss_pct = int((3 - len(rtts)) / 3 * 100) if len(rtts) > 0 else 100
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=3,
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=loss_pct,
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match mixed timeout: " 7 80.231.196.36 (80.231.196.36) 328.264 ms 328.938 ms *"
mixed_timeout_match = mixed_timeout_pattern.match(line)
if mixed_timeout_match:
hop_number = int(mixed_timeout_match.group(1))
hostname = mixed_timeout_match.group(2).strip()
ip_address = mixed_timeout_match.group(3)
rtt1 = float(mixed_timeout_match.group(4))
rtt2 = float(mixed_timeout_match.group(5)) if mixed_timeout_match.group(5) else None
_log.debug(
f"Line {i:2d}: MIXED TIMEOUT - {hop_number}: {hostname} ({ip_address}) with *"
)
rtts = [x for x in [rtt1, rtt2] if x is not None]
loss_pct = int((3 - len(rtts)) / 3 * 100)
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=None,
sent_count=3,
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=loss_pct,
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
# Try to match normal hop
hop_match = hop_pattern.match(line)
if hop_match:
hop_number = int(hop_match.group(1))
hostname = hop_match.group(2).strip()
ip_address = hop_match.group(3)
rtt1 = float(hop_match.group(4))
rtt2 = float(hop_match.group(5)) if hop_match.group(5) else None
rtt3 = float(hop_match.group(6)) if hop_match.group(6) else None
_log.debug(
f"Line {i:2d}: NORMAL HOP - {hop_number}: {hostname} ({ip_address}) RTTs: {rtt1}, {rtt2}, {rtt3}"
)
rtts = [x for x in [rtt1, rtt2, rtt3] if x is not None]
hops.append(
TracerouteHop(
hop_number=hop_number,
ip_address=ip_address,
display_ip=None,
hostname=hostname if hostname != ip_address else None,
rtt1=rtt1,
rtt2=rtt2,
rtt3=rtt3,
sent_count=len(rtts),
last_rtt=rtts[-1] if rtts else None,
best_rtt=min(rtts) if rtts else None,
worst_rtt=max(rtts) if rtts else None,
loss_pct=0, # No loss if we got a response
# BGP enrichment fields
asn=None,
org=None,
prefix=None,
country=None,
rir=None,
allocated=None,
)
)
i += 1
continue
_log.debug(f"Line {i:2d}: UNMATCHED - skipping")
i += 1
_log.debug(f"Before cleanup: {len(hops)} hops")
# Clean up consecutive timeout hops at the end
if len(hops) > 5:
# Find the last non-timeout hop
last_real_hop = -1
for i in range(len(hops) - 1, -1, -1):
if not hops[i].is_timeout:
last_real_hop = i
break
if last_real_hop >= 0:
# Keep at most 3 timeout hops after the last real hop
max_timeouts = 3
timeout_count = 0
cleaned_hops = hops[: last_real_hop + 1] # Keep all hops up to last real hop
for hop in hops[last_real_hop + 1 :]:
if hop.is_timeout:
timeout_count += 1
if timeout_count <= max_timeouts:
cleaned_hops.append(hop)
else:
_log.debug(f"Removing excessive timeout hop {hop.hop_number}")
else:
# If we find another real hop after timeouts, keep it
cleaned_hops.append(hop)
timeout_count = 0
hops = cleaned_hops
_log.debug(f"After cleanup: {len(hops)} hops")
for hop in hops:
if hop.is_timeout:
_log.debug(f"Final hop {hop.hop_number}: * (timeout)")
else:
_log.debug(
f"Final hop {hop.hop_number}: {hop.ip_address} ({hop.hostname or 'no-hostname'}) - RTTs: {hop.rtt1}/{hop.rtt2}/{hop.rtt3}"
)
_log.info(f"Parsed {len(hops)} hops from Juniper traceroute")
# Extract packet size and max hops from header if available
max_hops = 30 # Default for Juniper
packet_size = 52  # Default probe size seen in Junos sample output
for line in text.split("\n"):
if "hops max" in line and "byte packets" in line:
# Example: "traceroute to 51.161.209.134 (51.161.209.134) from 196.201.112.49, 30 hops max, 52 byte packets"
parts = line.split()
for i, part in enumerate(parts):
if part == "hops":
try:
max_hops = int(parts[i - 1])
except (ValueError, IndexError):
pass
elif part == "byte":
try:
packet_size = int(parts[i - 1])
except (ValueError, IndexError):
pass
break
return TracerouteResult(
target=target,
source=source,
hops=hops,
max_hops=max_hops,
packet_size=packet_size,
raw_output=text,
asn_organizations={},
)
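The header scan above (pulling `hops max` and `byte packets` values out of the traceroute banner line) can be sketched as a standalone helper. This is an illustrative sketch, not part of hyperglass; the function name `parse_traceroute_header` is hypothetical:

```python
def parse_traceroute_header(line):
    """Extract (max_hops, packet_size) from a Junos traceroute header line.

    Falls back to the Junos-style defaults (30 hops, 52 bytes) when the
    values are absent or malformed, mirroring the parser's behavior.
    """
    max_hops, packet_size = 30, 52
    parts = line.split()
    for i, part in enumerate(parts):
        if part == "hops":
            try:
                max_hops = int(parts[i - 1])
            except (ValueError, IndexError):
                pass
        elif part == "byte":
            try:
                packet_size = int(parts[i - 1])
            except (ValueError, IndexError):
                pass
    return max_hops, packet_size
```

Unlike the plugin, which scans every output line and stops at the first match, this sketch handles a single header line.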
class TraceroutePluginJuniper(OutputPlugin):
"""Parse Juniper traceroute output."""
_hyperglass_builtin: bool = PrivateAttr(True)
platforms: t.Sequence[str] = ("juniper", "juniper_junos")
directives: t.Sequence[str] = ("__hyperglass_juniper_traceroute__",)
common: bool = False
def process(self, output: "OutputType", query: "Query") -> "OutputType":
"""Process Juniper traceroute output."""
# Extract target and source with fallbacks
target = str(query.query_target) if query.query_target else "unknown"
source = "unknown"
if hasattr(query, "device") and query.device:
source = getattr(query.device, "display_name", None) or getattr(
query.device, "name", "unknown"
)
device = getattr(query, "device", None)
if device is not None and not getattr(device, "structured_output", False):
return output
try:
params = use_state("params")
except Exception:
params = None
structured = getattr(params, "structured", None) if params is not None else None
if device is None and not structured:
return output
if structured and getattr(structured, "enable_for_traceroute", None) is False:
return output
return parse_juniper_traceroute(
output=output,
target=target,
source=source,
)
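The trailing-timeout cleanup in the parser (keep at most three consecutive `*` hops after the last responding hop) can be sketched in isolation. A hypothetical standalone version, using plain dicts in place of `TracerouteHop`:

```python
def trim_trailing_timeouts(hops, max_timeouts=3):
    """Keep at most `max_timeouts` timeout hops after the last responding hop.

    `hops` is a sequence of dicts with an `is_timeout` flag; the real parser
    uses TracerouteHop objects with an `is_timeout` property instead.
    """
    # Find the index of the last hop that actually responded.
    last_real = -1
    for i in range(len(hops) - 1, -1, -1):
        if not hops[i]["is_timeout"]:
            last_real = i
            break
    if last_real < 0:
        # All hops timed out; leave the list unchanged.
        return list(hops)
    cleaned = list(hops[: last_real + 1])
    count = 0
    for hop in hops[last_real + 1:]:
        if hop["is_timeout"]:
            count += 1
            if count <= max_timeouts:
                cleaned.append(hop)
        else:
            # A later responding hop resets the counter (mirrors the plugin).
            cleaned.append(hop)
            count = 0
    return cleaned
```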


@ -7,9 +7,10 @@ import typing as t
from pydantic import PrivateAttr, ValidationError
# Project
from hyperglass.log import log
from hyperglass.log import log, log as _log
from hyperglass.exceptions.private import ParsingError
from hyperglass.models.parsing.mikrotik import MikrotikTracerouteTable
from hyperglass.state import use_state
# Local
from .._output import OutputPlugin
@ -26,6 +27,33 @@ def _normalize_output(output: t.Union[str, t.Sequence[str]]) -> t.List[str]:
return [output]
return list(output)
def _clean_traceroute_only(
output: t.Union[str, t.Sequence[str]], query: "Query"
) -> t.Union[str, t.Tuple[str, ...]]:
"""Run only the traceroute-specific cleaner and return same-shaped result.
This calls the internal _clean_traceroute_output method on the
MikrotikGarbageOutput plugin so the cleaned traceroute text is used
as the 'raw' output exposed to clients.
"""
from .mikrotik_garbage_output import MikrotikGarbageOutput
out_list = _normalize_output(output)
cleaner = MikrotikGarbageOutput()
cleaned_list: t.List[str] = []
for piece in out_list:
try:
cleaned_piece = cleaner._clean_traceroute_output(piece)
except Exception:
# If cleaner fails for any piece, fall back to the original piece
cleaned_piece = piece
cleaned_list.append(cleaned_piece)
if isinstance(output, str):
return cleaned_list[0] if cleaned_list else ""
return tuple(cleaned_list)
def parse_mikrotik_traceroute(
output: t.Union[str, t.Sequence[str]], target: str, source: str
@ -37,21 +65,18 @@ def parse_mikrotik_traceroute(
_log = log.bind(plugin=TraceroutePluginMikrotik.__name__)
combined_output = "\n".join(out_list)
# DEBUG: Log the raw output we're about to parse
_log.debug(f"=== MIKROTIK TRACEROUTE PLUGIN RAW INPUT ===")
_log.debug(f"Target: {target}, Source: {source}")
_log.debug(f"Output pieces: {len(out_list)}")
for i, piece in enumerate(out_list):
_log.debug(f"Output piece {i}: {repr(piece[:200])}...") # Truncate for readability
_log.debug(f"Combined output length: {len(combined_output)}")
# Check if this looks like cleaned or raw output
# Minimal summary of the input - avoid dumping full raw output to logs
contains_paging = "-- [Q quit|C-z pause]" in combined_output
contains_multiple_tables = combined_output.count("ADDRESS") > 1
_log.debug(f"Contains paging prompts: {contains_paging}")
_log.debug(f"Contains multiple ADDRESS headers: {contains_multiple_tables}")
_log.debug(f"First 500 chars: {repr(combined_output[:500])}")
_log.debug(f"=== END PLUGIN RAW INPUT ===")
_log.debug(
"Received traceroute plugin input",
target=target,
source=source,
pieces=len(out_list),
combined_len=len(combined_output),
contains_paging=contains_paging,
multiple_tables=contains_multiple_tables,
)
try:
# Pass the entire combined output to the parser at once
@ -62,20 +87,13 @@ def parse_mikrotik_traceroute(
# This is the processed output from MikrotikGarbageOutput plugin, not the original raw router output
result.raw_output = combined_output
# DEBUG: Log the final structured result
_log.debug(f"=== FINAL STRUCTURED TRACEROUTE RESULT ===")
_log.debug(f"Successfully parsed {len(validated.hops)} traceroute hops")
_log.debug(f"Target: {result.target}, Source: {result.source}")
for hop in result.hops:
# Concise structured logging for result
_log.debug(
f"Hop {hop.hop_number}: {hop.ip_address} - Loss: {hop.loss_pct}% - Sent: {hop.sent_count}"
"Parsed traceroute result",
hops=len(validated.hops),
target=result.target,
source=result.source,
)
_log.debug(f"AS Path: {result.as_path_summary}")
_log.debug(
f"Cleaned raw output length: {len(result.raw_output) if result.raw_output else 0} characters"
)
_log.debug(f"Copy button will show CLEANED output (after MikrotikGarbageOutput processing)")
_log.debug(f"=== END STRUCTURED RESULT ===")
except ValidationError as err:
_log.critical(err)
@ -100,7 +118,50 @@ class TraceroutePluginMikrotik(OutputPlugin):
target = getattr(query, "target", "unknown")
source = getattr(query, "source", "unknown")
# Try to get target from query_target which is more reliable
if hasattr(query, "query_target") and query.query_target:
target = str(query.query_target)
if hasattr(query, "device") and query.device:
source = getattr(query.device, "name", source)
_log = log.bind(plugin=TraceroutePluginMikrotik.__name__)
# Debug: emit the raw response as returned by the router. Sequence
# pieces are joined with newlines for readability; the text itself is
# not otherwise transformed or normalized.
try:
# Ensure the router output is embedded in the log message body so it
# is visible regardless of the logger's formatter configuration.
if isinstance(output, (tuple, list)):
try:
combined_raw = "\n".join(output)
except Exception:
# Fall back to repr if join fails for non-string elements
combined_raw = repr(output)
else:
combined_raw = output if isinstance(output, str) else repr(output)
# Log the full verbatim router response (DEBUG level).
_log.debug("Router raw output:\n{}", combined_raw)
except Exception:
# Don't let logging interfere with normal processing
_log.exception("Failed to log router raw output")
try:
params = use_state("params")
except Exception:
params = None
device = getattr(query, "device", None)
structured = getattr(params, "structured", None) if params is not None else None
if device is None or not structured:
return _clean_traceroute_only(output, query)
if getattr(structured, "enable_for_traceroute", None) is False:
return _clean_traceroute_only(output, query)
return parse_mikrotik_traceroute(output, target, source)
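The fallback chain above reduces to a single predicate: structured parsing runs only when a device is known, params are available with a `structured` section, and traceroute support is not explicitly disabled. A hypothetical standalone sketch (names are illustrative, not hyperglass API):

```python
from types import SimpleNamespace


def should_emit_structured(device, params):
    """Return True when structured traceroute parsing should run.

    Mirrors the gating in the plugin: missing device, missing params, a
    missing `structured` section, or an explicit enable_for_traceroute=False
    all fall back to cleaned raw output.
    """
    if device is None or params is None:
        return False
    structured = getattr(params, "structured", None)
    if not structured:
        return False
    # An unset flag (None) still allows structured output; only an explicit
    # False disables it.
    return getattr(structured, "enable_for_traceroute", None) is not False


# Example: flag left unset still allows structured parsing.
params = SimpleNamespace(structured=SimpleNamespace(enable_for_traceroute=None))
```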


@ -66,8 +66,14 @@ class ZTracerouteIpEnrichment(OutputPlugin):
from hyperglass.state import use_state
params = use_state("params")
if not params.structured.ip_enrichment.enabled:
_log.debug("IP enrichment disabled in configuration")
# If structured config missing or traceroute enrichment disabled, skip
# IP enrichment but still perform reverse DNS lookups.
if (
not getattr(params, "structured", None)
or not params.structured.ip_enrichment.enrich_traceroute
or getattr(params.structured, "enable_for_traceroute", None) is False
):
_log.debug("IP enrichment for traceroute disabled in configuration")
# Still do reverse DNS if enrichment is disabled
for hop in output.hops:
if hop.ip_address and hop.hostname is None:


@ -87,7 +87,7 @@ export const LookingGlassForm = (): JSX.Element => {
return tmp;
}, [form.queryType, form.queryLocation, getDirective]);
function submitHandler(): void {
async function submitHandler(): Promise<void> {
if (process.env.NODE_ENV === 'development') {
console.table({
'Query Location': form.queryLocation.toString(),
@ -97,6 +97,11 @@ export const LookingGlassForm = (): JSX.Element => {
});
}
// Note: IP enrichment refresh is now handled server-side on query
// submission when enabled. Removing client-side best-effort refresh
// to centralize refresh logic and avoid redundant requests from many
// clients.
// Before submitting a query, make sure the greeting is acknowledged if required. This should
// be handled before loading the app, but people be sneaky.
if (!greetingReady) {


@ -55,9 +55,18 @@ export const ASNField = (props: ASNFieldProps): JSX.Element => {
);
}
// Display ASN as-is (no prefix added since backend now sends clean format)
const asnDisplay = asn; // Just use the value directly: "12345" or "IXP"
const tooltipLabel = org && org !== 'None' ? `${asnDisplay} - ${org}` : asnDisplay;
// Display ASN. Table cells keep the literal "IXP" label for IXP hops;
// the tooltip shows "IXP - <name>" when the IXP name is available in
// `org`, so the friendly name is still visible on hover.
const asnDisplay = asn; // "12345" or "IXP"
const tooltipLabel = org && org !== 'None'
? (asn === 'IXP' ? `IXP - ${org}` : `${asnDisplay} - ${org}`)
: asnDisplay;
return (
<Tooltip hasArrow label={tooltipLabel} placement="top">


@ -26,9 +26,52 @@ export const Path = (props: PathProps): JSX.Element => {
const output = response?.output as AllStructuredResponses;
const bg = useColorValue('light.50', 'dark.900');
const centered = useBreakpointValue({ base: false, lg: true }) ?? true;
const addResponse = useFormState(s => s.addResponse);
return (
<>
<PathButton onOpen={onOpen} />
<PathButton
onOpen={async () => {
// When opening the AS path modal, attempt on-demand ASN enrichment
// if the response does not already contain ASN organization data.
try {
onOpen();
if (!response) return;
const out = response.output as any;
const asnOrgs = out?.asn_organizations || {};
if (Object.keys(asnOrgs).length > 0) return;
// Collect unique ASNs from the output depending on type
let asns: string[] = [];
if (out?.routes) {
const all = out.routes.flatMap((r: any) => r.as_path || []);
asns = Array.from(new Set(all.map((a: any) => String(a))));
} else if (out?.hops) {
const all = out.hops.map((h: any) => h.asn).filter(Boolean);
asns = Array.from(new Set(all.map((a: any) => String(a))));
}
if (asns.length === 0) return;
const resp = await fetch('/api/aspath/enrich', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ as_path: asns }),
});
if (!resp.ok) return;
const j = await resp.json();
if (j?.success && j.asn_organizations) {
// Merge ASN orgs into the stored response and update state
out.asn_organizations = { ...(out.asn_organizations || {}), ...j.asn_organizations };
addResponse(device, { ...response, output: out });
}
} catch (e) {
// Ignore enrichment failures; the modal was already opened above.
// eslint-disable-next-line no-console
console.debug('AS path enrichment failed', e);
}
}}
/>
<Modal isOpen={isOpen} onClose={onClose} size="full" isCentered={centered}>
<ModalOverlay />
<ModalContent


@ -45,6 +45,11 @@ function* buildElements(
): Generator<FlowElement<NodeData>> {
let asPaths: string[][] = [];
let asnOrgs: Record<string, { name: string; country: string }> = {};
// For traceroute data we may have IXPs represented as asn === 'IXP' with
// the IXP name stored per-hop in hop.org. Collect per-path org arrays so
// nodes for IXPs can show the proper IXP name instead of the generic
// "IXP" label.
const pathGroupOrgs: Record<number, Array<string | undefined>> = {};
if (isBGPData(data)) {
// Handle BGP routes with AS paths
@ -70,20 +75,25 @@ function* buildElements(
} else if (isTracerouteData(data)) {
// Handle traceroute hops - build AS path from hop ASNs
const hopAsns: string[] = [];
const hopOrgs: Array<string | undefined> = [];
let currentAsn = '';
for (const hop of data.hops) {
if (hop.asn && hop.asn !== 'None' && hop.asn !== currentAsn) {
currentAsn = hop.asn;
hopAsns.push(hop.asn);
hopOrgs.push(hop.org ?? undefined);
}
}
if (hopAsns.length > 0) {
// Remove the base ASN if it's the first hop to avoid duplication
const filteredAsns = hopAsns[0] === base.asn ? hopAsns.slice(1) : hopAsns;
const removeBase = hopAsns[0] === base.asn;
const filteredAsns = removeBase ? hopAsns.slice(1) : hopAsns;
const filteredOrgs = removeBase ? hopOrgs.slice(1) : hopOrgs;
if (filteredAsns.length > 0) {
asPaths = [filteredAsns];
pathGroupOrgs[0] = filteredOrgs;
}
}
@ -182,13 +192,25 @@ function* buildElements(
const y = g.node(node).y - NODE_HEIGHT * (idx * 6);
// Get each ASN's positions.
// Determine display name for this node. Prefer ASN org mapping, but
// for traceroute IXPs prefer the per-hop IXP name if present.
let nodeName = asnOrgs[asn]?.name || (asn === '0' ? 'Private/Unknown' : `AS${asn}`);
if (asn === 'IXP') {
const ixpName = pathGroupOrgs[groupIdx]?.[idx];
if (ixpName && ixpName !== 'None') {
nodeName = ixpName;
} else {
nodeName = 'IXP';
}
}
yield {
id: node,
type: 'ASNode',
position: { x, y },
data: {
asn: `${asn}`,
name: asn === 'IXP' ? 'IXP' : asnOrgs[asn]?.name || (asn === '0' ? 'Private/Unknown' : `AS${asn}`),
name: nodeName,
hasChildren: idx < endIdx,
hasParents: true,
},