Optimize LoRaWAN® Coverage with ThingPark

How to Design an Optimum LoRaWAN Radio Network with ThingPark’s Network Coverage Tool

Designing a LoRaWAN® network is a balancing act: you need strong, predictable coverage without overbuilding sites or blowing the budget. The ThingPark Network Coverage Tool (TNC) turns that challenge into a repeatable, data-driven workflow. Built by Actility and integrated with ThingPark Community, Wireless, and Enterprise, the tool lets you predict outdoor coverage, optimize the number and placement of gateways, and validate results against real device KPIs such as packet error rate and SNR.

With Version 2.0, TNC adds two high-impact capabilities: Smart Antenna Selection, which activates the fewest candidate sites needed to reach a defined coverage target inside your area of interest, and a Device Data Set overlay to compare prediction heatmaps with actual fleet performance—or even with the planned location of future devices. Under the hood, engineering-grade propagation models, real antenna patterns, terrain and diffraction data, and bi-directional link budgets help align simulations with what you’ll see in the field.

This article walks you through when and how to use TNC, what to expect from the models, how to run and interpret simulations, and how to decide where to densify for tougher scenarios like deep indoor or basement coverage.

Key Takeaways

  • Predict outdoor LoRaWAN® coverage and visualize heatmaps by penetration level (Outdoor, Daylight, Deep Indoor, Basement).
  • Smart Antenna Selection reaches a target coverage (%) inside your AOI with the fewest gateways.
  • Overlay predictions with device KPIs (health, PER, SNR) and planned device locations to validate and prioritize densification.
  • Seamless ThingPark integration (Community/Wireless/Enterprise) with SSO and automatic import of stations, antennas, heights, environments.
  • SAFE propagation model tuned on >370 links (≈1 dB average delta); an optimistic model is available for sensitivity checks.
  • Bi-directional link budgets (UL/DL) with realistic interference margins; use RX2 assumptions to spot DL limits.
  • Real commercial antenna patterns (H & V), terrain/DSM/DTM and diffraction for field-like predictions.
  • Regulatory presets by country (bands/EIRP) keep scenarios deployable (EU868, US915, etc.).
  • Clean inputs matter: accurate device TX, (possibly negative) antenna gains, height ~0.5 m for worst-case, valid coordinates.
  • Exports provide an audit trail: KMZ/PNG/TIF + CSV/JSON; hosted results retained ~90 days → archive locally.
  • Credits model: free credits (5 base stations / 90 days) used before paid credits via ThingPark Market.
  • On-prem/OCP users can plan today via CSV imports; API or on-prem integrations available via Actility.

Why RF Planning Still Matters for LoRaWAN®

LoRaWAN® thrives on long-range, low-power links—but coverage is never “one size fits all.” Real deployments face competing pressures: guarantee service where devices actually live (basements, meter pits, deep indoor cores) while keeping the site count—and therefore CAPEX/OPEX—under control. Without disciplined planning, you risk white spots, overbuilt clusters, or networks tuned for “daylight indoor” that crumble when use cases shift to deep indoor or underground.

Unlike cellular, LoRaWAN® performance hinges on bi-directional link budgets that vary with spreading factor, repetitions, antenna gains/losses, terrain, and interference in unlicensed ISM bands. Uplink often budgets differently from downlink (e.g., RX2 constraints), so a design that seems fine in one direction can bottleneck in the other. Add in topography, rooftop heights, and realistic antenna patterns, and intuition alone won’t cut it.

Good RF planning lets you:

  • Predict coverage by penetration level (Outdoor, Daylight, Deep Indoor, Basement) before you spend on hardware.
  • Quantify trade-offs between height, antenna gain, and densification to hit service targets at minimum cost.
  • De-risk scale-up by checking that device capabilities (TX power, negative antenna gains, allowed repetitions) match the modeled assumptions.
  • Validate and iterate by comparing predictions with device KPIs (health, PER, SNR), so densification happens precisely where it matters.

In short, planning aligns budgets, physics, and business goals. It turns LoRaWAN® from a promising pilot into a resilient, right-sized network that supports today’s use cases—and tomorrow’s tougher ones—without surprise costs.

Meet the ThingPark Network Coverage Tool (TNC)

The ThingPark Network Coverage Tool (TNC) is Actility’s in-house RF prediction and optimization engine for outdoor LoRaWAN® planning. It’s designed to help you predict coverage, optimize site count, and validate against real device KPIs—all tightly integrated with the ThingPark ecosystem.

What it is (at a glance)

  • Purpose-built for outdoor micro-gateways installed on rooftops/high points.
  • Not for indoor gateway planning (indoor requires different propagation models).
  • Delivers coverage heatmaps by penetration level (Outdoor, Indoor Daylight, Deep Indoor, Basement).
  • Supports Smart Antenna Selection to reach a target coverage % with the fewest sites.

Deep integration with ThingPark

  • Works with ThingPark Community, ThingPark Wireless, and ThingPark Enterprise via SSO.
  • Automatic ingestion of your base stations (IDs, coordinates, height above ground), antenna models/gains, cable losses, and environment (dense urban/urban/suburban/rural).
  • Authoritative edits are done in your ThingPark account; TNC reflects them automatically.
  • Alternatively, CSV import lets you define candidate sites; the tool validates required fields and flags errors.

Realistic by design

  • Uses real commercial antenna patterns (H & V).
  • Applies country regulatory presets (band/EIRP limits) when you set the country of operation.
  • Computes bi-directional link budgets (UL/DL) with sensible interference margins for ISM operation.
  • Leverages terrain & diffraction data to reflect topography and clutter.

Typical use cases

  • Greenfield design: predicts coverage and estimates the minimum site count to hit a target %.
  • Densification: surfaces white spots and indicates where to add gateways.
  • New use cases (deep indoor/basement): tests feasibility across penetration levels before committing CAPEX.
  • Field validation: overlays coverage with device KPIs (PER, SNR, health) and future device locations.

What it’s not

  • A substitute for indoor RF planning.
  • A black box: inputs (devices, antennas, environments, margins) are transparent and configurable, so you can align simulations with real-world constraints.

“The ThingPark Network Coverage Tool is an in-house RF coverage prediction tool for LoRaWAN planning.”
Ramez Soss, Actility

What’s New in Version 2.0

Version 2.0 focuses on doing more with fewer sites and closing the loop with real devices. Two headline features drive this:

  • Smart Antenna Selection: given a polygon (area of interest) and a coverage target (%), the tool automatically activates the minimum subset of candidate gateways to meet the target—no more overbuilding.
  • Device Data Set Overlay: superimpose your deployed (or planned) devices on prediction heatmaps and compare with real KPIs (health, PER, SNR) to confirm coverage or pinpoint where to densify.

How Smart Antenna Selection works

  • You provide: a candidate site list (auto-import or CSV), a polygon, and a coverage objective per penetration level.
  • The optimizer runs iterative picks to reach the target with the fewest gateways.
  • If some candidates remain inactive, they likely sit outside the polygon or add marginal benefit inside it.
  • If the target isn’t fully met, you’ll see it flagged; the remedy is to add eligible candidates within the polygon or adjust the target (especially for Deep Indoor/Basement).
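
The iterative selection described above can be pictured as a greedy set-cover pass. The sketch below is an illustrative assumption about how such an optimizer behaves, not Actility's actual algorithm; `coverage` (a hypothetical input) maps each candidate gateway to the set of grid cells it covers inside the polygon.

```python
def smart_selection(coverage, total_cells, target_pct):
    """Greedy sketch of minimum-gateway selection (illustrative only).

    coverage: dict mapping candidate gateway ID -> set of covered grid-cell IDs
    total_cells: number of grid cells inside the AOI polygon
    target_pct: coverage objective, e.g. 95.0
    Returns (selected IDs, achieved %, target_met flag).
    """
    selected, covered = [], set()
    remaining = dict(coverage)
    while remaining and 100.0 * len(covered) / total_cells < target_pct:
        # Pick the candidate that adds the most not-yet-covered cells.
        best = max(remaining, key=lambda gw: len(remaining[gw] - covered))
        gain = remaining.pop(best) - covered
        if not gain:  # leftover candidates add nothing inside the AOI
            break
        covered |= gain
        selected.append(best)
    achieved = 100.0 * len(covered) / total_cells
    return selected, achieved, achieved >= target_pct
```

Candidates left unselected either lie outside the AOI (they cover none of its cells) or add only marginal cells already covered, matching the behavior described above.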

How Device Data Set overlay helps

  • Export devices with GPS from your ThingPark account and overlay them on the simulated heatmap.
  • Compare predicted vs observed (PER, SNR, connection health) to validate assumptions.
  • Drop in future device locations to anticipate blind spots before rollout.

Workflow enhancements you’ll notice

  • A more actionable dashboard (recent runs, status, eligible base stations).
  • Preferences to prefill typical parameters (country/ISM limits, antenna model, cable loss, device profiles) and speed up new simulations.
  • CSV validation that highlights missing mandatory fields (e.g., coordinates or antenna pattern) before a run.
  • One-click duplication of a simulation to A/B test assumptions (e.g., antenna gain, repetitions, penetration levels).
  • Complete exports (KMZ/PNG/TIF + JSON/CSV) for auditability and GIS workflows (remember: hosting retention is 90 days—archive locally).

“Smart Antenna Selection determines which gateways should be activated to fulfill your coverage requirements with the minimum number of gateways.”
Ramez Soss, Actility

Under the Hood: Models, Data, Link Budgeting

Bi-directional link budget

TNC computes a complete uplink and downlink budget, not just path loss. It factors: device conducted TX power, device antenna gain (which can be negative), gateway sensitivity, RX2 settings for downlink, cable losses, rooftop heights, and engineering margins (“noise rise”) to reflect interference/collisions typical of ISM bands. Use the defaults if you lack measurements; for advanced tuning, run noise scans from high points and set the margins accordingly.
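
As a back-of-envelope illustration of the uplink side of such a budget, the maximum tolerable path loss can be sketched as below. All numeric values are illustrative assumptions, not TNC's internal defaults.

```python
def max_path_loss_ul(device_tx_dbm, device_ant_gain_dbi, gw_ant_gain_dbi,
                     cable_loss_db, gw_sensitivity_dbm, noise_rise_db):
    """Maximum tolerable uplink path loss (dB), illustrative only.

    device_ant_gain_dbi may be negative for small embedded antennas.
    noise_rise_db is the interference margin for ISM-band operation.
    """
    eirp = device_tx_dbm + device_ant_gain_dbi
    return eirp + gw_ant_gain_dbi - cable_loss_db - (gw_sensitivity_dbm + noise_rise_db)

# Illustrative worst-case device: 14 dBm conducted, -3 dBi antenna,
# gateway with 3 dBi antenna, 1 dB cable loss, -137 dBm SF12 sensitivity,
# 3 dB noise rise.
mapl = max_path_loss_ul(14, -3, 3, 1, -137, 3)  # -> 147 dB
```

Comparing this maximum allowable path loss against predicted loss per location is what turns a propagation map into a coverage map; the downlink direction gets its own budget (with RX2 assumptions), which is why UL and DL can be limiting in different places.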

Propagation models (field-tuned)

Two empirical models are available:

  • SAFE model — tuned against extensive field campaigns (>370 measured links) across dense urban, urban, suburban, and rural morphologies; it delivers an ~1 dB average delta between prediction and measurements.
  • Alternative (“optimistic”) model — slightly optimistic for most morphologies (except urban), useful for upper-bound exploration. Pick SAFE for planning and use the alternative as a sensitivity check.

Antenna patterns that reflect reality

The simulator uses real commercial antenna patterns in both horizontal and vertical planes, not templates. Vertical patterns combine with diffraction and site heights to estimate incidence angles toward first obstacles. If your antenna model is missing, Actility can add it; otherwise select the closest available pattern and adjust gain/losses.

Terrain, clutter, and diffraction

Coverage is shaped by landform and skyline, so TNC incorporates:

  • Europe: DTM-based elevation.
  • Outside Europe: DSM datasets (~30 m resolution) such as JAXA releases.

A diffraction model accounts for knife-edge and terrain shielding, improving predictions in hilly or high-rise environments.
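
For intuition on what a knife-edge model contributes, the standard single-obstacle approximation (ITU-R P.526 style) relates the Fresnel parameter v to excess loss. This is the generic textbook formula, not necessarily the exact model TNC implements:

```python
import math

def knife_edge_loss_db(v):
    """Approximate single knife-edge diffraction loss (dB) vs Fresnel parameter v.

    Standard ITU-R P.526 approximation, valid for v > -0.78;
    below that the obstacle adds essentially no excess loss.
    """
    if v <= -0.78:
        return 0.0
    return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

# Grazing incidence (v = 0) already costs about 6 dB, which is why rooftop
# height relative to the skyline matters so much.
```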

Regulatory presets by country

When you set the country of operation, the tool applies the matching ISM plan and power constraints automatically (e.g., EU868, US915) for both uplink and downlink. You can still explore what-ifs, but staying within presets ensures simulations align with deployable configurations.

Penetration levels = explicit design targets

Rather than treating indoor as a single bucket, TNC simulates Outdoor, Indoor Daylight, Deep Indoor, and Basement. Each level implies different additional losses, helping you plan densification for specific use cases (e.g., meters in basements vs. sensors near windows).

Spreading factors and repetitions

Uplink coverage can depend critically on allowing higher SFs (e.g., SF12) and repetitions—if your devices and local regulations support them. Model only the combinations your fleet can actually use; otherwise predictions will overshoot. For downlink, TNC references the RX2 link budget (often SF12 for maximum margin).
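
To see why repetitions matter: if a single uplink attempt is lost with probability p, sending n copies drops the effective loss rate to p^n. A toy sketch, assuming independent losses (optimistic under bursty interference, but it shows the qualitative effect):

```python
def effective_per(per_single, repetitions):
    """Effective packet error rate with n repetitions.

    Assumes losses are independent across repetitions, so the packet is
    lost only if every copy is lost.
    """
    return per_single ** repetitions

# A marginal 20% single-shot PER falls to roughly 0.8% with 3 repetitions.
```

This is exactly why modeling repetitions your fleet cannot actually send inflates predicted coverage.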

Data integrity and sources of truth

  • Station metadata (coords, height AGL, environment, antenna/cable) comes from your ThingPark account and is read-only inside TNC—edit in ThingPark for consistency.
  • CSV import follows a strict schema (LRR ID, lat/lon, height, environment, antenna, cable loss). The validator flags missing/invalid fields before you run.

Bottom line: TNC’s physics—field-tuned propagation, real antenna patterns, terrain/diffraction, and strict regulatory presets—aim to make predicted heatmaps behave like the real world, so your design choices map cleanly to coverage outcomes.

Penetration Levels Simulated

TNC models coverage as four distinct penetration levels so you can size the network to the actual environment and use case—not just a generic “indoor.”

  • Outdoor: line-of-sight or light clutter (streets, parks, rooftops). Baseline for greenfield sizing; useful to validate UL/DL symmetry and terrain impacts.
  • Indoor Daylight (first hop): near windows/openings, ground or upper floors. A good proxy for “light indoor” sensors; often achievable without densification if rooftops are well placed.
  • Deep Indoor: interior cores behind multiple walls or dense building materials. Typically requires extra height/gain or additional rooftops; UL may still work while DL becomes limiting.
  • Basement: underground car parks, utility rooms, pit meters, partially buried devices. The highest-loss class; plan for targeted densification and realistic device capabilities (SF/repetitions).

How to use these levels

  • Define targets per level (e.g., Outdoor ≥95% vs. Deep Indoor ≥70%) based on your SLA and device mix.
  • Run optimization per target: Smart Antenna Selection may meet Outdoor with minimal sites, but Deep Indoor/Basement usually need added candidates inside your AOI polygon.
  • Check UL vs DL limits: If DL is limiting at tougher levels, prioritize antenna height, pattern choice, or extra rooftops rather than only boosting device UL assumptions.

Walkthrough: Run a Simulation in 5 Steps

Goal: go from zero to a decision-ready heatmap (and, if needed, an optimized site list) with clean inputs and reproducible outputs.

Step 1 — Set Preferences (once)

  • Country of operation → applies ISM plan & power limits automatically.
  • Default RF stack → antenna model, cable loss, max TX power.
  • Device profiles → conducted TX, antenna gain (allow negative), device height = 0.5 m for worst-case.
  • Noise rise (UL/DL) → keep defaults unless you have spectrum scans.

Tip: Getting preferences right saves time—new simulations prefill from here.

Step 2 — Provide Candidate Sites

Option A: Auto-import from ThingPark
Eligible outdoor rooftop stations (with valid coordinates) are pulled via SSO.

Option B: Import a CSV
Use the sample CSV structure; all fields below are mandatory.

  • LRR_ID: unique gateway ID (string)
  • Latitude / Longitude: decimal degrees (WGS84), with valid values
  • Height_AGL_m: antenna height above local ground (meters)
  • Environment: DenseUrban / Urban / Suburban / Rural
  • Antenna_Model: must match a listed pattern (H & V available)
  • Cable_Loss_dB: total feeder + connectors (dB)

Common CSV validation errors to fix before running

  • Missing Antenna_Model or coordinates
  • Non-numeric Height_AGL_m / Cable_Loss_dB
  • Environment value not in the allowed set
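
A minimal pre-flight check mirroring these rules can be scripted before upload. The field names follow the sample schema above; the allowed-environment set is from the schema, while the exact messages and helper names are illustrative:

```python
import csv

MANDATORY = ["LRR_ID", "Latitude", "Longitude", "Height_AGL_m",
             "Environment", "Antenna_Model", "Cable_Loss_dB"]
ALLOWED_ENVIRONMENTS = {"DenseUrban", "Urban", "Suburban", "Rural"}

def validate_rows(rows):
    """Return (row_number, problem) tuples for a candidate-site CSV."""
    errors = []
    for i, row in enumerate(rows, start=2):  # row 1 is the header
        for field in MANDATORY:
            if not (row.get(field) or "").strip():
                errors.append((i, f"missing {field}"))
        for field in ("Latitude", "Longitude", "Height_AGL_m", "Cable_Loss_dB"):
            try:
                float(row.get(field) or "")
            except ValueError:
                errors.append((i, f"non-numeric {field}"))
        if row.get("Environment") not in ALLOWED_ENVIRONMENTS:
            errors.append((i, "invalid Environment"))
    return errors

def validate_file(path):
    with open(path, newline="") as f:
        return validate_rows(csv.DictReader(f))
```

Running a check like this locally avoids burning a simulation attempt on a file the tool's own validator would reject anyway.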

Step 3 — Configure Radio Parameters

  • Propagation model: SAFE (recommended) or the alternative “optimistic”.
  • Penetration levels: choose Outdoor, Daylight, Deep Indoor, Basement (select one or several).
  • Uplink: max SF allowed by regulation & device, plus repetitions if supported.
  • Downlink: plan with RX2 (often SF12) for maximum margin.
  • Terrain/Diffraction: keep recommended defaults unless you have a reason to change.

Reality check: Don’t model UL repetitions or SF12 if your devices or policy won’t actually allow them.

Step 4 — (Optional) Optimize with Smart Antenna Selection

  • Draw your polygon (area of interest), set a coverage target (%) per level.
  • Run the optimizer to activate the fewest sites that meet the target.
  • If target isn’t fully met: add more candidates inside the polygon or adjust the target (especially for Deep Indoor/Basement).

Step 5 — Run, Review, Export

  • Heatmaps: inspect by level; note UL vs. DL limiting factors per area/site.
  • Device overlay (if used): compare predicted vs. actual KPIs (PER, SNR, health).
  • Export package: KMZ/PNG/TIF (maps) + CSV (base stations) + JSON (settings) for auditability and GIS workflows.
  • Retention: results hosted ~90 days—archive exports for long-term access.

Quick loop: Duplicate the simulation to A/B test assumptions (e.g., antenna gain, repetitions, or penetration targets) without re-entering data.

“Before launching a new simulation, check your user preferences and set default settings for your account.”
Ramez Soss, Actility

Interpreting Results

A good read of the outputs turns a pretty heatmap into concrete build decisions. Focus on: (a) penetration-level coverage, (b) the limiting link budget (UL vs DL), (c) device KPI overlays, and (d) reproducible exports.

Read heatmaps by penetration level

  • Compare levels side-by-side (Outdoor → Daylight → Deep Indoor → Basement). Expect shrinkage as losses increase; that shrinkage indicates where height/gain/density must rise.
  • Look for structural patterns: valleys behind buildings (diffraction shadows), ridgelines with excellent reach, street canyons, and pockets that persist across levels (white spots).
  • Decide targets per level: e.g., “Outdoor ≥95% within AOI; Deep Indoor ≥70% in priority blocks.” Use Smart Antenna Selection to test if targets are attainable with current candidates.

Use the limiting factor (UL vs DL) to choose the fix

  • UL-limited areas: review the device side of the modeled budget (TX power, antenna gain, allowed SF/repetitions) and, if those are already realistic, add gateway height/gain or extra sites.
  • DL-limited areas: only gateway-side levers help. Raise antenna height or gain, revisit RX2 assumptions and pattern/downtilt, or add rooftops; device UL tweaks won't move DL.
  • Shortfalls in both directions across broad areas usually call for densification inside the AOI rather than parameter tuning.

Overlay devices and KPIs to validate (or falsify) assumptions

  • Match prediction to reality by plotting devices with GPS. Compare connection health, uplink PER, and average SNR against the predicted class (Outdoor/Daylight/Deep/Basement).
  • Prioritize fixes where prediction and KPIs disagree the most (e.g., predicted Daylight but poor health/PER) and where business value is highest.
  • Future devices: drop planned locations on the map to catch expected holes before rollout.
  • Treat KPI thresholds relatively across your fleet and region (hardware, firmware, duty-cycle, and SF policies vary). Use the tool to find outliers and clusters rather than chasing absolute numbers.
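
One way to find the "prediction vs KPI disagreement" described above is to score each device by how far its observed PER exceeds what its predicted class would suggest. The field names and per-class ceilings below are illustrative assumptions, not tool outputs:

```python
# Rough per-class "acceptable" uplink PER ceilings (assumed, tune per fleet).
PER_EXPECTATION = {
    "Outdoor": 0.02,
    "Daylight": 0.05,
    "DeepIndoor": 0.10,
    "Basement": 0.20,
}

def find_outliers(devices, factor=2.0):
    """devices: iterable of dicts with 'id', 'predicted_class', 'observed_per'.

    Returns IDs whose observed PER exceeds the class ceiling by `factor`,
    sorted worst-first: candidates for densification or input review.
    """
    flagged = []
    for d in devices:
        ceiling = PER_EXPECTATION[d["predicted_class"]]
        if d["observed_per"] > factor * ceiling:
            flagged.append((d["observed_per"] / ceiling, d["id"]))
    return [dev_id for _, dev_id in sorted(flagged, reverse=True)]
```

Because absolute PER varies with hardware, firmware, and SF policy, a relative score like this surfaces clusters and outliers without depending on fleet-wide thresholds.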

Exports = audit trail and collaboration

  • Archive the full bundle for each run: KMZ/PNG/TIF (maps), CSV (base stations), JSON (settings).
  • Keep a simple naming/versioning scheme (e.g., AOI_v12_SAFE_SF12x3_2025-09-01).
  • Share KMZ with stakeholders for quick Google Earth reviews; use PNG/TIF for slide decks and GIS; keep JSON/CSV so the run is fully reproducible.

Common symptoms → likely remedies

  • Outdoor target met, Deep Indoor far below target: add candidates inside the AOI; raise rooftop height; consider higher-gain or directional antennas; recheck device UL repetitions policy.
  • Large areas DL-limited, UL looks fine: increase gateway height/gain or add sites; revisit RX2 configuration and antenna pattern/downtilt; device UL tweaks won't help here.
  • Prediction says “Daylight,” KPIs show poor health/PER: verify device model assumptions (TX power, negative antenna gains), coordinate accuracy, and spectrum noise; adjust the noise margin if scans show a higher floor.
  • Optimizer leaves many candidates inactive: they're likely outside the polygon or add marginal benefit; add new candidates within the polygon or relax the target for tougher levels.
  • Small persistent holes across all levels: topography/clutter shadows; try a localized rooftop nearby with a height advantage, or a tighter beam to punch through.

Bottom line: Let penetration heatmaps set the ambition, let UL/DL limiting dictate the technical lever, let device KPIs ground decisions in reality, and let exports keep your design choices auditable and easy to share.

Smart Antenna Selection in Practice

What you provide

  • AOI polygon: draw the exact geography you care about (keep it tight).
  • Coverage target(s): e.g., Outdoor ≥95%; Deep Indoor ≥70%.
  • Candidate sites: auto-imported or CSV (coords, height AGL, antenna model, cable loss, environment). Include more than you think you’ll need, especially inside the AOI.

What the optimizer does

  • Runs iterative picks to activate the minimum subset of candidates that achieves your target(s) within the AOI only.
  • Ignores candidates that are outside the AOI or add only marginal improvement inside it.
  • Flags “target not fully met” when no subset can reach the goal with the current candidates.

Reading the output

  • Selected sites: the activations the tool chose (your lean build).
  • Inactive sites: not needed or outside AOI (don’t force them unless your business case demands redundancy).
  • Coverage summary: achieved % per penetration level (and where UL vs. DL is limiting).

If the target isn’t met: a simple playbook

  1. Enrich candidates inside the AOI (more rooftops, better heights).
  2. Improve geometry: raise heights, consider directional or higher-gain antennas for stubborn pockets.
  3. Level realism: for Deep Indoor/Basement, confirm device capabilities (SF/repetitions) and noise margins.
  4. Refine the AOI: exclude low-value fringes; run separate optimizations for distinct zones.
  5. Adjust targets: set tiered goals (e.g., Outdoor 95%, Daylight 85%, Deep Indoor 70%).

Modeling tips so the optimizer makes sense

  • Candidate density matters: thin candidate lists inside the AOI lead to “not met.”
  • Keep inputs realistic: device TX power, negative antenna gains, RX2 settings—over-optimism inflates coverage.
  • Mind credits: each base station included in a run consumes credit—scope candidate lists accordingly.
  • One variable at a time: duplicate the simulation and A/B test (heights, antenna pattern, target %) to see which lever moves coverage most.

Quick guide (symptom → likely fix)

  • Many candidates left inactive: they're outside the AOI or add little inside it. Add stronger candidates inside the AOI; keep fringe sites for a separate run.
  • Outdoor target met, Deep Indoor missed: add rooftops within the AOI; raise heights; test directional/higher-gain antennas; verify device SF/repetition policy.
  • DL-limited patches dominate: increase gateway height/gain, adjust downtilt/pattern, or densify; UL tweaks won't fix DL limits.
  • Optimizer oscillates near the target: tighten the AOI, remove borderline areas, or define tiered targets per level; small geometry changes can stabilize the pick set.

Bottom line: draw a precise AOI, feed a rich candidate set inside it, keep assumptions realistic, and let the optimizer hand you the leanest site list that meets your coverage goal.

Limits, Assumptions, and Best Practices

What the tool is (and isn’t)

  • Designed for outdoor rooftop/high-point gateways. Indoor gateway planning requires different propagation models—treat TNC’s indoor penetration levels as device reception scenarios, not indoor AP placement guidance.
  • Prediction ≠ promise. Outputs depend on inputs; unrealistic device or antenna assumptions will skew coverage.

Key assumptions to keep realistic

  • Device capabilities: use the worst-case models you expect in the field—conducted TX power, negative antenna gains, permitted SF and repetitions, and device height ≈ 0.5 m for conservative planning.
  • Downlink specifics: DL often limits via RX2; don’t assume UL tweaks fix DL gaps.
  • Noise/interference: defaults are sane for ISM bands, but spectrum scans (if available) give better margins; update the noise rise accordingly.
  • Terrain/clutter: Europe uses DTM; outside Europe DSM (~30 m). Very fine local clutter (e.g., single-building renovations) won’t be fully captured—validate on site.

Data hygiene & governance

  • Single source of truth: edit base station data (coords, height AGL, environment, antenna, cable) in ThingPark; TNC ingests it read-only to prevent drift.
  • CSV discipline: ensure mandatory fields are complete and consistent with the antenna catalog; run the validator and fix errors before launch.
  • Versioning: export and archive KMZ/PNG/TIF + CSV/JSON for each run; keep a naming scheme for reproducibility.

Model choice & sensitivity

  • Plan with SAFE (field-tuned, conservative).
  • Use the alternative/optimistic model for what-if sensitivity—never as the sole basis for CAPEX.

When to densify vs. tune

  • UL-limited & broad → consider height/gain and additional sites.
  • DL-limited pockets → prioritize gateway height, pattern/downtilt, or extra rooftops; UL tweaks won’t help.
  • Deep Indoor/Basement targets → expect targeted densification within the AOI; verify devices truly support the modeled SF/repetitions.

Operational good practices

  • Iterate fast: duplicate a baseline run; change one lever at a time (height, pattern, target %, candidate set).
  • AOI discipline: draw only where service matters; split large metros into zones to avoid optimizer “averaging.”
  • Stakeholder alignment: share KMZ for quick visual buy-in; attach JSON/CSV so engineering can reproduce outcomes.

Credits & Availability

How credits work

  • Free credits first: Run coverage for up to 5 base stations every 90 days. These refresh automatically and are consumed before any paid credits.
  • Paid credits for scale: Purchase additional credits via ThingPark Market when your simulations include more base stations or frequent iterations.
  • Per-run accounting: Each base station included in a simulation consumes one credit.
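
The accounting is simple enough to sanity-check before scoping a run. A sketch based on the rules above, treating the 5 free credits as a fixed pool within one 90-day window:

```python
def credits_needed(stations_per_run, runs, free_credits=5):
    """Paid credits required for a batch of simulations.

    One credit per base station per run; free credits (refreshed every
    90 days) are consumed before paid ones.
    """
    total = stations_per_run * runs
    return max(0, total - free_credits)

# Example: 12 stations across 3 A/B runs = 36 credits; 5 free -> 31 paid.
```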

Where it's available

  • ThingPark Community: Immediate access with SSO and free credits to get started.
  • ThingPark Wireless & ThingPark Enterprise: Fully supported, with the same planning and optimization features.
  • On-prem / OCP environments: Use CSV imports to plan with your own candidate lists today. For API bridges or an on-prem deployment of the tool at scale, contact Actility to discuss options.

Good practice

  • Scope candidate lists thoughtfully (credits are counted per base station).
  • Use duplication of simulations for A/B testing rather than rebuilding from scratch.
  • Archive exports (KMZ/PNG/TIF + CSV/JSON) since hosted results are retained for ~90 days.

Frequently Asked Questions (FAQ) - LoRaWAN® Coverage with ThingPark

Does TNC use real antenna patterns?
Yes. TNC includes real commercial antenna patterns (horizontal & vertical). If a model is missing, Actility can add it.

Can TNC simulate indoor coverage?
Yes, as device penetration levels (Indoor Daylight, Deep Indoor, Basement). TNC does not plan indoor gateway placement.

Which propagation models does TNC use?
Two empirical models tuned on field data: SAFE (planning default, ~1 dB average delta vs. >370 links) and an optimistic alternative for sensitivity checks.

What terrain data does the tool rely on?
DTM in Europe; DSM (~30 m resolution) elsewhere (e.g., JAXA). Diffraction is modeled to capture shielding and incidence angles.

What does the link budget include?
It is bi-directional (UL & DL), including device TX, antenna gains/losses (device and gateway), RX2 for DL, rooftop height, and noise rise margins for ISM.

Why is downlink often the limiting factor, and how do I fix it?
DL often hinges on RX2. Fixes are gateway height/gain, pattern/downtilt, or additional sites. UL tweaks won't resolve DL limits.

Can I set different coverage targets per penetration level?
Yes. Define separate coverage % targets (e.g., Outdoor 95%, Deep Indoor 70%) and run Smart Antenna Selection accordingly.

Why does the optimizer leave some candidates inactive?
They're typically outside the AOI or add marginal improvement. Add stronger candidates inside the AOI or refine the polygon.

How are credits consumed?
One credit per base station per run. Free credits (5 BS / 90 days) are used before paid credits.

Is there a limit on the number of base stations?
No strict tool limit for visualization/export; your platform license may impose caps.

How do I plan conservatively?
Model the worst-case device (TX power, possibly negative antenna gain, height ~0.5 m), use SAFE, and keep noise margins conservative unless you have spectrum scans.

How long are results retained?
Hosted outputs are retained for ~90 days. Export & archive KMZ/PNG/TIF + CSV/JSON for reproducibility.

About Actility

Media contact: marketing@actility.com – https://www.actility.com/contact/

Why choose Actility?

At Actility, we are passionate about unlocking the full potential of IoT for businesses and communities around the world. Join us as we continue to innovate, collaborate, and lead the way in connecting the digital and physical realms through cutting-edge IoT solutions.

© 2024 Actility. All Rights Reserved.