Hi,
My name is Gabriel Twigg-Ho.

I am a robotics engineering student.

This webpage describes some of my published work in greater detail.

View Resume

Featured Projects

Swarm Tracking Algorithm

Problem: Existing swarm algorithms only achieved 40% target tracking reliability.

Solution: Developed a three-state adaptive algorithm achieving 94.41% line-of-sight persistence.

View Project →
Task Scheduling

Online Task Scheduling

Problem: Traditional scheduling algorithms assume known task durations, but real-world systems face uncertainty.

Solution: Adapted classic heuristics (HEFT, CPoP) for online execution with real-time recalibration.

View Project →
CO2 Prediction Model

Problem: Excel-based CO2 forecasting delivered only 70% accuracy, requiring constant intervention.

Solution: Built an Ignition SCADA system with 95% hourly accuracy, selected as ICC 2024 finalist.

View Project →
AI-Powered Mobile Robot
In Progress

Problem: Bridging natural language commands with physical robot actions requires complex perception pipelines.

Solution: ROS 2 platform with YOLOv8 vision, voice control, and LLM integration on Raspberry Pi.

View Project →

How can a swarm of slow drones maintain persistent contact with a target moving twice their speed?

Red dot = Target • Yellow dot = Swarm Agent • Yellow circumference = Swarm Agent LOS radius
Note: the target is non-evasive

Line-of-Sight Persistence

From Unusable to Reliable and Deployable

Our Solution
94.41%

The Problem

The Real-World Context

This problem is critical in applications like law enforcement pursuing a suspect vehicle, or disaster response teams tracking a drifting vessel, where persistent tracking is paramount.

The Gap in Existing Research

The best line-of-sight score from existing research was only 40%. At that reliability, the swarm loses sight of the target 60% of the time. For real-world applications like search and rescue or surveillance, this simply isn't usable: a target that disappears 60% of the time isn't being tracked.

That's why we set out to develop a solution capable of achieving 90%+ LOS, a threshold where persistent tracking becomes viable for practical deployment.

Problem Constraints

  • Track a fast-moving, non-evasive target with a slow-moving swarm
  • Agent speed = 50% of target speed (2:1 speed disadvantage)
  • Agents can only communicate with K nearest neighbours
  • Score averaged over multiple target paths

Methodology

Core Hypothesis: No single behaviour can achieve 90% LOS. We need an adaptive solution.

1. Build the Framework

Created a 2D simulation optimization framework in Python, allowing rapid prototyping and testing of swarm behaviours.
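A minimal sketch of the kind of episode loop such a framework runs (the constants, names, and naive one-state chase behaviour below are illustrative stand-ins, not the project's actual code):

```python
import math
import random

LOS_RADIUS = 30.0                 # illustrative sensor range
TARGET_SPEED = 2.0
AGENT_SPEED = TARGET_SPEED * 0.5  # 2:1 speed disadvantage

def step_toward(pos, goal, speed):
    """Move `pos` toward `goal` by at most `speed` units."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist <= speed:
        return goal
    return (pos[0] + dx / dist * speed, pos[1] + dy / dist * speed)

def run_episode(n_agents=8, n_steps=500, seed=0):
    """Score one episode: fraction of timesteps with at least one agent in LOS."""
    rng = random.Random(seed)
    target = (0.0, 0.0)
    heading = rng.uniform(0, 2 * math.pi)
    agents = [(rng.uniform(-50, 50), rng.uniform(-50, 50)) for _ in range(n_agents)]
    in_los_steps = 0
    for _ in range(n_steps):
        # Non-evasive target: random-walk heading at constant speed.
        heading += rng.uniform(-0.2, 0.2)
        target = (target[0] + math.cos(heading) * TARGET_SPEED,
                  target[1] + math.sin(heading) * TARGET_SPEED)
        # Naive one-state behaviour: every agent chases the target directly.
        agents = [step_toward(a, target, AGENT_SPEED) for a in agents]
        if any(math.hypot(a[0] - target[0], a[1] - target[1]) <= LOS_RADIUS
               for a in agents):
            in_los_steps += 1
    return in_los_steps / n_steps  # LOS persistence score for this episode
```

A framework like this makes it cheap to swap in candidate behaviours and compare their averaged LOS scores across many target paths.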

2. Ideate

Explored various tracking and searching algorithms for the swarm, drawing from existing research and novel approaches.

3. Optimise & Test

  • Tested and optimised every combination of existing algorithms
  • Evaluated one-state, two-state, and three-state strategies
  • Iteratively tuned each algorithm's parameters; each had a handful to optimise (e.g., memory duration, prediction multipliers)
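The tuning step can be sketched as a simple grid search. The parameter names and the stand-in objective below are illustrative; the real framework averages the simulated LOS score over many target paths:

```python
from itertools import product

def evaluate(memory_duration, prediction_multiplier):
    # Placeholder objective: in the real framework this would run the
    # simulation over multiple target paths and return the mean LOS score.
    return 1.0 - abs(memory_duration - 40) / 100 - abs(prediction_multiplier - 1.5) / 10

def grid_search():
    """Sweep every parameter combination and keep the best-scoring one."""
    best_params, best_score = None, float("-inf")
    for memory, multiplier in product(range(10, 101, 10),
                                      [1.0, 1.25, 1.5, 1.75, 2.0]):
        score = evaluate(memory, multiplier)
        if score > best_score:
            best_params, best_score = (memory, multiplier), score
    return best_params, best_score
```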

4. Exploratory Design

Observed weaknesses in the simulation and designed entirely new, purpose-built behaviours to fill those strategic gaps.

The Solution: Three-State Architecture

The breakthrough came from a three-state solution, with a distinct algorithm for when an agent is in front of the target, behind it, or has no memory of it:

1. In Front → Aggressive Prediction

Prediction worked well all-round, but we observed that a more aggressive prediction multiplier could be used when in front. Standard prediction sometimes caused agents to miss the target's path; the aggressive variant kept agents positioned on the target's trajectory.

2. Behind → Predictive Pursuit

Testing showed prediction was the best-performing behaviour when behind the target. Agents use a predictive multiplier to anticipate where the target will be, rather than chasing where it was.

3. No Memory → Inverse Square Repulsion

When an agent loses sight of the target, separation kicks in via inverse-square repulsion. This forces agents to spread evenly across the map, maximizing sensor coverage for reacquisition.
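The three-state dispatch can be sketched as follows. The multipliers are illustrative, not our tuned values, and the geometry is simplified to a single desired heading vector per agent:

```python
PREDICT_AHEAD = 1.5        # prediction multiplier when behind (illustrative)
PREDICT_AHEAD_FRONT = 2.5  # aggressive multiplier when in front (illustrative)

def choose_velocity(agent, target, target_vel, neighbours, has_memory):
    """Return a desired heading vector for one agent under the three states."""
    if not has_memory:
        # State 3 - no memory: inverse-square repulsion from neighbours
        # spreads the swarm out to maximise coverage for reacquisition.
        fx = fy = 0.0
        for n in neighbours:
            dx, dy = agent[0] - n[0], agent[1] - n[1]
            d2 = dx * dx + dy * dy or 1e-9  # avoid division by zero
            fx += dx / d2
            fy += dy / d2
        return (fx, fy)
    # States 1 and 2 - with memory: aim at a predicted future position.
    rel = (agent[0] - target[0], agent[1] - target[1])
    in_front = rel[0] * target_vel[0] + rel[1] * target_vel[1] > 0
    k = PREDICT_AHEAD_FRONT if in_front else PREDICT_AHEAD
    predicted = (target[0] + target_vel[0] * k, target[1] + target_vel[1] * k)
    return (predicted[0] - agent[0], predicted[1] - agent[1])
```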

Results

94.41%
Line-of-Sight persistence score
Previous Best: 40% → Our Solution: 94.41%

By moving from static behaviours to dynamic role allocation, we demonstrated that in resource-constrained, decentralized systems, adaptability is more powerful than raw speed.

Tech Stack

  • Python
  • Pygame
  • NumPy
  • Git
View Code on GitHub →

Online Task Scheduling under Uncertainty

SC '25 Workshops Publication | SAGA Framework

During my exchange semester at Loyola Marymount University in Los Angeles, I was invited by Professor Jared Coleman to join his research team. Even after returning home to Australia, I continue to work remotely with Professor Coleman and his team, helping where I can.

Our research focuses on the question: "How do you schedule tasks on a supercomputer when you don't know how long they will take?"

Specifically, the research linked below focused on creating an automated framework for the SAGA Python library that adapts offline heuristics (such as HEFT and CPoP) for online execution. By simulating a feedback loop that recalibrates the schedule as tasks finish, we enabled static algorithms to handle runtime uncertainty dynamically.

Task Scheduling Visualization

Task graph visualization from our publication showing computational dependencies and scheduling complexity

Understanding the Task Scheduling Problem

Task scheduling aims to assign computational tasks to machines to minimize execution time while respecting dependencies. Traditional algorithms assume known task durations, but real-world systems face uncertainty from hardware variations, network conditions, and workload unpredictability. Our research adapts classic scheduling heuristics to handle this uncertainty in real time, making offline algorithms work effectively in dynamic, online environments.
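As a rough sketch of the re-calibration idea (this is not the SAGA API; a greedy earliest-finish scheduler merely stands in for heuristics like HEFT, and the names are my own): the schedule is re-computed over the remaining tasks each time a task finishes and its true duration is learned.

```python
def greedy_schedule(ready_tasks, machine_free, estimates):
    """Assign each ready task to the machine that can finish it earliest."""
    assignment, free = {}, dict(machine_free)
    for task in sorted(ready_tasks, key=lambda t: -estimates[t]):  # longest first
        m = min(free, key=free.get)
        assignment[task] = m
        free[m] += estimates[task]
    return assignment

def run_online(deps, estimates, actuals, machines):
    """Simulate execution, re-scheduling as actual durations are revealed."""
    remaining = set(estimates)
    done, machine_free, finish = set(), {m: 0.0 for m in machines}, {}
    while remaining:
        ready = [t for t in remaining if deps.get(t, set()) <= done]
        assignment = greedy_schedule(ready, machine_free, estimates)
        # Execute only the earliest-finishing task, then re-calibrate.
        t = min(assignment, key=lambda t: machine_free[assignment[t]] + actuals[t])
        m = assignment[t]
        start = max(machine_free[m],
                    max((finish[d] for d in deps.get(t, set())), default=0.0))
        finish[t] = start + actuals[t]   # the true duration is now known
        machine_free[m] = finish[t]
        estimates[t] = actuals[t]        # feedback: replace the estimate
        done.add(t)
        remaining.discard(t)
    return max(finish.values())          # makespan
```

The key point the sketch captures is the feedback loop: each completed task updates the information the next scheduling pass runs on.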

CO2 Prediction Model

Inductive Automation Conference 2024 Finalist

During my university placement at Carlton & United Breweries, I was given the opportunity to complete and deploy a CO2 prediction model for the brewery.

The project utilised Inductive Automation's Ignition SCADA platform to build a predictive system that forecasts CO2 storage levels hour-by-hour, replacing an unreliable Excel-based solution.

CO2 Model Interface

The Ignition-based CO2 prediction model interface showing real-time storage levels and forecasting

This work was submitted to the Ignition Community Conference's annual Discover Gallery, where it was selected as a finalist among the best projects of the year.

View Project on ICC Discover Gallery →
In Progress

AI-Powered Mobile Robot

ROS 2 / Raspberry Pi / Voice-Controlled Autonomous Platform

Project Overview

I'm building a mobile robot designed to bridge the gap between natural language and physical action. The goal is a system that understands context-aware commands like "follow the person in the red shirt" or "go to the kitchen," combining perception, language understanding, and autonomous navigation on a Raspberry Pi.
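As a sketch of the language layer, a transcribed utterance has to be mapped to a structured intent the navigation stack can act on. The intent names and fields below are illustrative, not the robot's actual interface:

```python
import re

def parse_command(utterance):
    """Map a transcribed voice command to a structured intent (illustrative)."""
    text = utterance.lower().strip()
    # "follow the person in the red shirt" -> follow intent with an attribute
    m = re.match(r"follow the (\w+)(?: in the (\w+ \w+))?", text)
    if m:
        return {"intent": "follow", "target": m.group(1), "attribute": m.group(2)}
    # "go to the kitchen" -> navigation goal
    m = re.match(r"go to the (\w+)", text)
    if m:
        return {"intent": "goto", "location": m.group(1)}
    return {"intent": "unknown", "raw": text}
```

On the real platform the attribute ("red shirt") would be grounded against YOLOv8 detections, and the intent routed to the appropriate ROS 2 behaviour.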

Development Roadmap

Phase | Goal | Status
A–C | Core System: Pi OS, ROS 2 environment, and low-latency camera streaming. | Done
D | Voice Interface: On-board offline speech recognition (Vosk). | Done
E | Simulation: Voice-to-motion logic and virtual drive kinematics. | Done
F | Vision: Off-board GPU object detection (YOLOv8) and person tracking over Wi-Fi. | Done
G | Electronics: Motor driver integration, PWM signal generation, and bench testing. G-7/G-8 calibration and 10-min soak test on hold. | Paused
Chassis | Physical Design: Chassis redesign and stability validation using NVIDIA Omniverse Isaac Sim. | In Progress
H | Motion: Closed-loop motor control and follow-me logic. Reached H-8/H-9 (save parameters, brown-out rehearsal); blocked on H-7 (bench test with wheels) while fixing an e-stop fault in follow mode. | In Progress
I–J | Intelligence: Obstacle safety systems and LLM-based visual reasoning. | Planned

Tech Stack

  • ROS 2 Iron
  • Raspberry Pi
  • Python
  • YOLOv8
  • OpenCV
  • LLM Integration
  • Vosk
  • pigpio
  • NVIDIA Isaac Sim

Progress

From concept to physical build, here’s a glimpse of where things stand.

Early concept design rendering of the mobile robot

Early concept design

Work in progress: physical prototype with electronics and motor

WIP prototype