Utku Melemetci, researcher. maker.
Hi! I'm Utku. I'm a 3rd year undergraduate CS student at Cornell University. I'm interested in tackling challenging problems in robotics, low-level and systems software, and computer vision. I am fundamentally curious and value learning; I enjoy working in researchy environments where I have ample opportunity to obtain new knowledge and experiment with new ideas.
As a researcher, I've had the pleasure of working with Dr. Kirstin Petersen at Cornell, Dr. Angelique Taylor at Cornell Tech, as well as Group 76 at MIT Lincoln Labs. I'm also on the autonomy subteam of Cornell Electric Vehicles, where I work on software that allows our model- and full-scale electric vehicles to drive autonomously.
If you want to talk to me about anything at all, you can send me an email at contact@utku.sh. You can also connect with me on LinkedIn.
projects

rrt-gpu
I’m working on accelerating the classic RRT algorithm. The goal is to parallelize tree expansion to more effectively explore high-dimensional spaces. I may port the algorithm to an FPGA one day to learn more about accelerator design.
Currently, there is only a CPU implementation available. A KD-tree is used for nearest-neighbor lookup, though I am exploring alternatives that may be more amenable to a GPU/FPGA implementation.
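To illustrate the baseline algorithm, here's a minimal 2D RRT sketch in Python. This is not the project's code: brute-force nearest neighbor stands in for the KD-tree, and the 10×10 workspace and `is_free` check are made up for the example.

```python
import math
import random

def rrt(start, goal, is_free, step=0.5, iters=2000, goal_tol=0.5, seed=0):
    """Minimal 2D RRT: grow a tree toward random samples until near the goal."""
    rng = random.Random(seed)
    nodes = [start]      # tree vertices
    parent = {0: None}   # index -> parent index
    for _ in range(iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        # Nearest neighbor (brute force here; a KD-tree makes this O(log n))
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        # Steer: take at most one step toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d) if d > step else sample
        if not is_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the root to recover the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0),
           is_free=lambda p: 0 <= p[0] <= 10 and 0 <= p[1] <= 10)
```

The sequential dependency is visible here: each iteration's nearest-neighbor query depends on the nodes added by all previous iterations, which is exactly what makes naive parallelization hard.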

x86ISTMB
I was a major contributor to my CS 3110 group’s final project: an optimizing compiler to x86 for our own language written in OCaml. I worked on the intermediate representation generation, register allocation, bug fixes in emission, and randomized property-based testing. I also created the continuous integration pipeline and the language documentation.
The register allocator uses an algorithm called linear scan. Linear scan was attractive over alternatives like graph coloring due to its simplicity. It is also the algorithm of choice in many JIT compilers due to its lower time complexity.
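As a rough illustration of the idea (a Python sketch, not our OCaml implementation), linear scan walks live intervals sorted by start point, recycles registers whose intervals have expired, and when registers run out, spills the interval that lives longest:

```python
def linear_scan(intervals, num_regs):
    """Linear-scan register allocation (Poletto & Sarkar style).

    intervals: dict mapping variable -> (start, end) live interval.
    Returns dict mapping variable -> register index or "spill".
    """
    alloc = {}
    free = list(range(num_regs))
    active = []  # (end, var) pairs, kept sorted by end point
    for var, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        # Expire intervals that ended before this one starts, freeing registers
        while active and active[0][0] < start:
            _, old = active.pop(0)
            free.append(alloc[old])
        if free:
            alloc[var] = free.pop()
            active.append((end, var))
            active.sort()
        else:
            # No free register: spill whichever active interval ends last
            last_end, last_var = active[-1]
            if last_end > end:
                alloc[var] = alloc[last_var]
                alloc[last_var] = "spill"
                active[-1] = (end, var)
                active.sort()
            else:
                alloc[var] = "spill"
    return alloc

regs = linear_scan({"a": (0, 4), "b": (1, 3), "c": (2, 6)}, num_regs=2)
```

Because it makes a single pass over sorted intervals instead of building an interference graph, linear scan runs in near-linear time, which is why JITs favor it.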
research

MIT Lincoln Labs
I worked with Group 76 (Controls and Autonomous Systems) at MIT Lincoln Labs on control barrier functions (CBFs) for dynamic obstacle avoidance in ground robots. CBFs are a somewhat recent technique for controlling dynamical systems that, under some conditions, can provide mathematical guarantees that the system will not enter undesirable states. The core idea is shown in the image: for a dynamical system changing according to F(x,u), if the system can be controlled such that on the boundary of the undesirable set (where h(x) = 0), the direction of motion and the gradient of h(x) are aligned, then the system is provably safe, and h(x) is called a CBF.
For realistic systems, actually getting such a guarantee is easier said than done. Uncertainty in the system dynamics and control limits (where u is restricted) pose challenges. My work revolved around constructing CBFs for Ackermann drive ground robots based on LiDAR data. I worked largely in NVIDIA Isaac Sim and implemented CBFs using JAX for automatic differentiation. Because constructing h(x) using classical techniques was particularly challenging for our system, I turned to neural CBF techniques that learn CBFs from data, as described in “How to Train Your Neural Control Barrier Function.” I used a Graph Neural Network trained on synthetic LiDAR data I collected from Isaac Sim to learn h(x).
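For intuition, here is a toy version of the safety check in JAX, far simpler than the real system: single-integrator dynamics and a unit-disc obstacle are stand-ins, but the pattern of differentiating h(x) automatically and evaluating the CBF inequality grad(h)·f(x,u) ≥ −α·h(x) is the same.

```python
import jax
import jax.numpy as jnp

def h(x):
    """Candidate barrier: positive outside a unit-disc obstacle at the origin."""
    return jnp.dot(x, x) - 1.0

def f(x, u):
    """Toy single-integrator dynamics: the control directly sets the velocity."""
    return u

def cbf_condition(x, u, alpha=1.0):
    """CBF inequality, rearranged: grad(h)·f(x,u) + alpha*h(x) must be >= 0."""
    grad_h = jax.grad(h)(x)  # autodiff instead of hand-derived gradients
    return jnp.dot(grad_h, f(x, u)) + alpha * h(x)

x = jnp.array([2.0, 0.0])
u_away = jnp.array([1.0, 0.0])     # moving away from the obstacle: safe
u_toward = jnp.array([-5.0, 0.0])  # moving toward it fast: violates the CBF
```

In practice the condition becomes a constraint in a quadratic program that filters the nominal control, but the sign check above is the heart of it.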

Collective Embodied Intelligence Lab
I work on the Single-Actuator Wave (SAW) robot project at CEI. The core idea behind our robots is that they can use their wave-shaped bodies to modify terrain, including building mounds, digging holes, and burrowing. As the name suggests, each wave is driven by just a single motor using a central helix.
My work is mostly about getting two or more of these robots to collaborate, as they are much more successful at their task when they climb on top of each other or use their peers as solid platforms. To this end, this semester, I have mostly been working on embedded signal processing to passively infer the state of another robot (such as the frequency of its waves or its location) using just a microphone. I’ve enjoyed working around the constraints of an embedded environment and have learned a lot about signals and microcontrollers. For example, I’ve had to write my own custom fixed-point FFT to get reasonable performance out of our Cortex-M0+, which doesn’t have an FPU. I’ve also touched on some PCB design, for example to integrate the microphone amplifier into the main robot PCB.
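To give a flavor of the fixed-point arithmetic involved (a simplified Python sketch, not the firmware itself), here is a Q15 multiply and a single radix-2 butterfly, the operation an FFT repeats N/2·log₂(N) times, written with only integer adds, multiplies, and shifts as an FPU-less Cortex-M0+ requires:

```python
Q = 15  # Q15 fixed point: value = integer / 2**15

def to_q15(x):
    """Convert a float in [-1, 1) to Q15 (plus -1.0 exactly)."""
    return int(round(x * (1 << Q)))

def q15_mul(a, b):
    """Multiply two Q15 numbers; shift the product back down with rounding."""
    return (a * b + (1 << (Q - 1))) >> Q

def butterfly(a, b, tw_re, tw_im):
    """One radix-2 FFT butterfly on complex Q15 (re, im) pairs.

    Returns (a + tw*b, a - tw*b) for twiddle factor tw.
    """
    bt_re = q15_mul(b[0], tw_re) - q15_mul(b[1], tw_im)
    bt_im = q15_mul(b[0], tw_im) + q15_mul(b[1], tw_re)
    return ((a[0] + bt_re, a[1] + bt_im),
            (a[0] - bt_re, a[1] - bt_im))

# Twiddle factor e^{-i*pi/2} = 0 - 1i in Q15
top, bot = butterfly((to_q15(0.5), 0), (to_q15(0.25), 0),
                     to_q15(0.0), to_q15(-1.0))
```

A production fixed-point FFT also has to scale between stages to avoid overflow, which is where most of the real effort goes.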