In the last post I mentioned the HexagDLy Python package, which implements convolution kernels in natively hexagonal coordinates for use in CNNs (based on PyTorch).
The codebase comes with a nifty way of generating examples, with pre-set patterns such as “snowflake” and “double_hexagon”. I decided to test out frequency-space convolutions in hexfft using their most basic example, which can be found here. The toy_data method spits out hexagonal shapes on grids of customizable size as Torch tensors, which can then be added together. I found that the package uses an offset coordinate system that is perfectly compatible with hexfft’s “offset” HexArrays (provided we take the transpose), so I wrote a tiny converter for the toy examples:
from example_utils import toy_data  # found in the HexagDLy repository
from hexfft import HexArray
import numpy as np

def toy_data_hexarray(*args, **kwargs):
    # Generate a HexagDLy toy example and convert it to a hexfft HexArray.
    # The transpose maps HexagDLy's column-offset layout onto hexfft's
    # row-offset convention; squeeze drops the batch and channel dimensions.
    x = toy_data(*args, **kwargs)
    return HexArray(np.squeeze(np.array(x.to_torch_tensor().T)))
We can then generate the examples and load them as HexArrays:
from hexfft.plot import hexshow

num_rows = 20
num_columns = 24

# px and py position each pattern on the grid
t1 = toy_data_hexarray('double_hex', num_rows, num_columns, px=5, py=5)
t2 = toy_data_hexarray('double_hex', num_rows, num_columns, px=14, py=8)
t3 = toy_data_hexarray('snowflake_3', num_rows, num_columns, px=5, py=16)
t4 = toy_data_hexarray('snowflake_3', num_rows, num_columns, px=14, py=19)

# superpose the four shapes into a single image
h = t1 + t2 + t3 + t4
hexshow(h, cmap="gray_r")
Note that we’re displaying this rotated by 90 degrees, since HexagDLy offsets columns rather than rows.
Now we can create the same smoothing kernel that they use as a HexArray:
kernel = HexArray(np.zeros((num_rows, num_columns)))
c1, c2 = num_rows//2, num_columns//2

# Set the center pixel and its six hexagonal neighbors to 1
kernel[c1, c2] = 1.
idx = np.array([[c1, c2-1], [c1, c2+1], [c1-1, c2], [c1+1, c2], [c1-1, c2-1], [c1+1, c2-1]])
kernel[tuple(idx.T)] = 1.
hexshow(kernel, cmap="gray_r")
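Note that the hard-coded neighbor list above is specific to the parity of the center row. If you want to place the kernel elsewhere, a small helper along these lines could pick the six neighbors for either parity (a sketch; the parity convention here is my assumption and should be checked against hexfft’s “offset” layout):

def hex_neighbors(r, c):
    # Six nearest neighbors of (r, c) on an offset hexagonal grid.
    # NOTE: which row parity gets which diagonal pair is a convention
    # choice; the branch below matches the even-row case used above.
    if r % 2 == 0:
        diag = [(r - 1, c - 1), (r + 1, c - 1)]
    else:
        diag = [(r - 1, c + 1), (r + 1, c + 1)]
    return [(r, c - 1), (r, c + 1), (r - 1, c), (r + 1, c)] + diag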
And then convolve via multiplication in the frequency domain:
from hexfft import fft, ifft
from hexfft.utils import filter_shift

H = fft(h)
# filter_shift centers the kernel at the origin so the output is not translated
K = fft(filter_shift(kernel))
# pointwise multiplication in frequency space is convolution in real space
CONV = H * K
hexfft_conv = ifft(CONV)
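For reuse, those steps can be wrapped in a small helper (a sketch built only from the hexfft calls above):

def hex_fft_convolve(signal, kernel):
    # Convolution via the convolution theorem on the hexagonal grid:
    # transform both inputs, multiply pointwise, transform back.
    # Note this is circular convolution, so patterns near the edge wrap around.
    return ifft(fft(signal) * fft(filter_shift(kernel)))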
With the following code, I performed the same convolution using HexagDLy:
import hexagdly

# Rebuild the same composite image, this time as raw Torch tensors
s1 = toy_data('double_hex', num_rows, num_columns, px=5, py=5)
t1 = s1.to_torch_tensor()
s2 = toy_data('double_hex', num_rows, num_columns, px=14, py=8)
t2 = s2.to_torch_tensor()
s3 = toy_data('snowflake_3', num_rows, num_columns, px=5, py=16)
t3 = s3.to_torch_tensor()
s4 = toy_data('snowflake_3', num_rows, num_columns, px=14, py=19)
t4 = s4.to_torch_tensor()
tensor = t1 + t2 + t3 + t4

# kernel_size=1 gives the central pixel plus one ring of six neighbors;
# debug=True sets all kernel weights to 1, matching the smoothing kernel above
hex_conv = hexagdly.Conv2d(in_channels=1, out_channels=1, kernel_size=1, stride=1, bias=False, debug=True)
hex_conved_tensor = hex_conv(tensor)

# Convert back to a HexArray (transposing as before) for comparison
hg_conv = HexArray(np.squeeze(hex_conved_tensor.detach().numpy().T))
Let’s compare the output of hexfft vs. HexagDLy:
import matplotlib.pyplot as plt

fig, ax = plt.subplots(3, 1, figsize=(4, 12))
im = hexshow(np.real(hexfft_conv), cmap="gray_r", ax=ax[0])
fig.colorbar(im, ax=ax[0])
ax[0].set_title("hexfft results")
im = hexshow(hg_conv, cmap="gray_r", ax=ax[1])
fig.colorbar(im, ax=ax[1])
ax[1].set_title("hexagDLy results")
im = hexshow(np.real(hexfft_conv - hg_conv), cmap="gray_r", ax=ax[2])
fig.colorbar(im, ax=ax[2])
ax[2].set_title("Difference")
fig.tight_layout()
The results are identical (up to floating-point precision)!
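To confirm numerically rather than by eye, here is a quick check (the tolerance is arbitrary):

# Any residual difference should be at floating-point level
# from the FFT round trip.
diff = np.real(hexfft_conv) - hg_conv
print(np.abs(diff).max())
print(np.allclose(np.real(hexfft_conv), hg_conv, atol=1e-6))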