So you've only got a little bit of data... somewhere in the hundreds of images. Well, I had that problem too, and here's how I've managed to work around it.
I duplicated the images that I wanted my GAN to reproduce. This increases the likelihood that your GAN converges towards generating those images (but that's kind of the point anyway).
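The duplication trick can be as simple as repeating each file path before building the dataset, so every image is seen proportionally more often per epoch. A minimal sketch (the repeat factor and file names here are hypothetical, pick a factor that suits your data):

```python
# Oversample a small image set by duplicating each path before
# constructing the training dataset. REPEAT_FACTOR is illustrative.
REPEAT_FACTOR = 8

def oversample(paths, factor=REPEAT_FACTOR):
    """Return the path list repeated `factor` times; feed the result
    to your dataset class instead of the original list."""
    return paths * factor

paths = ["img_001.png", "img_002.png", "img_003.png"]
training_paths = oversample(paths)
print(len(training_paths))  # 24
```

Since duplication happens at the path level, it costs no extra disk space and the DataLoader's shuffling still interleaves the copies.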
I used DiffAugment (https://github.com/mit-han-lab/data-efficient-gans), specifically the translation & cutout policies.
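With the repo cloned, the usual pattern is to call its `DiffAugment(x, policy='translation,cutout')` on both the real and generated batches right before they hit the discriminator. To give a feel for what the cutout policy does, here's a minimal illustrative reimplementation in plain PyTorch (not the library's code):

```python
import torch

def cutout(x, ratio=0.5):
    """Zero out a random square patch in each image, in the spirit of
    DiffAugment's cutout policy (illustrative sketch, not the library)."""
    n, c, h, w = x.shape
    ch, cw = int(h * ratio), int(w * ratio)
    out = x.clone()
    for i in range(n):
        top = torch.randint(0, h - ch + 1, (1,)).item()
        left = torch.randint(0, w - cw + 1, (1,)).item()
        out[i, :, top:top + ch, left:left + cw] = 0
    return out

batch = torch.ones(4, 3, 64, 64)  # stand-in for a real/fake image batch
aug = cutout(batch)
```

The key point with DiffAugment is that the same differentiable augmentation is applied to reals and fakes, so the discriminator never learns to spot the augmentation itself.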
Use the PyTorch torchvision image transforms liberally. I've found that randomly applied greyscale and horizontal flips work well. I'm sure that colour jitter is also good, but it didn't work in my use case(s).