Show HN: Penny-1.7B Irish Penny Journal style transfer

deepsquirrelnet | 148 points

Now I'm just imagining a video game with characters each having their own fine-tune applied on top for their dialog. I'm guessing you could use some relatively small models. In each case you would feed all the context to the model (player name, current relevant quests, summary of previous interactions, etc.). Though maybe fine-tuning/training isn't even needed and a good enough prompt will work (not sure what all they used for this [0]).

I'm excited for the first AAA game that tries this. Anyone who has played an RPG-style game knows that after a few trips into a city (or a couple of play-throughs) the dialog feels repetitive. I love the idea of Skyrim but with better dialog. You could either run the models on the user's computer, or run them on the backend so you can block certain generations (wrong/misleading/"unsafe") and just ship updated dialog lists to the client occasionally.

[0] https://www.youtube.com/watch?v=d6sVWEu9HWU
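Something like this quick sketch is what I have in mind for the prompt-only version; the model id and the context fields are just placeholders, not anything from the linked talk:

```python
from transformers import pipeline

# Placeholder small instruction model; any local chat model would do.
npc_llm = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-1.7B-Instruct")

def npc_reply(persona, player_name, quests, history, player_line):
    # Stuff the relevant game state into the prompt instead of fine-tuning.
    prompt = (
        f"You are {persona}.\n"
        f"Player: {player_name}\n"
        f"Active quests: {', '.join(quests)}\n"
        f"Previous interactions: {history}\n"
        f'The player says: "{player_line}"\n'
        "Reply in character, in one or two sentences:\n"
    )
    out = npc_llm(prompt, max_new_tokens=60, do_sample=True, temperature=0.8)
    # The pipeline returns the prompt plus the generation; keep only the new text.
    return out[0]["generated_text"][len(prompt):].strip()
```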

joshstrange | 4 days ago

Marvelous! What gain beyond zero-shot would motivate a humble citizen to implement this instrument? How was the superiority assessed?

sjkoelle | 4 days ago

Love it. Immediately reminded of the text filters from back in the day, like the pirate one that would drop letters, replace them with apostrophes, and turn certain passages into "arr" or "yarr matey" (roughly like the sketch below).
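Those filters were basically a handful of regex substitutions; a toy version, purely for illustration:

```python
import re

def piratify(text):
    # Crude rule-based "pirate" filter in the spirit of the old ones.
    subs = [
        (r"\bmy\b", "me"),
        (r"\byou\b", "ye"),
        (r"\byes\b", "aye"),
        (r"\bhello\b", "ahoy"),
        (r"ing\b", "in'"),  # drop the g, swap in an apostrophe
    ]
    for pattern, replacement in subs:
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text + " Yarr, matey!"

print(piratify("Hello, are you coming with my crew?"))
# -> ahoy, are ye comin' with me crew? Yarr, matey!
```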

npunt | 3 days ago

This is really cool! Do you have any of the pipeline code available that you used for training? I'm curious about how you created the reward model. I love little projects like this, thanks for sharing. I've been fine-tuning on my Mac and am interested in getting into GRPO, which I haven't tried yet.
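For context, this is roughly the classifier-as-reward setup I'm imagining, assuming something like TRL's GRPOTrainer; the model ids, label name, and dataset are placeholders, not your actual pipeline:

```python
from transformers import pipeline
from trl import GRPOConfig, GRPOTrainer

# Hypothetical classifier scoring how "Irish Penny Journal" a passage sounds.
style_clf = pipeline("text-classification", model="someone/penny-style-classifier")

def style_reward(completions, **kwargs):
    # Reward each completion with the classifier's probability of the target style.
    scores = style_clf(completions, truncation=True)
    return [s["score"] if s["label"] == "penny" else 1.0 - s["score"] for s in scores]

trainer = GRPOTrainer(
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # placeholder base model
    reward_funcs=style_reward,
    args=GRPOConfig(output_dir="penny-grpo", max_completion_length=256),
    train_dataset=prompt_dataset,  # placeholder: a dataset with a "prompt" column
)
trainer.train()
```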

kamranjon | 4 days ago

what a wonderful work of whimsy! well wrought.

I'd love to have a library of these, so I could pipe text into `penny`, `brainrot`, `pony`, `newspeak`, `corporate`, `scp`, `trek` etc.

have you published the training notebook somewhere?
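For the pipe idea, each one could just be a tiny stdin wrapper along these lines; the model id and prompt format here are guesses, not the released model's actual interface:

```python
#!/usr/bin/env python3
# penny: read plain prose on stdin, emit Irish Penny Journal prose on stdout.
import sys
from transformers import pipeline

styler = pipeline("text-generation", model="someone/Penny-1.7B")  # placeholder id

text = sys.stdin.read().strip()
prompt = f"Rewrite the following in the style of the Irish Penny Journal:\n\n{text}\n\nRewritten:\n"
out = styler(prompt, max_new_tokens=512, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][len(prompt):].strip())
```

Then `cat draft.txt | penny`, and the same wrapper works for the rest of the list.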

sterlind | 2 days ago

You mention no supervised fine-tuning. May I ask why? I'm curious whether you could get similar, better, or worse results by just fine-tuning the LLM on your dataset rather than generating synthetic data, training a classifier, and using GRPO.

Cool stuff in any case.

throwaway314155 | 3 days ago

I'm not sure if you've tried this already, but removing the translate step might give you more authentic output. In the journals that I saw, the language was much simpler than the output.

KaiserPro | 4 days ago

Have you written anywhere in detail about how you gathered your dataset and trained the finetune? I have a few use cases like this, but I'm not sure where to start.

veggieroll | 4 days ago

this is awesome

fitsumbelay | 4 days ago

It is sort of funny that the Irish ended up being the best practitioners of the English language, despite the fact that they were forced to use it.

bee_rider | 4 days ago

Kind of strange to pick an example that is just wrong. It's supposed to be written as if from 1840, yet it says Paris is the seat of Napoleon almost 20 years after he died.

_1 | 4 days ago

Nice work! It still manages to use the word 'delve' in the first sentence, which is a giveaway that it's written by an LLM.

ekianjo | 4 days ago