

I mean, yes, it takes some practice. I was more commenting in terms of time+effort, which imo is not that much actual hands-on time compared to e.g. just making regular sourdough bread, which also takes practice if you want nice big bubbles. In my experience, getting a pretty sourdough loaf with a high-hydration dough actually took more practice (in terms of handling the sticky dough) than getting good croissants.
And even the first couple of croissants turned out pretty good when I started. Not on par with bakery ones, but still tasty. So it's not like the practice results need to go in the bin.
Actually, as to your edit, it sounds like you're fine-tuning the model on your data, not training it from scratch. So the LLM has already seen English and Chinese during its initial training. Also, these models represent words as vectors, and what usually happens is that similar words' vectors end up close together. So substituting e.g. Dad for Papa looks almost the same to an LLM. Same across languages. But that's not understanding, that's behavior that much simpler models also have.
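To illustrate what I mean by "close together": here's a toy sketch with made-up 3-d vectors (real embeddings have hundreds of dimensions, and these numbers are invented for illustration). Cosine similarity between "dad" and "papa" comes out near 1, so swapping one for the other barely changes what the model sees, while an unrelated word like "car" is much further away.

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the vector magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Made-up toy vectors, NOT from a real model.
vectors = {
    "dad":  [0.90, 0.10, 0.05],
    "papa": [0.88, 0.12, 0.07],
    "car":  [0.05, 0.95, 0.30],
}

print(cosine(vectors["dad"], vectors["papa"]))  # near 1.0 (near-synonyms)
print(cosine(vectors["dad"], vectors["car"]))   # much smaller (unrelated)
```

The same idea carries over across languages in multilingual models: translations of the same word tend to land near each other in vector space, which is why the model can "swap" them without anything resembling understanding.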