This point is indeed true: instead of telling the model what not to do, a prompt should focus on what it should do. Large models are similar to humans in this regard; the more you tell them not to do something, the more drawn to it they become.

13 Aug at 23:50
When you want to prohibit the model from doing something,
do your best not to write it directly!!!
do your best not to write it directly!!!
do your best not to write it directly!!!
Let me briefly explain a few methods:
1. If you really must write prohibitions, list no more than two.
2. Change "do not" into "do". This rewriting can produce awkward sentences, so check each one to make sure the context, transitions, and connections stay coherent.
3. Repeat the prohibition several times instead of stating it once; some things will not stick after a single mention. When I was in school, my Japanese teacher said that Japanese companies have a particular practice of repeatedly explaining even a simple matter so you do not forget it; the smallest detail is remembered if it is mentioned often enough. You can state the prohibition at the beginning, in the middle, near related content, and again at the end.
4. If a prohibition means not doing something, add a step: split the single task into two. At the end of the first step, have the model ask whether there are any prohibitions; then send it the prohibitions and screen the output against them, making sure the rest of the content stays unchanged during the partial revision.
5. Place the prohibitions in the first step.
6. Ensure that your prohibitions can actually be enforced. For example, if you never defined the style of the text and the output ends up sounding AI-generated, prohibiting an "AI tone" is useless; the model does not even know what tone it is using!
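The positive-phrasing and repetition advice above (methods 2 and 3) can be sketched as a minimal example. The task and prompt text here are hypothetical, chosen only to contrast a prohibition-heavy prompt with a rewritten one that says what to do and repeats the key constraint at the end:

```python
# Hypothetical prohibition-heavy prompt: three "do not" instructions.
negative_prompt = (
    "Summarize the article.\n"
    "Do not use bullet points.\n"
    "Do not exceed 100 words.\n"
    "Do not mention the author."
)

# Rewritten per methods 2 and 3: every constraint is stated as a
# positive direction, and the key ones are repeated at the end.
positive_prompt = (
    "Summarize the article in a single flowing paragraph of at most "
    "100 words, focusing only on the article's claims.\n"
    "Write in plain connected prose.\n"
    "Reminder: one paragraph, under 100 words, covering only the claims."
)

# The rewritten prompt contains no "do not" phrasing at all.
print("do not" in positive_prompt.lower())
```

The same mechanical check (searching the prompt for negative phrasing) can be a quick lint step before sending any prompt to a model.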