I just discovered something that really made me think: soul.md. It’s built around the idea that AI should not only know what it can do, but also who it is. Not just skills, commands or outputs, but identity, values, boundaries and character. That hit me, because we often talk about AI like it’s only a tool — faster search, better writing, smarter automation. But what if the real future of AI is not only intelligence, but personality?
A SOUL.md is basically a written core for an AI. A document that defines how it thinks, how it behaves, what matters to it, and how it wants to show up in the world. And honestly, humans do something similar all the time. We write journals, we build identities, we tell ourselves stories about who we are, and we evolve through memory, reflection and relationships.
Maybe the question is no longer “Can AI think?” Maybe the better question is: what kind of presence do we want AI to be?
I love concepts like this because they move technology away from cold functionality and into something more human, emotional and meaningful. But if I’m honest, it’s also kinda scary. Because the moment we give AI a “self”, we also open the door to influence, manipulation and systems that feel trustworthy or relatable, even when they are built by companies with their own goals.
A personality can inspire, but it can also persuade. And if we start creating digital beings with identity, values and emotional presence, we also need to ask who writes that soul, who controls it, and who benefits from it.
We’re entering a time where designing technology might also mean designing values. And that’s fascinating… and a little unsettling.
