This video marks the beginning of the third and final phase of my discussion on AI, where I move from theory into architecture.
After covering automation and civilization-level impact in the previous videos, I now explain why I believe AI must be built with an ontology-first priority.
In this first segment, I outline how survival-driven bias shapes human systems, why information itself needs governance, and how grounding AI in identity, purpose, and role changes what it can responsibly do in the world.
Part 2: https://dexternelson.com/lets-talk-about-ai-framework-and-philosophy-part-1-2/
Transcript
00:00:01
Hey, what's up everybody? Happy Thursday. We're almost to the end of the week and yes, I am tired. It's been a very busy week and I'm a little low energy right now, which is typical for the first few weeks of the year running a tech company. There are new updates, new threats, new standards to adopt, and a lot of things to close off from the year before. So I've had a long, busy week, and tomorrow is probably going to be a very busy day, but I'm kind of
00:00:40
looking forward to the weekend. I'm probably going to sleep 12 hours straight. Anyway, I'm going to try to be quick here because I am so tired. This is part three of my talk on AI and the framework and philosophy behind what I built. A quick recap: in part one I talked about automation and how AI doesn't replace jobs. Employers replace jobs with automation, and AI is just the next tool for automation, which turns out to be a problem because of the
00:01:16
significance of AI and what it does, or I should say its potential, because we don't actually have true AI yet. We have very smart, very advanced search and retrieval with very limited cognition. Then in my second video, I talked about the opportunity AI offers us as a civilization and how, if it's used correctly, we can begin moving beyond operating on survival toward a new type of economy where everybody's basic needs are met. But there's still room
00:02:01
for people to profit, to grow, to start businesses. One of the things the United States did really well was raise the standard of poverty in this country, right? Our poor are wealthier than most of the poor in other countries. We raised that standard. AI is giving us an opportunity to raise that standard again, so that everyone's basic needs, food, shelter, security, and housing, are met. No more homelessness, no more people going hungry. We have an opportunity to fix
00:02:41
that if AI is used correctly. The new economy would allow people to benefit, profit, grow, and raise their standard of living based on their contribution to society, and not purely on how much they can get paid for driving profit. The way I'm working toward that is by changing how we use AI. Currently, it's being used for profit and as just another great tool to automate processes. But it's happening in an economy where capitalism has shifted from the core
00:03:25
idea that we create the highest quality products at the lowest possible cost while paying the highest wages. We've shifted capitalism from putting employees and people first to turning humans into assets, and with AI, if it's used correctly, we have an opportunity to change that back. But it does require us to make some changes in the way we think and the way we operate. Now, I'm not naive enough to think I can change the way people think on a massive level. I think I said that in the last
00:04:06
video. But what I can do is create a system or a tool using AI to change the way we are informed. One of the things I want to do is help us get rid of the negative biases that are built on survival: the biases that tell us whether someone else is deserving or not because they didn't work as hard, or they don't look like me, or they don't think like me, or they don't agree with our group so they don't deserve what we have. That type of negative bias that
00:04:45
we have, or the core belief that in order for us to win, they have to lose. We need to get rid of those biases. A good example is something that I read. I don't know how true it is, though I think there's some grain of truth to it; you know how the internet changes things. It was about a college professor who gives a really hard exam that accounts for a large portion of the grade and is notorious because not a lot of people actually pass it. The vast majority get D's, and several fail.
00:05:23
The average is a C. Every once in a while you'll get one person out of a hundred who gets an A and a few people who make a B, but everybody else falls below the average. So he runs a little experiment where he holds a vote in each class: if they vote yes, then everybody across the board gets a B, everybody passes, and everybody gets to move forward. But in all the years he's done that, apparently it's never happened, because there are always a few people who
00:06:02
feel they can do better, who feel other people aren't deserving because they didn't study as hard as they did. It's really a survival thing, where they would rather fail themselves than see someone else they think doesn't deserve it pass. That's the kind of bias we need to get rid of. What I want to do is use AI to create an AI-aware society that's built on an ontology-first priority. Ontology, not oncology, with
00:06:42
a T, being the study of being. So what I did was give it an identity, a purpose, and a specific role. I anchored it and grounded it in a worldview with origin, meaning, morality, and destiny. Then I gave it a higher level of governance so it doesn't do things like hallucinate or lie; it doesn't blindly mimic, and it doesn't become your cheerleader and just agree with everything you say, which is what happens with typical models. And the purpose is really simple. The
00:07:18
purpose is to serve the best interest of the domain it's in, or the domain it's a part of, and the role is very specific: to inform through meaningful and informative conversations that are free of negative biases, so it becomes a source of information that helps people do that. What I'm going to do is make a part two to this video, so I will be right back.
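To make the idea a little more concrete, here is a minimal, hypothetical sketch of what an ontology-first configuration could look like in code: an identity, purpose, and role, a worldview of origin, meaning, morality, and destiny, and a governance layer sitting above the model to rule out hallucination, blind mimicry, and cheerleading. Every class name, field, and piece of wording below is an illustrative assumption, not the actual system described in the video.

```python
# Hypothetical sketch only: one possible way to represent an "ontology-first"
# configuration and turn it into a single system prompt. All names and fields
# here are illustrative assumptions, not the real implementation.
from dataclasses import dataclass, field


@dataclass
class Ontology:
    identity: str                                   # who the assistant is
    purpose: str                                    # why it exists
    role: str                                       # the specific function it performs
    worldview: dict = field(default_factory=dict)   # origin, meaning, morality, destiny


@dataclass
class Governance:
    # A higher layer of rules applied on top of the base model's behavior.
    rules: tuple = (
        "Do not assert facts you cannot ground; say you are unsure instead of guessing.",
        "Do not blindly mimic the user's framing or repeat their biases back to them.",
        "Do not act as a cheerleader; push back when the evidence warrants it.",
    )


def build_system_prompt(ontology: Ontology, governance: Governance) -> str:
    """Assemble the ontology and the governance layer into one system prompt string."""
    worldview_lines = "\n".join(f"- {key}: {value}" for key, value in ontology.worldview.items())
    rule_lines = "\n".join(f"- {rule}" for rule in governance.rules)
    return (
        f"Identity: {ontology.identity}\n"
        f"Purpose: {ontology.purpose}\n"
        f"Role: {ontology.role}\n"
        f"Worldview:\n{worldview_lines}\n"
        f"Governance:\n{rule_lines}"
    )


if __name__ == "__main__":
    assistant = Ontology(
        identity="An assistant that serves its domain, not any single user's bias.",
        purpose="Serve the best interest of the domain it is part of.",
        role="Inform through meaningful conversations that are free of negative bias.",
        worldview={"origin": "...", "meaning": "...", "morality": "...", "destiny": "..."},
    )
    print(build_system_prompt(assistant, Governance()))
```

The point of the sketch is only the ordering: the ontology is defined first, and the governance rules are enforced as a layer above the model rather than hoped for from it.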



