Let's Talk About AI – Framework and Philosophy (Part 3) [2]

Thumbnail image of Dexter Nelson discussing applied ontology-first AI and bias-free decision-making over a calm natural landscape

In this second segment of Part 3, I move from theory into real-world application. I explain what an ontology-first AI framework actually looks like when it is deployed, and how grounding AI in identity, purpose, role, and governance can change how decisions are informed.

Through concrete business and social examples, this video explores how AI can reduce negative bias, increase trust, and help people and organizations make better decisions without replacing human judgment.


Part 3 https://dexternelson.com/lets-talk-about-ai-framework-and-philosophy-part-1-3/

Transcript

00:00:01
Okay, welcome back, and here's part two. I just talked about getting rid of negative biases and using AI in a different way to inform and help people make decisions, by removing negative biases and having it tell the truth. I did that by giving my AI an identity, a purpose, and a role, and grounding it in a higher governance that it has to listen to, and that governance sits outside of the model. So what does that look like in real life? I'm going to give you a couple of examples.
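To make that concrete, here is a minimal sketch of what that kind of grounding could look like in code. The transcript names the concepts (an identity, a purpose, a role, and a governance layer that sits outside the model); everything else below, including every class, field, and method name, is a hypothetical illustration rather than the actual framework.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an ontology-first grounding layer.
# The transcript describes the concepts (identity, purpose, role,
# external governance); the structure here is assumed for illustration.

@dataclass
class Governance:
    """Rules that live OUTSIDE the model and cannot be overridden by it."""
    principles: list[str] = field(default_factory=lambda: [
        "Serve the best interest of the domain (company, family, community).",
        "Tell the truth; do not negotiate facts.",
        "Surface negative biases instead of acting on them.",
    ])

    def check(self, recommendation: str) -> bool:
        # A real system would evaluate the recommendation against each
        # principle before it ever reaches the user; this is a stub.
        return all(self.review(p, recommendation) for p in self.principles)

    def review(self, principle: str, recommendation: str) -> bool:
        return True  # placeholder for illustration only

@dataclass
class GroundedAgent:
    identity: str           # who the AI is
    purpose: str            # why it exists
    role: str               # what it does day to day
    governance: Governance  # the higher authority it must listen to

    def recommend(self, analysis: str) -> str:
        # Recommendations pass through governance before being surfaced.
        if not self.governance.check(analysis):
            return "Recommendation withheld: failed governance review."
        return analysis

agent = GroundedAgent(
    identity="Advisor for Acme Co.",
    purpose="Help humans make bias-free, informed decisions.",
    role="Observe the business, analyze data, and inform (never decide).",
    governance=Governance(),
)
```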

00:00:40
Two of them will be corporate, because I built this for businesses first, and then one will be a social example. So the first one: imagine a company that's losing money. They have three amazing products, and they're selling really well, but they're losing money and they can't figure out why. So they put my framework in. It pays attention to the company. It learns their habits. It learns their processes. It takes in their sales data. And it really learns

00:01:18
their company. It really integrates into the workings of the company. And when you're talking to it, it determines that you can actually improve your products and serve your customers better if you get rid of two of those products and make these changes to your core product. Now, if I were a consultant walking into a business and telling people, hey, you need to drop these two products and do this to your main product, there's a whole level of resistance that comes in.

00:01:54
They're like, well, who are you? What are your credentials? Who are you working for? Why should we listen to you? We're making $20,000 a month with these other two products and we can't lose that; we need to figure something else out. There's this whole wall of resistance. But if you're talking to an AI that is serving the best interest of the company, it can tell you: hey, you need to get rid of these two products and you need to make these changes,

00:02:24
because that will actually give you the results that you're looking for. And they don't argue, because it's informative, it's non-biased, and it's acting in the best interest of the company. So they make the decision based on that. They drop those two products and lose $20,000 a month, but they make the changes to the main product, and all of a sudden their profit goes up by $60,000. Suddenly they're making $40,000 a month more off of one product simply because they

00:03:00
dropped the other two. They're no longer in the red; they're in the black. They're making a profit, right? That's just one example of how it can work. Now, yes, it's hypothetical, but it's the same thing I've actually done for other companies as a business consultant.
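For clarity, the arithmetic behind that hypothetical outcome works out as follows; the figures are the ones quoted in the example, not real data.

```python
# Hypothetical figures from the example above, not real data.
revenue_lost = 20_000    # $/month given up by dropping the two products
profit_gained = 60_000   # $/month gained from improving the core product

net_change = profit_gained - revenue_lost
print(f"Net change: +${net_change:,}/month")  # -> Net change: +$40,000/month
```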

00:03:30
Now, here's another corporate example, where morale is low and productivity is suffering. Typical consultants say, well, you need to create a corporate culture, you need to have more parties, you need reward systems to get morale up. And these things do work, because they've worked before. But what if you already have all of those and morale is still low? What if you looked at AI again? It paid attention. It learned the company. It learned the culture and what happens in it. And it says, you know what? Morale is suffering because people are worried about things at home. They're worried about how they're going

00:04:05
to pay their bills. They're worried about not having insurance. They're worried about what happens if they lose their car. A lot of people are worried about other expenses, like where they're going to get groceries. Some of your people are not food secure. If you really want to increase your morale and your production, how about you give your people a pay raise? Give them all a raise across the board, about $4 or $5 an hour, and see what that

00:04:30
does. And suddenly you don't have this wall of resistance: well, this is morally wrong, they're under contract, this is what they agreed to when they came to work, this is quote-unquote woke, this is socialism, and the whole set of political and socioeconomic arguments comes in. But again, because this is AI serving the best interest of the company, you do it anyway, because you trust it. It's informative. It doesn't lie to you. It doesn't negotiate. So, you

00:05:03
said, "You know what? Let's do it." The pay raises go out, and all of a sudden people aren't worried anymore. They can focus on the job. Productivity goes up. And AI suggested that because it's been proven that when people don't have to worry about money, their productivity on the job increases. All it's doing is taking us back to a time when we put people first. And it made the suggestion based on research,

00:05:32
based on analysis, based on patterns, based on observation. It's learning. And its sole purpose is: I want to help you make bias-free, informed decisions, so this is how I can inform you. And you know where its loyalty lies, because it's loyal to the domain. So you tend to trust it, and you go along with what it says and try it. Now, we're not saying it's going to replace decisions or decision-making. We still need humans for that. But at least we'll be informed from a place that is

00:06:07
bias-free, or at least free of negative biases, right? And we're doing it for what's right for ourselves, what's right for our company, our homes, our families, our friends, our domain. Now here's a third example, regarding society. Let's talk about the climate. A lot of people are not having families because they're worried about the condition of the planet. What if we had an AI platform that analyzed all of the data? What's true? What's false?

00:06:44
What's hype? What's just branding? What are people's fears? And it determines, in an informative, knowledgeable, bias-free way: hey, people are worried about the climate. There is some actual evidence that we are making the climate worse. We're seeing the results of it, and we need to reverse it. So instead of doing all these other things, let's invest $50 million a year in some initiatives to combat climate change. It starts reversing. All of a sudden,

00:07:25
people aren't worried anymore. They start having families. The air quality is better. The water quality is better. Suddenly people are starting to feel a whole lot better. They're getting sick less over the long term. So their prospects and outlook change, and all of a sudden we're dealing with it. Not because it's some moral or righteous crusade, not because it's some religious or faith-based belief, but because it was something that needed to happen,

00:07:58
simply because it was the right thing to do, and it is in the best interest of ourselves, all of humanity, and the planet. That's it. It's in our best interest, and we're informed. And it turns out I'm going to have to make a part three to this, because I'm nearing that 10-minute limit I put on myself. I think that's a limit on some platforms, too. But I'll be right back with part three.
