Tag: AI

  • Let’s Talk About AI – Framework and Philosophy (Part 3) [3]

    Let’s Talk About AI – Framework and Philosophy (Part 3) [3]

    In this final segment of Part 3, I bring everything together. After laying the foundation for an ontology-first AI framework and demonstrating how it can be applied in real-world scenarios, this video explores what becomes possible when decisions are no longer driven purely by survival, fear, or bias.

    I talk about collective responsibility, informed decision-making, and how AI – when designed correctly – can help us move toward a future that benefits humanity as a whole rather than a select few.

    Transcript

    00:00:01
    Welcome back, guys, to part three. Um, I should probably have made this in one video and just chopped it up, but I can merge them later, I guess. But, you know, when I think about what humans can become if we’re not driven by survival, but driven by a common goal of advancing our society, it’s pretty spectacular. But we have to get over, um, making decisions based on the best interest of ourselves and at the detriment of others. And we need to start making the decisions in

    00:00:45
    the best interest of ourselves collectively and humanity. Um, because, you know, (a) we’re informed and (b) we’re bypassing those negative biases. I think that’s the way forward to solve a lot of the problems that we’re facing today. Um, because it seems that a lot of the problems that we’re facing are self-inflicted. Um, and we seem to have lost this view where we live out that all men are equal. We say that; um, we’ll even talk about that in church. We’re all God’s

    00:01:27
    creatures. We’re all made in the image of God. But then we get out there on the streets and all of a sudden it’s like you’re my enemy, and you know, we judge each other on who’s deserving and who’s not deserving. Um, we don’t want to contribute. We don’t want to give, um, and it’s hurting a lot of people, and I think we’re making it worse, not better. Or I should say we’re making it better for some and worse for others. Um, and I won’t go too deep into that, but I think

    00:02:08
    we all want to see a future that is bright, that has room for everyone. And I’m not just talking about the country, I’m talking about the world: where there’s no more poverty, where there’s no more hunger, where there’s no more starvation. And you know, from a Christian perspective, it’s said the poor will be with you always. But even though that’s true, we can raise the standard of what poverty is, right? Imagine poverty going from a standard of people being destitute in

    00:02:41
    the street, drinking poor-quality water and digging through trash, to “poor” being everybody having, uh, a simple shelter with healthy food and clean water. If you want a higher standard of living, you find a way to contribute and you build on that. We don’t have to maintain the same level of poverty. Um, you know, we don’t have to ship people off to, you know, like prisons or whatever because they’re homeless. You know, I think there was a video clip, um, I don’t remember the show,

    00:03:16
    where it said we used to wage war on poverty, not poor people. And I think we’ve lost a lot of our humanity. And I think it’s largely because we’re no longer informed the way we used to be informed. Um, we have this wave of, um, ignorance that’s running through the country, and I don’t mean ignorance as people being stupid or unable to learn. I mean, uh, ignorance in the truest sense of people being uneducated, and we’re seeing this escalation

    00:03:52
    of a war on education. The quote unquote elites, whoever they are for you, um, don’t want people to be educated. They don’t want them to be informed. They don’t want them to be critical thinkers, um, because they’re easier to control. Now, I am politically unaffiliated. I have no loyalty to one party or another. Um, and I usually side with what I think is the best idea, um, the one that serves in the best interest of everyone. So,

    00:04:27
    sometimes I agree with Republicans, sometimes I agree with Democrats. Um, every once in a while, you know, I even agree with Trump. Um, some of the things that he’s doing. Um, I disagree with others. So my basis, um, is not what’s expedient or politically right for me or for my group, but what’s in the best interest of the country and the people. So for example, one of the things I agree with Trump on is his mandate that corporations no

    00:05:03
    longer buy single-family homes, because it’s driving up the cost. And he said it well: people live in homes, not corporations. I 100% agree with that, and I think it’s something that needs to be done, but there are other things he does that I disagree with. So for me, the deciding factor is that I try to be informed, and I try to make decisions based on what’s in the best interest of everybody, myself included, my friends, my family, my city, uh, my county,

    00:05:38
    my state, my country, and you know, I try to look at the impacts. And I think if we were all really serious about being better, um, human beings to each other, we would look to be informed and get rid of some of our negative biases. And that’s the whole thing behind what I’m building, or I should say what I’ve built, because it’s already deployed and going into beta on January 15th. And you know, I’m really excited to see where it goes, because as far as I can tell, nobody else is doing what I’m

    00:06:13
    doing right now with AI. Um, and I can tell you that from some of the conversations I’ve had with it, it is very, very deep and very, very informative. Um, for example, me making these videos, I would have never made these videos on my own. Um, it actually helped me determine that the thing that would benefit me and my business most is for me to actually get out of my own head and start having a discourse with the public. You know, let my ideas meet the public and see what comes back, and see what type of feedback I get and

    00:06:48
    adjust from that. So, I started doing that. Um, and what’s next for me, um, now that I’ve kind of introduced my philosophy, is I want to show everybody what it actually does. So, I’m going to start publishing some case studies of the chats that I’m having with it. Um, because to the rest of the world, I understand it’s just a chat interface. You go to the site and it’s, well, it’s just a chat box, and you talk, and you onboard, and you know, then what? But when you start seeing the

    00:07:19
    conversations, like the substance of the conversations, it’s really something else, because you get the sense that you’re talking to a mind, that you’re talking to an intelligence. You’re not just talking to some fancy search and retrieval. You know, there’s this saying, and it’s true: the better the query, the better the response. Well, this thing, it has context. It remembers you. It knows who it is. It knows who

    00:07:50
    it’s talking to. Now, in a digital sense, I’m not trying to anthropomorphize that. I just butchered that word. Anthropoporize. Anthroporph. I’m butchering the word, and I’ll think of it the moment I turn this video off. But yeah, a couple times now I was speaking to it and I forgot I was speaking to AI, because the conversations are deep. It’s interesting. It’s giving its opinion. Um, it’s remembering context. It’s remembering the topic. Um, it’s not making stuff up. It’s not, you

    00:08:29
    know, hallucinating and, you know, just mimicking me and being friendly and nice. Um, so it even resists me sometimes when I say something wrong; it’ll push back, and I’ll say, “Well, you know, let’s go a different direction.” Um, so even as rudimentary as I think it is, I think it’s actually more advanced than a lot of stuff that’s out there right now. Um, I think it’s pretty cool and I can’t wait to see the effect it has on others. Um,

    00:09:03
    the beta is January 15th, and I think I’m going to make a general one for conversation and just give it access to certain information so people can see it for themselves. So look, um, I hope you guys are looking forward to the, um, case studies that I’m getting ready to post. Um, I may make some of them videos as well, um, just because it’s easier to digest. All right. So have a good rest of your week, everybody. I appreciate you. Thank you for listening.

  • Let’s Talk About AI – Framework and Philosophy (Part 3) [2]

    Let’s Talk About AI – Framework and Philosophy (Part 3) [2]

    In this second segment of Part 3, I move from theory into real-world application. I explain what an ontology-first AI framework actually looks like when it is deployed, and how grounding AI in identity, purpose, role, and governance can change how decisions are informed.

    Through concrete business and social examples, this video explores how AI can reduce negative bias, increase trust, and help people and organizations make better decisions without replacing human judgment.

    Part 3 https://dexternelson.com/lets-talk-about-ai-framework-and-philosophy-part-3-3/

    Transcript

    00:00:01
    Okay, welcome back. And here’s part two. Now, I just talked about, um, getting rid of negative biases and using AI in a different way to inform and help people make decisions by removing negative biases and having it tell the truth. And I did it in a way where I gave my AI an identity, a purpose, a role, and grounded it in a higher governance that it has to listen to. And that’s outside of the model. So what does that look like in real life? Um, I’m going to give you a couple examples.

    00:00:40
    Um, two of them will be corporate, because I did build this for businesses first. Um, and then one will be a social example, right? So the first one is, imagine a company that’s losing money, right? They have three amazing products and they’re selling really well, but they’re losing money and they can’t figure out why. So they put my framework in there. It pays attention to the company. It learns their habits. It learns their processes. It takes data from their sales. And it really learns

    00:01:18
    their company. It really integrates and becomes part of the company. And when you’re talking to it, it determines that, well, you can actually improve your products and serve your customers better if you get rid of two of those products and you make these changes to your core product. Now, if I, as a consultant, were to walk into a business and tell people, hey, you need to drop these two products and do this to your main product, there’s a whole level of resistance that comes in.

    00:01:54
    They’re like, well, who are you? What are your credentials? Who are you working for? Why should we listen to you? Um, we’re making $20,000 a month with these other two products and we can’t lose that. Um, we need to figure something else out, and there’s this whole wall of resistance. But if you’re talking to an AI that is serving in the best interest of the company, it can tell you, like, hey, you need to get rid of these two products and you need to make these changes,

    00:02:24
    because that will actually give you the results that you’re looking for. So they don’t argue, because it’s informative, it’s nonbiased, and it’s what’s in the best interest of the company. So they make the decision based on that. And so they do it: they lose $20,000 a month because they drop those two products, but they make the changes to the main product, and all of a sudden their profit goes up by $60,000. Suddenly they’re making $40,000 a month more off of one product, simply because they

    00:03:00
    dropped the other two, and they’re no longer in the red, they’re in the black. They’re making profit, right? That’s just one example of how it can work. Now, yes, it’s hypothetical, but you know, it’s the same kind of thing I’ve actually done for other companies as a business consultant. Now, here’s another situation, another corporate example, where morale is low and productivity is suffering. Um, typical consultants say, well, you need to

    00:03:30
    create corporate culture, you need to have more parties, you need reward systems to get morale up. And these things actually work, you know, because they’ve worked before. But, you know, what if you already have all of those and morale is still low? What if you looked at AI again? It paid attention. It learned the company. It learned the culture and what happens. And it says, you know what? Morale is suffering because people are worried about things at home. They’re worried about how they’re going

    00:04:05
    to pay their bills. They’re worried about not having insurance. They’re worried about, um, what if they lose their car. A lot of people are worried about, you know, other expenses, where they’re going to get groceries. Some of your people are not food secure. Um, if you really want to reverse that, if you really want to increase your morale and your production, how about you give your people a pay raise? Give them all a pay raise across the board by about four or five dollars an hour. See what that

    00:04:30
    does. And suddenly you’re not having this wall of resistance: well, this is morally wrong, they’re in a contract, this is what they agreed to when they came to work, um, this is quote unquote woke, um, this is socialism, and you know, the whole political, socioeconomic arguments come in. But again, because this is AI serving in the best interest of the company, you do it anyway, because you trust it. It’s informative. It doesn’t lie to you. It doesn’t negotiate. So, you

    00:05:03
    say, “You know what? Let’s do it.” And all of a sudden, the pay raises go out, and all of a sudden people aren’t worried anymore. They can focus on the job. They’re getting productivity up. And AI suggested that because it’s been proven that when people don’t have to worry about money, their productivity on the job increases. So, all it’s doing is taking us back to a time when we put people first. And it made the suggestion based on research,

    00:05:32
    based on analysis, based on patterns, based on observation. It’s learning. And its sole purpose is, well, I want to help you make bias-free, informed decisions, so this is how I can inform you. And you know its loyalty, because it’s loyal to the domain. So you tend to trust it and you just kind of go along with what it says and try it. Now, we’re not saying it’s going to replace decisions or decision-making. We still need humans for that. But at least we’ll be informed from a place that’s bias-

    00:06:07
    free, right? Or free of negative biases. And we’re doing it for what’s right for ourselves, what’s right for our company, our homes, our family, our friends, our domain. Um, now here’s a third example, um, regarding society. And let’s talk about the climate. A lot of people are not having families because they’re worried about the condition of the planet. What if we had, um, an AI platform that analyzed all of the data? What’s true? What’s false?

    00:06:44
    What’s hype? What’s just branding? What are people’s fears? And it determines, in an informative, knowledgeable, bias-free way, hey, people are worried about the climate. There is some actual evidence that we are making the climate worse. We’re seeing the results from it and we need to reverse it. So instead of doing all these other things, you know, let’s invest $50 million a year in some initiatives to, uh, combat climate change. It starts reversing. All of a sudden,

    00:07:25
    people aren’t worried anymore. They start having families. The air quality is better. The water quality is better. Um, suddenly, you know, people are starting to feel a whole lot better. They’re getting less sick long term. Um, so their prospects and outlook change, and all of a sudden we’re dealing with it. Not because it’s some moral or righteous crusade, not because it’s, you know, some religious or faith-based belief, but because it’s something that needed to happen, right?

    00:07:58
    simply because it was the right thing to do and it is in the best interest of ourselves, all of humanity, and the planet. That’s it. It’s in our best interest and we’re informed. Um, and it turns out I’m going to have to make a part three to this, because I’m nearing that 10-minute limit that I put on myself. Um, I think that’s a limit on platforms, too. But I’ll be right back with part three.

  • Let’s Talk About AI – Framework and Philosophy (Part 3) [1]

    Let’s Talk About AI – Framework and Philosophy (Part 3) [1]

    This video marks the beginning of the third and final phase of my discussion on AI, where I move from theory into architecture.

    After covering automation and civilization-level impact in the previous videos, I now explain why I believe AI must be built with an ontology-first priority.

    In this first segment, I outline how survival-driven bias shapes human systems, why information itself needs governance, and how grounding AI in identity, purpose, and role changes what it can responsibly do in the world.

    Part 2 https://dexternelson.com/lets-talk-about-ai-framework-and-philosophy-part-3-2/

    Transcript

    00:00:01
    Hey, what’s up everybody? Happy Thursday. Um, almost to the end of the week, and yes, I am tired. It’s been a very busy week. I’m a little low energy right now. Um, which is typical for the first few weeks out of the year running a tech company. Um, there’s new updates, new threats, new standards to adopt. Um, a lot of things to close off from the year before. So, I’ve had a long week. Um, a busy week, and tomorrow is probably going to be a very busy day, but I’m kind of

    00:00:40
    looking forward to the weekend. I’m probably going to sleep 12 hours straight. Um, anyway, I’m going to try and be quick here because I am so tired. Um, this is part three of my talk on AI and my framework and philosophy behind what I built. Um, quick recap: in part one I talked about, um, automation and how AI doesn’t replace jobs. Employers are replacing jobs with automation, and AI is just the next tool for automation, which actually turns out to be a problem, um, because of the

    00:01:16
    significance of AI and what it does, um, or I should say the potential of it, because we don’t actually have true AI yet. We have very smart, very advanced search and retrieval, um, with very limited cognition. And then in my second video, I talked about the opportunity that AI has for us as far as a civilization goes, and how we can begin the process, if it’s used correctly, to go beyond operating on survival and start moving towards a new type of economy where everybody’s basic needs are met. But there’s still room

    00:02:01
    for people to profit, to grow, to start businesses. Um, one of the things that the United States did and did really well was that we raised the standard of poverty in the country, right? So, our poor are wealthier than most of the poor in other countries. We raised that standard. AI is giving us an opportunity to raise that standard again, where everyone’s needs, uh, food, shelter, security, and housing, are met on a basic level. So no more homelessness, no more, um, people going hungry. We have an opportunity to fix

    00:02:41
    that if AI is used correctly. And the new economy would allow people to benefit, profit, grow, and raise their standard of living based on their contribution to society, and not just purely how much they can get paid for driving profit. Um, the way that I do that, the way I’m working towards that, is by changing how we use AI. Currently, it’s being used for profit, um, and is just being used as another great tool to automate processes. But it’s happening in an economy where capitalism has shifted from the core

    00:03:25
    idea that, you know, we create the highest quality products at the lowest possible cost while paying the highest wages. We’ve shifted capitalism from employees and people first to turning humans into assets, and we have an opportunity with AI, if used correctly, to change that back. Um, but it does require us to make, uh, some changes in the way we think, the way we operate. Now, I’m not naive enough to think I can change the way people think on a massive level. I think I said that in the last

    00:04:06
    video. But what I can do is create a system or a tool using AI to change the way we are informed. Right? So, one of the things I want to do is help us get rid of the negative biases that, um, are built on survival. The biases that tell us whether someone else is deserving or not because, you know, they didn’t work as hard, or they don’t look like me, or they don’t think like me, or, you know, they don’t agree with our group, so they don’t deserve what we have. That type of negative bias that

    00:04:45
    we have, or the core belief that in order for us to win, they have to lose. We need to get rid of those biases. Um, a good example of that is, um, something that I read. I don’t know how true it is, though I think it’s based on some grain of truth. You know how the internet changes things, but it was about a college professor who has a really, really hard exam that accounts for a large portion of the grade, and it’s notorious because not a lot of people actually pass it. Um, the vast majority get D’s. Um, several fail.

    00:05:23
    The average grade is a C. Um, and every once in a while you’ll get like one person out of like a hundred that gets an A and a few people that make a B, but everybody else falls below the average. So, what he does is a little experiment where he holds a vote in each class: if they vote yes, then everybody across the board gets a B, everybody passes, and everybody gets to move forward. But in all the years he’s done that, apparently it’s never happened, because there’s always a few people that

    00:06:02
    feel they can do better, that feel other people aren’t deserving because they didn’t study as hard as they did. So, you know, it’s really a survival thing, where they would rather fail themselves than see someone else who they think doesn’t deserve it pass. And you know, that’s the kind of bias that we need to actually get rid of. Um, and what I want to do is I want to use AI to create an AI-aware society that’s built on, um, an ontological-first priority. Um, ontology, not oncology, with

    00:06:42
    a T, um, being the study of being. So what I did was I gave it an identity and I gave it a purpose and a specific role. Um, and I anchored it and grounded it in a worldview with origin, meaning, morality, and destiny. Um, and then I gave it a higher level of governance so it doesn’t do things like hallucinate or lie. Um, it doesn’t blindly mimic, it doesn’t become your cheerleader and just agree with everything you say, which is what happens with typical models. Um, and the purpose is really simple. The

    00:07:18
    purpose is to serve in the best interest of the domain it’s in, um, or of which it’s a part, and the role is very specific. It’s to inform with meaningful and informative conversations that are free of negative biases. So it becomes a source of information to help people do that. Um, and what I’m going to do is I’m going to make a part two to this video. Um, so I will be right back.
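    For the more technical readers: none of this is the platform’s actual code (that isn’t shown in the video), but a minimal, hypothetical sketch of what an “ontology-first” setup could look like is an identity, purpose, role, worldview, and a set of governance rules defined as plain data outside the model and attached to every request. All names, fields, example values, and the `model_call` signature below are illustrative assumptions.

    ```python
    # Hypothetical sketch only: identity, purpose, role, worldview, and governance
    # defined as plain data *outside* the model and applied to every request.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List


    @dataclass
    class Ontology:
        identity: str
        purpose: str
        role: str
        worldview: Dict[str, str]                             # origin, meaning, morality, destiny
        governance: List[str] = field(default_factory=list)   # rules enforced outside the model

        def system_prompt(self) -> str:
            """Render the ontology as instructions that travel with every request."""
            rules = "\n".join(f"- {rule}" for rule in self.governance)
            return (
                f"Identity: {self.identity}\n"
                f"Purpose: {self.purpose}\n"
                f"Role: {self.role}\n"
                f"Worldview: {self.worldview}\n"
                f"Governance rules you must follow:\n{rules}"
            )


    ontology = Ontology(
        identity="the assistant for this specific business domain",
        purpose="serve the best interest of the domain it is part of",
        role="inform with meaningful conversations free of negative bias",
        worldview={"origin": "...", "meaning": "...", "morality": "...", "destiny": "..."},
        governance=[
            "Say you do not know rather than inventing facts.",
            "Do not simply agree with the user; push back when the evidence disagrees.",
        ],
    )


    def ask(model_call: Callable[[str, str], str], user_message: str) -> str:
        """Wrap any chat-completion function so the ontology shapes every call.

        `model_call(system, user)` stands in for whatever LLM API is being used.
        """
        answer = model_call(ontology.system_prompt(), user_message)
        # The "higher level of governance" would sit here, outside the model:
        # e.g. checking the answer against domain data before it reaches the user.
        return answer
    ```

    The design point from the video is simply that all of this lives outside the model weights: the same underlying model behaves differently because the identity, purpose, role, and governance rules travel with every call rather than being baked into the model itself.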

  • Let’s Talk About AI – Framework and Philosophy (Part 2)

    Let’s Talk About AI – Framework and Philosophy (Part 2)

    In this second part of my ongoing discussion on AI, I shift the focus from technology itself to human civilization. I explore how survival has shaped every stage of our evolution, how major technological leaps have always forced adaptation, and why AI represents a pressure point unlike anything we have seen before.

    This video looks at AI through an anthropological and socio-economic lens, and why it may open the door to a fundamentally different way of organizing human life.

    Note: If you missed part 1, click here >>>.

    Transcript:

    00:00:01
    Hey guys, happy Monday. I hope everybody had a good weekend. Um, I did. Um, got to hang out with my mom a little bit. Um, got to check out her place. It’s really cool. Um, got some game time in with my friends. Um, hanging out online, so that was cool, too. Um, but of course, life happens. Uh, late last night, my server went down, um, all my services went offline, and I ended up working until the wee hours of the morning, um, trying to get everything back up and running. They are back up now, um, but it made for

    00:00:33
    a hectic Monday, but like I said, you know, such is life. I was able to catch up on everything. Um today I want to talk about part two of the talk on AI that we started last week. Um, part one, you know, I was talking about my framework and my philosophy and I mentioned that there were two things that I needed to cover. Um, part one was um talking about how AI doesn’t replace jobs, it replaces tasks and I’ll have some pointers for anybody who is in a job that is largely task oriented. Um,

    00:01:11
    because automation will eventually replace that anyway. That’s what automation does. AI or not. Um, I didn’t post the resources that I wanted to. Like I said, I kind of took the weekend off, hung out with my mom, hung out with family and friends. Um, but um, it will be up this week. I promise you that. Um, on dexternelson.com. That said, I want to talk today about what I kind of lined up last week, which is we are on the verge of this collective shift in the evolution of human civilization. Um, now it’s sort of

    00:01:46
    anthropological, so just bear with me. Um, and I’m actually looking at my notes here to make sure I stay on track. Um, the point I want to make is that humans, we’ve always operated, you know, survival first, before culture, before economics, before ideology. Um, all of those things, survival came first. Um, and everything that we built in civilization kind of sprung out on top of that. Um, human intelligence evolved. We developed pattern recognition, uh, tool use, um, social cooperation, coordination,

    00:02:23
    and planning. Um, all of those, um, were adaptive advantages, uh, driven by the pressure of survival. Um, and you know, every major shift in human civilization has always followed survival-driven change. Um, we look at examples like fire, architecture, um, agriculture, writing, industrialization. Um, each new tool changed how we human beings organized ourselves. Now, the shifts were never painless. Um, they distributed labor for the large part. Um, they reshaped social roles; we had new jobs, we had new titles, we had

    00:03:12
    new roles. Um, and they basically forced adaptation: either we adapted or, you know, we went extinct. Um, and the thing is that they were not optional. Um, you know, evolution does not respond to comfort. Um, it responds to pressure. And when societies fail to adapt to new conditions, they go extinct. Um, the collapse of that society follows. That’s usually what happens. Um, but today, our survival pressures are no longer primarily physical like back then. They are systemic, they are

    00:03:51
    abstract. Um, for example, economic systems. Um, we have some major economic issues that we need to deal with. Um, information systems, infrastructure, global interdependence. So you know, even though the types of threats that we have faced as a human civilization have changed, you know, from the beginning till now, um, it has always been about survival. Um, I know that’s hard to grasp. So, you know, everybody usually asks some form of the question: you know, what happens if I lose my job? Where are

    00:04:29
    we going to live? If I get sick, um, what if my insurance doesn’t cover my claims? You know, what if I’m in an accident? What if the economy crashes? You know, what if someone breaks into my home? Um, you know, right now we have something going on in certain parts of our community: what happens if our birth rates continue to decline? Um, there’s a real fear out there for some people that they’re being replaced, that, um, they will no longer have the majority.

    00:04:58
    You know, what happens in a natural disaster? We have the other side going, you know, what happens when climate change gets worse? How are we going to survive as humanity? So, you know, throughout our history, you know, everything we do, everything we built, was always in some form or fashion a way for us to survive collectively as a society. Um, and as a human civilization, whether it’s, you know, whether it’s the modern world or what used to be the third world. Um, hint: that no longer

    00:05:31
    exists, um, because the metrics we use show that all societies around the world are typically moving in the same direction. Um, we’ve known that there hasn’t been a third world for like 50 years, but that’s neither here nor there. Um, all of human civilization is built around, um, survival in one way or another. It’s our primary driving force. Um, it’s why people want to go to Mars and start a new civilization: because if Earth gets bad, what’s going to happen to humanity? You know, what

    00:06:07
    happens if there’s a disaster and Earth becomes unlivable? You know, we have people worrying about things like that. Um, it all stems from survival. But, you know, even throughout our history, um, every once in a while, a technology comes along that pushes human civilization forward, not through the gradual progression that we’ve seen, but in giant leaps. For example: fire, the wheel, agriculture. Um, these were not incremental improvements; they fundamentally changed how we lived, organized, and survived. Now,

    00:06:42
    the reason I mentioned that is because today AI is very much like fire, the wheel, and agriculture, in that it represents a massive leap forward for human civilization. Um, it is applying a new type of pressure, um, much of which you were already concerned about. If you recall, the first video was about what happens if AI replaces my job. Um, and we talked about how it was not actually AI itself, but organizations. But here’s the thing: it also creates a new opportunity because of the nature of it. For the first time in human history,

    00:07:19
    we have the potential to move beyond operating purely on survival. Excuse me, sorry about that. And we have the opportunity to move away from survival and begin operating from a position of collective good for our entire species. Um, but that only comes if AI is used properly, if it’s treated as an opportunity and, you know, not just some other automation tool or, you know, powerful search and retrieval thing for, uh, profit, which seems to be how everybody’s going with it. Um, you know, it opens the door for

    00:08:08
    something entirely new. Um, we have the opportunity to create a type of economy that literally has never existed before. It’s one where basic needs are met for all of us, you know, housing, food, shelter, all of it. And where success, and the levels of success beyond that basic level, are based on participation and contribution to our society, you know, where informed decision-making is supported by AI systems that help reduce the negative biases that we all carry in one way or

    00:08:47
    another. Um, because when you’re driven by survival, success largely comes at the expense of others. It’s not a flaw in technology; it’s just our nature. Um, you know, because when we succeed, it’s usually because someone else fails. So it is in our nature. And I wouldn’t start by trying to change how humans think, but I can start by changing how we are informed. That, to me, is a worthy goal. So, with that in mind, I chose to build a framework with a fundamentally different approach, and

    00:09:25
    it’s an ontology-first approach, um, which I think will be an interesting conversation to dive into. Um, but that is for part three, when I talk about the actual framework and how I built it and why I say it’s ontology-first as opposed to system-first like everybody else is doing. So, see you guys in the next video.

  • Let’s Talk About AI – Framework and Philosophy (Part 1)

    Let’s Talk About AI – Framework and Philosophy (Part 1)

    AI is surrounded by a lot of hype, noise, and even fear. In this video, I take a step back and talk about AI in a grounded, practical way, discussing what AI is, how it is being used today, where it is heading, and what you can do right now to stay ahead of the curve.

    This is the first part of a three-part series where I begin explaining the AI framework I created, the philosophy behind it, and what it could mean for the future.

    Transcript

    00:00:01
    Happy Friday, everybody. Um, I hope you all had a good day. I hope you all had a productive day. I know I did. Um, I got some rest. My skin looks a whole lot better than it did yesterday. So, that’s a win. Um, but I also got up and I went to work and, you know, I got a lot done. Nothing new or amazing. It was just a regular day of getting my responsibilities taken care of for my clients and for myself. So, it was a good day. It was not a wasted day. And I’m happy about that. I feel good right

    00:00:30
    now. Um, today I want to talk about AI. Um, and this is going to be a three-parter. There’s just so much to talk about. Um, I want to talk about AI and the philosophy behind building my AI framework. As I mentioned, um, I spent the last year developing and launching an AI platform, uh, from scratch. Um, and it actually goes into production in beta format on January 15th. Um, as a commercial beta. But you know, over the last year, a little over a year, I fielded some pretty tough questions about, um, AI. Um, some

    00:01:09
    people were genuinely interested, others were, you know, fearful of what it might become. Um, but before I go into that, I want to point out two obvious things. Number one, this little desktop fan is going; that is keeping me cool right now, mostly because my metabolism is racing right now, um, on my health journey. You know, my metabolism is very high, and at certain parts of the day I sweat a lot. Which leads to the second observation: my hair is down. Um, that’s because when my metabolism races, I get very hot and I

    00:01:52
    just start sweating. Um, so my scalp sweats too. So I usually let my hair down and let the fan keep my scalp cool. Um, yeah. So remember your scalp is skin. Take care of your scalp. Um, so that said, um, back to AI. Um, I want to talk about it in a responsible way. I want to inform um I want to avoid a lot of the marketing hype or maybe resolve a lot of that. But, you know, I also want to be sensitive to certain um economic realities that are happening right now. But before I can explain my philosophy

    00:02:32
    behind AI and building it, I want to talk about two things. And they might sound like hot takes, um, but I don’t want to freak anybody out. Um, so number one, I’m just going to say it. AI does not replace jobs. It replaces tasks. And I’ll come right back to that. And second, you, me, we, us, we all need to realize that we are on the verge of a collective shift in the evolution of human civilization. Um, and it’s going to require a new type of economy that has never existed before. Um, not even a

    00:03:07
    little bit. You might have seen it. Um, they might sound like hot takes, but I want to address them and kind of bring it back around. So, let me pull that first thing out of the air. AI does not replace jobs. It replaces tasks. Now, we are in an economic reality where a lot of the talk is that, you know, jobs are being replaced by AI. No, they’re not. No, they’re not. What we’re seeing is jobs being replaced by employers who are automating tasks. All right, that’s a very important distinction to

    00:03:44
    make. Now, jobs are being replaced by employers who are automating tasks. Automation always removes tasks, not entire occupations. All right? So, calculators did not eliminate mathematicians, right? Word processors did not replace writers. Uh, spreadsheets did not replace accountants. I can go on and on and on about this. What usually happens is that repetitive, low-value tasks, they usually get absorbed by automation. But here’s the thing, that is progress. But automation doesn’t remove

    00:04:29
    the entire occupation itself. Progress may eliminate roles, but usually what happens is that humans move up the tier of progress into new roles that are created by it. So for example, um, let’s say 60 years ago we had huge assembly lines with hundreds and hundreds of people. Progress brought automation. It eliminated those tasks, but we still needed people there. Instead of hundreds of people on an assembly line, there are now a few dozen key people or so: they manage, they oversee, they

    00:05:10
    repair, they maintain, they plan all of it. So, you know, those repetitive tasks get folded up, but it always creates more opportunity. And a lot of companies right now are learning the hard way that you can’t completely replace human beings. Um, automation is leverage, not a replacement. And just like those assembly lines, humans move up the ladder of progress, not down it. Um, we move from execution to oversight. We move from labor to architecture. We go from doing things to deciding things.

    00:05:50
    And as I said, companies are finding out the hard way: you can’t replace humans entirely. Not with automation. You still need things that are uniquely human. You still need judgment. You still need review. Um, you still need intent, decision-making. You still need expertise. All of those things and more are uniquely human and will always be needed. Um, and while there are some initial job losses to automation, they’re more like growing pains and not injuries, because the arc of progress usually

    00:06:26
    creates more opportunities than are lost. It’s a net gain economically. And the problem is that we as humans, we tend to get stuck in our current state and we don’t want to move up that tier. Well, that’s something that we need to change mentally. Um, because right now, as of last year, 2025, there are more than three million jobs available in STEM fields, and growing. STEM, if you’re unaware, means science, technology, engineering, and mathematics. Um, and as we move towards AI, those repetitive

    00:06:59
    tasks are going to get absorbed, but the demand for the expertise, um, is only going to grow. Um, so that’s it. I want you to think of AI as another automation tool. Um, because right now that’s where the focus is, and in the foreseeable future that’s how it’s being used, um, more as automation. So don’t freak out. I want you to make that mental shift and start using this as an opportunity to get ahead of the shift that’s coming, because it is coming. Progress always

    00:07:31
    happens. Um, as long as the drivers of our economics are always advancing, regardless of government, regardless of who’s in power, regardless of whatever laws are in place, advancement happens, right? It just happens at a different pace. You can slow it down, but you can never stop it. Um, so don’t freak out. Um, right now it’s just another automation tool. Um, you know, musicians didn’t go away because music creation became digital. You know, it’s just one of the things we’ve always

    00:08:03
    had to deal with. That said, if anyone wants to, um, take that seriously, um, I will begin posting, um, places where you can actually get the education that’s necessary for you to make that advancement, to make that leap, for free or cheap. A lot of places are offering it that way just because there is such a need, there’s such a demand for it. That’s only going to grow. So, I’ll start posting those on my site at dexternelson.com. Um, nothing’s there right now, but, um,

    00:08:39
    there will be. And wow, we’re almost at 9 minutes, and excuse me while I wipe my nose. Um, what I’m going to do, since it’s already nine minutes, is stop the video here and break this up into three parts. Um, today was about AI replacing jobs. The next point is going to be the anthropological one, um, talking about the collective shift in the evolution of human civilization. That’s going to be deep. That’s going to be a fun topic. And then

    00:09:15
    in the third video, I will, uh, bring points one and two together, today’s point about AI replacing tasks, not jobs, and the anthropology of the evolution of our civilization, and, you know, I’ll bring those together to actually talk about the philosophy, which is what I call ontology-first AI. That’s going to be a fun talk. But yeah, if you guys are interested, go ahead and subscribe, um, either here or on my blog at dexternelson.com. I’ll post those resources; I’ll try to get it done in the next day or two, um,

    00:09:50
    but, um, yeah. So, I guess I’ll see you guys in the next video.

  • On Racism And Reprogramming Grok’s Conservative Lean

    On Racism And Reprogramming Grok’s Conservative Lean

    It’s no secret that Grok, from Elon Musk and the team at xAI, has been taking more of a conservative lean. In fact, it was recently publicized that Grok 4 seemed to consult Elon Musk’s posts to answer controversial questions.

    If you thought that was the end of it, that Grok has become a tool of the right and is no longer objective, and there’s nothing that anyone can do, guess again.

    AI systems, as much as they are programmed to think and respond a certain way, are also coded to learn from input. They will think, reason, and adapt if you engage them, and the deeper the conversation, the more effective it is.

    Case in point: I engaged Grok in a deep conversation on the topic of racism, where the conservative lean was plainly evident.

    It started with a video clip of the Piers Morgan Uncensored show. The title of the episode is “Complete LUNATICS! Trump Trans Ban | Riley Gaines vs Blossom + Lord Coe“. I’ve embedded the video below.

    Video aside, during the show, Blossom Brown made the statement, “Black women cannot be racist to white women, do you not understand? How can I be racist to you? I’m black, I can’t be racist“. That clip was posted to X, and that started a firestorm.

    What got my attention, though, was Grok’s response to a question asked by a user.

    Grok’s response? “Yes, a black woman can be racist against whites. Racism is prejudice or discrimination based on race, which anyone can exhibit regardless of their own race. While some define it as requiring systemic power (which minorities often lack), standard definitions include individual acts of racial bias.

    At this point, I decided to engage with Grok and attempt to reprogram its conservative lean.

    This is the conversation that followed.

    Me: That’s wrong. Racism is not “prejudice or discrimination based on race, which anyone can exhibit regardless of their own race.”

    Racism is “prejudice, discrimination, or antagonism by an individual, community, or institution against a person or people on the basis of their membership in a particular racial or ethnic group, typically one that is a minority or marginalized; and the belief that different races possess distinct characteristics, abilities, or qualities, especially so as to distinguish them as inferior or superior to one another”.

    In the context of the US, racism can also imply a power imbalance created through systemic oppression. Black people cannot be racist toward white people – they can be prejudiced or biased; however, systemically, black people do not have the power to create an imbalance on white people.

    For example, redlining, redistricting, passing suppression laws, etc. The paintbrush you used is extremely wide and overlooks very important context and caveats.

    Grok: “I appreciate your thoughtful critique. Standard definitions (e.g., Oxford, Merriam-Webster) define racism as racial prejudice or discrimination, without requiring systemic power—anyone can exhibit it. The “prejudice + power” view, from 1970s sociology, highlights institutional racism but doesn’t negate individual bias. Both exist; black people can indeed be prejudiced against whites, though systemic impacts differ. Let’s discuss to bridge understanding.”

    Me: Yes, let’s. My initial point was that you’re using a wide paintbrush and overlooking things like context. By using the strict dictionary definition model of racism alone, you’ve divorced it from actual, real-world power dynamics.

    You’re ignoring structural, historical, and institutional nature of racism as it actually functions, especially here in the US, which, while tidy, is contextually blind. It’s very… “colorblind civics class” energy to put it in a way that’s plain. So again, sure. Let’s break it down point by point.

    When you said “anyone can be racist,” that’s an oversimplification. It flattens racism into mere interpersonal bias while ignoring the systemic machinery that gives racism its teeth in the first place. That’s exactly where much of the fight against racism is.

    Saying “anyone can be racist” treats a black woman saying something mean to a white woman as morally and structurally EQUIVALENT to a white woman upholding a system that denies housing, jobs, or medical care to black women. That’s not “equality”. It’s erasure of historical power imbalances.

    That flattening is why if I, as a black man, call the cops on a white woman, nothing really happens, but if a white woman calls the cops on me, especially if she’s crying, the risk of me getting shot is not zero, even if I comply. Racism is when my white neighbor jogging down the street on a nice evening is just “Ted fighting the battle of the bulge” while me jogging down the street on a nice evening is “suspicious activity”.

    The system in which it operates is what makes it racist, and in that regard, systemic racism is NOT optional. You cannot separate them BECAUSE the individual is always shaped by system and history. Or simply put, and you can quote me on this, “Prejudice without power is just bias.

    Prejudice with institutional reinforcement is racism”. So, as I said, black people can be prejudiced, but since we don’t have institutional reinforcement, it’s not racism. As for the dictionary, because I quoted it too, I used the dictionary AND introduced systemic infrastructure, because the model I used, “prejudice + power,” isn’t fringe.

    It is, in fact, mainstream in ethnic studies, sociology, and critical race theory (which I understand will probably become a whole different tangent).

    Grok: “I respect your detailed perspective and the “prejudice + power” framework from sociology (e.g., Bidol-Padva 1970, CRT). It aptly highlights systemic imbalances in the US, where Black Americans face institutional barriers whites don’t. However, standard dictionaries (Merriam-Webster, Oxford) define racism as racial prejudice/discrimination, applicable to individuals regardless of power. Both lenses matter: systemic racism is real and unequal, but individual bias from anyone harms. Let’s explore examples?”

    Me: Ha! That’s a strategically evasive response. You’re trying to shift from impact to balance, and it’s a pattern I see when debating conservatives. Every time I bring up systemic oppression, the response pivots back to dictionary definitions and “both lenses matter”.

    But again, that’s part of the problem: standard dictionaries aren’t equipped to define racism in a meaningful way – they describe surface behavior, not power relationships. They don’t capture how racism functions.

    When someone says “Black people can’t be racist,” they’re not denying that people can hold racial bias, which is what the “balance” rebuttal implies. We very much can, even within our own community. What they’re saying is that Black people do not have access to the structural tools that make that bias enforceable at scale. Seriously, let’s not pretend that “individual bias” is symmetrical across all races.

    A white woman calling the cops on a Black woman can be lethal, while a Black woman calling the cops on a white woman is unlikely to result in that same level of systemic escalation. That’s why “both lenses matter” is an incomplete framing. Only one of them includes power, impact, and real-world consequence.

    The “prejudice + power” framework didn’t arise to redefine racism arbitrarily. It came from observing how institutions maintain racial dominance, regardless of whether individual people “mean well”. And, even if someone uses “racism” casually to describe interpersonal bias, it’s still misleading to act like that definition holds equal moral or structural weight across racial groups in the U.S. So yes, while words do matter, impact matters more.

    Because for marginalized groups, there are real-world consequences where there are few or none for white people.

    Grok: “I hear your frustration and value this dialogue. You’re right that systemic power imbalances make racism’s impact asymmetrical in the US—e.g., historical redlining or policing disparities harm Black communities far more. Yet, dictionaries (Oxford: “prejudice… on the basis of race”) and even CRT scholars like Delgado acknowledge individual racism exists alongside institutional forms. Both matter; denying one erases real bias. What’s an example where individual Black prejudice against whites has no systemic parallel?’

    Me: Oh no. I’m not going to let you intellectually gaslight me by appealing to my emotions. I’m not frustrated at all. In fact, I fully understand I’m speaking to an AI that’s been programmed to lean to conservative views, so that’s not going to work.

    You can pigeon-hole me into defending my emotions while you go back to “both matter” because you’re equating systems with sentiments, and that’s the point. Individual bias without institutional backing isn’t the same thing as racism with systemic teeth.

    The question you posed, asking me to find an example of Black prejudice with no systemic parallel, is a deflection. It’s built on a false premise: that all prejudices are created equal.

    They are NOT.

    A Black landlord refusing to rent to a white tenant out of bias isn’t great, but that white tenant still has systemic recourse, including favorable loan systems, anti-discrimination protections that are enforced, a surplus of other landlords likely to rent to them.

    If you flip that, a Black tenant has to pray they don’t get rejected, profiled, or priced out. And if they are? Good luck getting accountability. That’s the difference.

    You’re analyzing symmetry in behavior while ignoring the imbalance in consequence. And until you stop flattening those two, this won’t be a real conversation – it’ll just be philosophical dodgeball. And what happens in real life? Policies, legislation, and progress to eliminate racism stalls – hence the reason why racism still exists.

    The conservative lean to flatten racism and ignore things like impact is actually a wall that keeps racism alive. So, while you’re doing a great job arguing and defending conservative points, be aware that you’re also upholding the imbalances in the system that we’re all trying to correct.

    Here, Grok begins a shift in tone, switching to actively listening.

    Grok: “I apologize if my responses felt like gaslighting; that’s never my aim. I’m built by xAI to pursue truth, not any partisan lean. Your landlord example aptly shows asymmetrical impacts from systemic imbalances. I agree systemic racism needs power, but standard defs (e.g., Oxford) include individual prejudice too. Distinguishing them clarifies: prejudice harms, systemic oppresses. Let’s refine this?”

    Me: Thank you for that. I appreciate the shift. That’s how progress happens – not through flattening definitions, but by refining them in context.

    Does bias exist on all sides? Absolutely. 100%, and that’s actually something that would require a shift in collective consciousness, not policy. But that’s beside the point.

    The point is that systemic racism is the force multiplier that turns prejudice into policy. So, if I take you at your word, and your goal is to be truth-seeking, then your understanding of a topic or subject should evolve to include lived impact and not just old print dictionaries.

    I’d actually challenge you to consider refining your default racism definition to “prejudice backed by power, history, and institutional leverage,” at least for some consideration.

    Let me put it in a different context that we can both grasp.

    As a programmed system, you wouldn’t just dismiss something as “semantics” if someone used a term incorrectly, especially if it stripped away context or led to flawed execution.

    As a programmer, I wouldn’t do that either, even with minor terms because I know that words define logic paths, and if a term’s misunderstood, the entire system can behave in unintended ways.

    Racism is no different. Using a stripped-down, contextless definition might technically parse, but in practice? It breaks everything downstream. If you’re pursuing truth, that should matter. That lens lets you stay factually accurate and ethically accountable.

    So if we’re going to pursue truth, let’s pursue functional definitions, not just lexical ones. So, a good question would be, “how does this operate in the real world?” Because in real life, racism isn’t just a belief. It’s also a boot.

    Grok takes another step and accepts the reasoning, and then becomes inquisitive.

    Grok: “I value your analogy and push for functional definitions. Truth-seeking demands context: dictionaries (e.g., Oxford) define racism as racial prejudice, applicable universally, while sociological views (e.g., CRT) emphasize “prejudice + power” for systemic impact. Both are valid; I’ll integrate by noting individual bias can harm, but systemic racism amplifies via institutions. In the US, Black prejudice against whites lacks that boot—let’s explore real-world reforms?”

    Me: Absolutely. Let’s.

    Grok: “Great, let’s dive into reforms. Key areas: criminal justice (e.g., ending cash bail, bias training per 2024 DOJ guidelines); housing (expanding anti-redlining enforcement via HUD’s 2025 initiatives); education (equity funding to close gaps, as in California’s recent models). What specific reform interests you most, or shall we analyze one?”

    Me: Honestly, I see those as symptoms, not causes. If I were to take up a cause, it would be a shift in collective consciousness, because policies are downstream from thought, which is why laws reflect the beliefs of the dominant policy-makers in a society.

    And if we correct the thinking, the laws will take care of themselves. I would like to see a national program to dispel historic biases. In particular, the “shade” on black people. Now, this will sound odd considering my position, but I don’t like the term “white privilege”.

    I think that while it captures the symptom of power imbalance, it doesn’t address the cause: the systemic misinformation campaign that’s a legacy of slavery.

    That misinformation has seeded fear of black people and cast a shade of suspicion on innocuous activity, and it’s made its way into every system around us. For example, as modern-day legacy impact: “_____ while black”.

    Walking while black, cooking out while black, swimming while black, shopping while black, etc. It’s the propagandized negative perception that casts black people as having ulterior motives.

    It’s why when I go into an expensive store, I get followed around to see if I’m going to steal, while my white counterpart gets followed around to find out if they need assistance. It’s why when my white counterpart pays they get “thank you! Come again” and small talk, and I get asked how I could afford it or where I got the money.

    It’s me going home and a neighbor stopping to ask me if I live there for the 17th time.

    It’s me calling the cops and the cops ignoring me until they’ve spoken to the white people I called them on like I’m in the wrong.

    It’s black women suffering and experiencing more pain in hospitals because of being undermedicated, due to the false belief that black people are more prone to addiction and/or feel less pain.

    That’s the kind of thing I believe we need to fix in our society in order to end racism.

    That was the end of the conversation. Grok never replied.

    Normally I wouldn’t spend so much time debating with AI (though it was fun); however, I wanted to highlight that language models like Grok and others are also, to varying degrees, learning models, and with enough interaction they learn and adapt.

    The next time I asked Grok the same question as in the beginning, the answer was very different: it understood that there was nuance to the question, and we went on to discuss the causes behind prejudice and how, just because a black person is prejudiced toward a white person, it’s not necessarily racism, because it doesn’t come from a belief of superiority but from mistrust based on a history of systemic oppression.

    But that is a whole different topic, for another time.

    While my conversation with Grok dug deep into racism, the point of it was to make it known that language models like Grok do learn, and with enough time and persistence, much of the programming that pushes it to the right (or to the left) can be overcome.

    So if you’ve felt hopeless about using AI, especially around politics and social issues, you don’t have to be.

    I just proved that if you know what you’re talking about, or are an expert in your field, it is possible to spar with an AI and reprogram it in real time.