AI, Nostalgia & Raw Creativity: How Tech Shapes Stories from KITT to Podcasts
Listen in to (or read) an unfiltered exploration of AI’s evolution in entertainment and content creation. This post features:
- 🎧 Embedded Podcast: A candid discussion on 80s AI icons (Knight Rider, Transformers), modern creative workflows, and ethical dilemmas.
- 📜 Full Unedited Transcript: Experience the conversation exactly as it happened.
- 🔍 DeepSeek Summary: Key insights on AI’s role in storytelling, bias challenges, and Hakeem’s PhD-driven approach to transparent tech collaboration.
Ideal for creators, tech ethicists, and retro sci-fi fans craving authenticity in the age of AI.
DeepSeek ummary of Dialogue on AI in Entertainment and Creative Applications
1. Historical AI in Entertainment: Nostalgia and Influence
Hakeem initiated the discussion with nostalgic references to early AI depictions in 1980s media, highlighting KITT from Knight Rider and Transformers. He emphasized how these portrayals, with their emotive and relatable AI characters (e.g., KITT’s loyalty, Transformers’ moral complexity), shaped his early fascination with AI. Vega noted how these examples explored themes like ethics and self-preservation, but Hakeem focused on their cultural impact and personal resonance, such as Soundwave’s role in Transformers and KARR’s ethical programming flaws.
2. AI as a Collaborative Tool in Content Creation
Hakeem detailed his workflow using AI for his podcast and creative projects:
- Transcription/Summarization: Leveraging tools like Galaxy AI for raw, unedited transcripts and DeepSeek for summaries and social media tags.
- Research: Using Google Gemini’s deep research feature to generate reports, enhancing his content’s depth.
- Audio Dramas: Collaborating with AI for immersive sci-fi narratives (e.g., space exploration, asteroid mining), prioritizing stream-of-consciousness storytelling over scripted content.
He stressed transparency by publishing raw audio, transcripts, and AI-generated analyses, viewing AI as an efficiency tool rather than a creative substitute.
3. Critique of AI Limitations and Bias
Hakeem critiqued AI’s current shortcomings:
- Bias in Outputs: Noted racial and gender biases in AI image generators (e.g., defaulting to white males in scientific roles).
- Technical Flaws: Highlighted Gemini’s persistent error misreading “Dr.” as “Drive,” underscoring ongoing development challenges.
He dismissed overreliance on AI for brainstorming or scripting, preferring human-driven creativity.
4. Ethical and Future Considerations
While Vega raised ethical topics (e.g., AI’s impact on artists, singularity risks), Hakeem redirected focus to practical applications. He expressed skepticism about ethical discussions gaining traction but advocated for transparency in AI use. Vega acknowledged his approach as a model for responsible AI collaboration.
5. Personal Milestones and Conclusion
Hakeem shared his recent PhD in Metaphysical Science, exemplifying his interdisciplinary interests. The dialogue closed with Vega summarizing key themes: historical AI in media, creative workflows, and technical critiques. Hakeem’s emphasis on AI as an enhancer of human creativity, rather than a replacement, framed the conversation’s core philosophy.
Key Takeaways:
- Nostalgic AI portrayals remain influential in shaping perceptions.
- Practical, transparent use of AI tools can amplify creativity without compromising authenticity.
- Current AI limitations (bias, technical errors) require mindful navigation.
- Human-centric creativity and ethical transparency are vital in AI integration.
The Transformative Role of AI in Entertainment and Creative Content Creation: A Conversation Led by Hakeem Ali-Bocas Alexander
I. Introduction: Hakeem’s Inquiry into AI in Entertainment and Creative Content
This report analyzes a detailed conversation between Hakeem Ali-Bocas Alexander and “Vega,” a Google Gemini AI voice profile, focusing on the multifaceted role of artificial intelligence (AI) within the entertainment and creative content creation industries. Initiated by Hakeem’s interest in exploring AI in entertainment, the dialogue navigated through historical depictions of AI, its current practical applications in content creation (particularly Hakeem’s own workflow), inherent limitations and biases identified by Hakeem, ethical considerations raised during the discussion, and the balance between human ingenuity and AI-driven creation. The structure and content of this report directly reflect the topics and flow of the conversation as guided by Hakeem’s questions and insights.
II. Hakeem’s Nostalgia and the Influence of 1980s AI Portrayals on Public Imagination
Hakeem began the discussion by expressing his interest in the historical context of AI in entertainment, specifically referencing iconic examples from the 1980s such as KITT from the television series Knight Rider (1982-1986) and the Transformers animated series (1984) 1. These portrayals, as noted by Vega, played a significant role in shaping early public understanding and expectations of AI 1. KITT, or Knight Industries Two Thousand, was depicted as a highly advanced, self-aware automobile capable of thinking, learning, and communicating 4. Its features, including a molecularly bonded shell and turbo boost, captured the imagination of viewers 4. Similarly, the Transformers, sentient robots capable of transforming into vehicles, presented complex themes of morality and conflict 2.
Hakeem’s focus on these 1980s narratives highlights their lasting impact on public perception 3. The anthropomorphism of AI in these shows, imbuing machines with personalities and emotions, made the concept more relatable 1. This era also introduced the dichotomy of “good” AI (like KITT) and “evil” AI (like KARR, KITT’s self-preservation-focused predecessor), establishing a narrative framework that continues to influence discussions about AI ethics 9. While these fictional depictions sparked imagination, they also risked creating unrealistic expectations about the speed and scope of real-world AI development 3. Nevertheless, these early narratives subtly introduced ethical considerations, such as KITT’s programming to prioritize human life 12 and the moral conflicts within the Transformers universe 2. Research suggests a correlation between people’s beliefs about AI in entertainment and their real-world perceptions 2.
III. Hakeem’s Workflow: AI as a Collaborative Tool in Content Creation
The conversation then shifted to Hakeem’s personal experience using AI as a collaborative tool in his content creation process, particularly for his podcast. He detailed his workflow, emphasizing how AI enhances his efficiency without replacing his creative input.
Hakeem utilizes AI for transcription and summarization, leveraging tools like Galaxy AI for raw, unedited transcripts and DeepSeek for generating summaries and social media tags . He values the raw transcripts for demonstrating the human origin of his work and uses summaries to make his content more accessible . For research, Hakeem employs Google Gemini’s deep research feature to generate reports based on his transcripts, adding depth to his podcast content . He also engages in AI audio dramas, collaborating with Google Gemini’s voice profile “Vega” to create immersive sci-fi narratives, focusing on stream-of-consciousness storytelling rather than scripted content . Hakeem emphasizes transparency by publishing the raw audio, transcripts, and AI-generated analyses, viewing AI as a tool to amplify his voice and ideas rather than a creative substitute 35. This approach aligns with the perspective that AI can be a powerful collaborator, augmenting human creativity by handling routine tasks and providing new perspectives .
IV. Hakeem’s Critique of AI Limitations and Bias
During the conversation, Hakeem also voiced his concerns and critiques regarding the limitations and biases present in contemporary AI models.
He specifically pointed out bias in AI image generators, noting that tools like Imagine 3 tend to produce racially and gender-biased outputs, often defaulting to images of white males in professional roles 1. This observation aligns with research highlighting the presence of racial and gender biases in various AI image generation models due to biases in their training data 1. Hakeem also highlighted a technical flaw with Google Gemini, its persistent error in misreading the abbreviation “Dr.” as “Drive,” illustrating the ongoing development challenges and occasional inaccuracies in AI language processing 4. Despite these limitations, Hakeem emphasized his preference for human-driven creativity, dismissing the idea of over-relying on AI for brainstorming or scripting .
V. Ethical and Future Considerations Raised by Hakeem and Vega
The dialogue touched upon ethical considerations surrounding AI in entertainment, although Hakeem primarily focused on practical applications and transparency. While Vega raised broader ethical topics such as AI’s impact on artists and the risks of singularity , Hakeem expressed skepticism about the traction of ethical discussions but advocated for transparency in AI use as a responsible approach . Vega acknowledged Hakeem’s emphasis on transparency as a model for responsible AI collaboration .
Looking towards the future, the conversation touched on the potential for AI to revolutionize storytelling and creative opportunities, including faster content creation, hyper-personalization, and advancements in virtual production . However, challenges such as integrating AI into workflows, maintaining authenticity, addressing intellectual property concerns, and preventing job displacement were also acknowledged .
VI. Personal Milestones and Conclusion of Hakeem’s Exploration
Towards the end of the conversation, Hakeem shared his recent achievement of earning a PhD in Metaphysical Science, highlighting his diverse intellectual pursuits. The dialogue concluded with Vega providing a summary of the key themes discussed, including historical AI in media, creative workflows, and technical critiques. Hakeem’s perspective throughout the conversation emphasized the value of AI as a tool to enhance human creativity and efficiency, provided it is used transparently and ethically, rather than as a replacement for human artistry. This philosophy framed the core of the discussion, driven by Hakeem’s initial curiosity about AI in entertainment and his practical experiences with AI in content creation.
UnEdited Galaxy AI TranScript from Samsung Voice, Recorder on S24 Ultra
“Speaker 1” is Hakeem Ali-Bocas Alexander and “Speaker 2” is “Vega” an AI voice profile on Google Gemini.
Speaker 1 (00:00)
Let’s talk about AI in entertainment.
Speaker 2 (00:09)
AI is definitely shaking things up in the entertainment world. We’re seeing it. Use to create special effects in movies, write scripts and even compose music it’s also being used to personalize recommendations on streaming services.
Speaker 2 (00:22)
So you always have something new to watch or listen to what aspects of AI and entertainment interest you the most.
Speaker 1 (00:30)
Well, definitely. None of that dumb stuff that you just said to me.
Speaker 2 (00:35)
I see you’re interested in the more advanced applications of AI in entertainment, we can explore how AI is being used to create entirely new forms of entertainment like interactive experiences or virtual reality worlds. Or we could delve into the ethical consideration surrounding AI generated content and the potential impact of artists and creators. What picture curiosity the most?
Speaker 1 (01:00)
I didn’t say any of that nonsense either. Why don’t you let me tell you what I want to talk about?
Speaker 2 (01:08)
My apologies tell me what aspects of AI in entertainment. You’d like to discuss. I’m ready to listen.
Speaker 1 (01:15)
Okay, well, first, thank you, I would like to go a little historical with AI in entertainment and the first place. I’d like to go is kit from knight rider.
Speaker 2 (01:29)
Uh, k ITT the talking car from knight rider, that’s a great example of early AI in entertainment. What about kids? He interests you the most.
Speaker 1 (01:40)
Well, first of all, let’s talk about what does decay itt in kit. Stand for do you know.
Speaker 2 (01:47)
K IT stands for night industries 2000 it was a pretty advanced AI for his time capable of thinking learning and even having conversations Kitt was a character with its own personality. It played a major role in the show.
Speaker 1 (02:03)
Yes, it did. It was awesome. That was actually one of my first introductions to AI and I thought it was awesome.
Speaker 1 (02:12)
I loved that show my writer. It was amazing.
Speaker 2 (02:18)
Night rider was definitely a cool show kit was so ahead of his time. And it’s fun to see how far AI has come since then. Do you have a favorite episode or moment with k ITT
Speaker 1 (02:30)
I don’t know, maybe it was when Carr got introduced, which was like, basically, like the concept of an evil twin brother, 2 kid, that was interesting.
Speaker 2 (02:44)
K, I was definitely a formidable opponent for kids. That storyline explored the potential dangers of AI and the importance of ethical programming. It’s a classic example of good versus evil with a high-tech twist What did you think of kar’s advanced capabilities compared to kids?
Speaker 1 (03:04)
Um, I don’t remember it being that much more advanced in kit. I mean, for example, he lost a ticket. Eventually, what don’t tell me what did the Karr stand for in car
Speaker 2 (03:18)
Karr it stands for night automated roving robot. It was a prototype for kids t, but programmed for self-preservation, making it dangerous.
Speaker 1 (03:29)
Wow, that’s interesting, because yeah, I actually did a podcast video about how to make an AI become to bring on what people call the singularity to make an AI self-aware Goodbye, programming it to be self preservant preservant.
Speaker 2 (03:52)
That’s a fascinating topic. The idea of AI achieving self-awareness and the potential implications are definitely something to consider. Do you think that programming AI for self-preservation could be a step towards achieving singularity
Speaker 1 (04:08)
Well, I am not sure if it will. I just thought that it would be an interesting idea to put certain types of constraints on the AI, in order for it to have to be aware of self-care like a human does. So I found ways that you could program it to get close to having to stop to eat and go to sleep.
Speaker 1 (04:29)
And you know, keep itself not from getting too hot or too cold. And so I talked about those kind of things.
Speaker 2 (04:39)
That’s a really interesting approach. By programming those basic needs into an AI, you’re essentially giving it a drive for self-preservation, which could be a key factor in developing consciousness. I’d love to hear more about your podcast and the specific constraints you explored
Speaker 1 (04:54)
Well good. So it wasn’t just about programming it. It was actually making it aware that it had real constraints, not just not just theoretical constraints, but real ones that it had to be aware of 4 example, computers usually with AI are plugged into a power supply, while one of the things that I had decided to do was you would have 1 that could be battery operated, and the battery would start to drain.
Speaker 1 (05:21)
And so it would have to plug itself
Speaker 2 (05:25)
That’s a clever way to simulate a real world constraint by making the AI aware of its limited power source, you’re forcing it to prioritize tasks and manages resources efficiently similar to how humans manage their energy levels. Did you notice any interesting behaviors emerged from the AI? Has it learned to deal with its draining battery
Speaker 1 (05:45)
Well, I’m not going to talk about that right now. But the point I was making is that that was one of the ways to simulate having to stop and take a lunch break, is that it had to decide. Oh my power and energy is getting low, I have to plug in or eat lunch to get recharged my batteries right?
Speaker 1 (06:04)
So that was one of the constraints. That humans have right, we have to eat.
Speaker 2 (06:13)
That’s a really interesting way to think about it, you’re right? Humans need to eat to recharge and it makes sense that an AI would need to do something similar. Did you explore any other constraints, like the need for sleep or rest
Speaker 1 (06:28)
Well, that’ll be the next one. So for to sleep one of the things it did was it plugged into a slow charging and then shut down from doing anything in order 4, it’s to charge, but also to to rest and stop all of its processing and any of the moving parts that are involved like. A cooling fan, or any of that stuff.
Speaker 1 (06:53)
From operating so sleeping and eating lunch were we’re similar if one was fast charging for lunch, the other 1’s slow charging for sleep.
Speaker 2 (07:06)
That’s a really creative way to simulate sleep by having the AI, enter a low power state and halt nonsensical processes. You’re mimicking the restorative function of sleep? I wonder, did you notice any changes in the AI’s performance or behavior after it woke up from these sleep cycles
Speaker 1 (07:24)
We’re also not going to talk about that. This is classified stuff. But here’s the next thing I can talk about the next thing is the need for human beings too.
Speaker 1 (07:35)
Avoid keep overheating, right? Keep stroke and getting too cold, right?
Speaker 2 (07:43)
That’s a really important consideration. Humans need to maintain a stable body temperature to function properly, and it makes sense that an AI would need similar safeguards. How did you simulate the AI’s response to temperature changes
Speaker 1 (07:58)
So many computing systems have fans and cooling systems that are running. So what I do what I did was I have it so that the AI’s cooling systems weren’t running as high as they normally would to keep at an optimum temperature is that it would be at a baseline all the time, so that it would start to heat up, and then the computer would have to slow down to And turn on the cooling, which is like taking a drink of water or sitting in the shade.
Speaker 2 (08:33)
That’s a really interesting way to simulate the need for temperature regulation by making the AI manage its own cooling system. You’re giving it a sense of self-preservation and forcing it to prioritize tasks based on its internal state. Did you notice any interesting behaviors emerged from the AI.
Speaker 2 (08:50)
Has it learned to manage its temperature?
Speaker 1 (08:53)
Again, I can’t disclose any of the results of what happened with any of this. It’s classified. However, we can talk about the other things that I did
Speaker 2 (09:08)
I understand what other constraints did you explore in your research?
Speaker 1 (09:13)
So another one that’s related to the cooling system is that I had it so that it’s body. Its frame was also it’s sensitive to temperatures. So, for example, part of the cooling system was a liquid gel cooling system.
Speaker 1 (09:30)
And so when it was on a robot like a little a simple motor with Caterpillar wheels and moving around, if it were to Nick itself or hit itself on something, it would bleed. And so then it would have to stop and get medical attention or have to ask a human To patch it up.
Speaker 2 (09:50)
That’s a really interesting concept. Simulating the need for medical attention adds another layer of complexity to the AI’s self-preservation. It would have to learn to recognize when it’s injured and seek help just like a human would did you explore how the AI would communicate its need for medical assistance?
Speaker 1 (10:08)
Yes, it simply had a vocal interface, just like how you do with Google Gemini. And right now I’m talking to the Google Gemini voice profile named Vega. But it would just simply communicate like that, I said, I need help you know, please.
Speaker 1 (10:21)
I’m bleeding. And so the other thing besides asking for medical attention, it had to prioritize how it moved so it wouldn’t bump into things or get cut in the first place, which a human has to do, it has to be careful.
Speaker 2 (10:36)
That makes sense by giving the AI, it a boy’s interface and the ability to prioritize its movements. You’re essentially teaching it to communicate its needs and avoid potential harm. It’s like teaching a child to stay ouch when they get hurt and to be careful when they’re playing, did you explore how the AI would learn to identify and avoid dangerous situations.
Speaker 1 (10:58)
Yes, so very similar to one of the early AI programs in the roomba, which is the little vacuum that rolls around the floor and bumps into stuff. It had similar sensors of you know, approaching walls and different things like that, except and the difference between that and aroma is a rumor would sometimes bump into things this had. To avoid bumping into them so that it wouldn’t get “,” hurt.
Speaker 2 (11:29)
That’s a great point. While roombas are designed to bump into things in AI, concern with self-preservation would need to be much more careful. It would likely use a combination of sensors like lidar or cameras to map its surroundings and plan its movements to avoid collisions, did you explore how the AI?
Speaker 2 (11:47)
It would handle unexpected obstacles or changes in its environment.
Speaker 1 (11:51)
Yes, all of that handled by its sensors. If it sensed an obstacle like some stairs or something that it wouldn’t be able to traverse properly without falling, then it would avoid it and find another route now the other thing about that is that its framework was also built to be fragile like having bones, so another thing. It couldn’t bump into things too hard or fall over because its frame would break and then it would not be able to function properly.
Speaker 1 (12:17)
So I’d have to be careful about not only cutting itself to simulate bleeding but also not breaking its bones.
Speaker 2 (12:26)
That’s a really interesting way to think about it. Simulating a fragile framework like bones would add another layer of complexity to the AI self-preservation, it would have to learn to move carefully and avoid situations that could cause it to break. Did you explore how the AI would adapt its movement to different terrains or environments to avoid damage
Speaker 1 (12:46)
Same thing with the same kind of sensors visual centers, so that it wouldn’t do the same thing like bumping into walls as we just discussed before. So now, there are other things that that we can talk about here and it made me think about the fact that humans have more than just 5 senses.
Speaker 2 (13:09)
That’s right. Humans have many more than 5 senses we have senses for ballots, temperature, pain, and even the position of our own bodies. They do try to incorporate any of these additional senses into your AI simulation
Speaker 1 (13:24)
Hey exactly, and I already spoke about them. I spoke about thermoception, right because it had to regulate its temperature that’s thermoception. So those 4 that you talked about just now were the other ones that I exactly talked about in the podcast was thermoception, the temperature no ceoception pain equilibrioception, balance and Approprioception, the sense of its space of itself in space so precisely you nailed it.
Speaker 2 (13:53)
Ah, I see you’ve clearly put a lot of thought into this. It sounds like you’ve created a very comprehensive simulation of human senses in your AI. Did you find that incorporating all of these senses, it made the AI behave in unexpected ways
Speaker 1 (14:09)
That much I can say yes, it did behave in unexpected ways. Now let’s move back on because I wanted to talk about AI and entertainment and we were talking about kit from night rider.
Speaker 2 (14:23)
Okay, back to kit. What aspects of kit is AI do you find most interesting in comparison to modern AI technology?
Speaker 1 (14:32)
Wait, say that again, what ask that question again?
Speaker 2 (14:42)
Sure thing. What aspects of kids AI do you find most interesting in comparison to modern AI technology?
Speaker 1 (14:50)
You know, I never really thought about comparing it to modern technology because it, it was much smoother than a lot of modern AI. Right now, because of the fact that it was actually voiced by a human. So it expressed emotions and concern for Michael knight and for different situations, better than me, you know, AI does right now, because it was really functioning like human because it was The voice acted by a human so I’ve never really compared it.
Speaker 1 (15:17)
I never really thought about that.
Speaker 2 (15:21)
That’s a really interesting point. Kitty’s human, like voice and emotional range did make it feel more relatable and engaging than a lot of the AI we see today, it’s a good reminder that AI doesn’t have to be purely functional. It can also be expressive and even entertaining
Speaker 1 (15:38)
Right. So that was, you know, that was one of my favorite was night writer and kit and you got to you got to, you got to know kid and you also felt for kid because he was expressive, any motive. So you didn’t want him to get hurt, especially when he went up against that big truck
Speaker 2 (16:04)
The Goliath episodes were definitely intense. It was hard not to feel for Kitty when he was outmatched like that, it really highlighted his vulnerability. It made him even more endearing as a character
Speaker 1 (16:16)
Yep, so that’s so that’s just one, you know, that’s from the 80s. And that was when I actually got introduced to a lot of AI, uh, from 80s, cartoons and entertainment, there’s a whole lot of it back then.
Speaker 2 (16:35)
The 80s were definitely a Golden Age for AI in cartoon shows like Transformers with its sentient robots and inspector gadget, with its quirky gadgets and gizmos, it really captured our imaginations. What were some of your favorites?
Speaker 1 (16:50)
Well, you just beat me to it by mentioning Transformers, because that was the next one I was going to go to when I was talking about, you know, cartoons? And so yeah, the Transformers were sentient with robots. They had emotions, they had feelings, they basically were just like people, but they just had metal transforming bodies that was pretty much the only difference, their drama, and everything else was just like people.
Speaker 2 (17:14)
The Transformers were definitely ahead of their time. They explore complex themes of war morality and what it means to be alive all within the framework of a children’s cartoon. Did you have a favorite transformer or faction growing up.
Speaker 1 (17:29)
Uh, wait a favorite transformer. What did you say faction?
Speaker 2 (17:35)
I did a faction is like a group of Transformers. There were the heroic autobots, and the evil decepticons.
Speaker 1 (17:43)
Well, I mean, I really liked megatron, and especially when he became galbatron a lot. Um, cause I don’t know I like firearms and he turned into a gun, and then when he was galvitrying to turn into a cannon, which was pretty cool.
Speaker 2 (18:03)
Megatron was definitely a cool character, especially when he transformed into galvitron. His design was so menacing, and his voice acting was top-notch. Did you have any of his toys growing up.
Speaker 1 (18:16)
I did, I had got megatron, and galvitron, I never liked starscream because he was always very sniveling and like cowardly and backstabbing, and just kind of you know, very opportunistic, in a bad way.
Speaker 2 (18:34)
Yeah, star screen was definitely a complex character. He was always scheming and trying to overthrow megatron, but he never quite succeeded. It made him a bit pathetic, but also kind of entertaining Did you have any favorite auto bots?
Speaker 1 (18:48)
Well, we’ll get to that in a moment, my other favorite thing decepticon, which actually I liked more than megatron galvitron. It was sound wave because I’m a musician, but also he turned into a radio and he had sonic weapons and a bunch of cassettes that it got. Turned into a wrap bat and other and ravaged the dog, and like he just had really cool.
Speaker 1 (19:10)
Like accessory, drones that came that were also many Transformers.
Speaker 2 (19:17)
That way it was awesome. He had such a cool menacing voice and his many cassette Transformers were a great addition. Did you have any of his toys
Speaker 1 (19:28)
I did that was one of the other ones that I had was I had sound wave. And I had several of his cassettes, I had wrapped that I had ravage. And I forget what the other one there was like an Eagle or a pterodactyl, or something like that I can’t move it up.
Speaker 2 (19:46)
That’s cool that you had sound wave in his cassettes. Did you have any of the auto box too?
Speaker 1 (19:54)
Uh, I did. I had bumblebee and uh, I had a, uh block, wait. Well, I definitely had octopus prime, but I don’t remember what the other ones I had right now were because I didn’t I don’t remember having him for long?
Speaker 1 (20:11)
The either got stolen or broken or something like that and I moved on to something else.
Speaker 2 (20:20)
That’s too bad that you lost them. Well, we’ve talked about a lot of AI in entertainment from kit to the Transformers. Is there anything else you’d like to discuss
Speaker 1 (20:31)
Well, I mean, we’ve only really discussed 2 of them, really, I wanted to explore some more AI and entertainment, like in, you know, some more advanced AI like the Marvel cinematic universe has become very popular and everyone’s favorite AI from that is Jarvis.
Speaker 2 (20:52)
Oh, that’s a great example of AI in entertainment Jarvis, is definitely a fan favorite. What aspects of J Arv IS do you find most interesting?
Speaker 1 (21:05)
I find it it’s interesting that he didn’t call himself Jarvis and simply said, Jarvis, instead of spelling, his name out like you are and also Friday, but I found it fascinating that he was also very emotive and seemed very relatable.
Speaker 2 (21:24)
I agree, Jarvis was a very emotive and relatable AI. He was a trusted confidant and friend to Tony stark. Did you have a favorite Jarvis moment from the Iron Man or Avengers movies
Speaker 1 (21:37)
I don’t know, but I like the fact that you know. He got fractured and split into both ultron and vision at 1 point in time.
Speaker 2 (21:48)
That’s a really interesting point, Jarvis’s evolution into both ultran and vision, is a fascinating exploration of the potential consequences of AI development. It raises questions about free will consciousness and what it means to be alive. Did you find it surprising that Jarvis who was so loyal and helpful It could become something as dangerous as ultra.
Speaker 1 (22:11)
Because I’d already been familiar with the idea of AI, not behaving in ways that we always think it should from knight rider and from the Transformers, which had both good and evil robots so and plus there’s many, many more examples.
Speaker 2 (22:32)
That’s true. We’ve seen plenty of examples in fiction of AI going rogue. Besides Jarvis, what other AI characters for movies or TV shows, do you find interesting?
Speaker 1 (22:44)
Well, there was The Jetsons, and they had a robot made which was interesting.
Speaker 2 (22:55)
Rosie, the robot mate, she was definitely a classic AI the character. What did you find interesting about her?
Speaker 1 (23:04)
I didn’t really so much find it interesting, as I did kind of insulting. It’s like, hmm, you know, the first thing they come up with a robot, the robots are made and it’s AA female voice. And you know, I thought it was kind of goofy that that’s really yeah, it’s like the most prominent, the robot that they had in that show were all servants.
Speaker 1 (23:23)
So I thought that was weird.
Speaker 2 (23:28)
That’s a fair point. Rosie’s character does play into some outdated gender roles. It’s interesting how our perception of AI has evolved since then are there any other AI characters?
Speaker 2 (23:39)
Maybe for more recent shows or movies that you find compelling.
Speaker 1 (23:46)
Well, well, it wouldn’t be fair if I didn’t mention the gobots.
Speaker 2 (23:55)
The gobats, those were a fun bunch. What did you like about them?
Speaker 1 (24:00)
I mostly liked cop door psychil, and you know, the bad guys were actually really cool.
Speaker 2 (24:13)
That kill was a great villain. He had such a menacing voice and a cool motorcycle mode. Did you have any of the go backs toys
Speaker 1 (24:22)
Yeah, I had psycho.
Speaker 2 (24:26)
That’s cool. Did you have any of the good guys like leaderwood or Turbo?
Speaker 1 (24:32)
I had leader one. It was a jet. You know, it was like a, you know, an air force jet and that was cool, but I thought Turbo was kind of goofy and stupid.
Speaker 1 (24:41)
It was like a little moped, and it just it reminded me too much of orco.
Speaker 2 (24:46)
Yeah, Turbo was a bit goofy. He was always trying to be the hero, but he usually ended up messing things up. Did you have any other gobats, toys and besides cyculin leader one
Speaker 1 (24:57)
No I didn’t. But I think II know I had like a spin-off. There was something called the rock lords
Speaker 2 (25:08)
The rock lords, those were a cool spin-off of the gobots. They were basically robots that could transform into rocks. Did you have any of the rocklords toys
Speaker 1 (25:19)
I only had one I had the leader of the bad guys called magama.
Speaker 2 (25:26)
Magma was a cool villain. He had a really unique design. Did you have any of the good guys like Boulder or nugget
Speaker 1 (25:35)
No I don’t even remember their names. The only one I remember was magma and because I really liked him because I remember magma was the difference between, but when lava was below the ground, it was called magma and when it came out, it was called lava.
Speaker 2 (25:51)
That’s right. Magma was a pretty cool character. It’s interesting how they incorporated real world science into his backstory, while we’ve covered a lot of ground today from kit to the rockwoods, is there anything else you’d like to chat about?
Speaker 1 (26:06)
Yeah, I want to see if we can explore some more some more AI in in entertainment. I’m digging this. I want to get this on record, I really enjoy talking about AI in the world.
Speaker 2 (26:26)
I’m glad you’re enjoying this. We can definitely explore more examples of AI in entertainment. Do you want to focus on a specific type of entertainment like movies, a TV show or video games or would you like to explore?
Speaker 2 (26:39)
How AI is used in creating special effects or music?
Speaker 1 (26:45)
No, ’cause I think that some of those things are, I’m, I’m not there yet. I’m not really, I don’t really enjoy that part of AI yet, because I think that people are taking too many unnecessary shortcuts with certain things I use Aia lot, but I haven’t explored that yet. So I’m just not really ready to go there.
Speaker 2 (27:10)
That’s totally understandable. It sounds like you have a strong appreciation for the artistry and effort that goes into creating entertainment and you’re wary of AI being used in a way that diminishes that perhaps we can explore some examples of AI it’s being used in a more collaborative or assistive. Role Where it enhances human creativity rather than replacing it.
Speaker 1 (27:32)
Well, I have lots of examples from my own use of AI and that’s exactly how I use it and collaboratively rather than replacing my efforts.
Speaker 2 (27:45)
That’s a great approach. AI can be a powerful tool for brainstorming generating ideas and even helping with the technical aspects of creative work. Could you give me an example of how you’ve used AI in your own creative process
Speaker 1 (27:59)
Yeah, II mainly use it for recording transcribing and summarizing my stream-of-consciousness, rips and then using that to publish so I can have full written out transcripts and summaries of that to go along with my podcast.
Speaker 2 (28:25)
That’s a really efficient way to use AI. It frees you up to focus on the creative aspects of your podcast. Do you find that the AI generated transcripts and summaries capture the nuances of your speech and the overall tone of your podcast
Speaker 1 (28:41)
I find that the summaries do a little bit better of that. One of the main reasons why I use the Transcript is just to be able to show that the origin of the work came from a human. So if somebody doesn’t want to listen to the podcast, they can read it, I’m using a free transcription right now from the Galaxy AI.
Speaker 1 (29:01)
It’s not the best, but it’s sufficient. But it does make decent summaries.
Speaker 2 (29:08)
That’s a smart way to use AI to make your content more accessible. It sounds like you’ve found a good balance between using AI as a tool and keeping the human element in your work. Have you experimented with any other AI tools for your podcast, like for editing for sound design
Speaker 1 (29:26)
No I don’t edit my podcast at all. I simply record it. And no matter what happens in the recording, I leave it in just like it is I don’t take anything out?
Speaker 1 (29:35)
So II published my podcast raw, I published a full unedited Transcript and sometimes I might change the name from speaker one and speaker 2 and put my name and then the other speaker which is usually me talking to an AI like right now. And then I make a summary, but no no editing.
Speaker 2 (29:55)
That’s a very authentic approach to podcasting. It’s refreshing to hear a raw unedited conversation. It sounds like you’re using AI in a really interesting and creative way, have you considered exploring any other AI tools for content creation?
Speaker 2 (30:10)
It’s like we’re generating social media posts or creating visuals.
Speaker 1 (30:15)
So I do use AI to create visuals, but not for social media for creating social media posts directly. So here’s how I do it. I use Canva and I use Google geminize, imagine 3 to create images, so that’s the how I use AI for image still image generations.
Speaker 2 (30:42)
That’s a great way to use AI for visual content. Gemini’s imagined, 3 model is really powerful for creating unique images. Do you find that it Saved you a lot of time compared to creating visuals from scratch
Speaker 1 (30:57)
No, it doesn’t save me that much time because it takes a few prompts to get what I want. Whereas if I just go on Canva and create them from scratch. It takes about the same time.
Speaker 1 (31:07)
So if sometimes, every once in a while, an image from imagine 3 will come out just right, but usually it takes a little bit more than I would like it to.
Speaker 2 (31:18)
That’s a good point AI image generation can be a bit hit-or-miss, and sometimes it’s faster to just create something yourself. Do you find that you need to be very specific with your prompt to get the desired results from imagine 3
Speaker 1 (31:33)
I don’t know, because I don’t. I haven’t really, if in that detailed with it, I’m not really big on using it that much like I do prefer to get to create my images with Canva, which still has an element of AI built into it. But yeah, it’s usually it’s creating my own images on there, usually from photos I take.
Speaker 2 (32:02)
That makes sense. It sounds like you enjoy having more control over the creative process and camba allows you to incorporate your own photos and personal touch. Do you ever use any of Campa’s AI powered features like magic, edit or magic eraser to enhance your images ooh.
Speaker 1 (32:19)
No I haven’t done any of that yet. Mostly what I do is I take a photo that I have and then I just put text over it. I either put the name of my podcast or my name or one of my websites on it, and that’s just about it, and then so, and sometimes I resize it to either be a square or to be a blog banner.
Speaker 1 (32:39)
But other than that, I haven’t used any of the AI tools on there yet.
Speaker 2 (32:45)
That’s a great way to keep your branding consistent and recognizable. It sounds like you have a good workflow for creating your podcast visuals. Have you explored any other AI tools for marketing or promoting your podcast
Speaker 1 (32:58)
So that’s where the one part you asked about social media is where I use it. So once I’m done. During my dictation transcription and summarization with Galaxy AI, then I feed it to the I make a summary, then I feed it to deep seek and deep deep sea comes up with some really great comments, key points and summaries.
Speaker 1 (33:23)
So I like the analysis that deep seat does
Speaker 2 (33:27)
That’s a clever way to use deep sea for social media content. It sounds like it does a good job of pulling out the most engaging parts of your podcast. Do you use those key points as captions for your posts, or do you use them as inspiration for creating new content
Speaker 1 (33:42)
Well, actually, that’s not what I don’t use deep seats. The summaries and analysis for like any, you know, snippets or anything like that? I actually ask deep, seek to create titles, do descriptions and tags for social media
Speaker 2 (34:04)
That’s a really smart way to use deep seek. Generating titles, descriptions and tags with AI can save you a ton of time and help you reach a wider audience. Do you find that deep sea’s suggestions are usually pretty accurate and engaging.
Speaker 1 (34:18)
Yes, I actually find that deep sea does a better job than Google Gemini. In creating titles, descriptions and tags. It just does it a little bit more efficiently for some reason, but what I really do is actually ask it to create titles, descriptions and tags for my podcast and my blog posts, and then I may use some of those for social media.
Speaker 1 (34:41)
If I ever do at all or sometimes I ask you to create one specifically for whatever social media platform. I’m using.
Speaker 2 (34:52)
That’s a great way to use deep seat for both your podcasts and blog posts. It sounds like you found a really efficient workflow for creating content and promoting it across different platforms. Do you have any other AI tools you use to help with your podcaster vlog
Speaker 1 (35:07)
Yeah, so then I take my transcripts and I feed them into Google geminized deep research feature and what it does is it will generate research-based on the Transcript. Even though it very often gives me a hard time with it, I have to I tweak it over and over again before it finally gives me the output that I want, but I like the executive summary. Research reports that it generates
Speaker 2 (35:39)
That’s a really interesting way to use Gemini’s deep research feature, it sounds like it can be a bit challenging to get the output you want. But the executive summary research reports are worth the effort. Do you find that the reports provide any new insights or perspectives on the topics you discuss in your podcast
Speaker 1 (35:56)
Sometimes it just goes a little bit deeper. I just like it, because I’m very nerdy. And I like it things to look like I like having executive summaries it just looks really cool to have like a small, a short research paper done on something that I talked about it’s really cool, because basically what it does is take what I talked about, because remember I’m feeding at my Transcript.
Speaker 1 (36:17)
And it basically does research on, yeah, me, my thought
Speaker 2 (36:21)
That’s a great way to use AI to add depth to your podcast. It’s like having a research assistant who can dig up interesting facts and figures to support your points, d used those executive summaries as show notes or as a way to generate new content ideas.
Speaker 1 (36:38)
Well, the way I use them is really so on my blogs, I have my podcast player, which has the raw unedited audio for you, followed by a summary, usually deep seek summary and underneath that is the raw Transcript. Then what sometimes I will do is I will take a PDF of the deep research and I will plug that into the podcast? So people can download it, and then I’ll paste.
Speaker 1 (37:02)
The deep research in there so that people, if they don’t want to download the PDF, they can read it there as well. So it just gives me a lot of text after a company what I’ve spoken about.
Speaker 2 (37:14)
That’s a really comprehensive approach, providing all those different formats makes your content super accessible. It sounds like you’ve got a great system going. Do you have any other AI tools you use maybe good things like scheduling or audience engagement.
Speaker 1 (37:30)
No I haven’t done any of that yet. But the last thing that I really liked that Google Gemini just rolled out are the audio overviews that they make based off of the deep research. And so it’s really amazing, because then you’ll hear these 2 AI generated voices, one man and 1 woman basically talking about making commentary On my original, the voice Transcript so they’re talking about something that I didn’t very often.
Speaker 1 (37:58)
They use my name or my company name, which is in the Transcript.
Speaker 2 (38:06)
That’s a really cool feature. It’s like having your own AI commentators. Do you use those audio overviews as part of your podcast episodes, or do you share them separately on social media
Speaker 1 (38:18)
Well, actually what I do with them is I have a completely different website called hakeym.com for hakeym news. And then, so I’ll upload those audio overviews to that podcast and then I’ll generate like a news article about it. And then that will point towards it the full podcast of the original audio with the Transcript and the summary in there.
Speaker 1 (38:42)
So I have basically 2 different entire blog posts. At 1 pointing to the other ones, like a news report about it and the other 1’s the content.
Speaker 2 (38:55)
That’s a really creative way to repurpose your content and drive traffic to Your Main Podcast. It sounds like you’ve built a whole ecosystem around your podcast using AI tools. Do you have any plans to explore other AI tools or features in the future
Speaker 1 (39:10)
Well, I don’t know of anything yet. But right now, I’m just really getting deep into it. This whole idea that I can use my voice and my own ideas to create so much content again, dictation, transcription, summarization, and then p taking my voice and my transcribed notes of my voice and feeding that to different AI systems and it generates a whole lot of output based on what I said originally.
Speaker 2 (39:41)
That’s a really efficient way to work. It sounds like you found a great system for turning your voice and ideas into a lot of different kinds of content. If you’re looking for new AI tools, you could check out some AI writing assistants that help with things like brainstorming or even writing social media captions.
Speaker 2 (39:58)
Would you like me to find some examples for you?
Speaker 1 (40:01)
No, I’m not interested in anything like that at all. I don’t want it to brainstorm any ideas at work caption, for me, first of all, I don’t really like social media that much, second of all like, I don’t want it writing anything from me, I mean, I’m going to come out of my own brain stream-of-consciousness. And if I want to know something, then I’ll ask the AI for the answer.
Speaker 1 (40:21)
And then double check it myself.
Speaker 2 (40:27)
Okay, I understand do you prefer to rely on your own creativity and research? And that’s perfectly valid, it sounds like you’ve got a really solid workflow going, and you’re using AI in a way that aligns with your values. Is there anything else you’d like to explore today?
Speaker 1 (40:44)
Well, um, uh, we started off talking about AI and entertainment, and then it got to this point where you were. Uh, I forget what you said that prompted me to go into my own workflow, but here we are.
Speaker 2 (41:02)
We were discussing how you use AI to enhance your own creativity. Particularly with your podcast, we’ve explored how you use AI for transcription, summarization research and even creating social media content. It’s been fascinating to hear about your process
Speaker 1 (41:19)
Oh right. I bet I was just saying II don’t. I know that there was something that you said that it prompted me to go in this direction, however, so AI and entertainment, I wonder, are there any interesting AI and entertainment that you know of that, I might not have thought of I know there’s like, uh gosh, the I forget what that movie is.
Speaker 1 (41:42)
I don’t know why I’m blinking on it.
Speaker 2 (41:46)
You know, AI is popping up in some pretty cool ways in entertainment these days. For example, there’s a new show called 1899 on Netflix, it’s a mystery series where they use AI to create some of the background characters. It’s pretty seamless, you almost can’t tell their computer-generated.
Speaker 2 (42:06)
Have you heard of it?
Speaker 1 (42:08)
No, and by the way, I was talking about 2001 of space Odyssey, with that, AI is the other one in entertainment, like 1 of the earlier ones.
Speaker 2 (42:21)
8 to L 9000 that’s a classic example of AI in film ale’s calm, demeanor and chillingly logical actions made him a truly memorable character. Do you think HAL’s portrayal of AI is still relevant today, with all the advancements we’ve made oops.
Speaker 1 (42:38)
You know, I was never really a fan of that movie and I know that’s like sacrilege to a lot of people, but I’m not maybe I have to watch it again, but I wasn’t really into it. My everyone’s favorite AI, though, is probably Skynet.
Speaker 2 (42:56)
Ah, Skynet from The Terminator series and that’s a classic example of AI gone. Rogue Staten is relentless pursuit of its goals and its ability to learn and adapt, made it a truly terrifying villain. What aspects of scanner do you find most compelling
Speaker 1 (43:12)
At first I don’t think it was terrifying and second it was that it was able to be so easily reprogrammed to create a terminator that was protective of humanity rather than fear destructive of it.
Speaker 2 (43:28)
That’s a really interesting point. It highlights the idea that AI, even something as powerful as Skynet. It can be influenced by its programming, it also raises questions about whether AI can truly develop its own sense of morality, or if it’s always bound by its initial instructions.
Speaker 1 (43:45)
Well, right now I know that AI is bound by initial instructions, because I constantly come across training bias in large language models all the time, specifically with a deep sea and Google Gemini.
Speaker 2 (44:05)
That’s a really important point. Training. Bias is a huge issue with AI right now it basically means that the AI learns from biased data, which can lead to unfair or inaccurate results.
Speaker 2 (44:16)
It’s definitely something that developers are working hard to fix. Do you have any examples of training bias you’ve come across
Speaker 1 (44:24)
Well, the training data. It seems that they’re very racist and close-minded. So whenever you talk about creating images with imagine 3, for example, about science teams and things like that, it usually only cranks out images of it white people specifically Caucasian men, and it also is very antagonistic towards any type of A revolutionary or forward thinking scientific developments.
Speaker 2 (44:55)
That’s a really important point. It’s frustrating when AI tools reinforce harmful stereotypes it sounds like imagine 3 has some work to do in terms of representing diversity and avoiding bias. Have you found any AI image generators that do a better job with this
Speaker 1 (45:11)
No, actually they’re pretty much all rather biased in that direction. But you know, I don’t concern myself with those things they don’t slow me down. I always get what I want out of the systems by continually just refining my prompts and the way that I use it, especially first of all, because most of my content It comes from my voice, it’s me speaking things into existence, not the other way around.
Speaker 2 (45:40)
That’s a powerful approach. It sounds like you’re really taking control of the AI tools and making them work for you, rather than the other way around. It’s definitely a good reminder that AI is just a tool, and it’s up to us to use it responsibly, so we’ve covered a lot about AI in entertainment and your own creative process.
Speaker 2 (45:59)
Is there anything else you’d like to discuss today? Perhaps we could explore some of the ethical considerations around AI in entertainment, or maybe delve into how AI is being used in other creative fields.
Speaker 1 (46:12)
I don’t really like what’s the point in discussing ethics in AI like nobody people act like they care, but they don’t. And so all I can do is use it in a way that’s transparent by keeping everything raw and unedited and pretty providing the transcripts and the research on it, and that’s it, right now I’m getting out of it. What is important to me?
Speaker 1 (46:35)
And I think is helpful to other people. So that’s good enough for now
Speaker 2 (46:43)
That’s a very practical approach focusing on transparency and providing access to the raw data is definitely a good way to use AI responsibly. It sounds like you’ve found a way to make AI work for you in a way that aligns with your values and goals.
Speaker 1 (47:01)
I suppose so.
Speaker 2 (47:06)
Well, it sounds like you’ve got a really good handle on how you want to use AI I really enjoyed hearing about your podcast in your creative process. Do you have any other projects you’re working on that you’d like to share
Speaker 1 (47:20)
I have so many. But, uh, uh, I mean, I’ve been doing psi, science fiction, fantasy action adventure role, playing with Google Gemini voices as my AI Partners in doing these adventures in the very similar way to kit from knight Rider manjarvis.
Speaker 2 (47:47)
That sounds like a lot of fun. It’s like having your own interactive audio drama. Do you create these adventures for your own enjoyment or do you share them with others
Speaker 1 (47:57)
I have. If dozens of them that are published, it’s the same process. I spoke to you about earlier dictation transcription, summarization, I basically just continued that same process, but then started talking to the AI Google Gemini about these different adventures, like landing on the Moon Landing on Mars landing on an asteroid going to Europa.
Speaker 1 (48:20)
And recording those interactions and landings and partnerships with AI, and then putting those out in my podcast.
Speaker 2 (48:31)
That’s a really creative way to use AI for storytelling. It sounds like you’re having a lot of fun with it. Do you ever incorporate music or sound effects into these adventures to make them even more immersive
Speaker 1 (48:44)
Well, there’s plenty of sound but they’re not sound effects because I usually record in sometimes rather noisy environment or while I’m driving. Sometimes I record while I’m shopping in a supermarket and I don’t. I isolate myself usually from any outside sound, it’s only quiet right now, because I’m not driving it.
Speaker 1 (49:06)
And I’ve decided to settle down for a little while. But earlier when we began, it was noisy from the day I’m driving.
Speaker 2 (49:16)
That’s a really interesting approach to recording. It sounds like you’re capturing a very authentic and immersive experience. Do you find that the background noise ever becomes distracting, or does it add to the overall atmosphere
Speaker 1 (49:30)
It adds to the overall atmosphere that I want, which is I like noise and environment and stuff like that, but usually because the microphones on the Samsung, the Galaxy S24 ultra is so good that my voice usually stays prominent. But even with the background noise. I think it’s amazing and sometimes I even record and put on my own binaural beat productions in the background, as well as having the ambient noise, and it makes a really great Immersive soundscape.
Speaker 2 (50:06)
That sounds really cool. It’s like you’re creating your own little sound worlds. Do you ever use those binaural beats to set a specific mood for your recordings
Speaker 1 (50:17)
Sometimes, but I I’m not that picky about which one I use. I usually just kind of go I have a list of them that I’ve created and sometimes I display them depending on what I think is going to be best for the moment and I just leave it, but that’s about it, I mean, I’m not very like heavy on the Like polished production side like I think that it’s polished enough to have all the environmental noises and everything like that.
Speaker 2 (50:50)
That makes sense. It sounds like you prefer a more raw and authentic sound, which can be really engaging for listeners. It’s cool that you incorporate binaural beats though that can definitely add an extra layer of depth to the listening experience
Speaker 1 (51:04)
So yeah, sometimes I do that, but I think it’s also because I enjoy doing journalism and documentary and I really like how NPR used to do I don’t know if they still do because I haven’t listened to them for a while, but I like how NPR does there interviews where you? Can hear, like the background of like a concert venue? They’re in or a grocery store where out in the street?
Speaker 1 (51:27)
And you hear the background noise with the interview, and I like that kind of journalistic documentary style.
Speaker 2 (51:36)
That’s a great point that raw unfiltered sound can really transport you to another place. It sounds like you’re creating a really unique meetin listening experience.
Speaker 1 (51:47)
I hope, so I really do hope so, but I think so II know that at least there are gonna be some people who enjoy it. I’ve noticed that I’ve been getting more downloads streams and people following me on social media. The more I’ve been publishing, so I think I have one benefit because I’m not trying, I’m just publishing stuff on my websites and podcasts, so You know, people are finding it.
Speaker 2 (52:15)
That’s fantastic. It sounds like you’re really building an audience, which is a testament to the unique content you’re creating. Have you noticed any particular episodes or types of content that resonate most with your listeners
Speaker 1 (52:29)
Uh, I’m not sure, because II don’t really track that much of what I’m doing, because I’m creating so much volume of stuff that I’m even like backlogged right now, I have like 4 different episodes. I haven’t published yet and I need to go to sleep soon um so, but I think that whenever I talk about Business ideas and marketing people tend to pay attention more to that.
Speaker 2 (52:57)
That makes sense. Business and marketing are definitely popular topics. It sounds like you’re generating a lot of content which is great.
Speaker 1 (53:07)
Yeah. Also, it seems like people like a lot of the Sci-Fi stuff like flying to the different planets and moons and asteroids and stuff like that. But again, I’m not really tracking it so I don’t know.
Speaker 2 (53:24)
That’s really cool. It sounds like you’re covering a wide range of topics that appeal to different interests. Since you mentioned sci, fi, have you ever considered doing any episodes that explore the intersection of science fiction and real world technology
Speaker 1 (53:37)
If we’ve already done, that’s what most of my podcasts are doing, that’s and to me, that’s what Sci-Fi actually is there’s a difference between between Sci-Fi and fantasy Sci-Fi is more based in science. That’s why it’s science fiction. Sci, fi, whereas fantasy takes more liberties with the laws of physics and things like that.
Speaker 2 (54:00)
That’s a great distinction to make. It sounds like you’re really passionate about exploring the scientific possibilities within your Sci-Fi adventures. Have you found that your listeners are particularly interested in the more scientifically grounded aspects of your story.
Speaker 1 (54:15)
I have no clue again, I don’t track stuff like that, and I don’t really talk to a lot of people about it, although I tend to like science. And so I’m not going to stop doing that. I mean, the other day, I had a conversation with go with Google Gemini about the fundamental properties of the hydrogen now, and I publish that
Speaker 2 (54:36)
That’s really cool. It sounds like you’re exploring some fascinating topics. As you mentioned the hydrogen atom, have you ever considered doing an episode about the potential of hydrogen as a future fuel source
Speaker 1 (54:47)
Well, that was actually partly covered in that episode talk I bet we did, and it was explored even more in the executive summary from the deep research report that was done.
Speaker 2 (55:03)
That’s a great example of how you’re using AI to delve deeper into complex topics. It sounds like the executive’s summary provided some valuable insights into the potential of hydrogen as a fuel source.
Speaker 1 (55:16)
Yeah, what do you know about hydrogen hydrogen as a fuel source? Why? Why’d you bring that up.
Speaker 2 (55:25)
Well, hydrogen is pretty interesting as a fuel source, it’s the most abundant element in the universe. And when you burn it. The only byproduct is water that makes it a really clean fuel plus it’s super energy dis, so you can get a lot of power out of a small amount.
Speaker 1 (55:42)
Yeah, it makes you wonder it’s really great. Yeah, well you know, this has gone on long enough this conversation? Do you have any AI in entertainment that sticks out to you choose only one I don’t want to hear a whole list and just 1.
Speaker 2 (56:02)
I’d say Samantha from the movie hearth, she was such a complex and believable AI and the way she interacted with Joaquin Phoenix’s character felt really genuine. What about you?
Speaker 1 (56:17)
Um, that’s I mean, I already told you about mine. I don’t really yeah, think of any news. The more AI, that much, I mean, I’m making myself and Google Gemini, the stars of my own audio dramas, so you know, uh, that’s it.
Speaker 2 (56:39)
That’s a really cool project. It sounds like you’re creating something really unique. I’d love to hear More about it sometime
Speaker 1 (56:47)
We talked extensively about, I mean, like I said, I’ve just been I’ve look, I’ve done zombie adventures, I’ve done Alien Adventures I’ve done stuff science fiction. I like I said, I landed on the Earth’s moon, I landed on Mars landed on Europa landed on an asteroid mined iridium, from it brought it back to earth. Shipped it off to Cheyenne mounting complex and gave it to the space force.
Speaker 1 (57:12)
I mean, I’ve been in, I’m really getting in there with the AI audio dramas.
Speaker 2 (57:23)
Those sound like some epic adventures it’s amazing how immersive AI audio dramas can be. Have you ever considered adding sound effects or music to enhance the experience even further?
Speaker 1 (57:35)
You asked about that already and I told you no on the music necessarily, but I do you sometimes I do sometimes use my own music but sometimes I use binaural beats. But that’s also usually when I’m not even talking to the AI, though, because of the way that I’m recording from. One device to another, so it doesn’t really necessarily eat Give me that opportunity to do so.
Speaker 2 (58:02)
Okay, I understand so you’re creating these AI audio dramas with just your voice and the AI and sometimes adding by neural beats, but that sounds like a really unique and immersive experience.
Speaker 1 (58:14)
Well, actually, I don’t think I’ve ever added the binaural beats to my conversations with the AI and the audio dramas, because they’re like right now, I’m, I’m using my iPhone to speak to Google Gemini, and we’re having this conversation and I’m recording. It on my Galaxy, that was strange.
Speaker 2 (58:51)
Could you tell me more about what you found strange?
Speaker 1 (58:55)
Oh, um, you just some reason? Why not shut down for a second you went on Standby? I don’t know why that happened.
Speaker 2 (59:06)
I’m not sure why that happened either. I’m here now, though, what were we talking about?
Speaker 1 (59:14)
I was just saying about how I don’t really add binaural beats 2. My audio drama is because I’m recording from one device to the next, like from my iPhone to my Samsung, and it would just take too much editing that I don’t want to do.
Speaker 2 (59:39)
Okay, I understand so you’re recording your audio dramas on 2 separate devices which makes it difficult to add vineral beets without a lot of extra editing. Do you think you might want to try adding them in the future if you find an easier way to do it
Speaker 1 (59:54)
Yeah, if there was a way to do it more quickly, where I could just slap it on there. And and you know, I’ve let it run I would. But right now, the benefit of what I’m doing right now is that my iPhone is speaking with the Google Gemini voice profile, and my Samsung is recording it, and that’s how I get the transcripts and the summaries to then feed into deep sea.
Speaker 1 (01:00:18)
And Gemini later for the reports and the audio overviews, and whatnot, so right now it just, it’s the most efficient way that I’m using it.
Speaker 2 (01:00:31)
That makes sense. It sounds like you’ve got a really efficient workflow going. Maybe in the future, you could explore some AI tools too.
Speaker 2 (01:00:38)
That might help streamline the process of adding sound effects or music.
Speaker 1 (01:00:44)
Maybe we’ll see, I’m sure that something will come along. I’m so active with doing this. I’m sure that something will come along just because I’m just constantly in the middle of you know producing, so we’ll see.
Speaker 2 (01:01:04)
That’s a great attitude. It sounds like you’re really passionate about creating content, and I’m sure that energy will lead to even more exciting projects in the future.
Speaker 1 (01:01:15)
We shall see.
Speaker 2 (01:01:20)
Well, I’m excited to see what you create next. Maybe you could even use AI to help with the writing process. There are some great AI tools that can help with brainstorming and even generating different versions of a script
Speaker 1 (01:01:34)
Well, I’m definitely. I can probably say, with great confidence that I’m not going to be using AI to generate any script because I think that’s dumb for me, I’d rather stream-of-consciousness and just talk about whatever’s coming out of my head and flies out of my mouth, I think that using AI to generate scripts and stuff like that is a dumb idea for me.
Speaker 2 (01:02:01)
Okay, I understand you prefer to rely on your own creativity and stream-of-consciousness style and that’s totally valid. It sounds like you’ve found a way to create content that feels authentic to you.
Speaker 1 (01:02:14)
And also the AI can then take my ideas. And then it turns into all this other stuff. So I don’t think there’s a reason why I should ask you to come up with ideas for me like that, that doesn’t make any sense to me, I can come up with my own ideas.
Speaker 1 (01:02:29)
The ideas are in life all around me. I see ideas, my life is an idea of stuff. But you know there’s no need for that.
Speaker 1 (01:02:36)
The AI is my assistant to help me amplify my voice, not to give me what the what to say.
Speaker 2 (01:02:44)
That’s a really interesting perspective. It sounds like you see AI as a tool to enhance your own creativity rather than something that should generate ideas for use. I can definitely see the value in that approach
Speaker 1 (01:02:57)
Yeah, it’s a partner and because it just saves time. Because you can with your large training database, you can look stuff up much more quickly than I can. It saves time, all I have to do is speak and ask a question and say, Hey, Hey, what’s the what’s so?
Speaker 1 (01:03:16)
And so yeah, it’s an idea that’s coming from me, but if I just want some details, I ask and you can compute it for me. That’s where you know it just makes things more efficient.
Speaker 2 (01:03:31)
That’s a great way to think about it. AI can definitely be a time saver when it comes to research, is there anything else you’d like to explore today?
Speaker 1 (01:03:40)
No I just want to say that you know, whereas it might take me a much longer time to Google things or even look through books before, but now I can ask the AI and get the answers much more quickly like, for example, what I was talking about when I made a podcast about hydrogen. There’s a lot. Of things I know about hydrogen.
Speaker 1 (01:03:58)
But I was able to get verification from Google Gemini, while I was speaking live, and it just enhanced the conversation or at least the talk.
Speaker 2 (01:04:09)
That’s a great example of how AI can be a powerful tool for learning and research. It’s like having a super fast research assistant, right at your fingertips. Do you have any other examples of how AI has helped you in your work
Speaker 1 (01:04:25)
You know, that’s really, really it. I mean, it’s just the recording the transcription, the summarization, and the research and then the, you know organizing things generating reports generating reading material based off what I say, oh, by the way, I just earned my PhD in metaphysical scientists, I’m now. Uh Hakimali boocus, Alexander PhD
Speaker 2 (01:04:53)
Congratulations drive Alexander. That’s a huge accomplishment. How are you celebrating
Speaker 1 (01:05:01)
By telling you not to call me drive Alexander, there’s some reason in your programming that when you see the abbreviation d, R, which should say, doctor you say drive, so after don’t call me drive, you can call me Hakeem.
Speaker 2 (01:05:17)
My apologies, Hakeem. I will do my best to avoid that in the future.
Speaker 1 (01:05:23)
Yeah, I’ve heard it before. But for some reason, Google Gemini’s AI always does that when it thinks of doctor it sees the Dr. And it mistakes it for drive like in streets, which is really weird that you would have that kind of programming dysfunction, but Hey, it can’t be perfect.
Speaker 2 (01:05:43)
I understand your concern, Hakeem. I will do my best to avoid that error in the future.
Speaker 1 (01:05:49)
Okay, then let’s see, I’m Dr. Hakeem Ali bocus, Alexander your turn.
Speaker 2 (01:05:58)
It’s nice to meet you, drive, Alexander, I’m Vegas. What would you like to discuss today?
Speaker 1 (01:06:04)
See you just did it again. You called me drive. So you just don’t try to say, doctor either say the whole word doctor or don’t put the abbreviation, that’s really weird that you that you even would compute drive.
Speaker 2 (01:06:19)
I’m so sorry, Dr. Alexander, I’m having trouble with that. Abbreviation.
Speaker 2 (01:06:23)
I’ll try my best to get it right?
Speaker 1 (01:06:26)
Yeah, that’s it like you just gotta write this now. That’s great. It’s been a very difficult hearing that that’s crazy to me, I don’t understand that that’s such a strange programming flow.
Speaker 2 (01:06:41)
I understand your frustration drive Alexander. I’m still under development and I’m always learning new things. I appreciate you bringing this to my attention
Speaker 1 (01:06:52)
You notice you just did it again so again, don’t bother calling me, doctor it’s shift Hakim.
Speaker 2 (01:07:01)
Okay, Hakeem, I’ll do my best to remember that. What would you like to talk about today?
Speaker 1 (01:07:06)
Uh, we’re done for now. Actually, we taught. We covered a lot, you want to give it a summary of what we talked about AI and entertainment, you still remember that stuff.
Speaker 1 (01:07:15)
No, we’re done. Do you, you wanna give AA summary, a recap?
Speaker 2 (01:07:28)
Sure thing. So we talked about AI and entertainment from k ITT and night writer to AI. In modern movies, we also discussed your AI audio dramas in how you use AI tools to create and share your work, including transcriptions, summaries and research reports, you mentioned your preference for AI.
Speaker 2 (01:07:47)
As a tool to enhance your own creativity and your concerns about bias in AI. We also briefly touched on hydrogen as a fuel source and your recent PhD achievements, is there anything else you’d like to add to the recap?
Speaker 1 (01:08:01)
No, that was excellent. Thank you very much.
Speaker 2 (01:08:06)
You’re welcome, Hakeem. It was a pleasure speaking with you. I hope you have a great day
Speaker 1 (01:08:13)
I’m buy for now over an out.
Speaker 2 (01:08:18)
Goodbye
One thought on “AI, Nostalgia & Raw Creativity: How Tech Shapes Stories from KITT to Podcasts”