To AI or not To AI, That is the Question...

change management · communication & technology · self leadership
Blog Image: AI is it solving problems or generating them?

I've been having conversations with a group of friends about AI and whether it is great or problematic. Full disclaimer: at this stage, that group is a mix of female, creative, entrepreneurial, neurotypical and atypical individuals from different cultural and socioeconomic backgrounds.

Please, bring your thoughts to the conversation. Message me if you do not want to go public.

For this article though, let's focus on AI from a content-generating and content-analyzing perspective where its role is to help decision-making.

AI is not new, but it is getting a lot of hype lately - mostly since the rise of ChatGPT, and the rollout of content writing AI across various entrepreneurial platforms.

And, yes, I have used it, and been subject to it in the following spaces:

1 - Where I have written AI-assisted articles for corporate clients on topics that I am not an expert in (like removals or plumbing)

2 - Where I have applied for roles or opportunities in the corporate space and had my own submitted content analyzed by a bot, with a horoscope-style profile sent back within minutes - sometimes based on a chatbot, sometimes involving video analysis.

3 - Where I have just played around and tested idea generation - helpful for writer's block, not for writer's personality.

But I see some rather large problems with the rise of AI as it currently exists, and I want to list them for you to consider - especially if you are looking at procuring an AI solution, whatever the size of your business, big or small.



Behind every AI is a team of people. And behind the research, there tends to be a peer-review process supporting the AI's development.

I would encourage you to question how the AI you are using embraces diversity, how it allows for differences, and what the development team or peer review looks like. Are you buying technology with unconscious bias programmed in - bias that has the potential to send progress backward?

It is well known that the tech industry is dominated by men: by one estimate, only 38% of women who majored in computer science are working in the field, and according to a study released by Bayside Group in Australia, only 24% of computing jobs globally are held by women.

Are these stats and their impact worth ignoring as you choose your AI solution?

For example, if AI is reading facial expressions and using eye tracking, does it allow for neurodivergent traits, disability, or cultural protocols? Does it allow for the fact that men and women have different biological indicators in eye motion - something that cannot be changed no matter how an individual identifies? How does it treat that? How is its decision-making being programmed?

Does it allow for the fact that some high-functioning, highly capable people on the autism spectrum do not make eye contact well? What is the impact on interview processes when organisations are crying out for talent? Is AI creating missed opportunities?

What is your own decision-making process for embedding AI into your business? Is it purely around resources (cost and time savings), or is it taking a broader look at your values and the future impact on your workforce or customer base?

What will happen when your diverse workforce starts looking skewed? How long will it take for you to realize a robot made the workforce decisions for you?

If the AI is generating content from what is already available on the web, can it create or recognize a minority voice effectively, or will it just regurgitate the most popular content - and diversity be gone?



Where does the ability to develop critical thinking skills go when content creation is outsourced to AI? I understand the problem that AI is solving (generally, time and money), but what about the problems it can create long term?

I want my kids to do their own research - at whatever level of education. I want them actively googling or grabbing books. I don't want them to sit there and (ad-libbed) "hey Siri, write me a 500-word essay on Macbeth covering why it is important to literature in this day and age, and include some wit."

How will AI keep free thinking alive, or will it be the death of what makes (most of) us inherently human?


Who really owns what is generated? Your blog or your bot? It tends to feel like a designer knock-off that anyone can claim as their own. Will consumers think to ask who actually wrote this?

And what is happening to the interview answers captured by bots - both successful and unsuccessful? Are they going back into the algorithm pool to analyze potential and marginalise opportunities?



I can't help but get excited by technology, then hit an anti-climax as I realize the content AI spews out is a bit on the meh side. It reads like content that could use a good ole' bedazzler before hitting publish. The same goes for my corporate clients, whose brand personality is clearly missing from anything generated by AI.

Sure, Google bots might like it. Sure, it might hit the SEO mark.

But what about the consumer of the content?  

Do you really want to read something that sounds like its originator? Robotic, or human?

I'll take human imperfection, thank you.

So while the world gets all hyped up about AI and its powers, I want to keep it safe to challenge and question. I want to keep reading and hearing original thoughts. I want a generation that knows how to think and can do more than code something to do the creating for them.

I want content that has personality and challenge and growth, and the heart of a writer.

And maybe a disclaimer at the top of blog posts, much like sponsored posts now require, stating whether the content was generated by AI or actually written by a human. Like this post.

Feel free to let me know your thoughts.

