Modern technology gives us many things.

Game-changing AI in iOS 17 will talk in your voice – but can it get you kidnapped too?

That’s right! After ChatGPT and Google, it’s now Apple’s turn to dip its toes into the world of advanced AI, and what a feature Tim Cook & Co have come up with to begin this journey! iPads and iPhones running iOS 17 (launching in September) will let you create a digital version of your own voice. Let that sink in for a moment… If it sounds like Apple will let you create a “deepfake” of your own voice, that’s because it’s true. Sort of. But it’s not that simple.

Apple says your iPhone will soon be able to speak in your voice after just 15 minutes of training. Why? So “users can speak with their Personal Voice (the actual name of the feature) when connecting with friends and family”. But it’s not just a party trick. It should come as no surprise that Apple is approaching the Personal Voice AI feature from an angle of accessibility, which is said to be the key objective here. Cupertino has a solid track record of going above and beyond to make the iPhone more inclusive. However, this time around, many are also concerned about their privacy and, ultimately, their security.

AI is getting incredibly powerful, and while ChatGPT can help you write an essay, and Google’s Bard can take over as your Gmail assistant or help you find the best refrigerator for your kitchen, the true power of AI might be in its ability to interact with humans and thus become part of our society.

So, should we be freaked out about the fact that the iPhone will soon be able to speak in our own voice? I don’t think so. If anything, I’m quite excited!

Scary but helpful – iPhones running iOS 17 will be able to talk in your “Personal Voice”; Apple enters the advanced AI race in the smartest way possible

I don’t know about you, but I think Apple is being quite careful about entering the AI race, as accessibility might be one of the safest ways to rationalize the need for AI in iPhones and iPads. However, this doesn’t mean Apple has chosen an easy path.

If anything, an AI feature with the mission of making people’s lives easier in a truly meaningful way is more important than any other AI trick that might or might not be helping us in the first place. Moreover, the fact that Apple is behind Personal Voice will only magnify the interest and scrutiny the feature gets from critics and the general public. But Apple is comfortable with attention.

Of course, no one’s had the chance to test Personal Voice just yet, so I’ll have to reserve any strong opinions for when the feature is released (it’s expected at the end of this year). But what we can do right now is talk about the positive nature of the advanced AI coming to the iPhone. And what better way to make a positive impact than helping people get through life?

Unfortunately, it’s difficult to find global statistics of this kind, but according to those available in the US, approximately 18.5 million individuals have a speech, voice, or language disorder, which shows the clear need to make technology work for those who stand to benefit the most from it.

This is the moment to mention that, rather than breaking new ground, Apple is simply tapping into the already existing world of Augmentative and Alternative Communication (AAC). AAC apps are designed to help nonspeaking people communicate more effectively through symbols and predictive keyboards that produce speech. Many who are unable to produce oral speech, including those with ALS, cerebral palsy, and autism, rely on AAC apps to communicate.

If you’re curious, Apple has published a dedicated story on AAC and AssistiveWare (one of the leading developers in the field of AAC). AssistiveWare says their mission is to make AAC an effective and accepted means of communication. For now, it doesn’t look like Apple is looking to acquire AssistiveWare.

Apple’s Personal Voice – a game-changing feature that makes smartphones smart and our lives easier?

I believe this makes it a little clearer why Apple’s work toward making the iPhone and iPad more accessible should be the main talking point of a feature like Personal Voice. In a world of TikTok videos and Instagram stories, accessibility and Quality of Life (QoL) features like Personal Voice are a reminder that smartphones can (and should) exist to make our lives easier.

As AssistiveWare’s CEO David Niemeijer says, iPhones/smartphones are the “cool” devices that everyone uses, and this has already made a major difference in the acceptability of AAC apps, which exist on the same ubiquitous devices instead of a “specialized” piece of hardware that looks “different”.

So, the fact that Apple’s Personal Voice will live directly on the iPhone, without the need for any special software, should make using this (hopefully) game-changing piece of AI that much more accessible and “normal”.

Personal Voice might be the new, improved, and supercharged version of Siri – can Apple’s most ambitious accessibility feature turn into the ultimate Google Assistant competitor?

Now all that being said, as a “tech person”, I simply can’t help but look at Personal Voice’s extended potential. And let me explain what I mean by that…

Siri has been different kinds of bad for years now, with Google Assistant running circles around Apple’s robot in just about every way possible. But what if Personal Voice is only the beginning of the iPhone’s transition into becoming the ultimate AI voice recognition phone? This honor currently belongs to the Google Pixel, which (thanks to Google’s Tensor chip) can understand, record, and transcribe speech better than any other phone on the market.

By the looks of it, Personal Voice is essentially shaping up to be a text-to-speech engine, which could be useful in a number of different scenarios. I’d love to see a feature like Personal Voice expand to other iPhone and iPad apps like Voice Memos and Notes. I’m saying that because finding text-to-speech software that’s both free and natural-sounding is next to impossible.
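
For a sense of what that text-to-speech plumbing looks like, here’s a minimal sketch using Apple’s long-standing AVSpeechSynthesizer API, which any iOS app can call today. To be clear, this is an illustration of the kind of engine Personal Voice would plug into, not Apple’s actual Personal Voice implementation – Apple hasn’t published those internals.

```swift
import AVFoundation

// Minimal sketch: read some text aloud with Apple's built-in
// text-to-speech API. Personal Voice would, in principle, simply be
// another voice selectable in a flow like this.
let synthesizer = AVSpeechSynthesizer() // keep a strong reference while it speaks

let utterance = AVSpeechUtterance(string: "Chapter three: how plants turn light into energy.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US") // a stock system voice
utterance.rate = AVSpeechUtteranceDefaultSpeechRate

synthesizer.speak(utterance)
```

The stock voices work, but they still sound noticeably robotic – which is exactly why a natural-sounding, free, on-device voice would be such a big deal.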

Clearly, the iPhone will soon be able to speak in your voice, but I expect the Personal Voice feature to come with certain limitations. However, what if your iPhone could freely read things back to you in your own voice, or another natural-sounding voice, with proper intonation? This would be helpful to:

  • Students preparing for an exam
  • Podcasters who hate reading boring ads
  • Comedians trying to memorize a comedy set
  • Actors trying to learn a script

I know… My imagination is running a bit wild here but I really think artists and the general public can make great use of a broader implementation of a feature like Personal Voice. The aforementioned examples might look funny, considering the current mission of Personal Voice, but I really do believe this is only the start of Apple’s AI transformation.

iPhone and iPad will be able to speak in your voice: Is Apple opening the door to scammers? Concerns over Virtual Kidnapping and Deepfakes arise

Now, about the controversial sides of Apple’s Personal Voice…

Of course, the main concerns over the new Personal Voice feature are related to security. People (including the mainstream news media) are asking questions, which we’ve gotten used to seeing upon the launch of any new software feature that involves the collection of personal data. However, this time around, we aren’t talking about photos of your lunch or your drunk texts.

The iPhone and iPad are expected to record, retain, and replicate your voice, which (naturally) amplifies any concern about privacy and security. Of course, Apple promises Personal Voice to be a “simple and secure way to create a voice that sounds like you”, which leads me to assume all of the Personal Voice processing will be encrypted and will take place directly (and only) on your iPhone/iPad (or rather on their SoC).
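
For what it’s worth, the iOS 17 developer beta already hints at how that gatekeeping is meant to work on the developer side: third-party apps don’t get your Personal Voice by default – they have to ask, and you have to approve. Below is a rough sketch, assuming the API ships as previewed at WWDC 2023 (names and behavior may still change before release):

```swift
import AVFoundation

// Sketch based on the iOS 17 developer beta APIs Apple previewed at WWDC 2023.
// Requires iOS 17+; exact names and behavior may change before the public release.
let synthesizer = AVSpeechSynthesizer() // keep a strong reference while it speaks

AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else {
        // The user (or a device restriction) declined to share their Personal Voice.
        return
    }

    // Personal voices only appear here after the user has recorded one in
    // Settings and explicitly allowed this app to use it.
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    let utterance = AVSpeechUtterance(string: "Hi, this is my Personal Voice speaking.")
    utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")
    synthesizer.speak(utterance)
}
```

In other words, the voice itself never leaves the device as a file an app can grab; apps can only ask the system to speak with it, and only after explicit permission.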

Still, Apple’s promise of simple and secure AI doesn’t stop people from voicing their concerns over the potential abuse of the powerful accessibility feature by bad actors and “pranksters”. Social media users are already thinking of the various ways Personal Voice could be turned into something other than a helpful feature:

  • Petty scams
  • Virtual kidnapping
  • Deceptive voice messages/recordings
  • Pranks that step over the limits

One that stands out in particular (thanks to being discussed by big news outlets) is Virtual Kidnapping, which is a telephone scam that takes on many forms. This is essentially an extortion scheme that tricks victims into paying a ransom to free a loved one they believe is being threatened with violence or death. The twist? “Unlike traditional abductions, virtual kidnappers have not actually kidnapped anyone. Instead, through deceptions and threats, they coerce victims to pay a quick ransom before the scheme falls apart”, says the FBI.

However, since Virtual Kidnapping doesn’t usually involve an actual “kidnapping”, the only way to exploit the iPhone’s Personal Voice would be if the bad guys somehow gained access to your iPhone/recordings – which would mean you’ve either actually been kidnapped, or Apple’s encryption has failed. In other words, if that’s happened, there’d be bigger things to worry about.

So, what I say is… Perhaps we should try to focus on the positive side of Personal Voice and all the other AI- and ML-powered accessibility features, which can help those in need? I’d leave the suspicion for later. Meanwhile, you can learn everything about Apple’s new features for cognitive accessibility, along with Live Speech, Personal Voice, and Point and Speak in Magnifier, via the company’s blog post.



phonearena.com
