ChatGPT and Lensa AI Apps Are Scary Good at What They Do

2 min 14 sec read
December 06, 2022
Today, we're talking about two new AI apps to hit the scene.

The first you've probably seen by now on your social media feed: a friend's selfie recreated as mind-blowing, vibrant artwork. The Lensa AI app is a self-portrait generator, and here's what you need to know.

Lensa is an app from Prisma Labs (the company behind the Prisma photo-editing app) and has been around since 2018.

Recently, it has taken the internet by storm. It takes photos "to the next level" by using AI to smooth away imperfections and apply stylized filters, and the end result is something out of this world.

Every selfie, or "magic avatar," as the company calls it, looks like the work of a professional artist or designer. Users upload 10 to 20 pictures of themselves, and the AI generates unique images from scratch.

Now, while it looks cool, people have raised privacy concerns about their data and what the app does with it. Lensa says that by uploading your photos, you're helping train its AI models.

The company also stated that you shouldn't worry and that you can request to have your data deleted. But! Lensa reserves the right to deny your request if fulfilling it would conflict with the law, so there's that.

Another concern is the AI itself. Some say that Lensa is stealing from or erasing the work of many artists. One critic (an artist) said that AI art generators like Lensa "are predatory and intend to replace artists."

Speaking of AI replacing humans!

On Nov. 30th, OpenAI released ChatGPT, and this new AI chatbot is scary good at what it does as a text generator.

ChatGPT can rewrite classic books in new styles, translate languages, write code, compose song lyrics, answer questions, draft emails, crack jokes, write employee performance reviews, and more. Basically, it can create almost anything you can think of, as long as it involves writing.

However, that doesn't mean ChatGPT always does a great or convincing job. It still has its kinks and gets things wrong from time to time. But when it's right, it's impressive.

People have a love/hate relationship with ChatGPT.

On the hopeful side, Box's CEO, Aaron Levie, said, "ChatGPT is one of those rare moments in technology where you see a glimmer of how everything is going to be different going forward."

Meanwhile, Ars Technica's reporter, Benj Edwards, expressed his views, saying:

"For now, it's possible that OpenAI invented history's most convincing, knowledgable, and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history. But I applaud their cautious roll-out. I think they are aware of these issues."

And he's not wrong.

A Bleeping Computer article described 10 dangerous things ChatGPT is capable of, and the results may take you aback.

ChatGPT can write a convincing phishing email to steal your information and can write working malware, and the software has been shown to lack moral guardrails. But that's not the worst part.

ChatGPT has the potential to automate and complete complex tasks that a human would otherwise do.

Check out that Bleeping Computer article to read the full list of what it can do that's good and bad.

Or see for yourself and try ChatGPT out.
