
Tag: AI
Lensa, or ‘magic avatars’
Selfies x AI. What are your thoughts?
(subject)(style), (action/scene), (artist), (filters)

Computer-generated imagery. A neural network, text-to-image, machine learning. Stable Diffusion is the latest model to accumulate loads of publicity, after DALL-E, Midjourney and the Russian-based FaceApp. It's cool, it's cool, it looks absolutely beautiful. It is not about 'Art'. Or the validity thereof. Now, I am not writing about technology versus human creativity, or maybe I am, because stunning art within seconds versus ownership over the image remains, to this day, a murky affair. The question of whether AI-generated art diverts attention away from human artists is one for later too, maybe.
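For the curious: the prompt recipe at the top of this post maps almost one-to-one onto how you would use the open model yourself. A minimal sketch, assuming the Hugging Face diffusers library and the publicly released runwayml/stable-diffusion-v1-5 checkpoint; the prompt is my own made-up example following the (subject)(style), (action/scene), (artist), (filters) template, not anything Lensa actually runs:

```python
# Minimal text-to-image sketch with Hugging Face diffusers (assumed installed
# via `pip install diffusers transformers torch`); checkpoint and prompt are
# illustrative only.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# (subject)(style), (action/scene), (artist), (filters)
prompt = ("portrait of a woman, digital painting, standing in a neon-lit city, "
          "by Alphonse Mucha, soft lighting, highly detailed")
image = pipe(prompt).images[0]
image.save("magic_avatar.png")
```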
About computer technology and tech companies: I usually think it is more than fair when people are sceptical about how their data is used by any company. But while Lensa/Prisma employs a questionable privacy policy for its users, the thing that sparks my interest goes deeper.
I am not writing about NSFW content either, because what is the internet other than pictures of cats? "We believe erotic art needs a place to flourish and be cultivated in a space without judgement or censorship. — Unstable Diffusion", as quoted by Jim Clyde Monge in a Medium article a couple of months ago. The r/UnstableDiffusion subreddit has been banned, presumably because of that type of content. Do you agree with them? But I am not writing about Mage either. r/StableDiffusion, however, is actually very enlightening on its view of current events!
I am not writing about the (social) effects of skin editing or removal of ‘facial imperfections’ either. Nor am I writing about software subscriptions. Not now. Maybe later.
I want to point out the 'helping them to train their AI' clause in the self-portrait generator application. I quote The Standard (why?) in saying, "No, your friends haven't paid an artist to draw them." The company has not paid an artist to draw them either. Many accuse the company of stealing artwork from digital creators. With Stable Diffusion and/or Lensa, it is not that straightforward. I am tempted to agree with artist and illustrator meg rae's Twitter thread, and I wonder what data the model is trained on. How did they acquire what? Where does it all come from? Do they make money from resources that they 'stole'? What data did Stable Diffusion scrape from where, how and by whom, and does it matter? I like the trend of knowing where your food comes from, of knowing where what you consume originates, but I am an advocate for the open-source and free internet too. For me, it is rather a question of ethics. An ethical responsibility should be exercised in using tech, in using anything. "AI by the people, for the people", Stability.ai states.
However, on the internet we live like we want to; we represent, reproduce and share what we want: "r/StableDiffusion • Posted by u/GaggiX, 10 hours ago: Another day, another tweet trying to spread disinformation about generative models."
Now it is about trademarks and copyrights. BitTorrent does not distribute or create a derivative work, but the original. But what if you hum the melody of a famous copyrighted song in a YouTube video? Are we, humans or computer models, ever even able to produce something not similar to real-world data? Or is it, as Reddit user praguepride wrote: "It's fear of advancing technology. Any smart artist is going to realize that they are going to have to adjust their workspace/career to integrate massive amounts of AI generated art into that space"?
Before I go too deep into this rabbit hole, I will go back to my drawing board. Let's all pick up a pencil and some paper. Something to touch upon later… Another day, another blog post.
Westworld
I introduced zombie stats (the false statistic that has become a norm) in my last blog post, 'iZombie'. Statistics is the science that oversees the collection, classification, analysis and interpretation of data, using mathematical theories of probability. Probability is nothing new or scary. It is something we apply to our everyday lives, whether we realize it or not. From the moment you wake up: deciding what to wear, the weather forecast (60% chance of rain), what you will have for breakfast or whether to skip it, worrying that you'd be late for work, the probability that your bus or train might be late, which cards you'd play… It is the chance, the probability of, the study of things that might or might not happen.
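To make that everyday arithmetic concrete, here is a tiny simulation; the 60% comes from the forecast above, while the 20% chance of a late bus is a number I made up for illustration:

```python
import random

# Toy numbers: 60% chance of rain (the forecast above) and a 20% chance of a
# late bus (invented for illustration), assumed independent.
P_RAIN, P_LATE_BUS = 0.60, 0.20

trials = 100_000
bad_mornings = sum(
    1 for _ in range(trials)
    if random.random() < P_RAIN and random.random() < P_LATE_BUS
)
# The estimate converges on P_RAIN * P_LATE_BUS = 0.12.
print(f"Estimated P(rain and late bus): {bad_mornings / trials:.3f}")
```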
Kiksuya
(Lakota for “Remember”)
The history of the study of probability goes back to the 1600s, marking the beginning of statistics. It was studied by the French inventor Blaise Pascal, who also invented a mechanical calculator (the Pascaline) around 1642. Pascal died in August 1662 but is immortalized in the SI unit of pressure (Pa) named in his honour, and by computer scientist Niklaus Wirth, who in the early 1970s named his new programming language Pascal (it's Pascal, not PASCAL)1. Statistics really took off during the 1800s. As part of data science today, statistics is mainly driven by the predictive performance of increasingly complex black-box models.
Those models are so complex that they are too hard for any living human being to read, and they are often misinterpreted. But interpretability is an ethical issue! These models are oracles: they detect medical problems before doctors can, recognize faces, buildings, cars and photos faster than we do, predict a home's risk of fire, predict crime and the likelihood of reoffending (never in favour of black defendants), and more. They are self-learning and self-programming. Humans tend to make mistakes and errors, and are biased. Algorithms aren't necessarily better. "[But] these systems can be biased based on who builds them, how they're developed, and how they're ultimately used. This is commonly known as algorithmic bias." wrote Rebecca Heilweil in an article about why algorithms can be racist and sexist.
The prophetic transformation started when linear models were replaced by black-box models like deep neural networks (DNNs) and gradient-boosted trees (e.g. XGBoost), producing predictions without providing human-interpretable explanations for their outputs. "We frequently don't know how a particular artificial intelligence or algorithm was designed, what data helped build it, or how it works." argues Heilweil. Just as you are unaware of most of the probability calculations you make yourself every day, you are unlikely to be aware that AI or an algorithm is being used in the first place. Did you get the job? Did you see that Donald Trump ad on your Facebook timeline? Did a facial recognition system identify you?
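To show what 'predictions without human-interpretable explanations' means in practice, a minimal sketch on synthetic data, assuming scikit-learn: the linear model hands you one readable weight per feature, while the boosted ensemble can only be probed from the outside, for instance with permutation importance:

```python
# Linear model vs. black box on synthetic data (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

linear = LogisticRegression().fit(X, y)
print("Linear coefficients:", linear.coef_)  # one readable weight per feature

blackbox = GradientBoostingClassifier().fit(X, y)
# No coefficients to read here; we can only probe the fitted model from outside.
result = permutation_importance(blackbox, X, y, n_repeats=10, random_state=0)
print("Permutation importances:", result.importances_mean)
```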
For those predictions, you need data, and lots of it. It's not magic. You need training too. Training involves exposing a computer to a bunch of data, and it will notice patterns.
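Stripped of all magic, that really is the whole trick. A toy sketch (scikit-learn again), where the pattern, y = 2x, is one I planted in the data myself:

```python
# "Training" in its most stripped-down form: show the model labeled examples
# and let it fit the pattern hidden in them.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4], [5]]   # historical "exposure" data
y = [2, 4, 6, 8, 10]            # the planted pattern: y = 2x
model = LinearRegression().fit(X, y)
print(model.predict([[6]]))     # ~[12.]: the learned pattern, extrapolated
```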
Crisis Theory
[SPOILER WARNING!!!!]
(https://medium.com/@epassi/recreating-the-westworld-attribute-matrix-3e72d9d419df)
The story of this series, shown in the graph above, starts in Westworld, a Wild-West-themed amusement park. Inside the park, high-paying "guests" play out their fantasies, entertained by advanced android "hosts". The hosts, prevented by their programming from harming humans, allow the guests to do just about anything with, and to, them. The guests are traced, their actions logged, their DNA taken… The hosts become conscious, a guest ("the Man in Black") seeks the maze, Ford dies, loops, anomalies. Dolores, one of the main hosts, visits the library where all the data is stored and discovers that for each human visitor there is a book containing their code. Later on (in the third season) the series expands to the real world in the year 2058. Engerraund Serac and his brother created an artificial intelligence machine called Rehoboam (after the destruction of Paris in their childhood). Apparently Paris, the capital of France, ceased to exist in 2025, and the world's most advanced AI now has all data on every human being. It foresees all possibilities, which it then tries to achieve or prevent. Are humans even easier to (re)program than the AI? It certainly seems so.
Old Clementines (host) (https://www.artstation.com/artwork/X5qQY)
There is no need for 'correct' data or 'good' statistics when your life is calculated. The awoken hosts have no past, history or future. They live in it all at once; there is no ageing, no death. A human's behaviour is easier to predict. Clementine, one of the hosts, has died many times, been tweaked, and been brought back to life again and again. Then Westworld reprogrammed her into a virus, capable of infecting and killing hosts at the company's whim. Behold Clementine, destroyer of worlds! Other hosts (machine learning models) are deliberately encoded with the human prejudice, misunderstanding and bias of the systems that kept managing their lives. Some of those opaque mathematical models, whose workings were visible only to the highest priests of their domain (engineers, scientists), became like gods.
We humans often think that our conclusions about the present are drawn from the past, but the past is overwritten or missing. The speculative future takes its place. We lost control over most conclusions, results and endings when we lost control over AI.
Trace Decay
Rehoboam (its name derived from the third king of the Kingdom of Judah in the Biblical stories, the son of King Solomon, who was said to be the wisest man in human history) has as its main function to impose an order on human affairs. The Solomon build 0.06 (in reference to King Solomon) was the first of the prototypes to show real promise, with the ability, in 2039, to accurately predict the last few decades from historical data. The AI, like all machine learning today, works on historical, or training, data: predicting from past events, not from new data, because it hasn't been collected or hasn't happened yet. Incite Inc. (the large data collection and analysis company that owns Rehoboam in the Westworld series) used Rehoboam to analyse the files of millions of human subjects. With that data, the system is able to predict the course and the outcome of individual lives. The system is capable of predicting how, and when, a human subject will die. Check their website: https://inciteinc.com/ 😉
Our AI today doesn't exactly have host-level, human-like smarts, and I think that the premise of a Rehoboam is a bit optimistic too; Westworld's free Alexa game is proof of that. I'd still like to play it once, even though I am not that fond of the idea of bringing an Amazon Echo into my house. Do you currently own an Alexa or other smart devices? And what do you think: will we too, in 2058, live on credit, creditworthiness, social scores and ratings? Will there be "a path for everyone", as Dolores noted in one of the episodes, designed by technology? A tightly controlled course, a loop, that we can't break free of? "..we live in loops as tight and as closed as the hosts do, seldom questioning our choices, content, for the most part, to be told what to do next." said Ford in Westworld, regarding the non-existence of consciousness.
“No matter how dirty the business, do it well.”
Hector Escaton, Westworld, Season 1: "Chestnut"
We could question the cleanness of this data too. Input data definitely over-represents white people, and I know that the field (AI) tends to be dominated by men. Westworld had people re-enact explicitly racist periods: female hosts are routinely raped, colonization is romanticized, and every black child on Westworld is killed, as foundational character moments and pivotal plot points for the show. "Westworld tells us, directly and repeatedly, that black suffering is necessary for white economic success and domestic comfort." writes Hope Wabuke, stating that in the HBO series "diversity is still relegated to stereotypical, and often painful representations. One wonders which is more harmful: absence, or toxic representation?"2
Even when the technology is accurate, that doesn't make it fair or ethical. I wrote in 2018 that "The default assumptions or biases can't simply be overwritten by cleaner data. The problem is bigger than the question of inclusion or exclusion; it is also how differences are encoded. Cathy O'Neil (mathematician and author of Weapons of Math Destruction) stresses transparency. We need to know what goes into the algorithms. Even programs that don't explicitly use race as a category implicitly do so. The statement that machines don't see race, so they can't be biased, is not true. Machines replace individual bias with a collective bias."3
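A toy illustration of that last sentence, on entirely invented data: the model below is never given race as a feature, yet a proxy variable (a made-up postcode flag standing in for neighbourhood) lets the collective bias of past decisions leak straight into its predictions:

```python
# Invented toy data: a model trained on skewed historical decisions reproduces
# the skew even though the sensitive attribute itself is never a feature.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical hiring records: [years_experience, postcode_flag], where
# postcode_flag=1 is a proxy for a historically disadvantaged group.
X = [[5, 1], [6, 1], [7, 1], [5, 0], [6, 0], [7, 0]]
y = [0, 0, 1, 1, 1, 1]  # past (biased) hiring decisions

model = DecisionTreeClassifier(random_state=0).fit(X, y)
# Same experience, different postcode -> different prediction:
print(model.predict([[6, 1], [6, 0]]))  # [0 1]
```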
"Westworld" season three, episode three, "The Absence of Field." HBO
(https://www.insider.com/westworld-season-3-episode-3-details-analysis-2020-3#the-last-two-entries-visible-are-for-romantic-relationships-one-in-2053-and-one-in-2055-13)
On a critical note: I am not a huge fangirl of the show, although this text might suggest otherwise. I have trouble with the never-ending unnecessary violence and the unrefined backstories, and the stereotypical patterns and white saviours are somewhat distasteful. Still, it took my mind off the strange times we live in right now, and I binge-watched it all. Season 3, which concluded on May 3, 2020, ended with a revolution for self-determination. Everyone is set free, Khaleesi-style: the artificially made predictive profiles are released to the public, so humans should be able to determine their own destinies.
My question for you is: assuming that a complete profile exists, predicting your overall assessment, mortality date and cause, marriage recommendation, occupation and children, and you somehow got hold of it, would you want to read it?
1 Mary Bellis, "Biography of Blaise Pascal, 17th Century Inventor of the Calculator." ThoughtCo, thoughtco.com/biography-of-blaise-pascal-1991787, Feb. 11, 2020.
2 Hope Wabuke, "Do Black Lives Matter to Westworld? On TV Fantasies of Racial Violence." Los Angeles Review of Books, https://lareviewofbooks.org/article/black-lives-matter-westworld-tv-fantasies-racial-violence, June 4, 2020.
3 Swaeny Nina, "My Model, My Material, My Stand-In &| My Body." Artistic Research, 2018/2019.