artificial flattery 🥸
When it comes to the arts, modern human society is generally anti-imitation. You wouldn't brag about fake designer clothes, nor would you flaunt forgeries of famous pieces. That said, creating these imitations is still hard work: making a convincing (fake) Gucci bag takes craftsmanship, and painting a believable forgery requires legitimate skill.
This principle used to hold true for the digital landscape, but with advancements in AI, specifically Neural Style Transfer (NST), this is no longer the case.
NST refers to a class of algorithms for digital image manipulation that leverage neural networks to transfer the visual style of a source image onto a target image. The original and most widely known use case is applying the style of famous painters to user-supplied images. Try it out yourself.
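For the curious, the core trick in the classic NST formulation (Gatys et al.) is to compare images not pixel-by-pixel but via "style" statistics: Gram matrices of a CNN's feature maps. A minimal numpy sketch of that style representation, with random arrays standing in for real network activations:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: (channels, height, width) activation map from some CNN layer.
    Returns a (channels, channels) matrix of channel co-occurrences.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)   # each row = one channel, flattened
    return flat @ flat.T / (h * w)      # normalised channel correlations

def style_loss(gram_a, gram_b):
    """Squared distance between two style representations."""
    return np.mean((gram_a - gram_b) ** 2)

# Toy "activations" in place of real VGG feature maps.
rng = np.random.default_rng(0)
style = rng.normal(size=(8, 16, 16))
target = rng.normal(size=(8, 16, 16))

g_s, g_t = gram_matrix(style), gram_matrix(target)
print(g_s.shape)                  # (8, 8)
print(style_loss(g_s, g_s))       # 0.0 — identical styles match perfectly
```

An actual NST implementation then adjusts the target image's pixels by gradient descent until its Gram matrices match the style image's while its raw features still resemble the content image's.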
Pretty dope, right?
But the excitement of having your selfies painted wears off quickly, even if it's by Van Gogh himself. As a product, NST provides a means to automatically create image filters from reference material, similar to those on Instagram. There are now tons of apps that provide this functionality out of the box.
More than just selfie filters 👀
Arguably, the real beauty here lies in the technical accomplishment. Fundamentally, images are just signals, so we should be able to generalise the same (or similar) infrastructure and transfer style between other types of signals too.
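One way to see why this generalisation is plausible: an audio clip is a 1-D signal, and a short-time Fourier transform turns it into a 2-D, image-like array (a spectrogram), which is the representation much audio style-transfer work operates on. A minimal numpy sketch, using a toy sine tone and arbitrarily chosen parameters:

```python
import numpy as np

# A one-second "recording": audio is just a 1-D signal.
sr = 8000                                    # sample rate (Hz)
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)          # a 440 Hz tone

# Short-time Fourier transform: slice into frames, FFT each frame.
frame, hop = 256, 128
frames = np.stack([audio[i:i + frame]
                   for i in range(0, len(audio) - frame, hop)])
spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T   # (freq_bins, frames)

print(spectrogram.shape)   # 2-D and image-like: one axis frequency, one time
```

Once audio lives in this 2-D form, image-style machinery (convolutions, Gram matrices, and so on) has something familiar to chew on.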
So where to from here? 🤔
Sure enough, smart people are already exploring applications of style transfer to audio signals using Generative Adversarial Networks (GANs). Technicalities aside, let's think about the implications of this for a second.
On one hand, there's the thought of malicious actors using this technology. This could result in:
Impersonation of people for theft or character assassination.
Widespread mistrust of audio and video media.
A new documentary on the life of Anthony Bourdain sparked some controversy after using synthetic AI-generated (deepfake) audio without disclosing it to viewers. Deepfake technology has even been able to get Barack Obama to call Donald Trump a “complete dipsh*t”.
No doubt, as this technology inevitably matures, we need to place a higher emphasis on researching how to mitigate its potential misuse.
On the other hand, legitimate applications seem vast:
Musical genre translation - automatically generating a jazz version of your favorite album?
Augmented songwriting - imagine if you could record a reference track and apply the style of your target singer on top of it.
Voices as franchises - bedtime stories by Morgan Freeman, audio-books by Drake, you get the idea...
Immortal artists - posthumous albums would no longer have to rely on the existence of throw-away recordings; perhaps Pop Smoke's latest posthumous album could have retained a lot more of his personality.
I believe that technological advancement can address the negative implications. However, it is yet to be seen whether we can overcome our societal aversion to imitation, especially if said imitation is artificial.
claude
a doctor’s aid 🩺
About a season ago, the people's favorite New Amsterdam character, Dr Kapoor, was coerced into using DAWN (Diagnostic Assessment Wellness Network): machine-learning software that detects and diagnoses illnesses, with varying levels of certainty, from images, patient symptoms and other inputs.
DAWN was a glimpse into what a doctor’s aid could look like in the near future. An endless repository of clinical trials, past medical cases, and imaging analysis that'll assist your doctor in diagnosing and treating you - all at the touch of a button (well, screen).
Basically WebMD? 🤔
One aspect of it, yes, but the use cases are much broader and, surprisingly, already in play. If we home in on diagnostics, we see local players like envisionit and, from the podcast I shared in vol. 2, international players like Qure.ai and Aidoc.
Qure.ai uses AI to create diagnostic reports from chest and head scans, and its technology has been used to help doctors serving India's rural populations. During the pandemic, it quickly trained its qXR tool to detect COVID-19 from scans, allowing for faster triaging and diagnosis, fewer tests and cheaper healthcare. In a world where half the population lacks access to basic healthcare and 100 million people fall into poverty each year because of high health expenses, access and affordability remain two of the largest problems in the space.
Proceed with caution ☢️
However, AI does come with its downsides. One of them, from which many others stem, is bias. Because AI is built on algorithms and datasets that humans create, human biases are baked in. Models are trained on sample datasets; if those samples are not representative of the population, the model performs worse on whoever was left out. This is something medicine has struggled with historically, as datasets have often been based on Caucasian test samples.
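A toy illustration of that mechanism (all numbers invented, and nothing like a real diagnostic model): train a single-threshold "classifier" on data where one group makes up 95% of the samples, then measure its accuracy on each group separately.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_group(n, baseline):
    """Half healthy, half sick; being sick shifts a biomarker by +2."""
    healthy = rng.normal(baseline, 1.0, n // 2)
    sick = rng.normal(baseline + 2.0, 1.0, n - n // 2)
    x = np.concatenate([healthy, sick])
    y = np.concatenate([np.zeros(n // 2), np.ones(n - n // 2)])
    return x, y

# Group A dominates the training set; group B's healthy baseline sits higher.
xa, ya = make_group(1900, baseline=0.0)   # 95% of training data
xb, yb = make_group(100, baseline=1.0)    # 5% of training data
x_train = np.concatenate([xa, xb])
y_train = np.concatenate([ya, yb])

# "Model": the single cut-off that minimises error on the skewed training set.
grid = np.linspace(-2.0, 5.0, 701)
threshold = grid[int(np.argmin(
    [np.mean((x_train > t) != y_train) for t in grid]))]

# Evaluate on fresh, balanced data from each group separately.
accs = {}
for name, base in [("group A", 0.0), ("group B", 1.0)]:
    x, y = make_group(2000, baseline=base)
    accs[name] = np.mean((x > threshold) == y)
    print(f"{name}: accuracy {accs[name]:.2f}")
```

The learned cut-off sits where it serves the majority group, so the under-represented group gets noticeably worse accuracy despite the "same" model seeing both.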
Google's recently launched dermatology assist tool accounts for factors like age, sex, race and skin type in its datasets.
Moving forward 🔜
As Africa begins to benefit from the host of increased efficiencies that AI brings, it will be imperative that our datasets are inclusive and representative of her people. This is a step, and only a step, toward circumventing bias and effectively aiding doctors in the treatment of all their patients.
karl
if you’re in the mood to binge-watch something, matt recommends the docu-series dirty money on netflix
karl got some advice from your favourite entrepreneurs
sash listened to karl’s recommendation and watched the theranos documentary on showmax