AI Update [27/07/2023]
Hi all(emaal),
Hope everyone’s enjoying their summer!
Sometime in the coming two weeks I will put out a podcast episode instead of a newsletter, summarising what happened in AI over the last three months. The link will be shared via this Substack, so no need to do anything extra. The plan is to do these quarterly if they land well; hope it brings some value.
Now on to the…
News
Regulation
While many companies are calling for extra regulation, many aren’t keen on the EU’s draft AI Act (including our very own Dutch brewer Heineken), stating compliance would be too costly and risky. They’d rather see an EU regulatory body comprised of industry experts. Speaking of which, OpenAI, Microsoft, Anthropic, and Google are banding together to form a ‘new industry body to promote the safe and responsible development of frontier AI systems’.
On other continents, the USA’s Federal Trade Commission is investigating OpenAI, focussing on the risk management strategies surrounding its AI models. Furthermore, China is working hard on its own draft AI law (here’s a discussion by legislative experts on the draft measures published in April to gather public feedback).
Ethics
Bias remains a prominent topic of discussion, this time with Bloomberg reporting on generative AI’s potential to amplify existing biases (side-note: the piece is very economy-centric, which is of course a bias in itself - don’t forget we humans are very biased too!). Further friction between humans and AI could be found in San Francisco, where residents, tired of autonomous robotaxis, are disabling them by putting traffic cones on their hoods, and in Kerala, India, where a man was scammed through a deepfake video call imitating his colleague (companies are already working on detecting deepfakes in bank transfers). Workers in Kenya who label training data for AI models (among them OpenAI’s ChatGPT) - indicating, for example, what is and isn’t harmful content - are petitioning the country’s lawmakers to investigate Big Tech’s outsourcing of content moderation and AI work to Kenya, citing exploitation and poor working conditions.
On the other end, OpenAI is of the opinion that we will have to let AI align (super)intelligent AI, a notion Bill Gates apparently agrees with. There are various opinions on how far away these superintelligent systems (meaning AI systems smarter than humans) still are, if they ever arrive, though the estimates are shrinking fast. How well expert estimates of AI development actually track reality is unclear though; we’ve been wrong before - a lot.
Sustainability
Cambridge scientists have published guidelines towards ‘Environmentally Sustainable Computational Science’, which they dub GREENER. In short, they emphasise the role of governance, clear measurement and reporting of the energy consumption of various algorithms, minimising the carbon footprint, and investment in collaboration, research, and education.
Creativity & Art
In order to combat underrepresentation in data - and the resulting bias of generative AI systems trained on that data - some artists are choosing to use biased models to generate more art of underrepresented groups and perspectives, hoping to get more diverse art onto the web and, through it, into future AI models.
A group of designers is furthermore suing the Chinese fast-fashion firm Shein, stating that it uses (among other things) AI models that rip off their designs to sell them to the masses.
And for those who enjoy doodling, it might be fun to look at Stable Doodle, new image-generating software from Stability AI (known for the image generator Stable Diffusion), which combines text prompting with your doodling skills to generate images.
Education
See also this report by the MIT Technology Review on ChatGPT’s effect on education, which takes a neutral-to-positive tone on the matter - a sentiment mirrored by Hong Kong schools using ChatGPT to help teach critical thinking, and by Japan’s Education Ministry drafting guidelines in favour of carefully teaching students how to use generative AI rather than banning it outright.
Business & Economy
Warehouse robotics is continuously gaining ground, in line with a recent Reuters report stating that much of investors’ interest in Large Language Models (LLMs) lies in their capacity to automate work rather than their chat capabilities, something Shopify is also echoing through its announcement of an AI ‘Sidekick’ that can perform many webshop tasks for you (which looks and sounds terrifying to me). More interestingly, Shopify is at war with unnecessary meetings, adding a cost calculator to meeting invites showcasing the ‘price’ of holding the meeting. (See this hilarious video on ‘Should we kill meetings forever’ if you feel like a chuckle - success not guaranteed.)
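Shopify hasn’t published its exact formula, but the idea behind such a meeting cost calculator boils down to simple arithmetic: duration times the sum of the attendees’ hourly rates. A minimal sketch (function name and example rates are my own, purely illustrative):

```python
def meeting_cost(duration_hours, hourly_rates):
    """Estimate the 'price' of a meeting: length in hours times
    the combined hourly rates of everyone attending."""
    return duration_hours * sum(hourly_rates)

# A one-hour meeting with three attendees earning 60, 80 and 100 per hour:
print(meeting_cost(1.0, [60, 80, 100]))  # 240.0
```

The real calculator presumably adds average-salary defaults per role, but the underlying incentive is the same: make the cost of a recurring invite visible before people click ‘accept’.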
The OECD published a report stating that 27% of jobs are likely to be impacted or augmented by AI tools (although that hinges, of course, on the speedy adoption of new tech in organisations). Indian companies Tata and Wipro are in the same vein doubling down on AI, educating their workforce to make use of the new technology and investing over a billion dollars in the topic over the next few years.
Also, Threads - Zuckerberg’s alternative to Twitter - launched, immediately gained enormous traction, and Musk is moving to sue them. Life.
AI Models & Updates
GPT-4 was made available to ‘free’ users of ChatGPT (remember: if it’s free, you’re the product); subscription users can now use the ‘Code Interpreter’ plug-in for analysing files and better code generation (enable it in settings first); and ChatGPT’s Android app is now available in select countries (not yet in Europe). Claude 2 was released (an update to Anthropic’s ChatGPT rival), and Bard (Google’s ChatGPT rival) is now available in Europe in 40+ languages.
Thoughts
There seems to be a steady increase of investment and R&D in general-purpose robotics - from Elon Musk’s Tesla Bot and OpenAI-backed 1X to Intel now backing Figure for $9 million - as Silicon Valley makes careful pushes towards humanoid robotics. Robotics holds a promise to help our aging (and increasingly shrinking) societies cope with near-future (and, for some like Japan, already current) demographic challenges, a promise that rich shrinking countries hope to see fulfilled. I have a hunch - and not much more than that - that humanoid robotics will follow a trajectory similar to autonomous driving: individually the challenges seem doable (like detecting road markings, or having a robot run a parkour course), but combining it all into a system that can deal with life at large will be much more complicated than the big CEOs would have us think. Not to detract from the great potential robotics can (and already does) bring to human life and the economy (in that order of importance), but I deem it less likely that we will see a general-purpose android walking around doing all kinds of tasks, and much more likely that we will see broader adoption of ‘narrow’, to-the-point robots that excel at one - or a few - particular things.
Life is beautiful!
Fabian Kok
___
Researcher, AI Research Group (Lectoraat AI), Hogeschool Utrecht
Researcher, Responsible Applied Artificial InTelligence (RAAIT) programme
Responsible & Effective AI, Technology & Creativity