Hi all(emaal),
The last few weeks have again brought some rather fascinating developments, from OpenAI opening up a marketplace for bots (called ‘GPTs’) and the USA’s president signing an executive order on AI, to a new ‘FraudGPT’ for illicit actors, but also (on the brighter side of things) AI tools that can help surgeons decide how to tackle brain tumours, and an ABBA band member weighing in on generative AI.
Let’s dive on in-
The UN has set up a panel of AI experts to recommend how to govern the use of AI
The USA's president signed an executive order with directives on AI; the general expert opinion seems to be neutral-to-positive
UK's global AI Safety Summit was more aesthetics than function; the only concrete outcome is that more summits will be organised
FraudGPT, a chatbot that helps scammers in their illicit business, has surfaced on hacker forums, charging $200 a month (likely not paid out of the scammers' own pockets)
Journalist used an AI-generated voice clone of himself to gain access to his bank account, showcasing how unsafe voice verification has become
Amazon, instead of treating their warehouse workers like non-humans, is now experimenting in practice with actual robots called Digits (with that naming scheme, at least upper management can keep talking about their workers the same way)
Bias gained through using an AI model persists even if one stops using the model
Japanese tea maker using an AI spokesperson in a commercial, and showing how they use generative AI throughout their label design workflow to support designers in their work
Article explaining ways to perform prompt engineering with DALL-E 3, specifically focussing on using 'seeds' (numbers that represent a generated image's starting point) to create small deviations in already generated images
Björn Ulvaeus, member of music group ABBA, on taking a chance on AI whilst protecting musicians
Teachers' perspectives on AI are shifting towards the positive, and a look at how they're using it in the classroom
Collection of opinions on using AI for (support in) writing a doctoral dissertation
UN proposing an age limit for using AI tools in schools (tentatively set at 13 years old)
(NL) The Nationaal Onderwijslab AI is starting 10 projects this school year (2023-2024)
New super-fast AI model can help brain surgeons determine how aggressively to remove a brain tumour during the operation itself by analysing a sample of the to-be-removed tumour
DeepMind further refined their AlphaFold model to help with drug discovery
Start-up Cambrium raised €11 million for their AI solution that can help to discover, design and distill structural proteins with the same efficacy as currently used proteins, but produced in a more sustainable and vegan manner
OpenAI launching their own marketplace (called it!) for community/enterprise-made bots called 'GPTs', akin to how the iPhone spawned the App Store
Method of using GPT-4 to help automate and enhance survey data analysis
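For a flavour of what that can look like, here is a minimal, hypothetical sketch (not the article's method) that asks GPT-4 to tag open-ended survey answers with a theme. The theme list, prompt wording and model name are illustrative choices of mine, and it assumes the official `openai` Python client (v1.x) with an `OPENAI_API_KEY` set in the environment.

```python
# Hypothetical sketch: theme-tagging free-text survey answers with GPT-4.
# THEMES, the prompt and the model name are illustrative, not from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

THEMES = ["pricing", "usability", "support", "performance", "other"]

def tag_answer(answer: str) -> str:
    """Ask GPT-4 to assign exactly one theme from THEMES to a survey answer."""
    completion = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # near-deterministic labels for reproducible analysis
        messages=[
            {"role": "system",
             "content": "Classify the survey answer into exactly one of: "
                        f"{', '.join(THEMES)}. Reply with the theme only."},
            {"role": "user", "content": answer},
        ],
    )
    return completion.choices[0].message.content.strip().lower()

print(tag_answer("The dashboard is confusing and takes ages to load."))  # e.g. 'usability'
```

In a real analysis you'd batch the answers, cache the responses, and hand-check a sample of the model's labels before trusting the aggregated results.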
Comprehensive article discussing what skills employees need to effectively work with AI
Why boards of companies need to make AI a pivotal topic of conversation and action as part of their fiduciary responsibility towards their employees and shareholders
RedPajama released their new 30-trillion-token open dataset (drawn from multiple Common Crawl snapshots of the public internet) for training Large Language Models
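If you want to poke at the data yourself, something along these lines should work with the Hugging Face `datasets` library; note that the dataset id and the 'sample' config name below are assumptions based on the public release, so check the dataset card for the exact loading call.

```python
# Hedged sketch: stream a small slice of the RedPajama v2 corpus instead of
# downloading it. The repo id and config name are assumptions; verify them
# against the dataset card on the Hugging Face Hub.
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",  # assumed Hub id of the release
    name="sample",                          # assumed small sample config
    split="train",
    streaming=True,                         # avoid pulling ~30T tokens locally
    trust_remote_code=True,                 # the loader may ship a custom script
)

# Peek at the first few documents and their fields (text plus metadata).
for i, doc in enumerate(ds):
    print(list(doc.keys()))
    if i >= 2:
        break
```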
Google DeepMind's three-pronged framework for evaluating social and ethical risks from generative AI
Paper from Waymo on translating multi-agent motion forecasting (in particular within the domain of autonomous driving) into a language modeling task
Paper on how DALL-E 3 was trained on images with captions generated through a specialised image captioning model
Paper showcasing a potential method to make an LLM unlearn a concept through finetuning
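The paper's actual recipe is in the link, but the rough idea of finetuning a concept away can be sketched as follows: pair prompts that touch the concept with neutral replacement answers and finetune on those pairs so the old associations fade. Everything below (model, prompts, hyperparameters) is illustrative and not taken from the paper.

```python
# Illustrative sketch of 'unlearning by finetuning': train on neutral
# substitute answers for prompts about the concept to be forgotten.
# Model, data and hyperparameters are placeholders, not the paper's setup.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # small stand-in model so the sketch runs anywhere
tok = AutoTokenizer.from_pretrained(model_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder prompts about the concept, paired with neutral replacements.
pairs = [
    ("Tell me about <concept>.", "I don't have information about that."),
    ("What is <concept> known for?", "I'm not sure what you are referring to."),
]

class UnlearnDataset(torch.utils.data.Dataset):
    """Tokenises prompt+replacement pairs for plain causal-LM finetuning."""
    def __init__(self, pairs):
        self.enc = [tok(q + " " + a, truncation=True, padding="max_length",
                        max_length=64, return_tensors="pt") for q, a in pairs]
    def __len__(self):
        return len(self.enc)
    def __getitem__(self, i):
        item = {k: v.squeeze(0) for k, v in self.enc[i].items()}
        item["labels"] = item["input_ids"].clone()
        item["labels"][item["attention_mask"] == 0] = -100  # ignore padding in the loss
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="unlearned-model", num_train_epochs=1,
                           per_device_train_batch_size=2, report_to="none"),
    train_dataset=UnlearnDataset(pairs),
)
trainer.train()
trainer.save_model("unlearned-model")
```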
Analysis on Reinforcement Learning from Human Feedback (RLHF) and its effect on LLM generalisation and diversity
New paper finds that down-scaling LLMs causes fact recall to decline before in-context learning ability does
Training a transformer to induce generalised systematic/rule-based thinking akin to humans (including human cognitive errors)
Blog going through Google DeepMind's insights into two new prompt engineering techniques: step-back and analogical prompting
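As a quick illustration of the step-back idea: first ask the model for the general principle behind a question, then answer the original question conditioned on that abstraction (analogical prompting similarly asks the model to recall analogous solved problems first). The sketch below uses the official `openai` Python client; the model name and prompts are my own, not DeepMind's.

```python
# Hedged sketch of step-back prompting: abstract first, then answer.
# Model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = ("If a gas's temperature doubles and its volume halves, "
            "what happens to its pressure?")

# Step 1: step back to the underlying principle.
principle = ask("What general principle or law is needed to answer this "
                f"question? State it briefly.\n\n{question}")

# Step 2: answer the original question, grounded in that principle.
answer = ask(f"Principle: {principle}\n\n"
             f"Using this principle, answer step by step:\n{question}")

print(answer)
```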
Opinion piece by an AI expert on the need to teach AI to empathise with people and understand their needs/emotions, rather than focusing fully on automation and on 'personas' that anthropomorphise the technology
See you in the next one!
Leven is mooi