In the news category, we keep you updated on the latest developments and trends within the AI and marketing world.

Here we share up-to-date information on important events, innovative developments, market shifts and more.

We go in depth and give you insights so that you always stay well informed and can make strategic decisions based on the latest information. After all, in marketing it is essential to keep up to date, because developments are moving so fast.

Follow us and stay up to date.

Of course, amid the flood of new AI announcements, Elon Musk could not be left out. While the biggest tech companies, such as OpenAI and Google, are launching new AI tools and features, Musk is building his own vision of the future of artificial intelligence with xAI. He is focused on creating an AI that is not only functional, but also beneficial to humanity and committed to truth-telling. During OpenAI's DevDay, he chose to hold his own recruitment event for AI engineers in the former offices of OpenAI, the company he once co-founded.

The purpose of xAI

Musk's ambition with xAI is to rapidly innovate and build AI models that put human interests first. He is creating an AI that is ethical and transparent, focused on serving humanity. In doing so, he is critical of the current direction of OpenAI, the company he helped found in 2015. According to Musk, OpenAI is no longer true to its original mission of open source and public access to AI. During the event, he said, "I just don't trust OpenAI for obvious reasons. It is closed, for-maximum-profit AI." His vision for xAI is a platform that does remain open and accessible.

Innovation and Grok

xAI has already released several versions of their Grok chatbot, which allows users to interact with an AI that learns and evolves quickly. The chatbot is designed to provide accurate and relevant answers and constantly adapts based on interactions with users. In addition to the existing chatbot features, xAI is now working on developing voice and search features. This would make the Grok chatbot more versatile and allow users to obtain information directly with voice commands.

In addition, there are plans to link Grok to X (formerly Twitter), the social media platform that Musk previously acquired. Integrating Grok with X would create new opportunities for real-time interactions with AI within social media. This could revolutionize the way users interact with AI on a platform that reaches millions of users.

Open Source: back to basics

One of xAI's most notable commitments is that the company plans to make its AI models open source within 9 months of release. This is a direct reference to the original mission of OpenAI, the company Musk co-founded with the goal of making AI open and accessible to all. Today, OpenAI operates as a more closed, commercial company, which is why Musk is highly critical of its current approach. By sticking to an open source approach, xAI aims to increase the transparency and accessibility of AI, a clear departure from the path that companies like OpenAI and Google have taken.

Musk's vision for the future of AI

Musk has big plans for xAI, according to The Verge, and sees his company as a major player in the world of AI, along with giants such as OpenAI, Anthropic and Google. He believes that in the next five years, xAI can become as dominant in the field of AI as SpaceX is in the space industry. By innovating quickly and offering open source models, Musk wants xAI to not only become a forerunner in technological developments, but also set an ethical example in the AI industry.

Funding and future challenges

While Musk has big ambitions with xAI, achieving them also requires significant financial support. In May 2024, xAI raised a whopping $6 billion in funding, valuing the company at $24 billion. This is a solid foundation to build on, but Musk has indicated that much more money will likely be needed to make really big strides. However, this does not seem to be a problem for Musk, who is the richest person in the world according to the Bloomberg Billionaires Index, with assets of $262 billion.

Musk's experience scaling companies such as Tesla and SpaceX gives him confidence that he can also take xAI to the top. With his unique combination of technological vision and entrepreneurship, Musk has a proven track record of changing industries. Whether he can achieve the same in the world of AI with xAI remains to be seen, but it is clear that he is doing everything he can to achieve this ambition.

Take a leap forward in your marketing AI transformation every week

Every Friday, we bring you the latest insights, news and real-world examples on the impact of AI in the marketing world. Whether you want to improve your marketing efficiency, increase customer engagement, sharpen your marketing strategy or digitally transform your business, 'Marketing AI Friday' is your weekly guide.

Sign up for Marketing AI Friday for free.


On Oct. 1, Microsoft announced a series of major updates to Copilot, their AI assistant. With the theme "An AI companion for everyone," Microsoft focuses on integrating AI as an everyday resource for users. The assistant should be personal and flexible, while respecting users' privacy.

While many of these new features are not yet available in the Netherlands, they do offer an interesting preview of what to expect. Here are some of Microsoft's key announcements:

Speech-driven interaction with Copilot Voice

Copilot Voice introduces voice-controlled interactions with the AI, allowing users to talk to the assistant in a natural way. Whether you want to brainstorm or ask questions, this makes it easier than ever to work directly and hands-free with AI. Users can choose from four different voices to make the experience more personal. Currently, this feature is only available in English-speaking countries.

Daily updates with Copilot Daily

With Copilot Daily, you receive a personalized summary of weather and news every morning, read aloud by your chosen Copilot voice. Microsoft has plans to expand this feature further with more personalized content.

Improved integration with Microsoft Edge

Copilot's integration with Microsoft Edge makes using the AI even easier. Users can now activate Copilot directly from the address bar by simply typing @copilot. This makes it easy to quickly ask questions, summarize Web page content or even translate text while browsing, without interrupting your work.

Analyzing images with Copilot Vision

Copilot Vision is a new, experimental feature that allows AI to analyze not only text, but also images and Web pages. This gives capabilities for visual searches and can provide real-time assistance with tasks such as decorating a room or choosing products. It is a step forward in combining visual information with AI assistance.

Deeper insight with Think Deeper

Think Deeper allows users to have more complex questions analyzed. This feature helps make sense of difficult problems by providing step-by-step answers. Whether you're comparing options for a big purchase or need to make tough decisions, Think Deeper helps you weigh thoroughly and carefully.

Personalized suggestions through Personalized Discover

Based on your previous interactions with Microsoft services, Personalized Discover provides recommendations for how to make the best use of Copilot. This helps users use the AI assistant more efficiently, tailored to their specific needs.

Copilot available everywhere

Microsoft has announced that Copilot will be integrated into all their products and platforms, including Windows and PCs. This means that the AI assistant will always be available with just one click, no matter what device you are using. In addition, Copilot is also integrated with WhatsApp, allowing users to communicate with Copilot via messaging. This feature is already available in the Netherlands.

Bing gets more intelligent with AI-generated searches

Microsoft also announced that Bing now supports AI-generated search. This feature helps the search engine better understand search queries and delivers dynamic content in response. Instead of having to click through to a website, you get a comprehensive landing page with relevant information, images and videos. This is similar to Google's AI Overviews, but is currently not yet available in the Netherlands.


Last week was a busy one for OpenAI. In addition to raising a whopping $6.6 billion in funding, bringing the company to an impressive $157 billion valuation, CEO Sam Altman also had time for DevDay, the annual event where OpenAI presents the latest technologies and innovations to developers.

Several exciting announcements were made during DevDay that could change the future of AI. Below is an overview of the major updates and new features that OpenAI unveiled:

Direct interactions with AI through the real-time API

A key announcement is the new Realtime API, which allows developers to build AI-driven apps that function like real conversation partners. Thanks to a persistent connection, the AI picks up nuances such as emotions and accents, making for a more natural conversation. Another powerful part of this API is support for function calls, which allow the AI not only to respond to queries, but also to perform actions within the app. Think, for example, of automatically placing an order or performing a task based on a spoken command.
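For developers who want a feel for what this looks like in practice, below is a minimal Python sketch of a Realtime API session that registers a function the model may call. It is only an illustration: the URL, model name and event types follow OpenAI's documentation at launch and may change, and the `place_order` function is a made-up example.

```python
# Minimal sketch of an OpenAI Realtime API session with a function/tool definition.
# Assumes the `websockets` package and an OPENAI_API_KEY environment variable.
import asyncio
import json
import os

import websockets

URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

async def main():
    headers = {
        "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        "OpenAI-Beta": "realtime=v1",
    }
    # Note: older versions of the websockets package use `extra_headers` instead.
    async with websockets.connect(URL, additional_headers=headers) as ws:
        # Register a (hypothetical) function the model may call during the conversation.
        await ws.send(json.dumps({
            "type": "session.update",
            "session": {
                "instructions": "You are a friendly voice assistant for a web shop.",
                "tools": [{
                    "type": "function",
                    "name": "place_order",  # made-up example function
                    "description": "Place an order for a product by name and quantity.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "product": {"type": "string"},
                            "quantity": {"type": "integer"},
                        },
                        "required": ["product", "quantity"],
                    },
                }],
            },
        }))
        # Ask for a response; in a real app, audio input would be streamed
        # with `input_audio_buffer.append` events before this.
        await ws.send(json.dumps({"type": "response.create"}))
        async for message in ws:
            event = json.loads(message)
            print(event.get("type"))
            if event.get("type") == "response.done":
                break

asyncio.run(main())
```

In a production app, the function call returned by the model would be executed by your own code (for example, actually placing the order) and the result sent back so the conversation can continue.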

Speed and cost savings thanks to prompt caching

Another innovation is Prompt Caching, a feature that recognizes and reuses prompt content the model has recently processed. This is especially useful if you regularly perform similar tasks with the same instructions, such as text generation or analysis. Prompt Caching makes execution up to 50% cheaper and faster, and it works with all new OpenAI models. This can be especially interesting for companies that work with AI at scale and want to save costs.
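As an illustration, the Python sketch below (using the official `openai` package) shows the basic idea: put the long, unchanging instructions at the start of every request so repeated calls share the same prefix, and check the usage data to see how many prompt tokens were served from cache. The field names reflect the API at the time of writing and may change.

```python
# Sketch: structuring requests so OpenAI's automatic prompt caching can kick in.
from openai import OpenAI

client = OpenAI()

# Long, static instructions go first so repeated requests share the same prefix;
# caching is applied automatically once the prefix is long enough (roughly 1k+ tokens).
STATIC_SYSTEM_PROMPT = (
    "You are the brand voice of an (imaginary) web shop. "
    "Tone of voice guidelines: ... "  # in practice: a long style guide pasted here
)

def generate(copy_brief: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": STATIC_SYSTEM_PROMPT},  # identical every call
            {"role": "user", "content": copy_brief},              # only this part varies
        ],
    )
    details = getattr(response.usage, "prompt_tokens_details", None)
    if details is not None:
        print("cached prompt tokens:", details.cached_tokens)
    return response.choices[0].message.content

print(generate("Write a 50-word product description for a desk lamp."))
```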

Train smart and efficient AIs with model distillation

Model Distillation allows you to train smaller, specialized AI models with the knowledge of larger models. This allows you to create, for example, an AI that knows exactly how your brand communicates, without having to use the large and expensive models. This produces faster and more efficient results, making it attractive for companies to develop custom AI solutions.
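The sketch below outlines the general idea in Python: collect answers from a large "teacher" model, save them as training examples and start a fine-tuning job for a smaller "student" model. It is a simplified illustration based on OpenAI's public API; the example questions and the metadata tag are made up, and a real project would need far more, and more carefully curated, examples.

```python
# Sketch of the distillation idea: capture a large model's answers,
# then fine-tune a smaller model on them.
import json
from openai import OpenAI

client = OpenAI()

questions = ["How do I request a return?", "What are your delivery times?"]  # example data

# 1. Generate "teacher" answers with a large model and store them for later review.
records = []
for q in questions:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": q}],
        store=True,                                     # keep the completion server-side
        metadata={"project": "support-distillation"},   # hypothetical tag
    )
    records.append({
        "messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": resp.choices[0].message.content},
        ]
    })

# 2. Write the pairs to a JSONL training file and fine-tune a smaller "student" model.
with open("distillation.jsonl", "w") as f:
    for r in records:
        f.write(json.dumps(r) + "\n")

training_file = client.files.create(file=open("distillation.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini-2024-07-18")
print("fine-tuning job:", job.id)
```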

Make AI understand images with vision fine-tuning

With the introduction of Vision Fine-Tuning, GPT-4o becomes even more powerful. The model can now be fine-tuned on images as well as text, enabling numerous new applications, such as smart classification of products or improved image recognition. Developers can now train the model with both text and images, significantly increasing the potential for visual AI applications.
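To give an idea of what such training data can look like, the snippet below builds a single example in the chat-style JSONL format used for fine-tuning, with an image included in the user message. The image URL and the "lighting" label are placeholders; this is only an illustration of the format, not a complete training set.

```python
# Sketch of a single vision fine-tuning example: the training file uses the same
# chat format as the API, with image content included in the user message.
import json

example = {
    "messages": [
        {"role": "system", "content": "Classify the product photo into one category."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Which category does this product belong to?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photos/lamp-123.jpg"}},
            ],
        },
        {"role": "assistant", "content": "lighting"},
    ]
}

# Append the example to a JSONL training file (one JSON object per line).
with open("vision_finetune.jsonl", "a") as f:
    f.write(json.dumps(example) + "\n")
```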

New Features in the Playground

For developers who like to experiment with AI, OpenAI has also added new features to the Playground. The new "Generate" button provides support for creating system prompts, function definitions and structured output schemas. This makes it easier to set up complex tasks quickly and efficiently.

Reduced costs for APIs

An added bonus for developers is that working with GPT-4 via the API is now cheaper. This lowers the barrier for companies and individuals to integrate advanced AI into their workflows. Also, OpenAI has announced that prices for the Realtime API for voice (now around €0.06 per minute of input and €0.23 per minute of output) are likely to drop soon, making the use of voice-driven AI apps even more attractive.

Future of voice-activated AI apps

With these announcements, OpenAI is taking a clear step toward a future in which real-time, voice-driven AI apps become the norm. The new features make it easier AND cheaper for developers to build AI solutions that can interact with users in real time. This opens the door to a wave of innovative applications across industries, from customer service to productivity tools.

OpenAI has indicated that developers can now try out these new features for free with the training tokens that are temporarily available. This provides a great opportunity to experiment and see how these new tools can be used for your projects.


OpenAI continues to push the boundaries of artificial intelligence with the introduction of ChatGPT-4o Canvas. This new update allows users to collaborate with ChatGPT in a more efficient and visually organized way, both for writing text and for writing and editing code. With Canvas, users benefit from a more organized work process, in which editing, tracking and modifying text is much easier. This makes it a very useful tool for writers, developers and anyone working with content or code.

What is ChatGPT-4o Canvas?

ChatGPT-4o Canvas was introduced last week. It gives users a new, intuitive window that makes collaboration with AI easier: a visual workspace where you can not only chat with the model, but also make changes directly in your documents. This separate window provides a clear structure in which you can work on text or code without losing progress or previous versions. It is especially useful for professionals who work with large amounts of text or complex code and need clarity and structure.

The power of Canvas lies in the ability to make instant changes, restore previous versions of your work and easily make specific adjustments without affecting the entire document. This makes editing and collaboration with AI faster and more effective.

Key features of ChatGPT-4o Canvas

Version control

The handy version control feature allows users to easily view and restore previous versions of their text or code. This means you never have to worry about losing changes or inadvertently overwriting something. You can use the arrows in the upper right corner to scroll through previous versions, which is especially useful when working on larger projects involving multiple iterations.

Easy copying

A simple but effective feature is quickly copying text or code via the 'copy' icon. This allows you to share or reuse pieces of text or code in other projects without having to copy them manually each time. This saves time and prevents copy-and-paste errors.

Simple adjustments

One of the best features of Canvas is the ability to make changes directly in the text or code. You can easily modify specific sections while keeping the rest of the document intact. This makes it a lot easier to make small changes without risking disrupting the whole thing.

Comprehensive options

Besides the basic functions, ChatGPT-4o Canvas also offers more advanced features. For example, you can easily add emojis, automatically add headings and titles, adjust the reading level of your text and even optimize the length of your text. These features not only make it easier to structure documents, but also help improve the readability and presentation of your text.

Who has access to ChatGPT-4o Canvas?

ChatGPT-4o Canvas is currently available as a beta version for users with a Plus or Team account. This makes it accessible to both individuals and teams looking for more efficient ways to collaborate on text and code. Although still in the testing phase, Canvas is a promising addition that is expected to improve the work processes of many professionals. When selecting a model, you can now simply select "ChatGPT4o-with-canvas" to take advantage of these new features.

User collaboration with AI

ChatGPT-4o Canvas not only provides more control and visibility, but it also improves collaboration between humans and AI. Because users can make changes and restore previous versions in real time, the workflow becomes significantly smoother and clearer. This can be especially beneficial for people using ChatGPT and working on complex projects, as well as for writers or editors who need to keep track of multiple versions of a text.

With the launch of ChatGPT-4o Canvas, OpenAI is once again demonstrating the ever-improving collaboration between humans and AI. This tool allows users to work with large amounts of text or complex code in an organized and efficient way. Whether you are working on a large-scale project or need to make simple changes, Canvas makes the process faster and more user-friendly.


At Meta Connect 2024, Meta's annual developer conference, technology was once again in the spotlight. Meta presented new developments in AI, augmented reality (AR) and virtual reality (VR), offering a glimpse into the future of technology and how the company plans to increasingly merge our digital and physical worlds.

One of the most impressive reveals of the conference was the Meta Quest 3S, an affordable virtual reality headset that takes the VR user experience to the next level. Still, the Orion AR glasses prototype was the absolute highlight. CEO Mark Zuckerberg described these glasses as "the most advanced glasses the world has ever seen." This indicates that Meta is not only focusing on affordable VR solutions, but is also firmly committed to innovative and advanced AR technology.

Meta Quest 3S: The next standard in VR

The Meta Quest 3S was presented as an affordable but powerful VR headset. Aimed at a wider audience, this new model offers many features without the high cost. The Quest 3S offers an immersive VR experience with improved performance, high resolution, and a more comfortable fit for prolonged use.

What makes the Meta Quest 3S special is that Meta has managed to keep the headset accessible in terms of price while making significant technological advances. The headset is on sale in Dutch web shops for 330 euros. This reasonably affordable price makes VR more accessible to consumers and to companies that want to use the technology for entertainment, education or training simulations.

Orion AR glasses: The future of augmented reality

While the Meta Quest 3S is already impressive, the Orion AR glasses prototype was without a doubt the biggest surprise of Meta Connect 2024. Mark Zuckerberg presented a prototype of the Orion AR glasses, which he said could be one of the most advanced glasses yet. It is still unclear when the glasses will officially hit the market or if they will remain a prototype. However, the fact that Meta showed a prototype indicates that we won't have to wait long.

With Orion, Meta aims to further blur the lines between the physical and digital worlds: users would be able to see digital objects overlaid on the real world through the glasses, without needing a VR headset. While the potential for AR is promising, it remains to be seen if and when Meta can make this technology available to a wide audience. This development could have huge implications for industries such as education and healthcare, and even for consumers' daily lives.

Improvements in AI

In addition to the hardware reveals, AI took center stage at Meta Connect 2024. Meta introduced enhanced AI voice interactions, where users can interact with AI assistants using celebrity voices. This technology allows for more natural conversations, where the AI is able to understand complex questions and requests and respond to them in a human voice. This makes interactions much more fluid and personal.

Another important AI development presented was the ability to automatically translate videos. Users can now watch videos with real-time translation. This technology allows people to watch videos in their native language, regardless of the original language of the content.

Meta's vision of the future

At Meta Connect 2024, it became clear that Meta is not only focusing on VR and AR, but also on a broader vision of the future in which artificial intelligence plays a key role in users' daily lives. The combination of AR, VR and AI in Meta's products and services gives us a glimpse of how the company sees the future: a world where the boundaries between physical and digital reality are increasingly blurred.

What will the future bring?

The products and technologies presented at Meta Connect 2024 provide a first glimpse of what is possible. With the announcement of state-of-the-art AR glasses and advances in AI voice interactions and translation, Meta appears to be gearing up for a future in which technology and everyday life are increasingly intertwined.

While there are still many questions about when these products will hit the market and exactly how they will work, one thing is certain: Meta continues to innovate and is at the forefront of developing technologies that will change the way we work, communicate and live.


Anthropic, the company behind the popular AI chatbot Claude, has introduced a brand new feature: Artifacts. This addition allows users to create different types of interactive content through simple text prompts. The generated content then appears in a separate window next to the main chat, providing an uncluttered workflow. With Artifacts, users can not only quickly create content, but also easily edit, publish and share it via a Web page. If you do a lot of content creation, this gives you many advantages.

What are Artifacts?

Artifacts are essentially interactive content modules generated by Claude. With a simple text prompt, you can get complex output such as interactive presentations, websites, diagrams and even code snippets. This makes it a powerful tool not only for individual users, but also for teams working on different projects.

With Artifacts, Anthropic takes an important step in the development of Claude, which is increasingly geared toward interactivity and ease of use. Where advanced knowledge of code or web development was previously required, users can now create valuable content of many different types with minimal effort and technical know-how.

What can you make with Artifacts?

Artifacts are flexible and suitable for various types of content. Here are some applications in which Artifacts can be used:

  1. Documents (in Markdown or plain text): For creating text documents that are easy to read and edit, ideal for reports, notes or manuals.
  2. Websites (single HTML pages): Artifacts can generate simple HTML web pages. These contain all the necessary HTML, CSS, and even JavaScript in one file, making it easy to quickly publish a website without the need for extensive setup.
  3. Diagrams and flowcharts: Visual representations of processes and structures are valuable in many industries, such as project management and software development. With Artifacts, you can quickly generate and customize these diagrams.
  4. Interactive React Components: Want to add an interactive feature to a website, such as a quiz, a form or a mini-game? Artifacts can generate code snippets that can be easily integrated into React.
  5. Code snippets: Programmers can use Artifacts to quickly generate code that performs specific functions. This can range from small scripts to complex modules that fit into a larger project.
  6. Scalable vector graphics (SVG): For graphic designs, Artifacts offers the ability to create scalable vector graphics. These images are ideal for websites and apps because they can be scaled without loss of quality.

How do you create an Artifact?

Creating an Artifact is simple and quick. You begin by starting a conversation with Claude, in which you upload one or more files that serve as the basis for the content you want to create. Next, you instruct Claude to create a specific type of content. This can be an interactive report, a mini-game or even a complete design.

Once Claude has enough information, usually more than fifteen lines of text, it gets to work generating the Artifact. What's special about Artifacts is that the content functions independently; you don't need context from the rest of the conversation to understand or use the Artifact. This makes it easy to reuse or modify later.

The result appears in a separate window on the right side of the main conversation. Here you can immediately view the visual output, analyze the code or make adjustments.
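Artifacts itself is a feature of the claude.ai interface, so there is no separate "Artifacts API" to call. Still, the underlying idea, asking Claude for one self-contained piece of content, can be sketched against the Anthropic Messages API, as in the Python example below. This is only an approximation of what Artifacts does in the chat window: the model name may need updating and the quiz prompt is just an example.

```python
# Sketch: asking Claude for a single self-contained HTML file via the Anthropic API,
# similar in spirit to what an Artifact holds in the claude.ai interface.
# Assumes the `anthropic` package and an ANTHROPIC_API_KEY environment variable.
import anthropic

client = anthropic.Anthropic()

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": (
            "Create a single self-contained HTML page (inline CSS and JavaScript) "
            "with a short interactive quiz about our product range."
        ),
    }],
)

# Save the generated page so it can be opened directly in a browser.
with open("quiz.html", "w") as f:
    f.write(message.content[0].text)
```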

Examples of the use of Artifacts

Let's discuss some concrete applications of Artifacts:

  • Say you are working on a marketing campaign and you want to create an interactive presentation that guides customers through a product or service in a visual way. Instead of spending hours creating a PowerPoint presentation by hand, you can use Claude to input a few text prompts and have an interactive presentation generated.
  • Are you a developer who needs a quick prototype of a new web page? Simply upload your wireframes or ideas to Claude, and it can create an interactive web page with React components and scalable SVG images.
  • Need complex diagrams for a presentation or report? Claude can create visual diagrams based on simple instructions that clearly depict processes and can be effortlessly modified.

How do you edit and use Artifacts?

One of the most impressive features of Artifacts is the ability to edit and extend content. You can easily ask Claude to make changes to a generated Artifact, such as adding an additional function to a React component or adding more detail to a diagram. These changes appear immediately in the Artifact window, making your workflow smoother.

In addition, there is the ability to store and manage multiple versions of an Artifact. This gives users the flexibility to go back to previous versions or compare multiple versions of the same Artifact.

Publishing and sharing Artifacts

Another useful feature of Artifacts is the ability to publish them. This is done through a unique Claude URL, which you can share with others. This makes collaboration and gathering feedback especially easy, as anyone with the URL can view it.

One minor limitation is that you cannot embed the Claude URL within your own website, which can be annoying for some users.

Remixing of Artifacts

When you share an Artifact, others can also copy and use it. This is done through the "Remix Artifact" button, which allows users to copy the content to their own Claude environment. Here they can make adjustments and reuse the content completely as they wish.

While this is a great way to collaborate and share knowledge, it can be a drawback for some users that this feature cannot be turned off. It means that your work can always be copied and then modified by others without your permission. This is especially something to consider with sensitive or copyrighted content.

The future of content creation with Artifacts

Artifacts offers unprecedented opportunities for creating, editing and sharing interactive content. Whether you work in marketing, software development or design, this tool can significantly speed up and simplify your workflow. By using AI, such as Claude, even users with no technical background can create complex content that looks professional and functions perfectly.

Want to discover for yourself how Artifacts works? Check out an example of an interactive page created with Artifacts based on a few simple prompts.

In addition, you can watch this video for a comprehensive introduction to Artifacts and all the capabilities this new feature offers.

Invitation to monthly AI inspiration sessions

We would love to look at the most important AI developments together with you: we discuss the latest news and updates, share experiences and give you concrete tools, and you can ask questions and spar with us.

Would you like to be part of this? Then sign up.


On Monday, September 9, 2024, Apple hosted the event at which the world was introduced to the iPhone 16 and other new Apple products. Every year, Apple manages to wow the tech world with its latest gadgets, and this edition was no exception. Here we discuss Apple's innovations for this year.

iPhone 16: A revolution with the A18 chip and Apple Intelligence

The iPhone 16 took center stage at this event and brought with it some major innovations. One of the biggest upgrades is the introduction of the A18 chip, which powers all iPhone 16 models. This powerful chip is the backbone of Apple Intelligence, a new AI platform that brings advanced capabilities such as image and voice recognition to the iPhone.

The iPhone 16 event showcased Apple's latest innovations, including iPhone 16 Pro models with larger screens (6.3 inches and 6.9 inches) and a 48MP ultrawide camera. These models are ideally suited for photography enthusiasts and content creators. The base versions of the iPhone 16 also received a new design, with a vertical camera setup that is both stylish and functional.

The first steps in AI: Apple Intelligence

Apple revealed at the event that the iPhone 16 would be the first major step in the use of AI, with the introduction of Apple Intelligence. This platform integrates various AI features into the everyday use of the iPhone. For example, consider the improved Siri, which offers a new level of personalization and interaction thanks to AI. Siri features a glow effect around the screen, which is not only a visual improvement, but also shows that the assistant can support you.

Users can count on basic AI features such as image recognition, which allows the iPhone to recognize objects and faces even better. AI-generated content and improved speech recognition are also among the features, making the iPhone even more intuitive to use.

What the future holds: iOS 18.1 and further AI capabilities

While the iPhone 16 launch was impressive, the true power of Apple Intelligence is not expected to be fully utilized until the rollout of iOS 18.1 in October 2024. With this update, advanced AI features such as text rewriting, automatic recording of phone calls, and their transcription are expected. We will also see improved image generation and advanced image analysis, which will allow you to have photos automatically categorized or enhanced, for example.

Another exciting development is the even smarter and more personalized version of Siri in iOS 18.1, which continues to be developed in collaboration with OpenAI. This new Siri version promises to make interactions even smoother and will support users in unique ways, such as by making suggestions based on habits and preferences.

Apple Watch and AirPods: small but beautiful

In addition to the iPhone 16, there are also interesting updates to the Apple Watch and AirPods. The Apple Watch has received a slimmer design, with an option for a larger 49mm model. This new size is especially interesting for users who prefer a larger and brighter screen, for example for reading notifications or using fitness apps.

The AirPods have an improved design that is both stylish and ergonomic. The new AirPods fit better and make sound even clearer. Above all, the focus on user experience is key, with improved noise reduction and sound quality.

Should you upgrade to the iPhone 16?

For many people, the question is whether it is worth upgrading to the iPhone 16. This obviously depends on your current device and whether you need the latest features. If you use the camera a lot, the improvements in the Pro models are definitely worth it. However, the combination of the A18 chip and Apple Intelligence's AI features also offers a lot in terms of productivity and ease of use.

It is important to note that Apple Intelligence is not available in the European Union for now. Due to the Digital Markets Act (DMA), Apple has suspended the introduction of certain AI features in the EU. This may affect your decision to switch immediately, especially if you are interested in the AI capabilities that won't become fully available until later.

How do you follow the Apple Event?

Didn't want to miss anything about the launch of the iPhone 16 and other Apple products? The Apple Event on September 9, 2024 could be followed live from 7 p.m. Dutch time via Apple's official channels, such as its website and YouTube, and many people tuned in to be among the first to hear the latest news.


An important update for the "coders" among us, or anyone who likes to build apps or software: Cursor. This new tool makes it easier than ever to build your own software, even without deep technical knowledge, and it attracted a lot of attention this past week. Designed specifically for coding, it allows both experienced programmers and beginners to write functional code in a short amount of time.

What makes Cursor unique?

Cursor combines the power of a development environment with the usability of an AI chatbot. Whereas tools such as ChatGPT are mainly focused on general text generation, Cursor is specifically designed to support programmers in writing code. This makes the tool ideal for people who want to build their own software or apps, but do not yet have the knowledge or experience to do so without help.

What makes Cursor truly unique is its integration with popular AI models such as Claude 3.5 Sonnet and GPT-4. These models help users turn ideas into functional code at lightning speed. Within minutes, a simple concept can turn into a working application, without the user having to understand all the details of coding themselves.

In addition, Cursor is based on the highly popular platform Microsoft Visual Studio Code, a code editor used by millions of programmers worldwide. This integration means users can quickly and easily make changes to their code and troubleshoot problems, all within the same environment.

Who is Cursor intended for?

What makes Cursor so powerful is that it makes coding accessible to everyone. Whether you are an experienced programmer who wants to work faster or someone just starting to code, Cursor helps you develop software efficiently.

A striking example is the story of an eight-year-old girl who developed her own app with the help of Cursor. This shows how user-friendly and powerful the tool is: even children can use it and bring their creative ideas to life.

Why do companies choose Cursor?

With more than 30,000 paying customers, including employees of companies such as Perplexity, Midjourney and OpenAI, Cursor has quickly proven itself as a reliable tool for software development. Companies choose Cursor because of the speed and simplicity with which ideas can be turned into working software. The AI support in Cursor allows complex projects to be completed with fewer errors and less effort, ultimately leading to a more efficient development cycle.

In addition, the seamless integration with the Visual Studio Code editor is a big advantage for programmers who are used to working with that environment. They can start coding right away without having to learn a new interface or drastically change their workflow.

The future of coding with AI

Tools like Cursor show that the future of software development is changing rapidly. Whereas coding used to be a skill mainly reserved for "technerds," it is now something anyone with a good idea can pick up. The combination of AI and user-friendly development environments such as Cursor ensures that the threshold for software development is becoming lower and lower.


Google continues to develop new AI solutions and this week introduced multiple updates to their Gemini models, personalized assistants called Gems and improved image generation with Imagen 3. These new developments show how Google continues to expand the functionality of their AI systems, taking a step forward in a rapidly changing market.

New Gemini versions: Flexibility and performance

Google has announced three new versions of their Gemini AI models, each with unique features and applications.

Gemini 1.5 Flash-8B

A more compact model that is still powerful enough for multimodal applications and processing large amounts of information. This model can summarize long texts, which is useful for companies that want to get a quick overview of extensive documents or data.

Gemini 1.5 Pro

Gemini 1.5 Pro is designed for more specialized tasks, such as coding and complex jobs. Companies that deal a lot with technical issues or custom solutions can benefit from the improved performance of this model.

Gemini 1.5 Flash

Gemini 1.5 Flash has shown major performance improvements on several internal benchmarks. This model can be especially valuable for organizations looking to use AI for fast and efficient problem-solving tasks.

All of these models are available through Google AI Studio and the Gemini API, allowing developers and companies to test and customize them for their specific needs. These updates are labeled with the date "0827" to avoid confusion with earlier versions. On Sept. 3, the older Gemini 1.5 Pro Exp 0801 model was automatically replaced with the new 0827 version.
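For developers who want to experiment, a call to one of these models via the Gemini API can look roughly like the Python sketch below. It assumes the `google-generativeai` package and an API key from Google AI Studio; the experimental "-exp-0827" model name matches the labels mentioned above but may since have been replaced by newer versions.

```python
# Minimal sketch of calling one of the new Gemini models via the Gemini API.
import os

import google.generativeai as genai

# Configure the client with an API key created in Google AI Studio.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

model = genai.GenerativeModel("gemini-1.5-flash-8b-exp-0827")
response = model.generate_content(
    "Summarize the key points of this report in five bullet points: ..."
)
print(response.text)
```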

Creating your own AI assistants with Gems

In addition to the new Gemini models, Google is also introducing Gems, a new tool that allows users to create their own AI assistants. These personalized AI experts can be tailored to specific tasks or topics, similar to the customization features in tools such as ChatGPT, Microsoft's Copilot, and Claude. Gems makes it possible to create AI assistants specifically tailored to a user's workflow, which can significantly improve efficiency.

Google has also developed ready-to-use Gems, such as a brainstormer, a career advisor, a programming partner, and a writing editor. These preconfigured assistants can be used immediately for common tasks and are ideal for users who want to get started quickly.

We also tested Gems with our AI colleague Liza. Although it was easy to set up, the capabilities and results were disappointing for now. We will test this further and continue to monitor how Gems develops.

Currently, Gems are only available to paying users of Gemini Advanced, Gemini Business, and Gemini Enterprise.

Image generation with Imagen 3

In addition to the Gemini updates and Gems, Google is also launching Imagen 3, a new version of their image generation model. Imagen 3 allows users to create images based on text prompts, allowing for a wide range of styles, such as photorealism, oil painting, and even clay animations.

This enhancement positions Google as a strong competitor to image generation platforms such as DALL-E 3, Midjourney, and Flux. Unfortunately, Imagen 3 is not currently available in the Netherlands, but this new functionality is expected to be rolled out worldwide soon.

What do these developments mean for businesses?

With the launch of these new AI applications, Google shows how committed they are to developing powerful tools that help organizations work faster and more efficiently. Whether it's improving word processing capabilities with Gemini, creating custom assistants with Gems, or generating high-quality images with Imagen 3, these innovations offer many opportunities for businesses.

Google's ongoing development of AI shows that we are only at the beginning of what is possible. The next step for organizations is to properly integrate these tools into their existing processes and train staff to use AI responsibly and effectively.


The world of artificial intelligence is not standing still. While many companies are just starting to use AI, OpenAI already seems ready for the next big step. Rumors abound about what their next development will bring, and more and more eyes are on an upcoming release codenamed Strawberry.

According to a recent article by The Information, OpenAI is working on a new GPT version that builds on previous models, focusing on better problem-solving skills and better reasoning. If they succeed in this, it could give them another big lead in the race for the most advanced AI technology.

What can we expect from Strawberry?

Strawberry is seen as a big step forward from current AI models, such as GPT-4. In particular, it is said to be better at solving complex mathematical problems, providing assistance in strategic planning and even in tasks that require human logic, such as solving puzzles. These improvements focus on deeper and more accurate reasoning, which is essential for more challenging tasks.

When is Strawberry coming out?

Although the full release of Strawberry is probably not expected until 2025, a beta version will be available as early as this fall. This will give users and companies a chance to experiment with the model in advance, while OpenAI continues to work on refining the technology.

Update: on Sept. 12, 2024, OpenAI launched o1 (available in ChatGPT as o1-preview), the model this article is about.

What makes Strawberry different from GPT-4?

One notable difference from earlier models is that OpenAI's Strawberry is given more time to arrive at answers. Whereas GPT-4 and other similar AIs provide quick answers, Strawberry is designed to think more deeply about complex problems. This means that the model will be able to provide more accurate and reasoned answers to complex questions.

OpenAI and responsible use of AI

In the quest for increasingly sophisticated AI models, OpenAI is also keeping an eye on safety and accountability. CEO Sam Altman recently announced a partnership with the US AI Safety Institute to ensure that new AI models, such as Strawberry, are thoroughly tested before widespread deployment. This demonstrates the importance of careful development and responsible use of AI.

What is the impact of this new generation of AI?

While it is not yet clear how big the impact of Strawberry will be, it is clear that new AI systems such as these will continue to change the way we work. They offer promising opportunities for innovation and efficiency, but they will also affect roles and tasks within organizations. It is therefore wise to keep a close eye on these developments and consider how your company can prepare for these changes.

Looking ahead to Orion

In addition to Strawberry, OpenAI is also developing a new AI model called Orion. This project should ensure that OpenAI is even further ahead. Interestingly, Strawberry is playing an important role in the development of Orion by generating the training data needed to improve this model.

The future of AI is brimming with possibilities, and with releases like Strawberry and Orion, OpenAI shows that they continue to innovate. For companies and users following this technology, this could mean another big step in how AI impacts our daily lives.
