Google also rolled out a flurry of new updates and tools last week, just like OpenAI. Most are not yet usable for us in the Netherlands, but it is clear that Google is making great strides.

Veo 2 for advanced video generation

In 2025, we are expected to create more and more videos with AI. In addition to existing tools such as Runway and Kling, major players such as OpenAI and Google are also entering this market. OpenAI launched Sora last week, and Google DeepMind introduced Veo 2 this week. Neither tool is available in the Netherlands yet, but together they give a clear picture of how video production is going to change.

Veo 2 is a new AI model that creates high-quality videos in various styles and topics. Here are its key features:

  • Realistic details and motion: Veo 2 creates videos that look super realistic. The model better understands how things move and look in the real world, such as how people act or how objects physically behave.
  • Full creative control: You can specify exactly what you want in terms of style and effects. For example, ask for a wide-angle shot with an "18mm lens" or a blurry background with "shallow depth of field." Veo 2 delivers it in razor-sharp 4K videos that can last minutes.
  • Versatile applications: Veo 2 can create all kinds of scenes, from scientific laboratories to beautiful natural environments. Everything is rendered with impressive visual detail and high quality.
  • Reliable results: The model makes far fewer mistakes, such as extra fingers or toes, making the videos look much more credible and professional.

Veo 2 is currently being rolled out through Google Labs' VideoFX and will soon expand to YouTube Shorts. SynthID watermarks will be added to mark outputs as AI-generated and prevent misinformation.

See more about Veo 2 

Below are examples of videos you can create with Veo 2.

Imagen 3: New version now available in 100+ countries (minus the Netherlands)


Google is making Imagen 3 available through ImageFX in more than 100 countries. This AI tool allows you to generate images.

What has been improved:

  • More details, fewer errors: Imagen 3 now understands prompts much better. This results in images that are more accurate and can be generated in a wider variety of styles.
  • High ratings: Reviewers indicate that the images are of high quality and that the model responds well to long and complex prompts.

Google Gemini now also uses Imagen 3 to generate images. 

Read more about Imagen 3 here.

Gemini 2.0 Flash Thinking

Google has launched an experimental AI model, Gemini 2.0 Flash Thinking, that answers complex questions and shares its "thoughts" along the way. The model combines improved reasoning capabilities with the speed of Gemini 2.0 Flash and may compete with OpenAI's o1 model. It "reasons" by breaking down tasks into smaller steps for better results.

An interesting addition is that you can also see which steps were followed, as shown below in a test I conducted.

You can test this model in Google AI Studio. 
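
If you prefer to experiment outside the AI Studio interface, the model is also exposed via the Gemini API. Below is a minimal sketch using the google-generativeai Python SDK; the model identifier ("gemini-2.0-flash-thinking-exp") and the prompt are my own assumptions and may differ from what Google AI Studio shows you.

```python
# Minimal sketch, not an official example: calling the experimental thinking
# model via the google-generativeai SDK. The model name below is an assumption
# and may change; check Google AI Studio for the current identifier.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # API key generated in Google AI Studio

model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")
response = model.generate_content(
    "A train leaves at 09:12 and arrives at 10:47. How long is the trip?"
)

# The experimental model can return its intermediate reasoning as separate
# parts alongside the final answer, so we print each part individually.
for part in response.candidates[0].content.parts:
    print(part.text)
```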

Whisk: AI tool for creating unique images

Google Labs has introduced Whisk, a new AI tool that lets you use images to create original new visuals. Instead of long text prompts, you simply provide images for the subject, scene and style. Whisk combines these and generates, for example, digital designs, pins, stickers or other unique visuals.

The tool uses AI models such as Gemini and Imagen 3. It is designed to quickly explore and remix ideas, ideal for creative projects. It is not about perfect details, but about discovering new visual possibilities.

Whisk is currently available only in the US.

Read more about Whisk.

Apptronik and Google DeepMind join forces in robotics

Apptronik and Google DeepMind have announced a strategic partnership to combine advanced AI and innovative robotics technology.

Together they are developing versatile and safe humanoid robots, such as Apptronik's Apollo, which is designed for complex tasks in industrial environments.

Have you embraced AI for your marketing yet?

For companies that also want to take a step toward using AI, we have developed AI colleagues. These digital colleagues can support your marketing team in different areas, such as content creation, personalization and data analysis. With our AI solutions, you make your communications not only smarter, but also more powerful.


Competition in the world of artificial intelligence is increasing as Chinese AI labs rapidly introduce new AI models that challenge OpenAI's o1. Both Alibaba's QwQ-32B and DeepSeek's R1 claim better performance than the current leading model in specific areas. We discuss them in more detail in this article.

What differentiates the models of Alibaba and DeepSeek?

This new generation of AI models takes a different approach to problem solving. Instead of giving an immediate answer, these models take a moment to "think" before responding. During this longer reasoning process, the model generates and reconsiders possible answers, making the output more reliable, especially on complex questions. It mimics human thought processes and thus provides a more sophisticated and nuanced form of AI interaction.

Restrictions and government control

The influence of the Chinese government is evident in this technology. Both models are subject to strict regulations and avoid sensitive topics. Questions about politically sensitive subjects are either ignored or answered along the Chinese government's official line. This may be a limitation for some users. However, both models are available for download, making them accessible to a wide audience.

Availability and use

DeepSeek R1 is immediately available to users. By simply creating an account, you can try the model for yourself and experience how it performs on complex tasks. QwQ-32B, Alibaba's model, is still in the preview phase, but shows a lot of promise for future applications.

A new phase in AI development

With these developments, Chinese AI labs are demonstrating their ambition to play an important role in the global AI industry. The "thinking" models not only offer technological advances, but also take a step toward AI that is more human-like and reliable. While the influence of regulation cannot be denied, QwQ-32B and DeepSeek R1 show that the race for the best AI is becoming increasingly exciting. We are very curious to see how these models will evolve.

For DeepSeek, you can get started right away by creating an account here. QwQ is currently only available as a preview version.


We get a special 'AI events calendar' this month. OpenAI has announced no less than '12 days of Shipmas,' introducing new AI releases and demos every business day starting December 5. Below is an overview, updated after each announcement.

Day 1: OpenAI o1 and ChatGPT Pro

On Dec. 5, 2024, the official o1 model and the new ChatGPT Pro subscription were launched.

Official o1 model now available

OpenAI's o1 model is now officially out of the testing phase and available in ChatGPT. The model has improved significantly in speed and capabilities since the September preview version.

Especially in coding, math and writing, o1 now performs much better. You can also upload images for o1 to analyze, and advanced speech functionality has been added. Web search and file uploading will follow.

For developers, o1 will be available through the API, allowing them to integrate the model into their own applications and workflows.

If you want to use o1 or o1-mini, you can now select them as models per chat in ChatGPT.

ChatGPT Pro: access to the most powerful AI models

In addition to o1, OpenAI is introducing ChatGPT Pro, a $200 per month subscription. This subscription gives unlimited access to o1, o1-mini, GPT-4o and advanced voice features.

The o1 Pro version uses more processing power for complex problems and consistently scores better in tests on data science, programming and legal analysis.

In particular, answers to complex mathematical, technical and scientific questions are a lot more reliable than with the standard version.

ChatGPT Pro is particularly suitable for professionals who need to solve complex problems on a daily basis, such as researchers and engineers. For regular use, the standard ChatGPT is often sufficient, but for those with really challenging technical or analytical problems, Pro can be very valuable.

Watch the announcement below.


Day 2: Reinforcement Fine-Tuning Research Program

OpenAI introduces Reinforcement Fine-Tuning (RFT), a new method that allows AI models to adapt even better to specific fields and data. This technique goes beyond standard fine-tuning by teaching models to reason and improve within specialized domains.

RFT opens doors for sectors such as law, health care and science. A notable example is the collaboration with Berkeley Lab, where a model was trained to identify genetic causes of rare diseases. With only 1,100 examples, this model produced impressive results.

This innovation makes AI more accessible and powerful for complex problems.

Watch the announcement below.


Day 3: Sora - Text-to-Video AI

OpenAI is introducing Sora, the text-to-video AI tool it announced earlier this year. Unfortunately, it is rolling out first in most countries outside Europe, with the Netherlands to follow later.

Sora offers many interesting features such as planning scenes via storyboards and customizing styles with the remix function:

Generate video from text
Describe a scene and Sora turns it into a video. Choose from different aspect ratios, resolutions (up to 1080p) and lengths (5 to 20 seconds).

Storyboards
Plan your video step by step. Define scenes, actions and timelines while Sora fills in the details. Perfect for structured creations.

Remix feature
Customize existing videos by changing objects or styles; in the preview, for example, woolly mammoths walking through the desert were replaced with robots.

Image-to-video
Use an image as a base and let Sora convert it into a fluid, moving scene.

Community feed
Discover videos from other users, learn new techniques and share your own creations.

Users with ChatGPT Plus or Pro subscriptions receive exclusive benefits, such as additional generations and higher resolutions.

Watch the introduction below.


Day 4: Canvas updates

After months in beta, OpenAI has expanded the Canvas feature with useful new applications and made it available to all users. Canvas is an extended interface for ChatGPT that combines chat and document editing side by side in one environment.

With three major updates, Canvas makes writing, coding and collaboration easier and more efficient. During the presentation, the new features were demonstrated in detail.

Three major updates:

1. Canvas is fully integrated into ChatGPT and is now available to all users, even without a paid subscription. The full integration into the main ChatGPT model allows you to get started right away without additional settings.

2. Code execution within Canvas lets you run Python code directly in Canvas. A "Run" button allows you to test code, view error messages and generate visual output such as graphs or charts (see the sketch after this list). The presentation demonstrated how ChatGPT identifies error messages and suggests ways to correct them. Changes can be applied immediately with the "Fix bug" button, after which the results are retested. This real-time debugging feature makes it unnecessary to use external tools such as Replit.

3. Integration with custom GPTs. Canvas can also be used within custom GPTs, so you can now put personalized GPTs to better use for specific tasks, such as generating letters or managing projects. During the presentation, an example was shown in which a custom GPT was used to write letters in the style of Santa Claus.
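
To give an idea of what that looks like in practice, here is a hypothetical snippet of the kind you could paste into Canvas and run with the "Run" button; the marketing data in it is invented purely for illustration.

```python
# Hypothetical example of code you could run inside Canvas: it produces a
# simple chart, the kind of visual output shown during the presentation.
import matplotlib.pyplot as plt

channels = ["Email", "Social", "Search", "Direct"]
conversions = [120, 95, 180, 60]  # fictitious weekly conversions per channel

plt.bar(channels, conversions)
plt.title("Conversions per marketing channel (sample data)")
plt.ylabel("Conversions")
plt.show()  # Canvas renders the chart directly below the code
```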

Watch the introduction below.


Day 5: ChatGPT now integrated into Apple devices

OpenAI has taken an important step with the integration of ChatGPT into Apple devices, including iPhone, iPad and Mac. These new features make ChatGPT even more accessible and efficient.

Three powerful integrations:

1. Siri x ChatGPT
Siri can now forward complex tasks directly to ChatGPT. When you give Siri a task, such as planning a party or summarizing a document, Siri automatically engages ChatGPT. You retain full control over what information is shared.

2. Writing and document processing
ChatGPT is integrated with Apple's writing tools, so you can refine, summarize and even have documents completely redrafted.

3. Visual intelligence via camera
The iPhone's camera now lets you analyze objects and scenes. For example, point the camera at a Christmas sweater and let ChatGPT rank the most festive designs, as they showed in the presentation.

Additional Functionalities:

Mac integration: Use ChatGPT directly from macOS applications such as Preview. Upload documents, ask questions, and let ChatGPT do analysis or generate charts. All without interrupting your workflow.

ChatGPT button: A special button in the interface opens ChatGPT instantly, including the ongoing conversation history. Perfect for follow-up actions or saving results.

This integration makes ChatGPT accessible from anywhere, without an account or additional settings. The ability to share and process information instantly saves time and unnecessary steps, such as switching between apps.

Watch the announcement below.

Check out all the updates on the OpenAI '12 days' page here.

Day 6: Introducing real-time video and screen sharing

ChatGPT has added real-time video and screen sharing to its advanced voice feature. This allows ChatGPT to look directly into the real world through your camera and have a real-time conversation about what it sees.

For example, the demo showed how ChatGPT helped someone step by step in brewing the perfect coffee. In addition, screen sharing lets ChatGPT think along with you directly, for example when answering email or WhatsApp messages.


The rollout has started and the feature will become available next week for Teams users and most Plus and Pro subscribers. In the Netherlands, we will probably have to wait until early next year.

In conversation with Santa Claus

As a special December surprise, you can now also talk directly to Santa Claus via the advanced voice feature in ChatGPT. He is happy to share his favorite Christmas traditions and stories about life at the North Pole.

This feature is available (also for us in the Netherlands) through the mobile apps, desktop apps and on chatgpt.com. You can select the voice of "Santa" in settings or click the snowflake icon in the chat screen (the latter is not currently available). You can also now save and share conversations in ChatGPT. 

Watch the introduction below.


Day 7: Introducing 'Projects' in ChatGPT

OpenAI has launched "Projects" in ChatGPT, a new feature that allows users to better organize and personalize their work. Projects allow you to:

  • Upload files and set custom instructions specific to a project.
  • Organize chats via smart folders and easily search previous chats.
  • Use Canvas for an interactive way to edit documents and code.
  • Easily link previous chats to projects and search or add to them.

During the introduction, they showed several examples of how to use Projects, such as organizing a Secret Santa, managing various household tasks and developing a personal website.

The feature is available immediately to Plus, Pro and Teams users and will become available to Free, Enterprise and EDU users early next year.

Watch the introduction below.

Day 8: ChatGPT Search updates

OpenAI has announced three major updates to ChatGPT Search:

  1. Improved performance:
    • The search function is faster, works better on mobile devices, and includes new map functionality.
    • Users see more comprehensive visual results, such as images, maps and direct links to resources.
  2. Integration with advanced voice function:
    • Users can now conduct real-time searches during conversations with ChatGPT via advanced voice mode.
  3. Access for all users:
    • As of now, ChatGPT Search is available to all logged-in users worldwide, on both desktop and mobile apps.

These updates improve speed, ease of use and interaction with up-to-date information.

Watch the introduction below.


Day 9: New tools and improvements in the OpenAI API

OpenAI has announced new features and enhancements for developers and startups building on its API. Major updates include the launch of the o1 model in the API (fully production ready), improvements to the Realtime API with WebRTC, and the introduction of Preference Fine-Tuning to better tailor models to specific use cases. In addition, costs have been reduced, new SDKs have been launched, and there is more focus on usability.

  • API Updates:
    • o1 in the API: now fully available. Added features: function calling, structured outputs, developer messages and a new parameter for reasoning effort, plus support for visual input (see the sketch after this list).
    • Structured Outputs: JSON schemas that ensure consistency in model outputs.
    • Improvements in function calling: o1 is more efficient and more accurate than GPT-4o at calling functions correctly.
  • Realtime API:
    • Introducing WebRTC support for easier and more efficient use in voice applications. This makes the system much more responsive.
    • Cost of audio tokens has been reduced (GPT-4o: 60% cheaper, GPT-4o Mini: 10x cheaper).
    • Added Python SDK for easier integration.
  • Preference Fine-Tuning:
    • A new method based on Direct Preference Optimization (DPO). It allows developers to better tailor models to preferences such as tone, style and relevance. Available for GPT-4o and soon for GPT-4o mini.
  • Other updates:
    • New SDKs for Go and Java.
    • Easier API key registration.
    • Live AMA (Ask Me Anything) session available on the OpenAI Developer Forum.
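
To make the o1 API features above a bit more concrete, here is a minimal sketch using the OpenAI Python SDK that combines a developer message, the reasoning-effort parameter and a Structured Outputs JSON schema. It is not an official OpenAI example; the model alias, parameter values and the schema itself are assumptions for illustration.

```python
# Minimal sketch (not an official example) of the o1 API features named above:
# a developer message, the reasoning_effort parameter and Structured Outputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",                      # assumed alias for the production o1 model
    reasoning_effort="medium",       # new parameter: "low", "medium" or "high"
    messages=[
        {"role": "developer", "content": "You are a concise marketing analyst."},
        {"role": "user", "content": "Summarize the main trends in our Q4 campaign data."},
    ],
    response_format={                # Structured Outputs via a JSON schema
        "type": "json_schema",
        "json_schema": {
            "name": "campaign_summary",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "headline": {"type": "string"},
                    "trends": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["headline", "trends"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)  # JSON that matches the schema
```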

Watch the introduction below.


Day 10: Calling and messaging with ChatGPT

You can now call ChatGPT by phone and chat with it via WhatsApp.

  • ChatGPT by phone: In the US, you can call ChatGPT at 1-800-CHATGPT (1-800-242-8478) and get 15 free minutes per month.
  • ChatGPT on WhatsApp: Worldwide, you can now chat with ChatGPT via WhatsApp without needing an OpenAI account. Add 'ChatGPT' as a contact with phone number 1-800-242-8478 and you can start chatting right away.

With this, OpenAI aims to make ChatGPT easily accessible to everyone.

Watch the introduction below.



Day 11: Collaborate more easily with ChatGPT and your favorite apps

What's new?

OpenAI introduced new updates to the ChatGPT desktop app. These improvements make collaborating with your favorite apps on your computer easier and more efficient. It also brings ChatGPT another step closer to AI agents that can independently perform tasks on your computer.

  • Easier collaboration with apps: ChatGPT now works directly with apps such as Notion, Notes, Warp, Xcode and Quip. With a simple keyboard shortcut (Option + Space or Option + Shift + 1), you open ChatGPT and it automatically recognizes the context of your active app. You can chat directly with ChatGPT about app content, without copying and pasting information first.
  • Advanced Voice Mode: ChatGPT's new voice feature is now available when collaborating with apps, without you having to type anymore. Use your voice to give commands, ask questions or get feedback while working in an app.

Examples of use

  • Warp: Analyze data from repositories and let ChatGPT create visual graphs.
  • Xcode: Get help with code problems and automate complex processes.
  • Notion: Write and edit documents with consistent style and facts, including sources to support claims.

Availability

These features are now available for macOS and are coming soon to Windows. Update your ChatGPT desktop app to discover the new features and improve your workflow!

Watch the introduction below.

Day 12: Announcement of AI models o3 and o3-mini

On the last day, OpenAI announced two impressive new AI models: o3 and o3-mini. These models promise a huge leap forward in capabilities and efficiency. o3 achieves exceptional results on benchmarks for programming, mathematics and general intelligence (ARC-AGI), outperforming even human experts. o3-mini offers similar performance, but is specifically designed for more cost-effective deployment, with flexible settings for how long the AI spends reasoning.

Key improvements
What sets these models apart is their focus on both capability and accessibility. Innovations such as self-evaluation and deliberative alignment allow complex problems to be solved better and more safely. Self-evaluation means the AI can test and improve its own answers, while deliberative alignment helps it see through the intent behind a query and assess whether the request is safe and appropriate. This makes these models not only more powerful, but also more reliable.

Applications
These models are particularly suitable for complex applications:

  • Programming: Solving complex coding problems or generating efficient scripts.
  • Research and teaching: Supporting scientific analysis or explaining complex concepts.
  • Problem solving: Tackling mathematical and technical issues, even at the PhD level.

Safety and accessibility
Both models will first be made available to outside researchers for extensive safety testing. The official launch of o3-mini is expected in January 2025, with o3 shortly thereafter.

Watch the introduction below.


Learn more about this release

Check out all the updates on the OpenAI '12 days' page here.


Luma has launched a new version of the Dream Machine that makes creating professional images and videos even easier. Instead of writing complicated prompts, you can simply describe what you want to create. This natural way of working makes the platform user-friendly and suitable for anyone who wants to develop visual content.

Creating characters that are consistent

The updated Dream Machine makes it easier to create characters that remain consistent across different images and videos. You can design a character once and then use it in different projects while it keeps the same look. This is ideal for telling a story or creating a recognizable style that matches your brand.

Full control over videos

The updated DM 1.6 video model lets you design your videos in detail. The model gives you the freedom to control your own camera movements, from smooth transitions to dynamic shots, and to choose exactly how your video begins and ends. This flexibility makes it easy to create the right mood and impact, allowing you to create unique and professional video projects that fully reflect your creative vision.

Inspiration through the brainstorming function

The Dream Machine's integrated brainstorming feature is a powerful tool that can help move your creative process forward. If you are having trouble coming up with ideas or are simply looking for a fresh perspective, this feature helps you get started quickly. Because the tool gives you smart suggestions, you can keep coming up with new ideas. This makes it easier not only to develop new concepts, but also to improve your existing ones.

Import and combine images and styles

The updated Dream Machine offers the ability to import and seamlessly combine your own images, styles and characters. This allows you to make your creative projects even more personal and unique. Combining different elements makes it easy to develop your own style that fits your vision. This is ideal for projects where customization and originality play a major role.

Creativity and speed with the Photon AI model

The Photon AI model that powers the Dream Machine is designed to maximize speed and creativity. This enables designers and creatives to turn ideas into high-quality visual content almost instantly.

Developments of Dream Machine

With the Dream Machine's new features, Luma makes creating visual content easier and more accessible than ever. Whether you want to realize a big project or bring an idea to life, this tool provides all the features you need to do so effortlessly. Visit the website and check out how Luma can enhance your creative process.


AI is increasingly being used to improve business communications, and Talpa is making great strides in this regard. With an innovative digital doppelganger of John de Mol, Talpa ConnectLab personally addressed 30 media agencies about its 2025 commercial strategy. The project shows how technology, personalization and scalability can be combined, and provides valuable insights for companies looking to improve their communications.

The technology behind the digital John de Mol

Creating a lifelike digital version of John de Mol required a clever combination of advanced AI tools. Talpa deployed the following to create this experience:

  1. Cloning the voice
    Thanks to ElevenLabs, John's voice was perfectly mimicked, with a natural, convincing sound and intonation.
  2. Radio-quality sound
    Adobe Podcast and After Effects were used for crisp, clear sound quality without interference, making it sound professional.
  3. A lifelike avatar
    HeyGen was used to create a realistic and convincing digital avatar. As a result, the AI version of John came across as authentic, both visually and in what he said.

Personalization at scale: how AI makes a difference

Talpa shows how AI makes personal communication smarter and more efficient. John de Mol's digital doppelganger delivered a personal message to dozens of media agencies without a time-consuming production process. Where personalization normally requires a lot of manual work, AI makes it easier to apply this approach at a larger scale.

More and more companies are embracing AI

AI is increasingly being embraced by large companies looking to improve and personalize their communications. Organizations such as Talpa are demonstrating how technology can help make complex messages more personal and effective. Whether through personalized videos, voice cloning or other applications, AI offers new opportunities to reach tailored audiences.

For companies that also want to take a step toward using AI, we have developed AI colleagues. These digital colleagues can support your marketing team in different areas, such as content creation, personalization and data analysis. With our AI solutions, you make your communications not only smarter, but also more powerful.


OpenAI and Google are battling it out for the best AI model. This week, the two companies waged another intense battle for the leadership position in advanced AI models.

ChatGPT-4o: OpenAI hits back

OpenAI recently took a big step forward with the release of a new version of ChatGPT-4o. This update brings a number of improvements, including a more natural writing style, better support for files and more in-depth analysis. With these improvements, OpenAI managed to reclaim the top spot in the Chatbot Arena from Google's Gemini Exp-1114, a model that previously impressed with its performance.

ChatGPT-4o's new features demonstrate OpenAI's commitment to providing easy-to-use, powerful tools that support consumers and professionals alike. In particular, its enhanced analysis and writing capabilities make it attractive for a wide range of applications, from content creation to data analysis.

Google's Gemini strikes back immediately

OpenAI did not get to enjoy its victory for long, however: Google quickly introduced a new version of its Gemini model (Exp-1121). This update takes Gemini to the next level, surpassing its predecessor's performance in many areas.

The latest version of Gemini excels at programming, reasoning and understanding visual content. This makes the model particularly suitable for complex technical applications, such as developing software, interpreting images and solving challenging problems. This update is available immediately through Google AI Studio and the Gemini API, giving developers worldwide quick access to the latest technology.

An unprecedented pace in AI innovation

The rapid succession of these updates shows how fierce competition is in the AI sector. Whereas previously it took months for a new model to be introduced, that time is now reduced to just a few days. This pace reflects not only the innovativeness of companies such as OpenAI and Google, but also the high expectations of users and the growing demand for advanced AI solutions.

This speed also brings new challenges. Developers and companies must constantly adapt to changes and updates, while users must get used to ever newer functionalities. Nevertheless, this rapid pace of innovation ensures that AI models such as ChatGPT and Gemini are constantly improving and offering increasingly broad application possibilities.

What does this mean for the future of AI?

The battle between ChatGPT and Gemini shows that AI is in a phase of explosive growth. The way these models are able to perform complex tasks such as programming, reasoning and image analysis shows that we have reached a new standard.

For users, this means that the applications of AI are becoming broader and more powerful. From companies using AI for advanced data analysis, to consumers using AI for everyday tasks, the possibilities continue to expand. At the same time, competition between companies like OpenAI and Google will lead to faster innovations and better AI models, ultimately benefiting everyone.

Take a leap forward in your marketing AI transformation every week

Every Friday, we bring you the latest insights, news and real-world examples on the impact of AI in the marketing world. Whether you want to improve your marketing efficiency, increase customer engagement, sharpen your marketing strategy or digitally transform your business, 'Marketing AI Friday' is your weekly guide.

Sign up for Marketing AI Friday for free.
