
Weekly Brief 23-30 Nov - Week 48

  • Writer: Kristine Lium
  • 3 days ago
  • 4 min read

Updated: 3 days ago

Gemini 3: Three Changes You’ll Actually Notice

Your Weekly Brief — clear, simple, and ready to share.




What to Watch This Week:

AI releases can easily feel overwhelming. Some people follow them closely. Others barely register that anything has changed. So here’s a simple starting point:


Google released Gemini 3 last week. You don’t HAVE to know the benchmarks — but here are three shifts that actually matter for everyday work.


Let’s make that as clear and human as possible.


The Weekly Brief is a short, condensed way for you to stay on top of the news and part of the conversation - every week.


Your executive briefing for week 48


It reasons more like we do — and you choose how deeply it “thinks”


Humans don’t always think in the same way. Sometimes we give quick answers (“top of mind”). Sometimes we pause, consider context, compare options, and reason step by step.

Gemini 3 introduces a similar idea. Google calls it “thinking levels,” but what it really means is:

  • Quick mode → short, fast answers

  • Deep mode → slower, more structured reasoning

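For readers who use the API rather than the app, the same choice is exposed as a parameter. Here is a minimal sketch using Google’s google-genai Python SDK; the model name and the thinking-level values are assumptions based on Google’s documentation at the time of writing, so check the current docs before relying on them:

```python
# Sketch: choosing how deeply Gemini "thinks" via the API.
# Assumes the google-genai SDK (`pip install google-genai`) and a
# GEMINI_API_KEY environment variable. The model name and the
# thinking_level values are assumptions - verify against current docs.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="We're launching a new product. What should we focus on first?",
    config=types.GenerateContentConfig(
        # "low" is roughly quick mode, "high" roughly deep mode
        thinking_config=types.ThinkingConfig(thinking_level="high"),
    ),
)
print(response.text)
```

The practical takeaway: quick mode for lookups and drafts, deep mode for decisions where the structure of the answer matters.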

What human-style reasoning looks like in AI

Think of a simple task:

“We’re launching a new product. What should we focus on first?”

A quick answer might list best practices. A deeper answer would:

  • break the problem into steps

  • consider constraints

  • compare approaches

  • show why those choices make sense


That’s much closer to how humans reason when something matters.


What this helps you do

  • Analyse documents or decisions with clearer logic

  • Get more structured breakdowns

  • Ask bigger questions without getting vague answers


Important note

Deeper reasoning does not mean perfect reasoning. It simply means more steps, more structure, and more context awareness - with the same need for human judgement on top.


It finally understands mixed inputs — and can respond visually


Previous models struggled with multimodality because:

  • input formats were treated separately (text in one place, images in another)

  • context windows were too small to hold everything at once

  • the model couldn’t “connect” these formats into a single analytical chain


With Gemini 3, the underlying architecture and enormous context window remove many of these barriers.


Put simply:

You can give it mixed information in one place — text, screenshots, documents, photos, even video — and the model can analyse them together.


Example you can try today

Prompt:

“Here are a few slides from a workshop, a screenshot from our CRM dashboard, and a photo of notes from the whiteboard. Summarise the key challenges, turn them into three themes, and create a simple diagram showing how these themes connect.”

What Gemini 3 can return

  • A short written summary

  • Three clear themes

  • A generated diagram (arrows, clusters, or a simple flow)

  • Optional: a suggested structure for a slide or concept draft

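For the technically curious, the same mixed-input idea works through the API: everything goes into one request. A minimal sketch with the google-genai Python SDK; the file names are placeholders and the model name is an assumption:

```python
# Sketch: sending text and images together in a single request.
# Assumes the google-genai SDK and a GEMINI_API_KEY environment
# variable; file names are placeholders, model name is an assumption.
from pathlib import Path

from google import genai
from google.genai import types

client = genai.Client()

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents=[
        types.Part.from_bytes(
            data=Path("crm_dashboard.png").read_bytes(),
            mime_type="image/png",
        ),
        types.Part.from_bytes(
            data=Path("whiteboard_notes.jpg").read_bytes(),
            mime_type="image/jpeg",
        ),
        "Summarise the key challenges and turn them into three themes.",
    ],
)
print(response.text)
```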

This is where the breakthrough becomes practical:

the model doesn’t just describe — it visualises. Something that previously required manually stitching tools together now fits in a single prompt.


It shows up where people actually work


Gemini 3 isn’t arriving as “another AI app.” It’s being integrated directly into:

  • Google Search (AI Mode)

  • Gmail

  • Docs

  • Slides

  • Sheets

  • The Gemini mobile app

  • Enterprise tools via Vertex AI


For many people, this will be the first time advanced reasoning and multimodal support appear in tools they already use. Not in a separate tab, not in a special playground — but inside daily workflows.


What changes for everyday users

  • Search results may contain more visual answers

  • Emails can be summarised with mixed context

  • Docs and Slides can draft first versions from screenshots or files

  • Teams can collaborate around richer AI-generated outputs


This is where AI becomes less “something you try occasionally” and more “something quietly embedded in the workday.”



💬 Closing reflection


These three shifts - reasoning, multimodality, integration - are the clearest entry points for understanding why Gemini 3 is a meaningful update.


But they’re only the starting point.



🔮 What to watch

This only touches on what Gemini 3 seems to make available, so in next week's briefing we'll take a deeper dive to see:


  • how Gemini 3 can run multi-step workflows

  • what “AI agents” actually do

  • how developers can build apps faster with tools like Antigravity

  • what this means for businesses, roles, and workflows


Part 1 is about noticing the change. Next week we'll look at what you can do with these new abilities at your fingertips.



💡Weekly kick-off


👉 Your Weekly Brief — clear, simple, and ready to share.



Do you have topics you're curious about, or interesting guests we should invite on the podcast?


Share your ideas by sending us an email:


Don't miss out on the next episode of the Simply Briefed podcast, launching this Wednesday.


What makes an idea worth betting on?


Together with Jon Kåre Stene, Venture Capital Partner at Skyfall Ventures, we look into the current speed of development and how innovative ideas pop up faster and more often than we've seen before. At the same time, we try to understand how to spot the good ideas with the potential to actually make a difference, and which ones are just ideas.


If you can't wait and need an episode to keep you going, don't miss the previous episode with Linn Solvang: How ready are we REALLY for AI at work?



Erik Rosales





SIMPLY BRIEFED

The podcast that takes complex topics and makes them easier to understand












