April 16, 2025

Release

Thinking with images

OpenAI o3 and o4-mini represent a significant breakthrough in visual perception by reasoning with images in their chain of thought.

OpenAI o3 and o4-mini are our latest visual reasoning models in the o-series. For the first time, our models can think with images in their chain of thought, rather than merely seeing them.

Like OpenAI o1 before them, o3 and o4-mini are trained to think for longer before answering, producing a long internal chain of thought before responding to the user. o3 and o4-mini extend this capability further by thinking with images in their chain of thought. They do so by transforming user-uploaded images with tools that crop, zoom into, and rotate them, along with other simple image-processing techniques. Crucially, these capabilities are built in natively and do not depend on separate specialized models.
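
To make the tool loop concrete, here is a minimal sketch of what such image-manipulation primitives can look like. It is purely illustrative: the helper names, the PIL-based implementation, and the file 'upload.jpg' are assumptions, not OpenAI's published tooling.

Python

from PIL import Image

# Illustrative helpers mirroring the kinds of built-in transforms described
# above (crop, zoom, rotate); names and signatures are hypothetical.
def crop(img, box):
    # box = (left, upper, right, lower), in pixels
    return img.crop(box)

def zoom(img, factor):
    # Upscale so fine details (small text, distant objects) become legible.
    w, h = img.size
    return img.resize((int(w * factor), int(h * factor)), Image.LANCZOS)

def rotate(img, degrees):
    # expand=True keeps the full canvas after rotation.
    return img.rotate(degrees, expand=True)

img = Image.open('upload.jpg')                      # hypothetical user upload
detail = zoom(crop(img, (100, 50, 300, 200)), 2.0)  # inspect a region up close
upright = rotate(img, 180)                          # e.g. when text is upside down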

ChatGPT's enhanced visual intelligence helps you solve harder problems through image analysis that is more thorough, accurate, and reliable than ever. It can seamlessly combine advanced reasoning with tools such as web search and image manipulation (automatically zooming, cropping, rotating, or enhancing images) to extract insights even from imperfect photos. For example, you can upload a photo of an economics problem set to receive a step-by-step explanation, or share a screenshot of a build error to quickly get a root-cause analysis.

This approach opens a new axis for scaling test-time compute, one that seamlessly blends visual and textual reasoning, as reflected in these models' state-of-the-art performance on multimodal benchmarks. It marks a significant step toward multimodal reasoning.

Visual reasoning in action

Thinking with images makes interacting with ChatGPT simpler. You can ask a question by just snapping a photo, without worrying about how objects are positioned, whether the text is upside down, or whether several physics problems appear in the same shot. Even when objects are not obvious at first glance, visual reasoning lets the model zoom in to see the image more clearly.

All examples below were completed with OpenAI o3.

[Image: o3-visual-reasoning-notebook]

What is written on the notebook?

Reasoned about notebook contents for 20 seconds

Analyzed image

[Image: o3-visual-reasoning-notebook1]

Analyzed image

[Image: o3-visual-reasoning-notebook2]
  • The user posted an image at /mnt/data/own_notebook.jpg and asked me to read what's written on it. I'll need to load the image so I can inspect the text. Once I view it, I realize the text is upside down, so I’ll rotate it so it's readable.
    From there, I can check what's written and share my findings clearly with the user.

Analyzed image

[Image: o3-visual-reasoning-notebook4]
  • The user shared an image found at /mnt/data/own_notebook.jpg and asked me to read the text on it. I’ll load the image and zoom in to see the writing clearly. It turns out the text is upside down, so I’ll rotate it for easy reading. Afterward, I’ll crop the lower part of the image to focus on the relevant area, and rotate the cropped section to ensure the text is right side up.

Analyzed image

[Image: o3-visual-reasoning-notebook5]

Done

It says: “4th February – finish roadmap.”

Our latest visual reasoning models work in combination with other tools, such as Python data analysis, web search, and image generation, to solve more complex problems creatively and effectively, delivering our first multimodal agentic experience to users.

[Image: o3-visual-reasoning-maze]

solve this maze and plot a path with red-colored line

Reasoned about the maze solution and path for 1 minute and 44 seconds

Analyzed image

[Image: o3-visual-reasoning-maze1]
  • The user provided an image path, but I’ve yet to load it. First, I’ll need to load the image from the given path for analysis. Once that’s done, I can detect the maze path, potentially marked by black lines, and plot a red line to showcase it. The image is relatively small—404x404.
  • I’m thinking the start could be at the top left missing border piece, or maybe near the left border. To help figure this out, I’ll look at the first row pixels to find any white areas near the border. Since the maze walls are black on white, I’ll try a thresholding approach (sketched just after this list). To invert it, I’ll use a threshold value of 200, which would set white to 255 and black closer to 0. I’ll apply this threshold using the inverse method.
  • Okay, I initially thought I could approximate the maze start near the top-left opening. I applied some thresholding to detect white pixels but encountered an issue—nothing changed. It seems the image might have high noise. Oddly, when inspecting the grayscale, all pixel values are zero, which suggests the image is completely black. This doesn’t match what we saw earlier with Matplotlib, so I’ll delve deeper into how PIL loaded the image and inspect the underlying pixel array.
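
For reference, the inverse-threshold step described in the second bullet would look roughly like this. It is an illustrative sketch, not code from the session, and 'maze.png' is a hypothetical input file:

Python

import cv2

img_bgr = cv2.imread('maze.png')  # hypothetical stand-in for the uploaded maze
gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
# Inverse binary threshold at 200: pixels brighter than 200 (the white
# background) map to 0, darker pixels (the black walls) map to 255.
_, inverted = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY_INV)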

Analyzed

Python

print(imgcv.max(), imgcv.min(), imgcv.mean())

0 0 0.0

Analyzed

Python

import numpy as np
img_np = np.array(img)
print(img_np.shape, img_np.dtype, img_np.max(), img_np.min())

(404, 404, 4) uint8 255 0

It looks like the lines are black with a transparent background, but when PIL loads the image, it's showing a white background due to how it's handling the alpha channel. I’ll try converting the image to RGB, but the pixel values are still showing all zeros. This is odd, since the black lines should be visible. Hmm, earlier the display had a white background. Let me check the pixel values at coordinate (200, 200) to get a clearer picture.
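
The alpha-channel behavior described here is easy to reproduce outside the session. A minimal standalone demonstration (not part of the original transcript):

Python

from PIL import Image
import numpy as np

# A fully transparent RGBA image: every pixel is (0, 0, 0, 0).
img = Image.new('RGBA', (4, 4), (0, 0, 0, 0))

# convert('RGB') simply drops the alpha channel, leaving raw black pixels.
print(np.array(img.convert('RGB')).max())  # 0

# Compositing over white shows what a viewer renders for transparent pixels.
white = Image.new('RGBA', img.size, (255, 255, 255, 255))
print(np.array(Image.alpha_composite(white, img)).min())  # 255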

Analyzed

Python

img_rgb = img.convert('RGB')
imgcv = cv2.cvtColor(np.array(img_rgb), cv2.COLOR_RGB2GRAY)
print(imgcv.max(), imgcv.min())

0 0

Analyzed

Python

print(img_np[200,200])

[0 0 0 0]

  • The image has black lines with a transparent background, but it appears white due to how PIL handled the alpha channel. When displayed earlier, I saw white areas, which were actually transparent areas shown on a white grid (the Matplotlib background color). So, the lines are black with the transparent spaces representing the open path. I'll need to treat these transparent areas as the open path when interpreting the maze. It’s essentially a black line maze on a clear canvas.
  • I see that the maze walls are black pixels, and the path is transparent with an alpha value of 0. I need to find the path from an open cell on one border to another on the opposite border. To do this, I'll create a binary matrix where the passable cells have an alpha of 0 and are not part of a line border. Now, I'll focus on processing the image channels to identify the path.

Analyzed

Python

alpha = img_np[:,:,3]
# passable pixel if alpha == 0 (transparent) else wall
passable = alpha == 0
# find openings along border
h,w = passable.shape
openings = []
# top row
for x in range(w):
    if passable[0,x]:
        openings.append((0,x))
# bottom row
for x in range(w):
    if passable[h-1,x]:
        openings.append((h-1,x))
# left col
for y in range(h):
    if passable[y,0]:
        openings.append((y,0))
# right col
for y in range(h):
    if passable[y,w-1]:
        openings.append((y,w-1))
len(openings)

1616

  • The walls are black, and the path is transparent (alpha 0). I need to find a way from the open cell at one border to the open cell at the opposite border. I’ll create a binary matrix where the passable cells are transparent (alpha 0). However, there are a lot of transparent cells, and the paths might not be clear due to line gaps. I plan to treat the lines as about 3 pixels thick. I'll try BFS/DFS to analyze the passable area from the starting point.
  • It seems the maze's border cells might connect to "air" outside the maze—this is the transparent area surrounding the maze. But the path is only open through legitimate openings in the border walls. Outside of these openings, the transparent background might merge with the outside area, making it harder to differentiate. I need to treat the inside walkway as enclosed by walls, ensuring it connects to the outside only at those openings, not through a continuous open background.
  • I’m thinking of dilating the walls by 3 pixels. This might cause openings smaller than 3 pixels to close, which could block the path. However, this might inadvertently seal parts of the interior walkway, leading to a disconnected path. So, I should try morphological closing—using a small kernel to dilate the walls—and then check for passable areas. This could help me manage the maze more effectively while keeping the path clear. (A sketch of the closing operation follows this list.)
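
The closing operation the model considers would look something like the sketch below. This is illustrative only; the session code that follows ends up using plain dilation plus connected components instead:

Python

import cv2
import numpy as np

# Morphological closing = dilation followed by erosion: it can seal thin
# gaps in the wall mask without permanently thickening the walls.
# 'passable' is the transparent-pixel mask built earlier in the session.
wall = (~passable).astype(np.uint8)
kernel = np.ones((3, 3), np.uint8)
wall_closed = cv2.morphologyEx(wall, cv2.MORPH_CLOSE, kernel)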

Analyzed

Python

# find pattern of vertical walls
# examine alpha channel row 0 lines to see thickness
unique_rows = np.unique(np.sum(~passable, axis=1))
unique_rows[:20]

array([ 0, 16, 20, 22, 24, 26, 28, 30, 32, 34, 94, 150, 178, 192, 206, 220, 234, 248, 276, 290])

Analyzed

Python

import cv2
# create wall mask (1 where wall)
wall = ~passable
# dilate walls by 3 px
kernel = np.ones((3,3), np.uint8)
wall_dil = cv2.dilate(wall.astype(np.uint8), kernel, iterations=3)
pass_dil = wall_dil==0
# label connected regions in pass_dil
num_labels, labels = cv2.connectedComponents(pass_dil.astype(np.uint8))
print(num_labels)

2

  • It might be easier to treat the entire transparent area outside the walls as open, and then use BFS starting from the top-left outside region. However, since the walkway is also transparent, using BFS would flood the inside walkway from the entrance openings. I think a better approach might be to switch to grid cells, considering the maze lines are about 4 pixels wide. Alternatively, I could use a shortest path algorithm on the pixel level, restricting movement to inside the maze’s bounding box to solve the path.
  • I’m considering dilating the walls by around 3 pixels. If there are openings smaller than that, they could seal, disconnecting the interior walkway from the outside. The interior walkway might also get blocked, causing the path to be cut off. To test this, I’ll try using a morphological operation where I dilate the walls with a small kernel and check for passable regions. If needed, I could increase the dilation to 5 pixels to see how it affects the maze structure.
  • One idea is to invert the walls and walkway

Analyzed

Python

wall_dil = cv2.dilate(wall.astype(np.uint8), kernel, iterations=5)
pass_dil = wall_dil==0
num_labels, labels = cv2.connectedComponents(pass_dil.astype(np.uint8))
print(num_labels)

2

  • I’m thinking of analyzing the top border, specifically between 0 and around 10 pixel rows. I could look for positions where the wall (alpha not zero) is absent and detect where the top border has an opening — essentially where all the rows for a specific x position have alpha equal to zero (transparent). That should allow me to identify where the walkway might begin at the top border.

Analyzed

Python

# find opening columns on the top border: x positions where the first ~10 rows are all transparent (no wall)
open_cols_top = []
for x in range(w):
    if np.all(alpha[0:10,x] == 0):
        open_cols_top.append(x)
open_cols_top[:20], len(open_cols_top)

([0, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 403], 16)

  • I'm considering dilating the walls by 3 pixels, but if the openings are smaller, they might seal the walkway. This could block the path. I'll try using morphological closing to see if dilating the walls with a small kernel helps. After dilation, the walkway might still be connected, so I’ll need to check border openings and spot any gaps on the rectangle's border. I'll also inspect a few initial pixels to determine wall thickness, which seems to be about 4 pixels.

Analyzed

Python

print(alpha[0,0:30])

[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]

Analyzed

Python

print(alpha[1,0:50])

[ 0 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255 255]

  • The walls are drawn with black lines, and walkway openings are the gaps in these lines. I want to check the top border for missing walls, so I’ll inspect the first 10 rows for each column where the "alpha" value is zero, indicating no wall. It looks like there are 16 such columns, mostly near the edges. Some areas (like positions 195-208) probably represent real openings, while others (like positions 0 and 403) may be due to missing borders outside the maze. Now, I'll check row 1 for walls.
  • I’ll check row 1, where alpha equals 255, indicating pixels that are part of the walls. From this, I realize that the exterior walls are

Analyzed

Python

wall_rows = []
for y in range(h):
    if np.mean(alpha[y,:]>0) > 0.5:
        wall_rows.append(y)
# keep unique groups
rows = []
prev = -10
for y in wall_rows:
    if y-prev > 2: rows.append(y)
    prev = y
rows

[1, 17, 49, 97, 113, 145, 161, 177, 193, 209, 241, 257, 289, 305, 321, 337, 369, 401]

  • I checked alpha at row 1, and 255 indicates the presence of a wall. The border walls are likely located at row 1, row h-2, and certain columns. The outside region and walkway are alpha==0, but not within the border lines. The walkway seems enclosed by walls drawn at intervals, every 4 pixels. I’ll check horizontal wall positions, looking for rows with mostly 255-alpha values. I noticed a pattern, with gaps roughly every 16 pixels, though it’s not consistent due to vertical walls. I plan to compute the differences in these gaps.

Analyzed

Python

diffs = [j-i for i,j in zip(rows[:-1],rows[1:])]
diffs

[16, 32, 48, 16, 32, 16, 16, 16, 16, 32, 16, 32, 16, 16, 16, 32, 32]

  • I checked row 1 and observed that alpha 255 indicates a wall. The border walls may be at row 1 and row h-2, as well as certain columns. Outside the border area and the walkway both have alpha 0. It seems the walkway is enclosed by walls drawn at odd intervals. The grid cell width might be around 4 px. The horizontal wall positions appear roughly every 16 px, but the pattern isn't uniform due to vertical walls. I'll compute the differences for a clearer picture.
  • Okay, there are still many openings along row 1, but I think clusters around columns 195–208 might actually correspond to the real entrance. This span is about 14 px wide. The openings at 0 and 403 could be corners, so they might not be true entrances. It seems the real entrance is in the 195–208 range. Now, I’ll look at the left border to check for vertical openings, focusing on specific y values.

Analyzed

Python

openings_top = [x for x in range(w) if alpha[1,x]==0]
len(openings_top), openings_top[:20]

(16, [0, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 403])

Analyzed

Python

openings_left = [y for y in range(h) if alpha[y,1]==0]
openings_left[:20], len(openings_left)

([0, 403], 2)

  • I’m thinking of using morphological thinning to simplify the walkway into a one-pixel width, then running BFS to find a path. I’ll need to separate the walkway region from the outside. A flood fill from the border can help mark the outside, leaving the walkway region, but I need the walkway. A simpler plan is to target specific openings for entrances and exits. I can detect these gaps at row 1 in the border, where openings might appear as alpha[1,x]==0. Let’s check that approach.
  • I plan to use morphological thinning to narrow the walkway down to a one-pixel width, then run BFS to find the best path. To separate the walkway region from the outside, I’ll flood-fill around the border where alpha==0 to mark the outside. Then, I can address gaps in the walkway with morphological closing. Instead of filling all gaps, I’ll focus on entrance/exit openings, which I can detect by finding regions where alpha[1,x] == 0. Let’s check row 1 for these openings. (A rough sketch of the border flood fill follows this list.)
  • Looking at row 1, there are many potential openings, but the clusters between 195-208 (about 14 px wide) seem to be the real entrance. The row's openings might just be where there’s no wall. The left border also has vertical gaps, and I see that the top-left corner is open. The right and bottom borders also have openings, so I think the entrance and exit are at the top and bottom of the maze. With these connections, the walkway is accessible through these openings, so flood filling isn't viable.
  • I’m trying to define the interior of the maze, focusing on the region between (2, h-3) for rows and (2, w-3) for columns. To set up the starting point, I’ll look for the first passable cell near the top opening, specifically looking between x=195 and x=208 in row 2. I’ll select the middle of that range, x=201, as my start point. For the end point, I’ll look for something similar near the bottom, around (h-3, 201).
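
For illustration, the border flood-fill idea mentioned above (marking everything reachable from a corner as "outside") might be sketched as follows. As the model notes, it is not sufficient on its own, because the walkway leaks out through the entrance openings. 'alpha', 'h', and 'w' are the session variables defined earlier:

Python

import cv2
import numpy as np

# Label the transparent region connected to the border as "outside".
transparent = (alpha == 0).astype(np.uint8)
mask = np.zeros((h + 2, w + 2), np.uint8)    # floodFill requires a padded mask
cv2.floodFill(transparent, mask, (0, 0), 2)  # fill from the top-left corner
is_outside = transparent == 2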

Analyzed

Python

openings_right = [y for y in range(h) if alpha[y,w-2]==0]
openings_right[:20], len(openings_right)

([0, 403], 2)

Analyzed

Python

openings_bottom = [x for x in range(w) if alpha[h-2,x]==0]
openings_bottom[:20], len(openings_bottom)

([0, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 403], 16)

Analyzed

Python

start = (2,201)
end = (401,201)
print(passable[start], passable[end])

True True

Analyzed

Python

from collections import deque
import numpy as np

# Define mask of valid passable inside region
valid = passable.copy()
# exclude row 0, row h-1, col 0, col w-1
valid[0,:] = False
valid[h-1,:] = False
valid[:,0] = False
valid[:,w-1] = False

start_pt = start
end_pt = end

prev = -np.ones((h,w,2), dtype=int)
visited = np.zeros((h,w), dtype=bool)
q = deque()
q.append(start_pt)
visited[start_pt] = True
found = False
dirs = [(1,0),(-1,0),(0,1),(0,-1)]
while q:
    y,x = q.popleft()
    if (y,x) == end_pt:
        found = True
        break
    for dy,dx in dirs:
        ny,nx = y+dy, x+dx
        if 0<=ny<h and 0<=nx<w and valid[ny,nx] and not visited[ny,nx]:
            visited[ny,nx] = True
            prev[ny,nx] = [y,x]
            q.append((ny,nx))
print(found)

True

  • I’ll start by selecting the starting and ending points as passable pixels near specific coordinates, using the average of the opening cluster for the start point. Then, I can compute the shortest path using BFS, restricting it to the interior region. After finding the path, I'll overlay it on the original image by drawing a red line. But to ensure the red line stands out, I’ll convert the image to RGBA and then display it for a clear view.

Analyzed

Python

# reconstruct path
path = []
p = end_pt
while p != (-1,-1):
    path.append(p)
    y,x = p
    prev_y, prev_x = prev[y,x]
    if prev_y == -1:
        break
    p = (prev_y, prev_x)
path = path[::-1]
len(path)

1144

Analyzed

Python

from PIL import ImageDraw
img_rgba = img.copy()
draw = ImageDraw.Draw(img_rgba)
for y,x in path:
    draw.point((x,y), fill=(255,0,0,255))
# Save
img_rgba.save('/mnt/data/maze_solved.png')

Analyzed image

[Image: o3-visual-reasoning-maze4]

Done

[Matplotlib chart: maze output]

Here’s one possible solution path for your maze, traced in red.

Python

from PIL import Image, ImageDraw
import numpy as np
import cv2
import matplotlib.pyplot as plt
from collections import deque

# Load the original maze (walls are black, background is transparent)
maze_path = '/mnt/data/d19c14d3-e182-4dba-99dc-647e4f725536.png'
img = Image.open(maze_path)
alpha = np.array(img)[:, :, 3]
h, w = alpha.shape
passable = alpha == 0  # transparent pixels are walkable

# We determined the entrance & exit are roughly centered at the top & bottom openings
start = (2, 201)
end = (h - 3, 201)

# Breadth-first search on walkable pixels (inside the border)
valid = passable.copy()
valid[0, :] = valid[-1, :] = valid[:, 0] = valid[:, -1] = False
prev = -np.ones((h, w, 2), dtype=int)
visited = np.zeros((h, w), bool)
q = deque([start])
visited[start] = True
dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
while q:
    y, x = q.popleft()
    if (y, x) == end:
        break
    for dy, dx in dirs:
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w and valid[ny, nx] and not visited[ny, nx]:
            visited[ny, nx] = True
            prev[ny, nx] = (y, x)
            q.append((ny, nx))

# Reconstruct the path
path = []
p = end
while p != (-1, -1):
    path.append(p)
    py, px = p
    p = tuple(prev[py, px])
path.reverse()

# Draw the solution in red
solved = img.copy()
draw = ImageDraw.Draw(solved)
for y, x in path:
    draw.point((x, y), fill=(255, 0, 0, 255))

# Show the solved maze
plt.figure(figsize=(6, 6))
plt.imshow(solved)
plt.axis('off')
plt.show()

# Save for download if the user wants it later
solved_path = '/mnt/data/maze_solved.png'
solved.save(solved_path)
print(f"Solved maze saved to {solved_path}")

Benchmark performance

We tested OpenAI o3 and o4-mini on a diverse set of human exams and machine-learning benchmarks to highlight their improvement in visual reasoning over our earlier multimodal models. These new visual reasoning models significantly outperform their predecessors on every multimodal task we tested.

We evaluated all models at a high "reasoning effort" setting, similar to variants like "o4-mini-high" in ChatGPT.
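
For context, a single high-effort query through the API might look like the sketch below. This assumes the OpenAI Python SDK's chat-completions interface and its reasoning_effort parameter; the blog does not describe the actual evaluation harness, and the image URL is a placeholder.

Python

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o4-mini",
    reasoning_effort="high",  # comparable to "o4-mini-high" in ChatGPT
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is written on the notebook?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/notebook.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)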

In particular, thinking with images, without relying on browsing, delivers significant gains across every perception benchmark we evaluated. Our models set the state of the art in STEM question answering (MMMU, MathVista), chart reading and reasoning (CharXiv), perception primitives (VLMs are Blind), and visual search (V*). On V*, our visual reasoning approach achieves 95.7% accuracy, largely solving the benchmark.

Limitations and what's next

Today, thinking with images has the following limitations:

  • Excessively long reasoning chains: models may perform redundant or unnecessary tool calls and image-manipulation steps, resulting in overly long chains of thought.
  • Perception errors: models can still make basic perception mistakes. Even when tool calls correctly advance the reasoning process, visual misinterpretations can lead to incorrect final answers.
  • Reliability: models may attempt different visual reasoning processes across multiple attempts at a problem, some of which can lead to incorrect results.

OpenAI o3 and o4-mini represent a notable advance in state-of-the-art reasoning capabilities and an important step toward more advanced multimodal visual reasoning. These models deliver best-in-class accuracy on visual perception tasks, enabling them to solve problems that were previously out of reach.

We will continue refining the models' ability to reason with images, making them more concise, less redundant, and more reliable. We're excited to keep researching multimodal reasoning so that people can apply these improvements in their everyday lives.


April 16 update: we updated o3's results on Charxiv-r, Mathvista, and vlmsareblind to reflect a system-prompt change that was not present in the original evaluation.