The setting uses indirect sunlight in a partially darkened room, so as to minimize lighting variation as much as possible.
I took three pictures in succession, with the E-M5 II, the E-M1 II, then the E-300. I used the 25mm f/1.8 on the M43 bodies and the 14-54mm at 25mm and f/3.0 on the E-300.
Here are the JPEG results using the cameras' default settings:
(images: the E-M5 mk II, the E-M1 mk II, the E-300)
To my eyes, the E-300 looks closer to what I actually saw than the M43 images do. Both M43 images are very similar.
Addition: I view these images on a factory-calibrated BenQ PD2700U screen in HDR10 mode, through Windows 10 and an NVIDIA RTX graphics card. I assume the displayed colours are not too far from the recorded image values.
This raises the next question: is this a sensor difference, a white balance difference, or both?
What makes up an image's colour rendering?
In order to analyse the differences, let's take a look at the different elements that play a part in generating the final image:
- The scene defines what needs to be rendered,
- The light affects the scene's colour appearance through its own colour characteristics,
- The lens defines the field of view (and so the amount of colour and colour intensity to be rendered) and can introduce subtle colour variations due to the glass and glass coatings used,
- The camera acquisition stage combines the ISO setting, the aperture and the shutter speed, none of which have an impact as long as they do not introduce overexposure, with the sensor itself, which has the key role of converting the scene's appearance into pixel values,
- The camera's picture processing converts the sensor output into a JPEG file using all the other parameter settings of the camera at that time, possibly including other kinds of processing such as HDR, high-res, focus stacking… (a minimal sketch of this stage follows the list),
- The JPEG file contains the scene rendered with all the current camera settings,
- The raw file normally contains the untouched sensor interpretation of the scene. It also contains the parameter settings of the camera at the time the picture was taken, as well as a small JPEG representation of the scene with those parameters applied, for previewing,
- The raw-to-JPEG applications allow you to reprocess the sensor capture to obtain new JPEG files. There are multiple applications: some from the camera manufacturer, which are able to reproduce exactly the in-camera process while letting you change the settings used at photo time, others from specialized companies that generate their output using differently interpreted or additional processing parameters.
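To make that picture-processing stage more concrete, here is a minimal sketch of the kind of chain an in-camera processor or raw converter runs. The function name, the naive 2x2 demosaic and the single 3x3 colour matrix are illustrative assumptions, not the actual Olympus processing:

```python
import numpy as np

def develop_raw(bayer, wb_gains, colour_matrix, gamma=2.2):
    """Toy raw-to-JPEG development chain: demosaic, white balance,
    colour matrix, gamma encoding. Real cameras add many more steps
    (noise reduction, tone curves, sharpening...)."""
    bayer = bayer.astype(np.float64) / bayer.max()  # normalize to [0, 1]

    # 1. Naive demosaic of an RGGB mosaic: collapse each 2x2 cell
    #    into one RGB pixel (half resolution, no interpolation).
    r = bayer[0::2, 0::2]
    g = (bayer[0::2, 1::2] + bayer[1::2, 0::2]) / 2.0
    b = bayer[1::2, 1::2]
    rgb = np.stack([r, g, b], axis=-1)

    # 2. White balance: one gain per channel so that white objects
    #    end up with equal R, G and B values.
    rgb = rgb * wb_gains

    # 3. Colour matrix: 3x3 mapping from sensor RGB to the output
    #    colour space (manufacturer-tuned in a real camera).
    rgb = rgb @ colour_matrix.T

    # 4. Gamma encoding for display, then 8-bit quantization
    #    (the JPEG compression itself is left out here).
    rgb = np.clip(rgb, 0.0, 1.0) ** (1.0 / gamma)
    return (rgb * 255.0 + 0.5).astype(np.uint8)
```

Steps 2 and 3 are where the camera settings shape the colour rendering; step 1 only reorganizes the sensor data.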
The lenses are different, but the differences they can introduce are subtle in the original photos, so, for the rest of this exercise, I will use the same lens on the M43 bodies as on the 43 body.
We are then left with the different sensors and the camera parameters.
As we are not going to play with overexposed images here, we can also ignore the aperture, shutter speed and ISO parameters (well, the last one may impact colour rendering, but that would probably be the sign of a deficient sensor).
So we are left with the other parameters. To help analyse the output of the capture, we can use the raw files: as they contain the original sensor outputs as well as the camera parameters, they can help us separate the differences due to the physical sensor from those due to the cameras' processing.
Then we can use a raw-to-JPEG application to change various camera parameters and see their impact on the final image rendering.
I will use Capture One as an independent raw processing application, and Olympus Workspace as a camera-compatible raw-to-JPEG application for the M43 cameras (as we can see in the original post, there is no need to differentiate between the E-M5 II and the E-M1 II, as their colour rendition is very similar). I will also use the Olympus Master 2 raw-to-JPEG application to manipulate the 43 camera images if required (it still works under Windows 10).
White balance, the most important parameter for colour rendering
Hopefully, I will not have to dig into colour theory here…
The camera sensor captures the scene using three colours that more or less match the colour-sensitive cells of our eyes: red, green and blue (RGB).
In order to cancel the colour cast that can be induced by the light(s) illuminating the scene, the camera uses a set of parameters called the white balance, which adjust the sensor output so that the white objects of the scene are rendered as white.
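In practice, this adjustment boils down to multiplying each colour channel by its own gain. A minimal sketch, with made-up gain values for a warm (tungsten-like) cast:

```python
import numpy as np

# Hypothetical gains for a warm cast: tame red, boost blue,
# keep green as the reference channel.
wb_gains = np.array([0.70, 1.00, 1.45])

def apply_white_balance(rgb, gains):
    """Scale each channel so a white object ends up with roughly
    equal R, G and B values (rgb is an 8-bit H x W x 3 array)."""
    balanced = rgb.astype(np.float64) * gains
    return np.clip(balanced, 0.0, 255.0).astype(np.uint8)
```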
A camera normally has multiple means of defining this white balance: you can let the camera guess it automatically (as our vision system partially does, except that we have a ‘memory’ hint, as our brain sometimes knows which elements of the scene are really white), choose a context (sunny, shadow, cloudy, using a flash…), or set a single numeric value called the colour temperature, expressed in kelvin.
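For reference, the context presets roughly correspond to fixed colour temperatures. These are typical approximate values; each manufacturer tunes its own presets:

```python
# Approximate colour temperatures behind the usual presets
# (illustrative values; actual presets vary by manufacturer).
WB_PRESETS_KELVIN = {
    "tungsten":    3000,  # warm incandescent indoor light
    "fluorescent": 4000,
    "sunny":       5500,  # direct daylight
    "flash":       5500,
    "cloudy":      6000,
    "shade":       7500,  # bluish open shade
}
```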
For the automated part, there are two methods: one requires us to take a picture of a white object which, when done under the same scene lighting, lets the camera analyse the colour of the light illuminating the scene; the other lets the camera analyse the sensor output of each photo taken, looking for the most intense colour components of the image to determine what white should have been in that scene (white being the most luminous combination of all the scene colours).
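A minimal sketch of that second method, assuming the "most intense components are white" heuristic described above (real cameras use far more sophisticated scene analysis):

```python
import numpy as np

def estimate_wb_gains(rgb):
    """Estimate per-channel gains from the image itself: assume the
    brightest values in each channel belong to something white, and
    compute the gains that would make that white neutral."""
    flat = rgb.reshape(-1, 3).astype(np.float64)
    # A high percentile instead of the raw maximum, so that a few
    # blown or noisy pixels do not skew the estimate.
    per_channel_white = np.percentile(flat, 99.5, axis=0)
    # Normalize against green, the conventional reference channel.
    return per_channel_white[1] / per_channel_white
```

The resulting gains can then be applied exactly as in the earlier apply_white_balance sketch.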
Some cameras also have a dedicated colour sensor used to determine the white balance independently of the main sensor's output and, more importantly, independently of the field of view created by the lens.
Update 2 is down below.