Digital Photo's interview with the D2X's designers


klaus harms

Guest
Since I don't speak Japanese, I have taken this translation from Phil's forum. It's not brand new anymore, but still new enough for anyone interested in the D2X, I think. Anyone who feels like it is welcome to translate the text into German; unfortunately I don't have time for that at the moment.

The original link is HERE; the text follows below:

Translation of the Japanese magazine Digital Photo's interview with the D2X's designers.
Tanaka: Interviewer
Ogawa: Senior Engineer, Image Processing R&D (Design) Division I
Tsuda: Staff Engineer, Image Processing R&D (Design) Division II
Uemura: Manager, Image Processing R&D (Design) Division III
Kawamura: Manager, Image Processing R&D (Design) Division I
Shibazaki: Manager, Image Processing R&D (Marketing) Division I


Q (Tanaka): Was speeding up the image processing the key to achieving high picture quality in a high-end model such as the D2X?

A (Tsuda): The new four-channel readout method, combined with a buffer memory built from DDR SDRAM, is the main reason we were able to achieve such high performance from the hardware. Using DDR SDRAM let us widen the bus and optimize simultaneous data processing, giving a considerable increase in processing speed.

Q (Tanaka): On processing speed, can we say it has been substantially boosted compared with the D2H?

A (Tsuda): The basis of the image processing is the same; however, the use of new hardware technologies such as DDR SDRAM makes it a new-generation system. As a result, image processing and file transfer rates are well above those of the D2H. Also, with the improvements in ASIC computing capability, tasks such as high-accuracy interpolation, moiré reduction and bit processing have been greatly enhanced.

Q (Tanaka): With all that said, can you give an example of something that couldn't be done in the past but is now possible with the new D2X?

A (Tsuda): The high-accuracy interpolation processing shows in how edges are rendered. No matter how fine the boundaries, the new camera depicts straight lines as straight lines, slanted lines faithfully to the original, and circles as circles, without any jagged edges.
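
To make "interpolation processing" concrete: a Bayer-type sensor records only one color per photosite, and the missing colors are interpolated from neighboring samples. The sketch below is the simplest textbook bilinear method, shown purely for illustration; it is emphatically not Nikon's algorithm, and it is exactly the kind of naive approach that produces the jagged edges and false color the engineers describe avoiding. An RGGB layout and NumPy/SciPy are assumed.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Naive bilinear demosaic of a single-channel Bayer mosaic (RGGB assumed)."""
    h, w = raw.shape
    raw = raw.astype(float)
    # Boolean masks marking where each color was actually sampled.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    kernel = np.ones((3, 3))
    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        # Sum the known samples of this color in each 3x3 neighborhood and
        # normalize by how many samples were present there.
        summed = convolve(raw * mask, kernel, mode="mirror")
        counts = convolve(mask.astype(float), kernel, mode="mirror")
        rgb[..., ch] = summed / np.maximum(counts, 1)
        rgb[..., ch][mask] = raw[mask]  # keep the measured samples untouched
    return rgb
```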

Q (Tanaka): I see. Does high-accuracy interpolation processing also contribute to things such as dynamic range?

A (Tsuda): Yes, it definitely contributes. Put simply, we can now achieve color gradation that previous models could not.

Q (Tanaka): Do you mean high accuracy interpolation processing also contributes to noise reduction?

A (Shibazaki): Exactly. Noise is less likely to be generated with interpolation processing.

Q (Tanaka): However, if noise is completely removed, doesn't the picture end up looking flat, lacking any sense of three dimensions, so that it becomes hard to call it a good “picture”?

A (Shibazaki): Yes, we understand that. That is why, in addition to the standard noise-reduction settings of “normal” and “high”, we also provided an “off” setting.

Q (Tanaka): I also noticed that the D2X menu offers both “long-exposure noise reduction” and “high-ISO noise reduction” modes. How does each relate to the processing involved?

A (Shibazaki): There are different kinds of noise, but the most prominent arise during either long exposures or high-ISO shooting. Since these two kinds of noise occur under different conditions, it is important to attack each with dedicated signal processing. To achieve noise reduction without hurting picture quality, we prepared these two separate modes.
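
For readers wondering what "dedicated signal processing" for long exposures can look like, the classic general-purpose approach is dark-frame subtraction: a second frame of the same duration is captured with the shutter closed, recording only the hot pixels and thermal signal that build up over time, and this is subtracted from the real exposure. The sketch below illustrates only that generic idea; it is not a description of Nikon's implementation, and the 12-bit value range is an assumption.

```python
import numpy as np

def long_exposure_nr(exposure: np.ndarray, dark_frame: np.ndarray) -> np.ndarray:
    """Generic dark-frame subtraction: both arrays hold raw sensor counts of the
    same shape; the dark frame contains only the accumulated thermal noise."""
    diff = exposure.astype(np.int32) - dark_frame.astype(np.int32)
    # Clamp back to the sensor's value range (12-bit assumed here).
    return np.clip(diff, 0, 4095).astype(np.uint16)
```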

Q (Tanaka): With the high-speed crop feature of the D2X, I don't really see the need for the D2H line anymore. How do you position the feature, and do you have plans to merge the two lines?

A (Shibazaki): We believe the viewfinder is the life of an SLR, and seen from that standpoint the D2X's high-speed crop is positioned very differently from the D2H. The high-speed crop captures and records only the center portion of the frame, so the viewfinder experience is quite different. Even though both shoot at the same high speed, the pixel count recorded is much higher, while the viewfinder coverage differs from that of the D2H.

Q (Tanaka): We certainly hope you will offer a finder screen that blacks out the parts not captured in high-speed crop. I also understand there is a red frame showing what is being captured; does that frame represent 100% coverage?

A (Uemura): It's about 98%. As described above, when you shoot moving subjects with high-speed crop it is easy for the subject to leave the frame. To help avoid that, we deliberately reduced the coverage slightly to leave some margin.

Q (Tanaka): With high-speed crop, it is said that only the central 6.8 megapixels of the CMOS sensor are used. Is this equivalent to trimming a picture taken at 12.4 megapixels?

A (Shibazaki): Yes, it is exactly the same. The sensitivity of the sensor itself is very high; setting the shutter aside, it would even be possible to read out video at around 3 megapixels.
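
As a back-of-the-envelope check on these crop figures: recording the central 6.8 of 12.4 megapixels means using roughly 74% of the frame's width and height, which turns the 1.5x Nikon DX crop factor (an assumption, not stated in the interview) into roughly a 2x field-of-view factor relative to 35 mm film.

```python
# Rough arithmetic for the high-speed crop quoted above; the 1.5x DX crop
# factor is assumed, the megapixel counts come from the interview.
full_mp, crop_mp = 12.4, 6.8
linear_fraction = (crop_mp / full_mp) ** 0.5   # ~0.74 of the frame width/height
crop_factor = 1.5 / linear_fraction            # ~2.0x relative to 35 mm film
print(f"{linear_fraction:.2f} of the frame -> about {crop_factor:.1f}x crop factor")
```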

Q (Tanaka): Among the D2X's special features are multiple exposure and image overlay. Could we say these two features are ultimately the same thing?

A (Ogawa): Yes, the two are basically the same. However, be aware that multiple exposure only works on frames shot in succession, whereas with image overlay you can choose any pictures taken as RAW data. In that sense I would say image overlay is the more versatile of the two.

Q (Tanaka): So in what circumstances would you use multiple exposure?


A (Ogawa): The feature is there specifically so that users who did multiple exposures with their film cameras can keep doing what they enjoy. Most importantly, no exposure compensation is needed with the D2X's multiple exposure, so it is easier to use than it was on film cameras.

Q (Tanaka): When I first learned that the D2X has an image overlay feature, I immediately hoped it could be used to extend dynamic range. It's too bad that isn't the case, but why isn't it possible to combine two pictures so that the highlights are taken from one exposure while the shadows are held from the other?

A (Ogawa): Achieving that requires a very complicated processing algorithm, which is not feasible with the current in-camera ASIC.

Q (Tanaka): If the feature can't be built into the camera, I would still be happy to see it in software such as Nikon Capture 4.2.

A (Kawamura): Although the feature is not in the current Nikon Capture 4.2, we understand the need for it. We are working on a processing algorithm to answer that need, so please stay tuned.

Q (Tanaka): Let's move on to the image sensor. Why does the D2X use a Sony CMOS sensor? Will the LBCAST from the D2H still be used in future Nikon digital cameras?

A (Shibazaki): When we first announced LBCAST in the D2H, we said it was the most suitable sensor for a Nikon camera, and we would have loved the D2X to follow that policy too. However, the biggest selling point of the D2X is its 12.4 megapixels and the resulting picture quality, and at the time of development we all agreed that the Sony CMOS sensor was the best technology for achieving high picture quality at 12.4 megapixels.

Q (Tanaka): Did you already know the D2X would use a CMOS sensor when you announced the D2H?

A (Shibazaki): Exactly. We started development of this particular CMOS sensor around the time we announced the D1X in 2001, so the D2X's sensor has been in the works for more than three years. To attain far higher picture quality than any compact digital camera, we worked very hard to get the most out of its 5.49 µm pixel pitch.

Q (Tanaka): Starting with Canon's SLRs, recent cameras have all been moving to CMOS sensors. Is it true that CMOS is far superior to CCD?

A (Shibazaki): Each has its pros and cons. From a consumer-electronics standpoint, CCD has the advantage in cost. CCD technology has also reached maturity, and with low-cost manufacturing in place it is certainly the better choice for mass production.

Q (Tanaka): So features such as high-speed crop are only achievable with a CMOS sensor, and in terms of performance CMOS seems well ahead of CCD. Do you think CMOS will be the mainstream sensor from now on?

A (Shibazaki): CMOS sensors have their disadvantages too. However, compared with the already mature CCD technology, CMOS still has room for further development. With the semiconductor industry's continuing advances in submicron processes, we believe CMOS sensor performance will keep getting better.

Q (Tanaka): Is there a picture quality difference between CMOS and CCD?


A (Shibazaki): There is no difference when it comes to color reproduction. However, CMOS has a much better dynamic range. Another of its biggest advantages is that CMOS sensors do not suffer from the blooming problems widely seen with CCDs.

Q (Tanaka): Please tell us about the future development of LBCAST.


A (Shibazaki): As a camera company, we do feel we need to develop core image sensor and processor technology in-house. The plan has not been finalized yet, but we do intend to offer models with LBCAST in the future.

Q (Tanaka): This is a rather different topic, but let me ask: what would the difference in picture quality be between a 10 megapixel 35 mm full-frame LBCAST (if there were one) and the 12.4 megapixel Nikon DX format?

A (Shibazaki): If we were to make a 10 megapixel full-frame LBCAST, the pixel pitch would be around 9 µm. Compared with the D2X's 5.49 µm pixel pitch, the LBCAST's pitch would be about 1.6 times larger.
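
The pitch figures are easy to verify from the pixel counts if sensor dimensions of 36 x 24 mm for 35 mm full frame and 23.7 x 15.7 mm for Nikon DX are assumed (the interview does not state them):

```python
# Rough verification of the pixel pitches quoted above; sensor dimensions
# are assumptions, the megapixel counts come from the interview.
def pixel_pitch_um(width_mm: float, height_mm: float, megapixels: float) -> float:
    pixel_area_um2 = (width_mm * height_mm * 1e6) / (megapixels * 1e6)
    return pixel_area_um2 ** 0.5

ff = pixel_pitch_um(36.0, 24.0, 10.0)    # ~9.3 um for a 10 MP full-frame sensor
dx = pixel_pitch_um(23.7, 15.7, 12.4)    # ~5.5 um, matching the D2X figure
print(f"{ff:.1f} um vs {dx:.1f} um -> ratio {ff / dx:.1f}x")  # roughly the ~1.6x quoted
```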

Q (Tanaka): How would you compare the picture quality of those two pitches, then?

A (Shibazaki): Given that the photodiode area would be quite different, I would say the 9 µm LBCAST would still have the better picture quality.

Q (Tanaka): So do you mean you will start developing a 35 mm full-frame LBCAST model?

A (Shibazaki): I see where you are leading this conversation to (laugh). I can’t tell you more than what I have said so far.

Q (Tanaka): As we have said, the D2X's pixel pitch has been reduced to 5.49 µm. It seems that diffraction becomes easy to see once pixels are made this small. I imagine it also depends on the lens used, but do you think picture quality will get progressively worse at apertures smaller than f/11?

A (Shibazaki): Let me first explain why diffraction appears. Diffraction occurs when the aperture is stopped down too far; it happens on every existing camera, but it was hard to notice on the D1X or D70 because they do not resolve finely enough to show it. It only becomes noticeable on a camera with resolution as high as the D2X's.

Q (Tanaka): What have you done to avoid or mitigate diffraction in the D2X?

A (Shibazaki): A conventional low-pass filter would only exaggerate the effect and leave images that are not sharp enough, so we used a low-pass filter optimized for the 12.4 megapixel CMOS sensor to keep it under control.

Q (Tanaka): Diffraction is often noticed with wide-angle lenses. What precautions should the photographer take to avoid it?

A (Shibazaki): There is no universal answer, because the problem depends heavily on the lens used. Keeping the aperture at f/11 or wider is one solution. If you must stop down further, adjust the sharpening, which can be done afterwards if you shoot RAW. It also depends on the print size, but if you shoot JPEG, be sure to set the sharpening to high. Picture quality is better when sharpness is raised.
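
For files that were already shot at small apertures, the extra sharpening suggested here can also be applied on a computer. The snippet below is a minimal sketch using Pillow's unsharp mask; the file name and the radius/percent/threshold values are illustrative assumptions, not Nikon-recommended settings.

```python
from PIL import Image, ImageFilter

# Apply an unsharp mask to a JPEG shot beyond f/11, where diffraction has
# softened fine detail. File names are placeholders.
img = Image.open("d2x_f16_sample.jpg")
sharpened = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150, threshold=3))
sharpened.save("d2x_f16_sample_sharp.jpg", quality=95)
```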

Q (Tanaka): It sounds as if quite a bit of skill is needed to master the D2X.

A (Shibazaki): It's not a particularly complicated camera. But at 12.4 megapixels, camera shake is easy to see, so it is very important that neither the camera nor the subject moves; the high picture quality only shows in photos taken without shake. We also expect the photographer to have the technique to avoid the overexposure that causes flare.

Q (Tanaka): Can you tell us about the “picture quality JPEG”?


A (Tsuda): Until now, file size has always been the priority when compressing images to JPEG. As a result, when you shoot a finely detailed subject, which inherently carries a lot of data, in Basic mode, block noise is often visible. The picture-quality JPEG setting is designed to fix this.

A (Shibazaki): JPEG compression was given size priority because storage media used to be limited in capacity and expensive; files that were too large could upset the shooting plan.
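
The trade-off being described is easy to demonstrate on a computer: saving the same frame at several JPEG quality settings shows how smaller files come at the cost of block artifacts in finely detailed areas. A minimal Pillow sketch, with a placeholder source file name:

```python
from PIL import Image
import os

# Save one frame at several JPEG quality settings; lower settings give smaller
# files but more visible block noise in fine detail.
img = Image.open("d2x_sample.tif")
for q in (60, 80, 95):
    name = f"d2x_sample_q{q}.jpg"
    img.save(name, quality=q)
    print(name, os.path.getsize(name) // 1024, "KB")
```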

Q (Tanaka): Which do you recommend, compressed RAW or uncompressed RAW?

A (Kawamura): RAW compression does not take much time, so shooting compressed RAW is still quick, and you lose very little quality even though the files are compressed. Compared with an 18 MB uncompressed RAW file, a compressed RAW comes to around 10 MB, which is also much easier to handle. So I would recommend compressed RAW.
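
The quoted figures are roughly what simple arithmetic predicts. Assuming the D2X records 4288 x 2848 photosites at 12 bits each (dimensions and bit depth are assumptions, not stated in the interview), the uncompressed data comes to about 18 MB, and a 10 MB compressed file corresponds to compression to roughly 55%:

```python
# Quick check of the file sizes quoted above; pixel dimensions and bit depth
# are assumptions about the D2X NEF, the 18 MB and 10 MB figures come from
# the interview.
photosites = 4288 * 2848
uncompressed_mb = photosites * 12 / 8 / 1e6     # ~18.3 MB, close to the quoted 18 MB
compressed_mb = 10.0
print(f"{uncompressed_mb:.1f} MB raw, compressed to ~{compressed_mb / uncompressed_mb:.0%}")
```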

Q (Tanaka): Please share with us your plans for the branding of the products. Both the D2X and D2H share the same AF unit; do they also have the same performance?

A (Uemura): AF speed is the same. However, the D2X performs much better at continuous AF tracking, and it does not lose focus even at high magnifications.

Q (Tanaka): The D2X uses the new 3D-RGB multi-pattern metering system. How does its AE compare with previous cameras?

A (Uemura): The strength of the 3D-RGB multi-pattern metering system shows best on a cloudy day. For example, when white clouds fill part of the frame, the subject often ends up underexposed. The new camera has been much improved to handle scenes like that.

A (Kawamura): AE technology has nearly reached maturity, but it will never score a perfect 100. What we did was study cases of photos where the exposure failed and use those lessons to correct the problems one by one with improved algorithms.

Q (Tanaka): Nikon SLRs always seem to give the impression of underexposing. Why do you think that is?

A (Ogawa): In situations such as backlighting we have tried to minimize underexposure. However, flare occurs easily with a digital camera, and to protect picture quality our policy is to lean slightly toward underexposure.

Q (Tanaka): The D2X supports the new sYCC color space. Which do you think is wider, Adobe RGB or sYCC?

A (Shibazaki): We do think there is a subtle difference, but if you print from JPEG files we would say they are about the same.

Q (Tanaka): It seems that even with sYCC-compatible cameras, photos taken in sRGB mode show a more limited color range in print than those taken in sYCC mode. Do you agree that sYCC is a better match for printing than Adobe RGB?

A (Shibazaki): When Exif Print specified sYCC, it mainly had direct printing in mind, so there is not much software that is sYCC compatible. We think few people will print directly from the D2X, so we would rather let our users enjoy the familiarity of Adobe RGB. Also, since Nikon Capture 4.2 comes with printing profiles, we definitely recommend it as the program for RAW processing.

Q (Tanaka): We understand Nikon Capture 4.2 offers D-Lighting and an LCH editor that work well with the camera. Do you think only Nikon Capture 4.2 can bring out the full quality of the D2X?

A (Shibazaki): We hope our users will shoot RAW with the D2X, and Nikon Capture 4.2 is designed specifically as its match.


Tom

P.S. There may be technical terms that I have not translated correctly.
 