Retina

Objective-C

@interface Retina : Algorithm

Swift

class Retina : Algorithm

A class which allows the Gipsa/Listic Labs retina model to be used with OpenCV.

This retina model allows spatio-temporal image processing (applied to still images and video sequences). In summary, these are the retina model properties:

  • It applies a spectral whitening (mid-frequency detail enhancement)
  • high-frequency spatio-temporal noise reduction
  • low-frequency luminance reduction (luminance range compression)
  • local logarithmic luminance compression, which allows details to be enhanced in low-light conditions

USE: this model can be used basically for spatio-temporal video effects, but also for:

  • using the getParvo method output matrix: texture analysis with an enhanced signal-to-noise ratio and enhanced details, robust against input image luminance ranges
  • using the getMagno method output matrix: motion analysis, also with the previously cited properties

for more information, refer to the following papers: Benoit A., Caplier A., Durette B., Herault J., “USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING”, Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011; and Vision: Images, Signals and Neural Networks: Models of Neural Processing in Visual Perception (Progress in Neural Processing), by Jeanny Herault, ISBN: 9814273686. WAPI (Tower ID): 113266891.

The retina filter includes the research contributions of PhD/research colleagues whose code has been redrawn by the author: take a look at the retinacolor.hpp module to discover Brice Chaix de Lavarene's color mosaicing/demosaicing and its reference paper: B. Chaix de Lavarene, D. Alleysson, B. Durette, J. Herault (2007), “Efficient demosaicing through recursive filtering”, IEEE International Conference on Image Processing ICIP 2007. Take a look at imagelogpolprojection.hpp to discover the retina spatial log sampling, which originates from Barthelemy Durette's PhD with Jeanny Herault. A Retina / V1 cortex projection is also proposed, originating from discussions with Jeanny. More information can be found in the above-cited book by Jeanny Herault.
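As an illustrative sketch (not part of the original API reference), typical use from Swift might look as follows; the frame size is a hypothetical example, and the availability of the opencv2 module with the bioinspired contrib classes is assumed:

```swift
import opencv2

// Hypothetical 640x480 BGR input frame; in practice this would come from a camera.
let frame = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)

// Create a retina sized to the input frames (see the create overloads below).
let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Feed the frame; outputs become available through the dedicated accessors.
retina.run(inputImage: frame)

let parvo = Mat()   // details channel (foveal vision), for texture analysis
let magno = Mat()   // motion channel (peripheral vision), for motion analysis
retina.getParvo(retinaOutput_parvo: parvo)
retina.getMagno(retinaOutput_magno: magno)
```

For a video sequence, run would be called once per frame; the retina is stateful, so the temporal filters converge over successive calls.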

Member of Bioinspired

Methods

  • Declaration

    Objective-C

    - (Mat*)getMagnoRAW NS_SWIFT_NAME(getMagnoRAW());

    Swift

    func getMagnoRAW() -> Mat
  • Declaration

    Objective-C

    - (Mat*)getParvoRAW NS_SWIFT_NAME(getParvoRAW());

    Swift

    func getParvoRAW() -> Mat
  • Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance

    Declaration

    Objective-C

    + (nonnull Retina *)create:(nonnull Size2i *)inputSize
                     colorMode:(BOOL)colorMode
           colorSamplingMethod:(int)colorSamplingMethod
          useRetinaLogSampling:(BOOL)useRetinaLogSampling
               reductionFactor:(float)reductionFactor
              samplingStrength:(float)samplingStrength;

    Swift

    class func create(inputSize: Size2i, colorMode: Bool, colorSamplingMethod: Int32, useRetinaLogSampling: Bool, reductionFactor: Float, samplingStrength: Float) -> Retina

    Parameters

    inputSize

    the input frame size

    colorMode

    the chosen processing mode : with or without color processing

    colorSamplingMethod

    specifies which kind of color sampling will be used :

    • cv::bioinspired::RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
    • cv::bioinspired::RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB…, line 2 BRGBRGBRG…, line 3, GBRGBRGBR…
    • cv::bioinspired::RETINA_COLOR_BAYER: standard Bayer sampling

    useRetinaLogSampling

    activates retina log sampling; if true, the 2 following parameters can be used

    reductionFactor

    only useful if useRetinaLogSampling=true; specifies the reduction factor of the output frame (as the center (fovea) is high resolution and the corners can be underscaled, the output can be reduced without precision loss)

    samplingStrength

    only useful if useRetinaLogSampling=true; specifies the strength of the log scale that is applied
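A hedged example call of this overload (all values are illustrative; the numeric colorSamplingMethod value assumes the C++ enum order, in which RETINA_COLOR_BAYER would be 2):

```swift
import opencv2

// Illustrative setup: color retina with log sampling, output reduced 2x.
let retina = Retina.create(
    inputSize: Size2i(width: 640, height: 480),
    colorMode: true,
    colorSamplingMethod: 2,     // assumed value of RETINA_COLOR_BAYER
    useRetinaLogSampling: true,
    reductionFactor: 2.0,       // output frame reduced by this factor
    samplingStrength: 10.0)     // strength of the applied log scale
```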

  • Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance

    Declaration

    Objective-C

    + (nonnull Retina *)create:(nonnull Size2i *)inputSize
                     colorMode:(BOOL)colorMode
           colorSamplingMethod:(int)colorSamplingMethod
          useRetinaLogSampling:(BOOL)useRetinaLogSampling
               reductionFactor:(float)reductionFactor;

    Swift

    class func create(inputSize: Size2i, colorMode: Bool, colorSamplingMethod: Int32, useRetinaLogSampling: Bool, reductionFactor: Float) -> Retina

    Parameters

    inputSize

    the input frame size

    colorMode

    the chosen processing mode : with or without color processing

    colorSamplingMethod

    specifies which kind of color sampling will be used :

    • cv::bioinspired::RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
    • cv::bioinspired::RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB…, line 2 BRGBRGBRG…, line 3, GBRGBRGBR…
    • cv::bioinspired::RETINA_COLOR_BAYER: standard Bayer sampling

    useRetinaLogSampling

    activates retina log sampling; if true, the following parameter can be used

    reductionFactor

    only useful if useRetinaLogSampling=true; specifies the reduction factor of the output frame (as the center (fovea) is high resolution and the corners can be underscaled, the output can be reduced without precision loss)

  • Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance

    Declaration

    Objective-C

    + (nonnull Retina *)create:(nonnull Size2i *)inputSize
                     colorMode:(BOOL)colorMode
           colorSamplingMethod:(int)colorSamplingMethod
          useRetinaLogSampling:(BOOL)useRetinaLogSampling;

    Swift

    class func create(inputSize: Size2i, colorMode: Bool, colorSamplingMethod: Int32, useRetinaLogSampling: Bool) -> Retina

    Parameters

    inputSize

    the input frame size

    colorMode

    the chosen processing mode : with or without color processing

    colorSamplingMethod

    specifies which kind of color sampling will be used :

    • cv::bioinspired::RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
    • cv::bioinspired::RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB…, line 2 BRGBRGBRG…, line 3, GBRGBRGBR…
    • cv::bioinspired::RETINA_COLOR_BAYER: standard Bayer sampling

    useRetinaLogSampling

    activates retina log sampling if true

  • Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance

    Declaration

    Objective-C

    + (nonnull Retina *)create:(nonnull Size2i *)inputSize
                     colorMode:(BOOL)colorMode
           colorSamplingMethod:(int)colorSamplingMethod;

    Swift

    class func create(inputSize: Size2i, colorMode: Bool, colorSamplingMethod: Int32) -> Retina

    Parameters

    inputSize

    the input frame size

    colorMode

    the chosen processing mode : with or without color processing

    colorSamplingMethod

    specifies which kind of color sampling will be used :

    • cv::bioinspired::RETINA_COLOR_RANDOM: each pixel position is either R, G or B in a random choice
    • cv::bioinspired::RETINA_COLOR_DIAGONAL: color sampling is RGBRGBRGB…, line 2 BRGBRGBRG…, line 3, GBRGBRGBR…
    • cv::bioinspired::RETINA_COLOR_BAYER: standard Bayer sampling

  • Constructors from standardized interfaces : retrieve a smart pointer to a Retina instance

    Declaration

    Objective-C

    + (nonnull Retina *)create:(nonnull Size2i *)inputSize
                     colorMode:(BOOL)colorMode;

    Swift

    class func create(inputSize: Size2i, colorMode: Bool) -> Retina

    Parameters

    inputSize

    the input frame size

    colorMode

    the chosen processing mode : with or without color processing


  • Declaration

    Objective-C

    + (Retina*)create:(Size2i*)inputSize NS_SWIFT_NAME(create(inputSize:));

    Swift

    class func create(inputSize: Size2i) -> Retina
  • Retrieve the retina input buffer size - returns: the retina input buffer size

    Declaration

    Objective-C

    - (nonnull Size2i *)getInputSize;

    Swift

    func getInputSize() -> Size2i
  • Retrieve the retina output buffer size, which can differ from the input size if a spatial log transformation is applied - returns: the retina output buffer size

    Declaration

    Objective-C

    - (nonnull Size2i *)getOutputSize;

    Swift

    func getOutputSize() -> Size2i
  • Outputs a string showing the parameters setup in use - returns: a string which contains formatted parameters information

    Declaration

    Objective-C

    - (nonnull NSString *)printSetup;

    Swift

    func printSetup() -> String
  • Activate/deactivate the Parvocellular pathway processing (contours information extraction); it is activated by default

    Declaration

    Objective-C

    - (void)activateContoursProcessing:(BOOL)activate;

    Swift

    func activateContoursProcessing(activate: Bool)

    Parameters

    activate

    true if the Parvocellular (contours information extraction) output should be activated, false otherwise; if activated, the Parvocellular output can be retrieved using the Retina::getParvo methods

  • Activate/deactivate the Magnocellular pathway processing (motion information extraction); it is activated by default

    Declaration

    Objective-C

    - (void)activateMovingContoursProcessing:(BOOL)activate;

    Swift

    func activateMovingContoursProcessing(activate: Bool)

    Parameters

    activate

    true if the Magnocellular output should be activated, false otherwise; if activated, the Magnocellular output can be retrieved using the getMagno methods

  • Method which processes an image with the aim of correcting its luminance: correct backlight problems, enhance details in shadows.

     This method is designed to perform High Dynamic Range image tone mapping (compress >8-bit/pixel
     images to 8-bit/pixel). This is a simplified version of the Retina Parvocellular model
     (a simplified version of the run/getParvo method calls), since it does not include the
     spatio-temporal filter modelling the Outer Plexiform Layer of the retina that performs spectral
     whitening, among other processing. However, it works well for tone mapping, and faster.
    
     Check the demos and experiments section to see examples of tone mapping performed
     using the original retina model and this method.
    

    Declaration

    Objective-C

    - (void)applyFastToneMapping:(nonnull Mat *)inputImage
           outputToneMappedImage:(nonnull Mat *)outputToneMappedImage;

    Swift

    func applyFastToneMapping(inputImage: Mat, outputToneMappedImage: Mat)

    Parameters

    inputImage

    the input image to process (should be coded in float format: CV_32F, CV_32FC1, CV_32FC3 or CV_32FC4; the 4th channel won’t be considered).

    outputToneMappedImage

    the output 8-bit/channel tone-mapped image (CV_8U or CV_8UC3 format).
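A minimal sketch of fast tone mapping, assuming a float-format input as described above (sizes and the input data are illustrative):

```swift
import opencv2

// Hypothetical HDR-like float image; real data would come from an HDR pipeline.
let hdrInput = Mat(rows: 480, cols: 640, type: CvType.CV_32FC3)
let toneMapped = Mat()   // receives the 8-bit/channel result

let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))
retina.applyFastToneMapping(inputImage: hdrInput,
                            outputToneMappedImage: toneMapped)
```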

  • Clears all retina buffers

     (equivalent to opening the eyes after a long period of eye closure ;o) watch out for the temporal
     transition occurring just after this method call.
    

    Declaration

    Objective-C

    - (void)clearBuffers;

    Swift

    func clearBuffers()
  • Accessor of the motion channel of the retina (models peripheral vision).

     Warning: the getMagnoRAW methods return buffers that are not rescaled within the range [0;255], while
     the non-RAW method allows a normalized matrix to be retrieved.
    

    Declaration

    Objective-C

    - (void)getMagno:(nonnull Mat *)retinaOutput_magno;

    Swift

    func getMagno(retinaOutput_magno: Mat)

    Parameters

    retinaOutput_magno

    the output buffer (reallocated if necessary), format can be :

    • a Mat: this output is rescaled for standard 8-bit image processing use in OpenCV
    • the RAW methods actually return a 1D matrix (encoded as M1, M2, … Mn): this output is the original retina filter model output, without any quantification or rescaling.

  • Accessor of the motion channel of the retina (models peripheral vision). - see: -getMagno:

    Declaration

    Objective-C

    - (void)getMagnoRAW:(nonnull Mat *)retinaOutput_magno;

    Swift

    func getMagnoRAW(retinaOutput_magno: Mat)
  • Accessor of the details channel of the retina (models foveal vision).

     Warning: the getParvoRAW methods return buffers that are not rescaled within the range [0;255], while
     the non-RAW method allows a normalized matrix to be retrieved.
    

    Declaration

    Objective-C

    - (void)getParvo:(nonnull Mat *)retinaOutput_parvo;

    Swift

    func getParvo(retinaOutput_parvo: Mat)

    Parameters

    retinaOutput_parvo

    the output buffer (reallocated if necessary), format can be :

    • a Mat: this output is rescaled for standard 8-bit image processing use in OpenCV
    • the RAW methods actually return a 1D matrix (encoded as R1, R2, … Rn, G1, G2, …, Gn, B1, B2, … Bn): this output is the original retina filter model output, without any quantification or rescaling.

  • Accessor of the details channel of the retina (models foveal vision). - see: -getParvo:

    Declaration

    Objective-C

    - (void)getParvoRAW:(nonnull Mat *)retinaOutput_parvo;

    Swift

    func getParvoRAW(retinaOutput_parvo: Mat)
  • Method which allows the retina to be applied to an input image,

     after run, the encapsulated retina module is ready to deliver its outputs using dedicated
     accessors; see the getParvo and getMagno methods
    

    Declaration

    Objective-C

    - (void)run:(nonnull Mat *)inputImage;

    Swift

    func run(inputImage: Mat)

    Parameters

    inputImage

    the input Mat image to be processed; it can be gray level or BGR coded in any format (from 8-bit to 16-bit)

  • Activate color saturation as the final step of the color demultiplexing process -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.

    Declaration

    Objective-C

    - (void)setColorSaturation:(BOOL)saturateColors
          colorSaturationValue:(float)colorSaturationValue;

    Swift

    func setColorSaturation(saturateColors: Bool, colorSaturationValue: Float)

    Parameters

    saturateColors

    boolean that activates color saturation (if true) or deactivates it (if false)

    colorSaturationValue

    the saturation factor : a simple factor applied on the chrominance buffers

  • Activate color saturation as the final step of the color demultiplexing process -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.

    Declaration

    Objective-C

    - (void)setColorSaturation:(BOOL)saturateColors;

    Swift

    func setColorSaturation(saturateColors: Bool)

    Parameters

    saturateColors

    boolean that activates color saturation (if true) or deactivates it (if false)

  • Activate color saturation as the final step of the color demultiplexing process -> this saturation is a sigmoid function applied to each channel of the demultiplexed image.

    Declaration

    Objective-C

    - (void)setColorSaturation;

    Swift

    func setColorSaturation()
  • Try to open an XML retina parameters file to adjust current retina instance setup

     - if the XML file does not exist, then the default setup is applied
     - warning: exceptions are thrown if the read XML file is not valid
    

    Declaration

    Objective-C

    - (void)setup:(nonnull NSString *)retinaParameterFile
        applyDefaultSetupOnFailure:(BOOL)applyDefaultSetupOnFailure;

    Swift

    func setup(retinaParameterFile: String, applyDefaultSetupOnFailure: Bool)

    Parameters

    retinaParameterFile

    the parameters filename

    applyDefaultSetupOnFailure

    set to true to apply the default setup if opening or parsing the parameters file fails

    You can retrieve the current parameters structure using the method Retina::getParameters and update it before running method Retina::setup.
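For illustration, loading a parameters file with a fallback to the default setup ("RetinaParams.xml" is a hypothetical file name):

```swift
import opencv2

let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Load parameters from XML; fall back to the default setup if loading fails.
retina.setup(retinaParameterFile: "RetinaParams.xml",
             applyDefaultSetupOnFailure: true)
print(retina.printSetup())   // inspect the parameters actually in effect
```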

  • Try to open an XML retina parameters file to adjust current retina instance setup

     - if the XML file does not exist, then the default setup is applied
     - warning: exceptions are thrown if the read XML file is not valid
    

    Declaration

    Objective-C

    - (void)setup:(nonnull NSString *)retinaParameterFile;

    Swift

    func setup(retinaParameterFile: String)

    Parameters

    retinaParameterFile

    the parameters filename

    You can retrieve the current parameters structure using the method Retina::getParameters and update it before running method Retina::setup.

  • Try to open an XML retina parameters file to adjust current retina instance setup

     - if the XML file does not exist, then the default setup is applied
     - warning: exceptions are thrown if the read XML file is not valid
    
     You can retrieve the current parameters structure using the method Retina::getParameters and update
     it before running method Retina::setup.
    

    Declaration

    Objective-C

    - (void)setup;

    Swift

    func setup()
  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
                       parasolCells_beta:(float)parasolCells_beta
                        parasolCells_tau:(float)parasolCells_tau
                          parasolCells_k:(float)parasolCells_k
        amacrinCellsTemporalCutFrequency:(float)amacrinCellsTemporalCutFrequency
                  V0CompressionParameter:(float)V0CompressionParameter
               localAdaptintegration_tau:(float)localAdaptintegration_tau
                 localAdaptintegration_k:(float)localAdaptintegration_k;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float, parasolCells_k: Float, amacrinCellsTemporalCutFrequency: Float, V0CompressionParameter: Float, localAdaptintegration_tau: Float, localAdaptintegration_k: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

    parasolCells_k

    the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5

    amacrinCellsTemporalCutFrequency

    the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2

    V0CompressionParameter

    the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the low value sensitivity… and the output saturates faster; recommended value: 0.95

    localAdaptintegration_tau

    specifies the temporal constant of the low pass filter involved in the computation of the local “motion mean” for the local adaptation computation

    localAdaptintegration_k

    specifies the spatial constant of the low pass filter involved in the computation of the local “motion mean” for the local adaptation computation
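As a sketch, the typical values quoted above passed explicitly (the localAdaptintegration values are illustrative assumptions, not documented defaults):

```swift
import opencv2

let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Typical values from the parameter descriptions above.
retina.setupIPLMagnoChannel(
    normaliseOutput: true,                    // rescale output to [0;255]
    parasolCells_beta: 0.0,
    parasolCells_tau: 0.0,                    // immediate temporal response
    parasolCells_k: 5.0,                      // 5-pixel spatial constant
    amacrinCellsTemporalCutFrequency: 1.2,
    V0CompressionParameter: 0.95,
    localAdaptintegration_tau: 0.0,           // assumed value
    localAdaptintegration_k: 7.0)             // assumed value
```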

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
                       parasolCells_beta:(float)parasolCells_beta
                        parasolCells_tau:(float)parasolCells_tau
                          parasolCells_k:(float)parasolCells_k
        amacrinCellsTemporalCutFrequency:(float)amacrinCellsTemporalCutFrequency
                  V0CompressionParameter:(float)V0CompressionParameter
               localAdaptintegration_tau:(float)localAdaptintegration_tau;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float, parasolCells_k: Float, amacrinCellsTemporalCutFrequency: Float, V0CompressionParameter: Float, localAdaptintegration_tau: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

    parasolCells_k

    the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5

    amacrinCellsTemporalCutFrequency

    the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2

    V0CompressionParameter

    the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the low value sensitivity… and the output saturates faster; recommended value: 0.95

    localAdaptintegration_tau

    specifies the temporal constant of the low pass filter involved in the computation of the local “motion mean” for the local adaptation computation

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
                       parasolCells_beta:(float)parasolCells_beta
                        parasolCells_tau:(float)parasolCells_tau
                          parasolCells_k:(float)parasolCells_k
        amacrinCellsTemporalCutFrequency:(float)amacrinCellsTemporalCutFrequency
                  V0CompressionParameter:(float)V0CompressionParameter;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float, parasolCells_k: Float, amacrinCellsTemporalCutFrequency: Float, V0CompressionParameter: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

    parasolCells_k

    the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5

    amacrinCellsTemporalCutFrequency

    the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2

    V0CompressionParameter

    the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the low value sensitivity… and the output saturates faster; recommended value: 0.95

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
                       parasolCells_beta:(float)parasolCells_beta
                        parasolCells_tau:(float)parasolCells_tau
                          parasolCells_k:(float)parasolCells_k
        amacrinCellsTemporalCutFrequency:(float)amacrinCellsTemporalCutFrequency;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float, parasolCells_k: Float, amacrinCellsTemporalCutFrequency: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

    parasolCells_k

    the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5

    amacrinCellsTemporalCutFrequency

    the time constant of the first order high pass filter of the magnocellular way (motion information channel), unit is frames, typical value is 1.2

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
               parasolCells_beta:(float)parasolCells_beta
                parasolCells_tau:(float)parasolCells_tau
                  parasolCells_k:(float)parasolCells_k;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float, parasolCells_k: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

    parasolCells_k

    the low pass filter spatial constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is pixels, typical value is 5

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
               parasolCells_beta:(float)parasolCells_beta
                parasolCells_tau:(float)parasolCells_tau;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float, parasolCells_tau: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

    the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

    parasolCells_tau

    the low pass filter time constant used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), unit is frame, typical value is 0 (immediate response)

  • Set parameters values for the Inner Plexiform Layer (IPL) magnocellular channel

     this channel processes the signals output by the OPL processing stage in peripheral vision; it
     allows motion information enhancement. It is decorrelated from the details channel. See the
     reference papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput
               parasolCells_beta:(float)parasolCells_beta;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool, parasolCells_beta: Float)

    Parameters

    normaliseOutput

    specifies if (true) the output is rescaled between 0 and 255 or not (false)

    parasolCells_beta

the low pass filter gain used for local contrast adaptation at the IPL level of the retina (for ganglion cells local adaptation), typical value is 0

  • Set parameter values for the Inner Plexiform Layer (IPL) magnocellular channel

     This channel processes signals output by the OPL processing stage in peripheral vision; it
     enhances motion information. It is decorrelated from the details channel. See the reference
     papers for more details.
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel:(BOOL)normaliseOutput;

    Swift

    func setupIPLMagnoChannel(normaliseOutput: Bool)

    Parameters

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

  • Set parameter values for the Inner Plexiform Layer (IPL) magnocellular channel

     This channel processes signals output by the OPL processing stage in peripheral vision; it
     enhances motion information. It is decorrelated from the details channel. See the reference
     papers for more details.
    
    

    Declaration

    Objective-C

    - (void)setupIPLMagnoChannel;

    Swift

    func setupIPLMagnoChannel()
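The overload family above can be tied together with a short Swift sketch. This is illustrative only: the `opencv2` module name, the `Retina.create(inputSize:)` factory, and the `Size2i` initializer are assumptions based on OpenCV's generated Swift bindings, not part of this reference.

```swift
import opencv2

// Create a retina instance for 640x480 input (factory name assumed).
let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Full overload: raw (non-normalised) output, with the typical values
// quoted above -- gain 0 and an immediate (0 frame) time constant.
retina.setupIPLMagnoChannel(normaliseOutput: false,
                            parasolCells_beta: 0.0,
                            parasolCells_tau: 0.0)

// Shorter overloads leave the omitted parameters at their default values.
retina.setupIPLMagnoChannel(normaliseOutput: true)
```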
  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant
                   photoreceptorsSpatialConstant:
                       (float)photoreceptorsSpatialConstant
                             horizontalCellsGain:(float)horizontalCellsGain
                          HcellsTemporalConstant:(float)HcellsTemporalConstant
                           HcellsSpatialConstant:(float)HcellsSpatialConstant
                        ganglionCellsSensitivity:(float)ganglionCellsSensitivity;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float, photoreceptorsSpatialConstant: Float, horizontalCellsGain: Float, HcellsTemporalConstant: Float, HcellsSpatialConstant: Float, ganglionCellsSensitivity: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

    the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

    photoreceptorsSpatialConstant

    the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel

    horizontalCellsGain

gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output; typical value is 0

    HcellsTemporalConstant

    the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors

    HcellsSpatialConstant

the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels; this value is also used when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)

    ganglionCellsSensitivity

the compression strength of the ganglion cells local adaptation output; set a value between 0.6 and 1 for best results. A high value increases the low value sensitivity more… and the output saturates faster; recommended value: 0.7
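The full overload can be sketched with the typical values quoted in the parameter docs above. This is an illustrative sketch, not part of the reference: the `opencv2` module and the `Retina.create(inputSize:)` factory are assumptions based on OpenCV's generated Swift bindings.

```swift
import opencv2

let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Configure the parvo (details) channel with the typical values
// quoted in the parameter documentation above.
retina.setupOPLandIPLParvoChannel(
    colorMode: true,                               // process color input
    normaliseOutput: true,                         // rescale output to 0-255
    photoreceptorsLocalAdaptationSensitivity: 0.7, // 0-1, more log compression when higher
    photoreceptorsTemporalConstant: 1.0,           // frames
    photoreceptorsSpatialConstant: 1.0,            // pixels
    horizontalCellsGain: 0.0,                      // 0 => zero-mean output
    HcellsTemporalConstant: 1.0,                   // frames
    HcellsSpatialConstant: 5.0,                    // pixels
    ganglionCellsSensitivity: 0.7)                 // recommended value
```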

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant
                   photoreceptorsSpatialConstant:
                       (float)photoreceptorsSpatialConstant
                             horizontalCellsGain:(float)horizontalCellsGain
                          HcellsTemporalConstant:(float)HcellsTemporalConstant
                           HcellsSpatialConstant:(float)HcellsSpatialConstant;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float, photoreceptorsSpatialConstant: Float, horizontalCellsGain: Float, HcellsTemporalConstant: Float, HcellsSpatialConstant: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

    the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

    photoreceptorsSpatialConstant

    the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel

    horizontalCellsGain

gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output; typical value is 0

    HcellsTemporalConstant

    the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as the photoreceptors

    HcellsSpatialConstant

the spatial constant of the first order low pass filter of the horizontal cells, use it to cut low spatial frequencies (local luminance), unit is pixels, typical value is 5 pixels; this value is also used when computing the local contrast adaptation at the ganglion cells level (Inner Plexiform Layer parvocellular channel model)

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant
                   photoreceptorsSpatialConstant:
                       (float)photoreceptorsSpatialConstant
                             horizontalCellsGain:(float)horizontalCellsGain
                          HcellsTemporalConstant:(float)HcellsTemporalConstant;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float, photoreceptorsSpatialConstant: Float, horizontalCellsGain: Float, HcellsTemporalConstant: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

    the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

    photoreceptorsSpatialConstant

    the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel

    horizontalCellsGain

gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output; typical value is 0

    HcellsTemporalConstant

the time constant of the first order low pass filter of the horizontal cells, use it to cut low temporal frequencies (local luminance variations), unit is frames, typical value is 1 frame, as for the photoreceptors

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant
                   photoreceptorsSpatialConstant:
                       (float)photoreceptorsSpatialConstant
                             horizontalCellsGain:(float)horizontalCellsGain;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float, photoreceptorsSpatialConstant: Float, horizontalCellsGain: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

    the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

    photoreceptorsSpatialConstant

    the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel

    horizontalCellsGain

gain of the horizontal cells network; if 0, the mean value of the output is zero; if the parameter is near 1, the luminance is not filtered and is still reachable at the output; typical value is 0

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant
                   photoreceptorsSpatialConstant:
                       (float)photoreceptorsSpatialConstant;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float, photoreceptorsSpatialConstant: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

    the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

    photoreceptorsSpatialConstant

the spatial constant of the first order low pass filter of the photoreceptors, use it to cut high spatial frequencies (noise or thick contours), unit is pixels, typical value is 1 pixel

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity
                  photoreceptorsTemporalConstant:
                      (float)photoreceptorsTemporalConstant;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float, photoreceptorsTemporalConstant: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

    photoreceptorsTemporalConstant

the time constant of the first order low pass filter of the photoreceptors, use it to cut high temporal frequencies (noise or fast motion), unit is frames, typical value is 1 frame

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                                 normaliseOutput:(BOOL)normaliseOutput
        photoreceptorsLocalAdaptationSensitivity:
            (float)photoreceptorsLocalAdaptationSensitivity;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool, photoreceptorsLocalAdaptationSensitivity: Float)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

    photoreceptorsLocalAdaptationSensitivity

the photoreceptors sensitivity range is 0-1 (more log compression effect when value increases)

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode
                       normaliseOutput:(BOOL)normaliseOutput;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool, normaliseOutput: Bool)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

    normaliseOutput

specifies if (true) output is rescaled between 0 and 255 or not (false)

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel:(BOOL)colorMode;

    Swift

    func setupOPLandIPLParvoChannel(colorMode: Bool)

    Parameters

    colorMode

specifies if (true) color is processed or not (false); in the latter case a gray level image is processed

  • Setup the OPL and IPL parvo channels (see biological model)

     OPL refers to the Outer Plexiform Layer of the retina; it performs the spatio-temporal
     filtering which whitens the spectrum and reduces spatio-temporal noise while attenuating
     global luminance (low frequency energy). IPL parvo is the next processing stage after the
     OPL; it refers to a part of the Inner Plexiform Layer of the retina and provides high
     contour sensitivity in foveal vision. See the reference papers for more information.
     For more information, please have a look at the paper Benoit A., Caplier A., Durette B., Herault, J., "USING HUMAN VISUAL SYSTEM MODELING FOR BIO-INSPIRED LOW LEVEL IMAGE PROCESSING", Elsevier, Computer Vision and Image Understanding 114 (2010), pp. 758-773, DOI: http://dx.doi.org/10.1016/j.cviu.2010.01.011
    

    Declaration

    Objective-C

    - (void)setupOPLandIPLParvoChannel;

    Swift

    func setupOPLandIPLParvoChannel()
  • Write xml/yml formatted parameters information

    Declaration

    Objective-C

    - (void)write:(nonnull NSString *)fs;

    Swift

    func write(fs: String)

    Parameters

    fs

    the filename of the xml file that will be opened and written with formatted parameters information
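To close, a hedged end-to-end sketch: configure a retina, run it on a frame, and persist the current parameter set with `write(fs:)`. This is illustrative only: the `opencv2` module, the `Retina.create(inputSize:)` factory, and the `run`/`getParvo` argument labels are assumptions based on OpenCV's generated Swift bindings.

```swift
import opencv2

let retina = Retina.create(inputSize: Size2i(width: 640, height: 480))

// Process one frame (here an empty placeholder Mat of the right size).
let frame = Mat(rows: 480, cols: 640, type: CvType.CV_8UC3)
retina.run(inputImage: frame)

// Fetch the details (parvo) channel output.
let parvo = Mat()
retina.getParvo(retinaOutput_parvo: parvo)

// Persist the current parameter set as an xml file for later reuse.
retina.write(fs: "retinaParams.xml")
```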