In addition to photoreceptors, the eye needs a well-functioning lens and retina and an intact optic nerve to recognize shape. Light passes through the lens and strikes the retina, where it activates the appropriate photoreceptors (depending on the available light), which convert it into an electrical signal that travels along the optic nerve to the lateral geniculate nucleus of the thalamus and then to the primary visual cortex. In the cortex, the adult brain processes features such as lines, orientation, and color. These inputs are integrated in the occipitotemporal cortex, where a representation of the object as a whole is created. Visual information continues to be processed in the posterior parietal cortex, part of the so-called dorsal stream, where a representation of an object's shape is formed from motion-based signals. Information is thought to be processed simultaneously in the anterior temporal cortex, the endpoint of the ventral stream, where objects are recognized, identified, and named. When an object is detected, both streams are active, but the ventral stream is more important for distinguishing and recognizing objects; the dorsal stream contributes to object recognition only when two objects have similar shapes and the images are degraded. The latency observed in the activation of different brain areas supports the idea of hierarchical processing of visual stimuli, with object representations progressing from simple to complex.

The law of proximity postulates that when we perceive a collection of objects, we see those close to each other as forming a group.
In Figure 1.A we see the MTV logo and the Europe Music Awards logo grouped as a band in the upper left corner, and the sponsors' logos grouped in the lower right corner. The white space separating the two groups, together with the proximity of the logos within each group, signals the "grouping". A semantic separation of "organizers" from "sponsors" is thus obtained by structuring the graphic layout according to this simple principle of perceptual organization.

Pixelated images were initially used as an experimental tool in the field of pattern (shape) perception. Pattern recognition is the process by which visually presented objects (images of objects) are identified, categorized, and named. Through these information-processing operations, the visual system essentially tries to answer the question of what is being seen. In this chapter we review the most important questions of perceptual theory, as well as the methodological questions directly related to pixelated images, and briefly describe the advantages of using pixelation.

Perception of form is one of the most fundamental visual skills humans acquire in childhood. A child with poor shape perception is likely to be diagnosed with a learning disability, because almost all learning activities require some sort of shape perception, especially reading. A child who has difficulty perceiving the shape of letters, syllables, or words will have difficulty learning the alphabet or learning to read.
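The pixelation transform used to create such stimuli is straightforward to implement. The following is a minimal sketch, assuming a grayscale image stored as a 2D NumPy array; the function name and the block-averaging ("mosaic") scheme are illustrative, not taken from the chapter:

```python
import numpy as np

def pixelate(image, block):
    """Coarse-grain a grayscale image by replacing each
    block x block tile with its mean luminance."""
    h, w = image.shape
    # Trim so the dimensions divide evenly into tiles.
    h2, w2 = h - h % block, w - w % block
    img = image[:h2, :w2]
    # Group pixels into tiles, average each tile...
    tiles = img.reshape(h2 // block, block, w2 // block, block)
    means = tiles.mean(axis=(1, 3))
    # ...and expand the means back to the original resolution.
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1)
```

Increasing `block` coarsens the image, which is what lets experimenters vary how much shape information survives while keeping overall luminance constant.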
Distinguishing letters is the most important skill in the early stages of reading.

The Gestalt approach can be called a "bottom-up" theory because it starts at the bottom (the aspects of the stimuli that affect perception) and works its way up to higher-order cognitive processes. Another bottom-up theory known in the HCI community is James Gibson's theory of "direct perception" (see Affordances and Perception).

Much is also known about depth perception. For example, specific cortical cells have been shown to respond only when an object is presented simultaneously to disparate (non-corresponding) regions of the two retinas. In addition, information about the nature of 3D structures can be obtained from motion detection (structure from motion). Although no specific cortical region for "depth assessment" has been identified, the lateral occipital cortex is a privileged area that shows considerable activity. Depth perception is more than just stereopsis and is constructed from many other cues (see below). An understanding of how these cues alter perception can only come from describing and appreciating the different types of visual stimuli that separately induce discrete responses in the brain.
A similar picture emerges for hemispheric differences in face recognition. The left hemisphere processes faces in a feature-based manner, while the right hemisphere processes them in a configuration-based manner (again reminiscent of the local/global distinction). For example, Bombari et al. (2014) induced featural processing by presenting scrambled faces (thus masking the configuration) and configural processing by presenting blurred faces (thus masking the features), and found a left-hemisphere advantage for featural processing and a right-hemisphere advantage for configural processing. When participants discriminate faces, right-hemisphere performance is superior when configural aspects differ, such as moving the eyes up or down by 2 mm, while a left-hemisphere advantage is obtained when features differ, such as replacing a narrow nose with a wide one (Miller and Barg, 1983). A recent study by Cattaneo et al. (2014), using a more systematic set of configural relationships in which the internal distances between features (i.e., second-order relations) were manipulated, revealed the same pattern of results. Importantly, hemispheric differences in configural processing disappear when faces are inverted, a manipulation that interferes with face recognition (e.g., Diamond & Carey, 1986).
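The two masking manipulations can be sketched computationally. The sketch below is an illustrative reconstruction under simple assumptions (grayscale NumPy images with dimensions divisible by the block size), not the actual stimulus-generation procedure of Bombari et al.: shuffling tiles preserves local features while destroying configural (spatial-relation) information, whereas a box blur preserves the overall configuration while degrading fine featural detail.

```python
import numpy as np

def scramble(image, block, rng):
    """Mask configuration: shuffle block x block tiles,
    keeping the local features themselves intact."""
    h, w = image.shape
    tiles = [image[r:r + block, c:c + block]
             for r in range(0, h, block)
             for c in range(0, w, block)]
    order = rng.permutation(len(tiles))
    cols = w // block
    out = np.empty_like(image)
    for i, j in enumerate(order):
        r, c = divmod(i, cols)
        out[r * block:(r + 1) * block, c * block:(c + 1) * block] = tiles[j]
    return out

def blur(image, radius):
    """Mask features: box blur that keeps the spatial layout
    but removes fine detail."""
    h, w = image.shape
    k = 2 * radius + 1
    padded = np.pad(image, radius, mode="edge")
    out = np.zeros((h, w))
    for dr in range(k):
        for dc in range(k):
            out += padded[dr:dr + h, dc:dc + w]
    return out / (k * k)
```

Note that scrambling is a pure rearrangement (the pixel histogram is unchanged), while blurring is a local average; this asymmetry is exactly why the two manipulations selectively spare featural versus configural information.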