
/sci/ - Science & Math



File: 6 KB, 201x187, image.jpg
2640306 No.2640306 [Reply] [Original]

I have some questions about this.
Does anyone know much about human vision? Specifically the area of the eye where we gather most of our information from.

>> No.2640364

bump, it's an actual engineering-related, serious question

>> No.2640370

Yes, there are lots of people who do

>> No.2640373

You mean the retina?

>> No.2640405

Ok, so specifically, here is my idea

I'm interested in developing a gaming graphics technique that uses multiple render targets of different resolutions. Areas of the screen that will only be seen by the peripheral, low-detail parts of the eye (not sure what to call it, this is where I'm weak) get rendered into a low-res target, which is then scaled up to fit the screen. Then on top of that, and inset, is a higher-resolution area for the fovea to look at.

My question: how much of a difference in detail is there between the fovea (might be using that term wrong, I mean the high-detail area of the eye) and the rest of the eye? Is the falloff linear or exponential?
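The two-target idea can be sketched roughly like this (Python/numpy; the "low-res render" is faked by downsampling an existing full-res frame, and the inset fraction and scale factor are assumed placeholders, not tuned values):

```python
import numpy as np

def composite_foveated(scene_hi, inset_frac=0.4, scale=4):
    """Sketch of the two-target idea: render the periphery at 1/scale
    resolution, upscale it, then paste a full-res inset in the center.
    (Here 'rendering' is just downsampling an existing full-res frame.)"""
    h, w, _ = scene_hi.shape
    # "Low-res target": every scale-th pixel, then nearest-neighbor upscale.
    low = scene_hi[::scale, ::scale]
    upscaled = np.repeat(np.repeat(low, scale, axis=0), scale, axis=1)[:h, :w]
    # Paste the full-res inset over the center of the frame.
    ih, iw = int(h * inset_frac), int(w * inset_frac)
    y0, x0 = (h - ih) // 2, (w - iw) // 2
    out = upscaled.copy()
    out[y0:y0 + ih, x0:x0 + iw] = scene_hi[y0:y0 + ih, x0:x0 + iw]
    return out

frame = np.random.default_rng(0).random((480, 640, 3))
result = composite_foveated(frame)
```

In a real engine the periphery would be shaded directly into the small target rather than downsampled, which is where the savings come from.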

Pic coming up

>> No.2640412

Human vision is just light projected onto the retina, if I'm right; from there, rods and cones make up the sensing part of the process. Your eyes also have a split-second working memory independent of the brain that processes images. There's many theories on how we see color so I really couldn't tell you, but something interesting is that there are no rods or cones where the optic nerve exits, so there is a blind spot in the eye.

>> No.2640417
File: 89 KB, 423x186, hi score.png
2640417

>>2640405

Do you mean like a standard FPS with focus/aperture elements, like in a camera for instance?

>> No.2640424
File: 9 KB, 270x204, 270px-AcuityHumanEye.svg.png
2640424

>>2640405

Here is a chart that shows, basically, resolution at different points in the eye (in degrees from the fovea). This is for the left eye.
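A rough way to model what a chart like that shows (the hyperbolic form and the E2 ≈ 2.5° half-falloff constant are assumed ballpark figures, not values read off the chart itself):

```python
# Rough model (assumed form): relative visual acuity falls off roughly
# hyperbolically with eccentricity E (degrees from the fovea), halving
# at E2 degrees. E2 ~ 2.5 deg is a commonly quoted ballpark.
def relative_acuity(eccentricity_deg, e2=2.5):
    return 1.0 / (1.0 + eccentricity_deg / e2)

# At 10 degrees out, acuity is already down to ~20% of the fovea's.
samples = {e: round(relative_acuity(e), 2) for e in (0, 2.5, 10, 30)}
```

So the falloff is closer to hyperbolic than linear: most of the drop happens within the first few degrees.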

>> No.2640425
File: 89 KB, 802x602, idea3-3-2011.jpg
2640425

>>2640417
I don't know enough about cameras to answer that.
But there is some manipulation of what is called the view frustum, which I think is probably analogous, so yes.

>> No.2640433

>>2640424
Acuity, that's the word I was looking for.

So according to that chart, it might actually save significant resources to limit full res areas to only the center of the screen? Resources which could be used to make the center of the screen even more fuckawesome

>> No.2640434
File: 77 KB, 460x585, Picture 1.png
2640434

>>2640405
See pic for relative receptive field differences. But keep in mind the fovea is also over-represented in the visual cortex, with something like 50% of V1 dedicated to the small area of the fovea.

>> No.2640436

it depends on what you mean by information

>> No.2640447
File: 38 KB, 354x292, 1295033261678.png
2640447

>>2640434
What does all that mean?
Are you a neuroscientist?

>> No.2640452

>>2640405
You could literally make only a square inch in the centre highly rendered, the rest could be piss poor.

The challenge would come with making the camera move with movements of the eye.

But this guy >>2640424 has hit the nail on the head; there is no definition outside of the small point in the centre of our vision.

>> No.2640461

>>2640436
This is relevant
It's typically like this:
center of eye
>reading, color, contrast, motion
some distance out
>color, contrast, motion
further out
>contrast, motion
Farthest parts of vision
>motion

I suggest you look up field of view and functional field of view. There's a lot of research on "action games", usually first-person shooters, and how they can increase functional field of view and visual processing, so someone who couldn't drive before may now have the visual capacity to do so (one study showed something like a 100% or 200% increase in functional field of view).

FOV is usually measured with peripheral tests, such as subitizing and contrast detection.

Use google scholar

>> No.2640464

>>2640412
Draw a cross and a dot on a blank bit of paper. Close one eye and focus on the cross. Vary the distance between the page and your face and the dot will disappear. I think it shows this pretty nicely.

>> No.2640474

>>2640447
It's reasonably unrelated. Receptive fields are a very ingenious way the eye detects lines (correct me, someone, if I'm wrong). Basically it's a series of AND gates where one of the inputs is a NOT gate, meaning a signal is only transmitted when a change is seen from one colour to another (outlines).

And then of course the fovea, with most of the light-sensing cells, gets the most space in the visual cortex.
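Loosely, the AND/NOT description above amounts to a center-surround weighting: an excitatory center and an inhibitory surround cancel over uniform input, so only changes (edges) produce output. A toy 1D version:

```python
# 1D center-surround sketch: excitatory center, inhibitory surround.
# Uniform input cancels to zero; only a change (an edge) gives a response.
def center_surround(signal):
    kernel = (-0.5, 1.0, -0.5)  # surround, center, surround weights
    out = []
    for i in range(1, len(signal) - 1):
        out.append(kernel[0] * signal[i - 1]
                   + kernel[1] * signal[i]
                   + kernel[2] * signal[i + 1])
    return out

flat = center_surround([1, 1, 1, 1, 1])      # no edges -> all zeros
step = center_surround([0, 0, 0, 1, 1, 1])   # one edge -> response at the step
```

The weights here are arbitrary illustration values; the real cells are usually modeled with a difference of Gaussians, but the principle is the same.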

>> No.2640476

>>2640461
Hmm. That makes sense, but bodes ill for my idea.
Maybe a racing game would be better for it

>> No.2640480
File: 647 KB, 2048x1536, barefootbandit.jpg
2640480

mfw future tech allows us to get ocular implants with zoom / night / infrared / x-ray / thermal enhancements

>> No.2640484

>>2640464
Thank you that's a perfect example of the blindspot

>> No.2640491

>>2640480
Nigga, look up cuttlefish.

There's one with like TWELVE kinds of eye cones, detecting shit like polarization of light, both linear and circular, and colors from infrared to ultraviolet, multiple times over the rainbow.

Shit's amazing

>> No.2640492

>>2640484
Kept me entertained for hours when I first heard about it... maybe because I was high? XD

>> No.2640503

>>2640474
No, you're thinking of on-center ganglion cells. Receptive field simply refers to the area of the visual world a particular cell is responsible for.

>> No.2640509

>>2640503
I should add that this is relevant because a larger receptive field results in less acuity. The graph shows the particular relationship between receptive field size and distance from the fovea (measured in degrees of eccentricity), which was the question here: >>2640405

>> No.2640507

>>2640503
Ahh, cheers!

>> No.2640516

>>2640507
>>2640503
You guys seem pretty knowledgeable, thanks for dropping by. I'm going to chew on this idea for a while and see if I can come up with some mockups.

>> No.2640534

OP, if you're still around, consider this: your idea is a good one, since only the fovea has a high resolution. However, the eye makes up for this by moving constantly. If you make only the center of the screen highly detailed, you'll either have to hold the eye in place (not ideal, obviously) or track where the eye is looking constantly and make that ever-changing area the high-resolution area, which seems to defeat the purpose of your idea since it would probably use more resources than it saved.

>> No.2640566
File: 382 KB, 1737x1049, IMG_0009s.jpg
2640566

Hey OP, I just took this for you.

>> No.2640569

>>2640534
Yeah, you're absolutely right. But I think with some games, like racing, for example, the player's vision is (metaphorically) locked to the vanishing point, or in FPS when you "aim down the sight" or look down a scope, it's certainly not going to stray much. But it wouldn't work for something like a third-person perspective. I think I'll start off with proposing it for FPS aiming and racing games. I'd be surprised if they don't already use something like that for aiming, tbh.

I wonder how feasible it'd be to use a webcam to track the user's eye movements and determine where they're looking on the screen. I know they were doing it in 2005 at least. Not sure how precise it'd be.

>> No.2640582

>>2640566
>1737x1049
My, what big eyes you have.

>> No.2640591

>>2640582
Lol, and I already cropped out most of my face

>> No.2640593

Ugh, why did I not take trig in high school.
How do you find the other side and the hypotenuse if you know all the angles?

Example
Player is 2' from the screen; I need to find the point that corresponds to 10 degrees out from the center of the screen.

>> No.2640596

>>2640569
I can tell you in FPS you're rarely focusing on the center of the screen.

In most games, danger comes from the periphery, you center your weapon and fire, and then continue tracking the off-center target. In games where the weapon has delayed impact this is especially necessary because aiming at the center of the screen and firing would miss 100% of the time.

Most people can perceive a 10-20 ms increase in input delay, on top of the 25-35 ms a screen needs to refresh, which puts tolerable input latency at around 50 ms for button responses.

>> No.2640602

>>2640593
(cos x) ^2 + (sin x) ^ 2 = hypotenuse

>> No.2640605

>>2640602
hypotenuse squared, sorry

>> No.2640608

>>2640593
horizontal distance = straight line distance*cos(angle)
vertical distance = straight line distance*sin(angle)

>> No.2640610

>>2640602
>>2640605
erm, actually that only works for h = 1

So uh, take the sin/cos of the degrees, then just multiply by 2 ft, kthanks.

I need to math more often
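For the flat-screen version of the question, by the way, the function you want is the tangent: the on-screen offset of a point seen at angle θ from straight-ahead is d·tan(θ), where d is the viewing distance (the sin/cos split decomposes the straight-line distance, not the offset along the screen plane). A quick sketch, with units assumed to be inches:

```python
import math

def screen_offset(view_dist, angle_deg):
    """Distance from screen center (same units as view_dist) of the
    point seen at angle_deg away from straight-ahead, on a flat screen."""
    return view_dist * math.tan(math.radians(angle_deg))

# Player 2 ft (24 in) from the screen, 10 degrees out from center:
offset_in = screen_offset(24, 10)   # ~4.2 inches
```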

>> No.2640625
File: 38 KB, 440x442, 67f02c29c1dd[1].jpg [View same] [iqdb] [saucenao] [google]
2640625

Of all times for an ex to IM me, goddamn

>> No.2641062

>>2640405
The non-foveal regions of the retina suffer from more than just low resolution. Cognitive experiments have shown that just scaling items up to compensate does not entirely account for the periphery's poor visual acuity.

The area just generally has less dedicated neural support. For the most part, the primary visual cortex dedicates most of its processing to foveal information, whereas the peripheral areas project to older areas of the brain (the superior colliculus), which is not particularly interested in visual acuity, but more in detecting motion or sudden bursts of light to exogenously capture attention and automatically bring the fovea over them by moving the eye.

Ultimately, if you try to manipulate these things, it will fail. The eye-saccade system is notoriously poor at readjusting, and therefore your players will constantly be moving their fovea into the area of interest. Monkey lesion studies have shown this; the saccade system just doesn't readjust.

People can learn to use a peripheral locus for attention, but it is very difficult to overcome automatic movements of the eye. Then, if you're not tracking the eye to tell your program where the fovea is, you will struggle to manipulate resolution correctly. In addition, even the best eye trackers suffer from a time lag which would be noticeable in-game.

source: dissertation.

>> No.2641761

>>2641062
Thanks for the 11th-hour bump; glad I went back to see if anyone else responded.

I'm definitely shying away from eye tracking and dynamically repositioning the area of high resolution. It'll limit the applications, but at this point I'd be elated to even just get a proof of concept working. Probably a racing game, since I believe there's not much important happening in the periphery (other than objects flying by, which the brain knows should be flying by, so they're not unexpected and won't trigger any attention, though I'm obviously way out of my league on those things).

If you ever see "afoveal degradation" as a gaming graphics buzzword, it'll be thanks to you guys.

>> No.2642019

since no one has bothered to ask yet, what the fuck would be the point of such a graphics system?

>> No.2642084

>>2642019
If you save power on the periphery of the screen by rendering into a lower-resolution buffer, you can use those scrimped cycles to add detail to the foveal section of the screen. It's not so much an improvement as a reduction of waste.
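Back-of-envelope numbers (all assumed for illustration: a 1920x1080 frame, a full-res inset covering 40% of each axis, and the periphery shaded at quarter resolution per axis):

```python
# Fill-rate saving from shading the periphery into a small target.
w, h = 1920, 1080
inset = int(w * 0.4) * int(h * 0.4)        # pixels shaded at full res
periphery = (w * h - inset) // (4 * 4)     # low-res target: 1/16 the pixels
shaded = inset + periphery                 # total pixels actually shaded
saving = 1 - shaded / (w * h)              # fraction of shading work avoided
```

Under those assumed numbers nearly four-fifths of the per-pixel shading work disappears, which is the "reducing waste" being described.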

>> No.2642146

Another application to consider this optimization for is video glasses. I know the tech has been relatively stagnant the past ten years, but considering the increases in smartphone resolutions, I am sure HD video glasses are in the near future.

>> No.2642789

>>2641761
No worries.

By the way, if you start degrading the information available in the periphery you will start to see deficits in making appropriate eye saccades. The periphery is used as a region of target selection. (E.g. in reading, the periphery is used to obtain information about upcoming words which guides where the eyes are placed).

So, by no means is the periphery useless. It serves an important function.

Oh and by the way, movement or sudden-onset activity in the periphery WILL direct attention, albeit most likely covertly. It's an exogenous reaction that is near enough impossible to overcome.