All Graduated

Well, that’s that. The end has been and gone. I am now a graduate of Royal Holloway, University of London.

Sheesh. I know it’s a cliche, but where did the time go? Seriously, where did it go? I don’t feel grown up enough to have a whole three letters after my name! Only a week and a bit ago, I left campus for the last time. Saying goodbye to Medicine, to happy memories in Tuke and Reid, to times spent rushing through Canada Copse in the rain to a 9am lecture, or times spent stumbling through Canada Copse after a drunken night out at a Monkey’s Mondays (oh, Monkey’s Mondays…), to afternoons spent lazing in the quad with Pimm’s, or in Crosslands on a gloomy winter’s evening, to hours spent trying to stay awake in the Windsor Auditorium or trying to get comfortable in MLT, to horrendously early mornings putting milk out at the College Shop, to campus tours in the sun with enthusiastic parents (and not-so-enthusiastic prospective students…), to times spent making that long walk down to Kingswood to play a few games of squash, or evenings with the debating lot getting raucous and ripping each other’s arguments apart, to late nights writing stats reports and lab reports and trawling through journals for a decent reference or two, to walking up that path to Founder’s in the spring when the daffodils are out and realising that there’s nowhere else in the whole world that I’d rather be…

Yeah, I’m not sure I’m quite ready to let go yet. It doesn’t even seem real yet, in fact. Though graduation was lovely- here, look at this photo. Isn’t it lovely? 

[Graduation day photo]

Yes, lovely. Lovely day. Very hot in those ruddy gowns that, as our Principal said, clearly weren’t designed for women- or anyone with shoulders less than 6ft wide. And someone stole my mortarboard after we threw them in the air, so I ended up with an extra small one that I had to balance atop my head for the remainder of the afternoon. But yes, lovely. I very much enjoyed chatting to the lecturers one last time (though I ended up popping to a prof’s leaving do at a pub the following week anyway, so my many emotional goodbyes subsequently seemed a bit redundant). And my parents had a blast. Overall, it couldn’t have gone much better.

But when I look at photos of graduation, as lovely as they are, it seems odd that they’re mine- that this was my graduation, that the psychology cohort I’ve grown to love over these past three years came together that afternoon to celebrate what is, admittedly, a pretty big achievement. It’s weird that it’s happened already. You see photos of friends in higher years graduating before you, you have an idea of what’s going to happen, and you know that one day, that’s going to be you. And yet, when it happens, despite months of expectation, years of anticipating that unavoidable, harshly inevitable day, the day that’s been floating around in the back of your mind ever since you were a fresher- yet, when it happens, it doesn’t seem real.

Eh, I’ll get over it. We all do. 

Now it’s onto the next thing: a master’s in a different city, with different people. A different vibe entirely. However, one thing I am particularly glad about regarding my undergrad years (aside from the degree itself, of course) is just how many wonderful people I managed to meet, and especially how many of those people I’ll be keeping in touch with. Yes, there’s a sense of ‘onwards and upwards’ now, and I might not be going back to the campus at Holloway any time soon, but there’s a lot I’ll be taking with me from that wonderful, wonderful place. 

Yet Another Blog

Don’t worry, don’t worry, I haven’t abandoned this blog (even if it does look like it). No, in fact, I’ve started another blog, which might seem like an odd thing to do considering how I’ve neglected this one, but hey ho. This new blog is a little diary of my year at Oxford, something I failed to do as an undergrad, but that I’ll hopefully be more motivated to maintain now that I’m older and wiser. Apparently. It’s also meant as an aid to others considering applying, especially for postgrad, since there isn’t much information about doing so on the interwebz.

Anyway, here’s the link: http://nonfrustravixi.wordpress.com

Yes, I’ve gone for another Latin title. So sue me.

Taxi

Taxi ride back to campus from a friend’s house in Egham. Taxi driver is chatting shit as usual, rabbiting on about how we only use 30% of our brain or something after I’ve told him I’m a psychology student (can I just clarify here: WE USE ALL OF OUR BRAINS FOR FUCK’S SAKE FFJHJSHAJH). We start driving round the perimeter of campus, the signs that say ‘ROYAL HOLLOWAY 100M TO YOUR LEFT’ coming up, glowing in the night.

After I mention I’m a third year, he says, “oh, are you going to miss this place? Guess not, since it’s Egham”, and then chuckles.

I say, “of course I am. This is my home.”

Fuuuuuuck.

A Lack of Longitudinal Studies

Although I’m very aware I haven’t updated this poor, neglected blog in a while, this is going to be a quick one as there’s one thing that has come up time and time again as I’ve been writing my dissertation.

In a nutshell, my dissertation discusses whether abnormal changes to default mode network activation in patients with Alzheimer’s disease are a potentially useful biomarker, and hence whether they could aid in detecting the onset of Alzheimer’s disease, as well as in tracking its progression. I’m trying to answer this question by testing the relationship between these two phenomena as rigorously as possible, on several levels and on several timescales.

Although there have been a fair few studies in the area over recent years, this latter aim, of testing the relationship over time, has been ignored. Well, not ignored- almost every study which discusses the default mode network in relation to Alzheimer’s concludes with, “a limitation of this study is that it is not longitudinal” or “more longitudinal studies are needed to confirm these findings”. ‘More’ longitudinal studies? ‘More’? That implies that there are any in the first place.

And unless I’m rubbish at lit searches, there really aren’t. 

However, there are plenty of papers talking about the importance of biomarkers when it comes to the detection of age-related diseases like Alzheimer’s. So why the lack of longitudinal studies? Is it a problem with getting them funded, perhaps? I know they take a long time (obviously), and hence funding bodies might be less inclined to invest their cash in something that doesn’t yield immediate results. But in an area where longitudinal studies are so clearly warranted, why are so few being done? Is this a reflection of a lack of faith in the area? Is it because they’re not getting funded, or because no-one wants to do them in the first place?

These are not merely rhetorical questions- I would genuinely like to know some answers. Another issue a lot of these studies have is a small sample size, which I always find ironic, seeing as these papers invariably begin with, “Alzheimer’s is one of the most prevalent age-related diseases in existence, and is THE most prevalent form of dementia”. But apparently, despite this prevalence, getting people to actually take part is a struggle- even though this research could in fact help a lot of people.

Again, why? Am I missing something? Am I just too much of an idealistic undergraduate who doesn’t fully appreciate the funding woes and other complications in getting studies like this done?

That may well be the case. However, in the midst of dissertation turmoil, what I’m most upset about is the fact that without these studies, writing up this bloody thing has been a lot harder.

I See What You Did There

The more I do of this ruddy project, the more I begin to remember why I chose it in the first place.

Early visual processing has to be one of the most beautiful aspects of psychology I have studied so far. Take simple orientation selectivity in the primary visual cortex, for example. Retinal ganglion cells sit in the retina of the eye and act as a sort of mediator between the photoreceptors, which receive light from the outside world, and the visual cortices in the brain (this is a very simplified conceptualisation of their position in the hierarchy of vision, and there are other mediating neurons which contribute, but it’ll do to illustrate the point).

Anyway, each of these neurons has a receptive field: a patch of the visual world that it responds to. However, these receptive fields aren’t just uniform patches that light up whenever anything falls on them; they have a clever structure, with a centre which can be excitatory and a surround which can be inhibitory (or vice versa). This means that if a stimulus activates just the centre and not the surround of an ON-centre, OFF-surround receptive field, it produces the most activation in that neuron, whereas covering the whole receptive field elicits barely any response, as the inhibitory surround cancels out the activation of the centre. Get a bunch of these receptive fields overlapping in a row and you can get orientation tuning: a stimulus at the ‘right’ orientation activates a whole row of ON-centres, generating maximum activity that can be integrated and sent on to a simple cell, which therefore responds preferentially to a line of that particular orientation.

This sort of stuff is quite difficult to put into words without the use of diagrams or schematics, but the point is that this simplest idea of receptive fields extracting very basic information can ultimately lead to the wonderfully rich percepts that you and I experience every day, through integration, summation, inhibition, and a bunch of other mechanisms which build on these receptive fields (along with a bit of clever filling in thanks to the rest of our very clever brains).
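Since I can’t draw diagrams here, a minimal numerical sketch might help instead (my own toy version, assuming a difference-of-Gaussians receptive field; all the numbers are made up purely for illustration):

```python
import numpy as np

# Toy ON-centre, OFF-surround receptive field as a difference of Gaussians.
y, x = np.mgrid[-10:11, -10:11]
centre = np.exp(-(x**2 + y**2) / (2 * 1.5**2))    # narrow excitatory centre
surround = np.exp(-(x**2 + y**2) / (2 * 4.0**2))  # broad inhibitory surround
rf = centre / centre.sum() - surround / surround.sum()  # balanced: sums to ~0

def response(stimulus):
    """Linear response: how strongly the stimulus overlaps the field."""
    return float((rf * stimulus).sum())

spot = ((x**2 + y**2) <= 2**2).astype(float)  # small spot on the centre only
full = np.ones_like(rf)                       # light covering the whole field

print(response(spot))  # strongly positive: centre excited, surround untouched
print(response(full))  # roughly zero: the surround cancels out the centre
```

Line up a row of these fields, sum their outputs, and you have the crude beginnings of the orientation-tuned simple cell described above.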

This is sort of vaguely related to my project, which is actually about how we process motion-induced texture boundaries; essentially, the appearance of boundaries when a bunch of dots of equal luminance move about and seem to bump into each other. This is actually a pretty complicated thing to extract, and such stimuli are known as ‘second order’, meaning that instead of being a simple difference between something being there or not being there (light and dark, for example), processing them actually requires a comparison of two points. The easiest way of conceptualising this is to think of first order as a difference in luminance (light versus dark), whilst second order is more about contrast, so it’s relative rather than absolute.
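To make that distinction concrete, here’s a little static sketch (nothing like the actual project stimuli, which are motion-defined; the values are invented purely for illustration) of a first-order boundary versus a second-order one:

```python
import numpy as np

rng = np.random.default_rng(0)
dots = rng.choice([-1.0, 1.0], size=(64, 64))  # zero-mean random dots
left = np.tile(np.arange(64) < 32, (64, 1))    # mask for the left half

# First-order boundary: the two halves differ in mean luminance.
first_order = np.where(left, 0.3, 0.7)

# Second-order boundary: both halves share the same mean luminance (0.5),
# but the dot contrast differs, so averaging luminance reveals nothing.
second_order = 0.5 + np.where(left, 0.05, 0.35) * dots

for img in (first_order, second_order):
    print(round(img[:, :32].mean(), 2), round(img[:, 32:].mean(), 2))
# first_order:  0.3 vs 0.7   -> a simple luminance detector finds the edge
# second_order: ~0.5 vs ~0.5 -> only a comparison of contrast finds it
```

A detector that just averages luminance spots the first edge and misses the second entirely; only something comparing relative contrast across the two halves can see the second-order boundary, which is exactly what makes these stimuli interesting.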

Thing is, although the visual processing of features such as texture or contrast would seem a fairly basic thing for vision scientists to investigate, not much is really known about how we process second order stimuli versus first order stimuli. There are various computational models about, but we’re still trying to identify where exactly this processing takes place, since pinning down where it happens would tell us a lot about how it happens. That’s where our project comes in, and by the looks of the rather sparse literature out there, technically no-one has done this stuff (behaviourally) before- though there are similar experiments looking at slightly different parameters.

Anyway, it really is interesting stuff, and although writing up this project makes me want to rip off my fingernails at times, there’s no doubt that visual processing (in theory, anyway) is a beautiful area to study.

Word of the Day: Discrete Infinity

‘Discrete infinity’ refers to the property by which language constructs, from a few dozen discrete elements, an infinite variety of expressions of thought, imagination and feeling. For example, in English, sentences are built up of discrete units, words- you can have a sentence with 5 words, or with 6 words, but not with 5.5 words. And yet from these discrete units, one can create infinitely many sentences of unbounded length; there is no longest sentence. It is a property unique to human language.
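A quick way to see the idea in action (my own toy illustration, with an invented two-rule grammar) is a single recursive rule over a finite vocabulary:

```python
# Discrete infinity in miniature: a finite vocabulary plus one recursive
# rule yields a sentence of any whole-number length.
def sentence(depth):
    """Toy grammar: S -> 'the dog barked' | 'she said that' + S."""
    if depth == 0:
        return "the dog barked"
    return "she said that " + sentence(depth - 1)

for n in range(3):
    print(sentence(n))
# Each output is a whole number of words (never 5.5), yet for any sentence
# there is always a longer one: there is no longest sentence.
```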

The Last Question

This is why I love university. You meet so many extraordinary people who know about so many interesting and wonderful things.

Case in point- today I received a random message from a fellow debater and good friend, linking me to a quote which goes as follows:

If the brain were so simple we could understand it, we would be so simple we couldn’t.

A nice idea at first glance, but definitely flawed; I’m pretty certain that one day we’ll be able to understand the brain. It’s just going to take a lot of time, and the ‘answer’ will be bloody complicated- like everything else worth knowing.

So then we got onto the topic of knowledge, and finding out stuff, and will we ever actually find out everything? Everything? Because science has progressed so much in the last 100 years and it looks as though we’re just going to find out more and more stuff at an exponential rate.

But everything?

And then he said that new stuff to find out about happens all the time, so it’s as though we can’t keep up.

I replied that this means that our quest for knowledge is futile… Or rather, I’d call it futile if everything else weren’t futile too, but it is, because everything is futile. Everything is futile.

But maybe it isn’t. That’s when he linked me to a fantastic short story by Isaac Asimov, called The Last Question.

And now that is what you must go and read.


(…Though I still think everything’s futile.)