Both the Sound and the Silence
Evidence towards Inequality
Some thoughts during the course Exclusion and Inequality.
Where data is present, it witnesses subjectively. Where it is silent, it overlooks. These subjective points of view can intersect with others, or not at all. A point of view on its own, without intersection, carries intrinsic bias. If held by a large enough body, that becomes an intrinsic bias in the knowledge infrastructure itself.
Take the case study of the partnership between American and Ugandan medical research institutes. The American institute approached the administrative issues solely from its own perspective, using its own self-centric data. This was a bias in the knowledge infrastructure: the Americans got to pose their own leading questions, deciding what to look for, where their perceived 'truth' lay, and where to find it. They did not enter dealings with the Ugandan institute with any knowledge of, or interest in, how a smaller country's institute might run, or what policies that institute might have. The result was a lopsided partnership, with the inequalities we previously addressed.
Now, a similar case study, this time with data present (qualitative), is disability in the open city. There are multiple building and urban codes for increased accessibility: accessible seats on buses, parking spots, corridor widths in buildings, and so on. But there are still many areas around the world where disability legislation does not push hard enough for equitable access. Physical data from America, New Zealand, and other countries has shown that the spatial layout of cities does not account for those with mobility issues, putting them at risk of injury and social exclusion.
Some of the other studies we looked at gathered qualitative data through fieldwork: ethnographic interviews, surveys, observation notes. Starting with They Want Us Out, the study on Palestinian refugees in Denmark: because these refugees have been excluded and deprioritized, the researchers lived with multiple refugee families, recording their experiences and logging their lives as they navigated court hearings, social worker meetings, prison visits, and neighborhood surveillance, all to keep their children safe and their relatives together while subject to clashing policies. Acting as mediators, the researchers recorded this data from a unique, almost 'omniscient' outsider position, and, with further help from media analysis and surveys of how government policy was received, they help us understand the dynamic between government, citizen, and refugee, and how exclusion and marginalization play out under the pretexts of urban policy and regeneration.
A similar approach was taken with Ghanaian citizens in their quest to adapt to poor infrastructure that the government does not prioritize upgrading for them; again, through observations, interviews, and surveys. It allowed a greater understanding of the perseverance of the Ghanaian people, a look into their resilience: they were able to turn public toilets into multifunctional public space, a shared site for business, community, and even politics. A true, free Ghanaian forum. Interviews also showed the public's understanding of government priorities, and their knowledge that they, its own people, were not among them. This works two ways: it strengthens the community and its resilience, but it also further marginalizes them, as they come to establish their lives outside of inadequate official governance.
From there, we have a mix of both qualitative and quantitative methods, through maps: two different studies with maps. The data from both deals with the aftereffects of discrimination based on borders, race, and class, but one involves overlooking, and the other looking far too closely.
So, with the post-Katrina study, the data shows that the primarily African American neighborhood was an overlooked demographic: despite sitting right next to a more affluent, primarily white neighborhood, it received significantly fewer placemarks on the map, more second-hand than first-hand information, and fewer relevant images. This lack of data demonstrates the existence of racialized cyberscapes, where exclusion and discrimination in the physical, social sphere carry over into the digital sphere.
The redlined maps of the 1930s, on the other hand, are the maps that pay special attention to African American neighborhoods: they draw borders around them, socioeconomically excluding residents from opportunities elsewhere, while non-Black residents were outright stopped from buying houses in those redlined neighborhoods.
Shifting into quantitative data, there is also the credit analysis of homeowners from those neighborhoods: Black homeowners were not allowed to buy houses outside them despite having more than enough credit to do so. The economic repercussions of this redlining still carry forward today.
There is also an environmental divide between these neighborhoods and those around them. They sit closer to industrial pollution, and, combined with residents needing to commute farther for work because their neighborhoods have been neglected in that respect, they face far more dangerous levels of air pollution than their surroundings.
Now, dealing with intrinsic bias in quantitative data gets really sinister when we combine it with technology, with algorithms. The New Jim Code (in Race after Technology) has already shown that results were skewed for certain searches: what appeared at the top of the results carried a racist slant. Algorithms process whatever information is given to them. This quantitative data is fed into the algorithm by programmers, people with intrinsic bias, perpetuating an intrinsic bias in the knowledge infrastructure. And I think this ties back into the racialized cyberscape.
Finally, we reach the study Exclusion by Design, and the effects that missing quantitative data can have. Here, the overlooked demographic was actually part of the target demographic of the services, which exposes a major flaw not only in design but in the bias instilled and the inequalities perpetuated in a systemic manner. This phenomenon comes in two forms, digital lag and digital divide, which I found particularly interesting for their similarity and difference.
Digital lag describes the phenomenon where those with reduced access to technology only reach newer, more advanced forms of technology after they have already become widespread and commonplace, never witnessing and experiencing society's shifts and advantages at the same time as everyone else. Digital lag is framed from the perspective of access. Digital divide, on the other hand, is framed from the perspective of exclusion. It underlines that the lag will never truly shorten; it will always exist, and at times it may not even be a lag but a total divide. There are, and will be, points where certain demographics will simply never be able to access some technology, even when 'catching up', because legislation and policy do not emphasize equity and justice for them enough.
Thinking about open ends here… I hadn't considered policymaking based on a lack of data specifically, nor the fact that this lack is itself an inequality/discrimination that leads to further inequality/discrimination, and how that goes on to create more and more issues. The connection between inequality and data was not something I'd explored in such depth before. How far does this go, and where? What are the worst-case legislative scenarios?
What are the advantages of being overlooked in the data?
We talked about how today we have so much data that the question becomes what to do with it, and which data is useful. What about which data is dangerous? What sort of data should we have less of?