
Saturday, September 20, 2025

A Rapid Review on Website Accessibility

I hereby present a rapid review on accessibility in website development, titled: Automated Testing for Website Accessibility.

Now, a rapid review could be said to be a method for providing a quick overview of a particular requested topic. By finding evidence in sources (perhaps primary studies), the goal is to create a review that is slimmer than a full literature review, using perhaps only one or a few selected databases.

If you are interested in the rapid review approach in the field of computer science and software engineering, see the Cartaxo reference at the end of this blog post, which I was given in the course on scientific method where I wrote this rapid review.

This was my first try at the approach, and also my first time using thematic analysis (on which I have already written a few initial thoughts, and which I intend to elaborate on eventually).

So with that disclaimer in place, I now present my completed rapid review, which I hope might be useful for practitioners and interesting to researchers.

As I recently wrote in my bachelor's thesis proposal, the aim of this rapid review was to give a useful overview of the current (as of 2025) landscape of tools used in accessibility-focused website development and testing. By applying a more quantitative approach (frequency analysis), albeit on a limited sample, it gives some indication of which tools are being used, primarily in accessibility research, and their 'popularity'.

This was explored in RQ2 (What types of automatic accessibility testing tools are there?), where the following figure can be found.

Fig. 2. The ten most frequently used tools in the studies. See appendix for a figure of all studies.

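To give a flavour of what tools like these automate, here is a minimal sketch (not any of the tools in the figure, just an illustration of mine) that flags `img` elements missing an `alt` attribute, one of the most common automated WCAG checks, using only Python's standard library:

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute (WCAG SC 1.1.1)."""

    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "img" and "alt" not in attributes:
            # Record the src of the offending image, if present.
            self.violations.append(attributes.get("src", "<unknown>"))


def check_alt_text(html: str) -> list[str]:
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations


# Example: one image has alt text, one does not.
page = '<img src="logo.png" alt="Company logo"><img src="chart.png">'
print(check_alt_text(page))  # ['chart.png']
```

Real tools of course cover far more success criteria (contrast, labels, ARIA usage, and so on), but the principle of mechanically checking markup against a rule is the same.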

A similar analysis was carried out on the WCAG versions being used in these primary sources (studies), which again, in this limited sample size, indicated that there might be a "lag" in the adoption of the latest WCAG version. The following figure was included in RQ1.

Fig. 1. The different WCAG versions in the studies, as accumulated number of studies per WCAG version over time (year).

Research question 1 (RQ1) was summarized as: What is possible to test, and how effective is automated testing? Besides my analysis of WCAG versions, I looked into various measurements such as coverage, completeness, and correctness.

Besides these more quantitative measures discovered in the studies, concepts like testability and effectiveness were also explored.

Testability, as in: What is possible to test? And effectiveness, as in: How effective are these automatic WCAG-based testing tools?

Finally, some more qualitative aspects were examined in research question 3 (RQ3) that dealt with best practices: What are common best practices of using automatic testing tools?

Some of the key takeaways were: do not rely solely on automated testing, and combine tools. (See the sources in the rapid review.)
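The "combine tools" takeaway can be sketched as merging findings from several tools and deduplicating them; this is a toy example of mine (the tool names and the (page, rule) report format are hypothetical, not from any real tool):

```python
def merge_findings(*reports):
    """Union findings from several tools, deduplicated by (page, rule).

    Each report is a (tool_name, findings) pair, where findings is a
    list of (page, rule) tuples. Returns a dict mapping each unique
    finding to the list of tools that reported it.
    """
    merged = {}
    for tool, findings in reports:
        for page, rule in findings:
            merged.setdefault((page, rule), []).append(tool)
    return merged


tool_a = ("ToolA", [("/home", "img-alt"), ("/home", "contrast")])
tool_b = ("ToolB", [("/home", "img-alt"), ("/about", "form-label")])

merged = merge_findings(tool_a, tool_b)
# ('/home', 'img-alt') is reported by both tools; the other two by one each.
```

Findings reported by several tools may deserve priority, while findings unique to one tool illustrate why no single tool gives full coverage.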

Again, repeating the disclaimer: as this is a limited study, the conclusions and results may be limited as well. It is also merely a bachelor-level study, yet I thought it might be an interesting source for both practitioners and researchers.

In either case, I learned a lot myself and will hopefully write my bachelor's thesis in a related area. For now, I enjoyed the methodology and putting it into practice, so to speak, as well as working with the analysis.

You can find a link to the rapid review here.

If you wish to cite this rapid review, I'm unsure whether that is appropriate, since it is neither peer-reviewed nor published by any official university source. It is only my own personal publication, so to speak. Therefore, something like:

Larsson, Nils, Rapid Review: Automated Testing for Website Accessibility, 2025, written in the course Computer Science C: Scientific Method at Mid Sweden University, published on the compartdev blog on September 20, 2025.

Other related references

You can also watch the video on performing a thematic analysis using PDF sources and open-source software, which I used when writing this rapid review, here.

B. Cartaxo, G. Pinto, and S. Soares, “Rapid Reviews in Software Engineering,” Mar. 22, 2020, arXiv:2003.10006. doi: 10.48550/arXiv.2003.10006.

Sunday, October 27, 2024

Social computing, Computational Social Science and Sociology and Methodology in Computer Science

In recent weeks I have been struggling with defining a field and a research topic, which should include research questions. That also involved some methodology in Computer Science and the related field of Information Systems.

So let's start with what I have concluded so far.

Social computing is a field, or part, of Computer Science which deals with the social aspects of computing. I will not go into the exact definition here, but I imagine it reads something like "social aspects of using computers and interacting with computers and computer information networks". I think that might be a decent starting point.

It should be quite "simple", yet as always in academia there is a tendency to complicate things wherever possible, so let's remember that. By simple, I mean: just take the words "social" and "computing", and the field should entail the intersection of these two broader topics.

That would make the topic somewhat interdisciplinary. I think that can be a good thing: I could delve into topics like social science and sociology, as well as perhaps some implementation of computer technology in, for example, networks.

Okay, so this leads me to the methodology part. As I have recently learned and pondered, computer science can traditionally be viewed in the positivist tradition. That would make sense, as computers are quite quantitative in nature and would therefore lend themselves to a classical scientific approach.

However, when working with social aspects, qualitative aspects also become important. Now, one can go either way, purely quantitative or purely qualitative, but I think a mix might be nice. I'm thinking of a methodology where both aspects are taken into account. (I will not go deeper into this at this point, but there is a case to be made for choosing a mixed methodology in this particular case; see below for somewhat of an example.)

When I researched and read about this, what crossed my mind was the meta level of it, or rather, the "computer science" angle, rather than just using some computer technology in a qualitative method. I guess it's wishful thinking, but if quantitative methods could be applied directly to "social data", perhaps something would come out of it.

And it appears that this has been done, especially in the field of computational sociology, where a lot of interesting computer based methods are being used.

It also made me realize that another interest of mine, which I wasn't sure was quantitative, actually is: text analysis. However, I think it holds, or can hold, some qualitative aspects as well.

Well, this might be too simple, or it is exactly what science should be about: connecting areas which haven't been connected before (well, you most likely won't be the first, but great minds think alike and what not)...

I could also see using quantitative methods for "exploratory" purposes, and then defining the research problem and topic a bit further. Then, by adding a qualitative "measurement" and combining the two, one could get further in the analysis than by using either alone.

For instance: get the word count for a certain topic in a text or group of texts, or get the top 10 word counts in a text or group of texts. Let's say one of those words is crucial to whatever is being studied; then it would make sense to learn more about that word and its meaning in context, rather than just running the word-frequency function and stating that these words are prevalent.
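The word-frequency step described above can be sketched in a few lines of Python (a toy example on a made-up sentence, not tied to any particular corpus):

```python
import re
from collections import Counter


def top_words(text: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent lowercase words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(n)


sample = "Social computing studies social aspects of computing."
print(top_words(sample, 3))  # [('social', 2), ('computing', 2), ('studies', 1)]
```

The output of such a function would then be the starting point for the qualitative step: picking the prevalent words and examining what they mean in their context.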

Well, this is what I've been thinking about, and these are my preliminary conclusions, I guess. Now I just need to pick a relevant "sub-topic" and go ahead with my methodology... unless it turns out that I need to use a specific methodology.

However, what I have described here will be part of my "theory", I suppose, or what I expect to find. This will then be considered somewhat deductive, although I suspect there may have been some moments of "induction" as well. Or rather, my chosen sub-topic may or may not support what I have just posited.
