Saturday, September 20, 2025

A Rapid Review on Website Accessibility

I hereby present to you a rapid review on accessibility in website development, with the title: Automated Testing for Website Accessibility.

Now, a rapid review could be said to be a method for providing a "quick glance" (overview) at a particular requested topic. By finding evidence in the sources (primary sources, perhaps), the goal is to create a type of review that is slimmer than a full literature review, using perhaps only a single or a few selected databases.

If you are more interested in the rapid review approach in the field of computer science and software engineering, see the reference to Cartaxo at the end of this blog post, which I was given in the course on scientific method where I wrote this rapid review.

This was my first try at the approach, and also my first time using a thematic analysis (on which I have already written a few initial thoughts, and which I intend to elaborate on further).

So with that disclaimer in place, I now present my completed rapid review, which I hope might be useful for practitioners and interesting to researchers.

As I recently wrote in my bachelor's thesis proposal, the aim of this rapid review was to give a useful overview of the current (as of 2025) state of what tools are being used in accessible website development and testing, and, by applying a more quantitative approach (frequency analysis) to an admittedly limited sample, to give some indication of which tools are being used, primarily in accessibility research, and of their 'popularity'.

This was explored in RQ2: What types of automatic accessibility testing tools are there?, where the following figure can be found.

Fig. 2. The ten most frequently used tools in the studies. See appendix for a figure of all studies.

A similar analysis was carried out on the WCAG versions being used in these primary sources (studies), which again, given the limited sample size, indicated that there might be a "lag" in the adoption of the latest WCAG version. The following figure was included in RQ1.

Fig. 1. The different WCAG versions in the studies, as the accumulated number of studies per WCAG version over time (year).

Research question 1 (RQ1) was summarized as: What is possible to test and how effective is automated testing? Besides my analysis of WCAG versions, I looked into various measurements such as coverage, completeness and correctness.

Besides these more quantitative measures discovered in the studies, concepts like testability and effectiveness were also explored.

As in: What is possible to test? And how effective are these automatic WCAG-based testing tools?

Finally, best practices were also examined in research question 3 (RQ3): What are common best practices of using automatic testing tools?

Some of the key takeaways were: do not rely solely on automated testing (source), and combine tools.

Again, repeating the disclaimer: as this is a limited, bachelor-level study, the conclusions and results may be limited as well. Still, I thought it might be an interesting source for both practitioners and researchers.

In either case, I learned a lot myself and will hopefully write my bachelor's thesis in a related area. For now, I enjoyed the methodology and putting it to work, so to speak, as well as working with the analysis.

You can find a link to the rapid review here.

If you wish to cite this rapid review, I'm unsure whether that is appropriate, since it is neither peer-reviewed nor published by any official university source; it is only my own personal publication, so to speak. Therefore, something like:

Larsson, Nils, Rapid Review: Automated Testing for Website Accessibility, 2025, written in the course Computer Science C: Scientific Method at Mid Sweden University, published on the compartdev blog on September 20, 2025.

Other related references

You can also watch the video on performing a thematic analysis using PDF sources and open-source software, which I used when I wrote this rapid review, here.

B. Cartaxo, G. Pinto, and S. Soares, "Rapid Reviews in Software Engineering," Mar. 22, 2020, arXiv: arXiv:2003.10006. doi: 10.48550/arXiv.2003.10006.

Monday, January 13, 2025

Building a responsive flexbox navigation

The research has been standing still for a while. My initial idea was abandoned and I opted instead to go into a more familiar domain: front-end design. So instead of pursuing my idea of topic modeling and content (and text) analysis, I have steered into the area of accessibility and usability. At least for now.

However, in my rapid review, I will need to perform some sort of content analysis, namely thematic analysis. But it will be a manual type of thematic analysis.

Well, that was all well and good, but as I was writing about accessibility I realized it has been a while since I actively pursued front-end design.

Therefore, I have been preoccupied with freshening up a bit on the topic and continuing to explore the possibilities of responsive design. This brought me down the rabbit hole of researching things like Cassowary, constraint programming and adaptive design, as well as "progressive enhancement" and "natural breakpoints".

I finally ended up concluding that flexbox seems to be the breakthrough technology sufficient to deal with a lot of the design issues that can occur in web design.

With that realization, I have been repeating and re-learning some of the concepts of flexbox and decided to design a responsive flexbox navigation.

Navigation can be considered fundamental in any design, so I think it's a good starting point.

Then, of course, how content is structured can also be considered important, especially when it comes to things like usability, accessibility and SEO, but also for aesthetic reasons.

Building this using only vanilla HTML, CSS and JavaScript felt like a good way to practice and "code through" some concepts that I believe are important, whether you'll use vanilla methods or frameworks.

There is still some work to do, but the majority of the work on the navigation now seems to be done. An added bonus would be using CSS animations, but I think I will finish the first rendition first, keeping it simpler and without "all that bling".
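
To give an idea of the core technique, here is a minimal sketch of a responsive flexbox navigation (a simplified illustration with hypothetical class names, not the actual markup or styling of my navigation):

<nav class="main-nav">
  <a href="#">Home</a>
  <a href="#">About</a>
  <a href="#">Projects</a>
  <a href="#">Contact</a>
</nav>

<style>
  /* Hypothetical sketch: lay the links out with flexbox and let them wrap on narrow screens. */
  .main-nav {
    display: flex;        /* put the links on one row */
    flex-wrap: wrap;      /* allow links to wrap onto new rows when space runs out */
    gap: 0.5rem;
  }
  .main-nav a {
    flex: 1 1 8rem;       /* grow and shrink, with a minimum basis per link */
    padding: 0.5rem 1rem;
    text-align: center;
    text-decoration: none;
    background: #333;
    color: #fff;
  }
</style>

Letting flex-wrap and the flex-basis do the work is one way of getting the "natural breakpoints" mentioned above, without hard-coding a media query for every screen width.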

 

A responsive flexbox nav
 



Monday, October 28, 2024

Social computing, Computational Social Science and Sociology and Methodology in Computer Science part 2

Yesterday I wrote some conclusions and something of a summary of my thoughts. Today it was time to "kavla upp ärmarna" (roll up the sleeves) and get going - doing - *something*.

Well, let's call it research - research in its truest sense, I suppose: literally searching and reading about essentially *everything* yet *nothing*, a sort of "throw something at the wall and see what sticks" type of method (not to be confused with throwing a strand of pasta at the wall to see if it's ready to eat - not just the single pasta strand you threw but the whole batch of pasta you presumably made (now that would be funny if you only cooked one strand of pasta) - you can view it as a SAMPLE representing the WHOLE (population)).

See, research is about statistics, that is my ultimate conclusion perhaps.

Okay, enough with the shenanigans. It was time to work - work on that search query, I suppose - so I included all sorts of Booleans and grouping and whatnot, but ultimately needed to go back and just look at individual words and terms. Any long search query is bound to be somewhat cumbersome if you want to really understand what is going on, I think. But for demarcation|delimitation, it really is what you need.

At one point I realized that the mere 20 hits were actually all there was on this particular topic.

However, let's get back to the topic. I suppose, if I have anything cogent to say about this topic which has not been said before (of course not), it is this: qualitative methods are old school - analog and manual. But there is a way out of the misery, and that is by involving computers, naturally!

However, you still need to know what to do with this godly power of computation: first you can go easy with word frequencies and such... then it's time for latent semantic analysis and latent Dirichlet allocation - you can also stumble into stuff like probabilistic latent semantic analysis and latent semantic indexing.

Then it's time for machine learning or, if it was perhaps already included in the previous (may be the case), Supervised Machine Learning (SML), and to get into that sweet Bayesian statistics.

Let the computer do the job, and sit back and enjoy. I guess. Well, you need to prepare the datasets and do the training and install a bunch of software and well, learn some new math and statistics including but not limited to linear algebra *gulp*. But other than that, just sit back and relax; the transistors will do the work from now on.

Well, that would have been the case had I not had a manual method lined up for a literature review, which will likely need to obey certain rules. However, this meta study now has its subject or topic, which is all of the above. I think it's a massive study, but let's home in on the particulars, which I believe will be related to the method (quantitative) to some extent and to the analysis (content|thematic analysis).

Well, it's a mixed methodology, I suppose; but treating the data, which is qualitative, with a qualitative method (thematic analysis, using content analysis methods?) - I guess you can go wrong here. There are divides which need to be clarified, I can *clearly* see. But I think that is what is interesting. Perhaps.

We will see, it looks more philosophical than anything else, but might do for a meta analysis I guess.

Sunday, October 27, 2024

Social computing, Computational Social Science and Sociology and Methodology in Computer Science

These recent weeks I have been struggling with defining a field and a research topic, which should include research questions. That also included some methodology in Computer Science and the related field of Information Systems.

So let's start with what I have concluded so far.

The topic of social computing is a field or part of Computer Science which deals with social aspects of computing. I will not go into the exact definition here but I imagine it reads something like "social aspects of using computers and interacting with computers and computer information networks". I think that might be a decent starting point.

It should be quite "simple", yet as always in academia there is a tendency to complicate things wherever possible, so let's remember that. By simple, I mean: just take the words "social" and "computing", and the field should entail the intersection of these two broader topics.

That would make the topic somewhat interdisciplinary. I think that can be a good thing, meaning I could delve into topics like social science and sociology, as well as perhaps some implementation of computer technology in, for example, networks.

Okay, so this leads me to the methodology part. As I have recently learned and pondered, computer science could traditionally be viewed as belonging to the positivist tradition. That would make sense, as computers are quite quantitative in nature and would therefore lend themselves to a classical scientific approach.

However, when working with social aspects, qualitative aspects also become important. Now, one can go either way: purely quantitative or qualitative but I think a mix might be nice. I'm thinking of a methodology where both aspects are being taken into account. (I will not go deeper into this at this point, but there's a case to make for choosing a mixed methodology in this particular case -- see below for somewhat of an example).

When I researched and read about this, what crossed my mind was the meta level of it, or rather the "computer science" angle, rather than just using some computer technology in a qualitative method. I guess it's wishful thinking, but if quantitative methods could be applied directly to "social data", perhaps something would come out of it.

And it appears that this has been done, especially in the field of computational sociology, where a lot of interesting computer based methods are being used.

It also made me realize that another interest of mine, which I wasn't sure was quantitative, actually was - text analysis. However, I think it holds, or can hold, some qualitative aspects as well.

Well, this might be too simple, or it is exactly what science should be about - connecting different areas which haven't been connected before (well, you most likely won't be the first, but great minds think alike and whatnot)...

I could also see that one uses quantitative methods for "exploratory" purposes and then, perhaps, defines the research problem and topic a bit further. Then, by adding a qualitative "measurement" and combining the two, one gets further in the analysis than by using either of the two alone.

For instance: get the word count of a certain topic in a certain text or group of texts, or get the top 10 word counts in a certain text or group of texts. Let's say that word is something crucial to whatever is being studied; then it would make sense to learn more about this word and its meaning in the context, rather than just running the word frequency function and stating that this word or these words are prevalent.

Well, this is what I've been thinking about, and these are the preliminary conclusions, I guess. Now I just need to pick a relevant "sub topic" and go ahead with my methodology... unless it turns out that I will need to use a specific methodology.

However, what I have described here will be part of my "theory", I suppose, or what I expect to find. This will then be considered somewhat deductive, although I suspect there may have been some moments of "induction" as well. Or rather, my chosen sub topic may or may not have support for what I have just posited.

Thursday, July 13, 2023

The Modded Chassis Project

I had an old Fujitsu Siemens(TM) chassis lying around which I decided to mod a bit. The idea was to just continuously work on this chassis, making various "improvements" - mostly to learn more about how to do chassis mods and to have some fun.

So far what has been done is this: 

  1. repainted the front, 
  2. made a circular shape on one of the sides, 
  3. added some LED lighting, 
  4. and finally put a hinge to hold the side a little bit better.

1. repainted the front

The front was grey and had seen its better days, so I repainted it using black aerosol paint and clear lacquer.

Colour matched with the CD/DVD.

 

2. made a circular shape on one of the sides

I got some help from a friend cutting this circular shape since I didn't have much experience with an angle grinder. I did, however, learn to use an angle grinder, and have used it in a different chassis project. An angle grinder can be difficult to use but is a great tool when modding a chassis, especially thin metal sheets. Be careful and make sure you have the proper training if you decide to use an angle grinder, as it is a very powerful and potentially dangerous piece of equipment.


 

My first cuts with an angle grinder.


 The circular shape.

 
The final result.
 
3. added some LED lighting

I basically just used an LED strip which had adhesive attached to it. It was then connected to the USB port. The LED is RGB, so it can switch between many different colours using a wireless remote control.

For future improvements, it would be cool to be able to program the LED strip and maybe have it synced to music.

RGB LED + remote control plugged in & working.

 The RGB LED strip attached to the side of the chassis.
 
In action.
 
 Illuminating the darkness.
 
4. put a hinge to hold the side
 
I always thought the side was difficult to take off, so I wanted a hinge solution. I opted for one hinge, which was not enough to hold the side in place; therefore it is a bit tilted right now. I think I need two hinges for it to be more stable. It was quite a bit of work to get this in place, but I hope I will be able to make it more stable in the future by adding another hinge.
 
 Drilling the holes in the side for the hinge.

Cutting out some superfluous material...
 
 The hinge attached to the side.
 
 The hinge attached to both the frame and the side.
 

Saturday, July 1, 2023

Adding jar files to a Java Project on the Command Line using Classpath

This is a simple post about how you can add jar files on the command line so that you can incorporate their contents in your own projects. This can be handy if you only want to test a third-party library and not work with an IDE.

The idea is that if you want to use a third-party library contained in a jar file, you simply use the -classpath option in your compile command:

javac -classpath ".:sqlite-jdbc-3.39.3.0.jar" TestClass.java

where sqlite-jdbc-3.39.3.0.jar is the third-party library jar file which you have downloaded - in this case, the SQLite JDBC driver.

Next, you must use the same -classpath option in your run command:

java -classpath ".:sqlite-jdbc-3.39.3.0.jar" TestClass
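
For reference, a minimal, hypothetical TestClass.java could look like the sketch below (this class is not from the post; it simply opens a connection through the JDBC driver found in the jar, to verify that the classpath is set up correctly):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical test class: just checks that the SQLite driver on the classpath can open a connection.
public class TestClass {
    public static void main(String[] args) {
        // The jdbc:sqlite URL is handled by the driver inside sqlite-jdbc-3.39.3.0.jar.
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:test.db")) {
            System.out.println("Connected using: " + conn.getMetaData().getDriverName());
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}

If the jar is missing from the classpath, the getConnection call fails with "No suitable driver found", which makes this a quick way to confirm that the -classpath option is correct.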

You can watch an example of this in the video below.



Friday, June 30, 2023

Junction Table in SQLite SQL Database with Cascade on Update and Delete

This is a complementary blog post to the video I've posted on the topic. In this blog post I will simply provide what to type into the command line of SQLite. The video is about the concept of using a junction table (also called a linking table) which is the "bridge" between two tables. This gives you a layout of three tables which is very useful in many practical applications and conforms to the normalization rules of database design. You will have one table with artists and one table with albums, and through the junction table you can link the artist to the album.

First you will start a new SQLite database file by typing "sqlite database.db" or "sqlite3 database.db", which will open the command-line shell and create the database file in the current directory. The cascading will not work in sqlite3 (and not at all in earlier versions, which did not support foreign keys) unless you first type in:

pragma foreign_keys = ON;


This must be done every time SQLite is started (unless you configure the startup settings).

Create the tables:

artist:

create table artist (
artist_id integer PRIMARY KEY AUTOINCREMENT,
artist_name text NOT NULL);


album:

create table album (
album_id integer PRIMARY KEY AUTOINCREMENT,
title text NOT NULL,
year integer NOT NULL,
tracks integer NOT NULL);


artistalbum: <- junction table

create table artistalbum (
artistalbum_id integer PRIMARY KEY AUTOINCREMENT,
artist_id integer NOT NULL,
album_id integer NOT NULL,
CONSTRAINT fk_artist_id
FOREIGN KEY (artist_id)
REFERENCES artist (artist_id) ON UPDATE CASCADE ON DELETE CASCADE,
CONSTRAINT fk_album_id
FOREIGN KEY (album_id)
REFERENCES album (album_id) ON UPDATE CASCADE ON DELETE CASCADE
);

Add data:

insert into artist (artist_name) VALUES ('ABC');
insert into album (title,year,tracks) VALUES ('CDE',1999,3);
insert into album (title,year,tracks) VALUES ('DEF',1999,3);

Add data in the junction table so that "ABC" has 2 records (the added albums):

insert into artistalbum (artist_id, album_id) VALUES ( (SELECT artist_id FROM artist WHERE artist_name = 'ABC'),
(SELECT album_id FROM album WHERE title = 'CDE') );

insert into artistalbum (artist_id, album_id) VALUES ( (SELECT artist_id FROM artist WHERE artist_name = 'ABC'),
(SELECT album_id FROM album WHERE title = 'DEF') ); 
 

Get all tables:

SELECT artist.artist_name, album.title, album.year, album.tracks
FROM artist JOIN artistalbum ON artist.artist_id = artistalbum.artist_id
JOIN album ON album.album_id = artistalbum.album_id;

Get from all tables where the band name is 'ABC':

SELECT artistalbum.artistalbum_id, album.title, artist.artist_name
FROM artist INNER JOIN (album INNER JOIN artistalbum ON album.album_id = artistalbum.album_id)
ON artist.artist_id = artistalbum.artist_id
WHERE artist.artist_name='ABC';


Delete records using cascade from the "associate table", i.e. the album or the artist table:

DELETE FROM album WHERE title = 'DEF';

You can now notice that the data has been removed from both the "associate table" and the junction table. Make a delete from the album table and the matching records in the artistalbum table will also be deleted. However, the artist table will not be affected, so an additional delete must be made there as well.
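
The ON UPDATE CASCADE part works the same way. As a small illustrative example (not shown in the video), manually changing a primary key in the artist table propagates to the junction table:

-- illustrative example: 100 is just an arbitrary new id
UPDATE artist SET artist_id = 100 WHERE artist_name = 'ABC';
SELECT * FROM artistalbum;

The artist_id column of the remaining artistalbum rows will now read 100, since the foreign key was updated along with the primary key.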

Check out the video for more details on how this works.


