Computer science skills are becoming more valuable as technology becomes a ubiquitous and necessary part of daily life. As a result, programming is increasingly part of K-12 school curricula. However, as with most other technologies, making programming accessible to people with visual impairments is very challenging. A team of CU Boulder scientists, led by assistant professor Shaun Kane from the Department of Computer Science, is trying to change that.

Shaun Kane (left) leads CU Boulder’s Superhuman Computing Lab.
Programming allows people to perform computational tasks by giving the computer a series of instructions. Before the internet, programming activities were math-based and relied on complex syntax, which made computer science appealing mainly to people already drawn to math and science. In K-12 schools, teachers often stayed away from programming and focused instead on other uses of the computer.
However, with the rise of the internet, computational thinking has become a highly desired skill in jobs across STEM and non-STEM fields alike. To prepare students for this world, more teachers are introducing programming at the K-12 level. But because the common methods for teaching these skills to young learners are heavily visual, some students are left out entirely.
Using programming skills to create media, such as composing music or making videos, has been shown to greatly increase engagement. Environments built on visual cues can be used to create and share graphical animations, games and interactive stories. Currently, block-based languages such as Scratch and Blockly are widely used to introduce children to programming.

An example of a block-based language, Scratch.
In these languages, different types of onscreen “blocks” can be assembled to create games, interactive stories and animations. Block-based languages use simple mouse movements to connect blocks, so children need not worry about complex syntax and can focus on creating their projects. Blocks have distinct shapes and colors, which makes it easier to visually identify them and assemble programs.
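To make the idea concrete, here is a minimal Python sketch of what a block-based program boils down to: each block is one instruction, and snapping blocks together builds an ordered list that an interpreter simply walks through. The block names and commands are invented for illustration; this is not how Scratch or Blockly is actually implemented.

```python
# Each tuple stands for one snapped-on block: its name and the value
# typed into its slot. These blocks are invented for illustration.
program = [
    ("say", "Hello!"),   # a "say" block with one slot filled in
    ("repeat", 3),       # a "repeat" block wrapping the block below it
    ("move", 10),        # the block nested inside the repeat
]

def run(blocks):
    """Walk the assembled blocks in order, like a child's stacked script."""
    i = 0
    while i < len(blocks):
        name, arg = blocks[i]
        if name == "say":
            print(f"Sprite says: {arg}")
        elif name == "move":
            print(f"Sprite moves {arg} steps")
        elif name == "repeat":
            # run the single following block `arg` times
            inner = blocks[i + 1]
            for _ in range(arg):
                run([inner])
            i += 1  # skip the block we just repeated
        i += 1

run(program)
```

The point of the sketch is that the blocks carry all the structure a textual program would, with none of the syntax: a learner never types a bracket or a semicolon, only stacks pieces.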
Although block-based languages are designed to make computer science accessible to people with little or no background in math and science, people with visual impairments are still left out. People who are blind or visually impaired use screen readers, programs that translate onscreen information into speech or braille. But block-based languages rely on drag-and-drop mouse movements to connect blocks, and these actions cannot be read by a screen reader, leaving visually impaired people unable to use them.
Similarly, the shapes and colors that determine how blocks fit together are uninterpretable by blind learners. And the end products of such programs, animated games or videos, are not accessible through screen readers either. All of these factors make block-based programming both inaccessible and unexciting for non-sighted users.
Several approaches have been taken to make programming languages more accessible. Text-based languages with extensive screen reader support have been developed, and audio-based approaches have been explored to assist blind and visually impaired developers. Similarly, touch-screen and keyboard-based approaches, which require no interpretation of onscreen images, have been explored. But the options are still few and far between.
As an alternative approach, the Superhuman Computing Lab, a research group in the Department of Computer Science, designed StoryBlocks, a tangible programming language that uses physical blocks to create a story with distinct voices and sound effects. Each line of a story is an instruction, equivalent to a line in a computer program. Stories are made up of characters, actions that the characters perform, and programming constructs, which include concepts like sequential flow, loops and decision-making statements.
The characters, actions and programming constructs are all represented by individual blocks, and each block type can be identified by a distinct shape. In the image below, the raised shape on the right of each block indicates whether the block is a character, an action or a programming construct.

An example StoryBlocks program that shows how blocks fit together.
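To give a flavor of how one assembled line of blocks might translate into a program, here is a small Python sketch. The shape names, characters, actions and "repeat" construct below are invented for illustration; the lab's actual block set and internal representation may well differ.

```python
from dataclasses import dataclass

@dataclass
class Block:
    shape: str  # the raised shape that identifies the block type
    value: str  # which character, action, or construct it names

# One "sentence" of the story: a character, an action, and a construct.
# These particular blocks are hypothetical examples.
story_line = [
    Block(shape="circle",   value="lion"),      # character block
    Block(shape="triangle", value="roars"),     # action block
    Block(shape="square",   value="repeat 2"),  # programming-construct block
]

def narrate(line):
    """Turn one assembled line of blocks into spoken-style text."""
    character = next(b.value for b in line if b.shape == "circle")
    action = next(b.value for b in line if b.shape == "triangle")
    construct = next((b.value for b in line if b.shape == "square"), None)
    # A "repeat N" construct plays the line N times; otherwise once.
    times = int(construct.split()[1]) if construct else 1
    for _ in range(times):
        print(f"The {character} {action}!")  # spoken aloud in the real system

narrate(story_line)
```

Because every block is identified by touch rather than by sight, a blind learner can read the same program with their hands that the sketch above reads from a list.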
As the blocks are moved around to form “sentences” and commands, a camera tracks how they are arranged. Based on the assembly of blocks, an audio story is generated, complete with different voices for the characters and sound effects for the actions.
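The tracking-and-narration step might look something like the following sketch. It assumes, purely for illustration, that each block carries a visual marker whose position a computer-vision library can report; the detected positions are hard-coded here so the example runs on its own, and the voice and sound-effect mappings are invented.

```python
# Stand-in detections: (x, y) positions of block markers in the camera
# frame, as a vision library (e.g., an OpenCV marker detector) might
# report them. Real output would come from live video.
detections = [
    {"id": "roars", "x": 220, "y": 100},
    {"id": "lion",  "x": 100, "y": 100},
    {"id": "bird",  "x": 105, "y": 210},
    {"id": "sings", "x": 230, "y": 212},
]

ROW_HEIGHT = 60  # assumed vertical spacing between story lines

# Group detections into rows (story lines), then sort each row left to
# right, so the physical layout becomes an ordered program.
rows = {}
for d in detections:
    rows.setdefault(d["y"] // ROW_HEIGHT, []).append(d)

VOICES = {"lion": "deep voice", "bird": "chirpy voice"}  # invented mapping
SOUNDS = {"roars": "ROAR!", "sings": "tweet tweet"}      # invented effects

for _, row in sorted(rows.items()):
    character, action = [d["id"] for d in sorted(row, key=lambda d: d["x"])]
    # The real system speaks this line aloud; here we print a placeholder
    # showing which voice and sound effect would be chosen.
    print(f"[{VOICES[character]}] The {character} {action}. [{SOUNDS[action]}]")
```

The key design point survives even in this toy version: the spatial arrangement of physical blocks, not anything onscreen, is the program, so no mouse or visual display is ever required.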
The prototype produced by the CU Boulder scientists currently supports simple stories with a limited set of characters and actions. But the goal is to expand the language to teach more complicated computing concepts and, eventually, to make it widely available to visually impaired students.
We need more accessible tools for programming and computer science, with applications across STEM and non-STEM areas alike. By making computer science accessible, we will empower people with disabilities to solve the accessibility problems they may face in the future. This language is one small part of that project, and there is much more to come.
By Varsha Koushik