Updated: Jun 23, 2021
Learning to code with physical objects
This semester we had the opportunity to write code that operated a physical machine: an old-fashioned pen plotter. In the first task we used it to create a series of shapes and even make art! In the second we taught an agent to play the ultimate doodling game, tic-tac-toe. My key learning is that relatively simple code can behave in unexpected ways when actuating a physical device. The plotter moved in ways I did not anticipate or account for in my code. For example, the original code provided by the course instructors did not bind the plotter's movements to the page, even though the output appeared correct when simply printing to a file. My realisation is that code is never fully tested until it runs successfully on the physical device, and steps such as choosing the right pen, lining it up correctly, keeping the paper flat and ensuring the right lighting are essential to the program running effectively. While these may seem trivial, they add significant testing time, and this should be factored in when planning cyber-physical projects.
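The page-bounding bug could be guarded against in software. The sketch below is a hypothetical illustration, not the course's actual code: the page dimensions, margin and `safe_move` function are all my own assumptions about how such a clamp might look.

```python
# Hypothetical sketch: clamp every plotter move to the drawable page area.
# A4 dimensions and the 10 mm margin are illustrative assumptions.

A4_WIDTH_MM = 210.0
A4_HEIGHT_MM = 297.0

def clamp(value, low, high):
    """Restrict a coordinate to the closed interval [low, high]."""
    return max(low, min(value, high))

def safe_move(x, y, width=A4_WIDTH_MM, height=A4_HEIGHT_MM, margin=10.0):
    """Return a move target guaranteed to stay on the page.

    Printing to a file tolerates out-of-range coordinates; a physical
    plotter does not, so each target is clamped before being sent.
    """
    return (
        clamp(x, margin, width - margin),
        clamp(y, margin, height - margin),
    )

# A point well off the page is pulled back inside the margins:
print(safe_move(500.0, -40.0))  # -> (200.0, 10.0)
```

A guard like this keeps the file-output and physical-output paths behaving the same way, which is exactly the mismatch I ran into.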
These learnings provoke a few key questions:
Should we adjust the software development lifecycle (SDLC) when programming cyber-physical systems (CPS)?
What engineering principles should we follow in the design, build and assurance of these systems?
Software Development Lifecycle (SDLC)
I do not think we need to reconsider the SDLC in order to write programs successfully for CPS: the phases of analysis, design and development, through to testing and release, are as applicable to CPS as they are to traditional software. What I would contend, though, is that CPS are better suited to Agile methodologies such as Scrum than to waterfall approaches. Firstly, Agile suits projects with changing requirements, which would allow the designers of CPS to incorporate feedback from the human/computer interaction earlier in the development lifecycle. Secondly, the interaction between the ‘physical’ and ‘cyber’ elements can often cause unintended consequences; being able to test for these in every sprint, rather than at the end of the whole development phase, makes issues easier to isolate and therefore likely reduces redundant code and effort.
Engineering Principles
Every company I have worked for with a software engineering department uses a set of principles to guide the team's work. These are usually loosely based on ones provided by external bodies such as The Open Group Architecture Framework (TOGAF), and typically include separation of concerns, modularity, abstraction and consistency. While these are all undoubtedly relevant when creating CPS, there may be others required specifically to deal with the human/computer interaction. What these are I do not know, and further research is needed, though a starting point could be principles requiring model separation, interpretability, or the use of checkpoints to allow for incremental training of models. These would be complementary to the emerging ethical standards and would be specifically for engineers to use during the SDLC. A future research area perhaps!
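To make the checkpointing idea concrete, here is a minimal sketch of what incremental training with checkpoints could look like. Everything in it is an assumption for illustration: the model is a toy dict of weights, the file name is made up, and `train_one_epoch` is a stand-in for real training.

```python
# Minimal checkpointing sketch: save state after each training increment
# so a later run can resume rather than start over. All names are
# illustrative assumptions, not any particular framework's API.
import json
import os

CHECKPOINT = "model_checkpoint.json"  # hypothetical file name

def load_checkpoint():
    """Resume from the last saved state, or start fresh."""
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"epoch": 0, "weights": [0.0, 0.0]}

def save_checkpoint(state):
    """Persist the current state so training can be resumed later."""
    with open(CHECKPOINT, "w") as f:
        json.dump(state, f)

def train_one_epoch(state):
    """Stand-in for a real training step: nudge the weights slightly."""
    state["epoch"] += 1
    state["weights"] = [w + 0.1 for w in state["weights"]]
    return state

state = load_checkpoint()
state = train_one_epoch(state)
save_checkpoint(state)  # a later run picks up from here
```

The appeal for CPS is that an agent interacting with a physical device can be trained, stopped, inspected and resumed in small increments, rather than retrained from scratch after every hardware session.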
[Image: An example of the plotter not being aligned correctly]
[Image: My artwork being produced]
[Image: Time and again the plotter would not recognise the human input]