The user interface architecture

Based upon experience with prototypes produced over many years and in numerous computing environments, we will organize Image Performer’s interface around eight core modules. Three of the modules—color, form, and motion—define the characteristics of the elements that produce the visual performance. A fourth module—the chordset—plays a role akin to that of presets or voicing controls in organs, synthesizers, and similar musical instruments. The fifth module—the canvas—represents the surfaces upon which the performance is projected. The sixth module—trigger pads—provides for the addition of objects to the canvas during a performance. The seventh module—the chordset list—is used to edit, manage, and share chordsets across devices. Finally, there is a module for altering and preserving user preferences.

A 2020 Imager prototype that shows the first six of the eight proposed modules

Each of the modules will occupy a panel. This will, of course, be subject to extension and revision as our design progresses, but these eight represent a good point of departure for the project. After a brief description of the modules, we’ll create structures for managing the panels.

Color, Form, and Motion

The first three modules, related to color, form, and motion, are derived from the work that Thomas Wilfred did on his Clavilux in the 1920s and 30s. Though the specifics of our design will differ from Wilfred’s, his diagram illustrates the complexity of designing a musical instrument for visual performance. A similar diagram for the design of a sonic musical instrument might include melody, rhythm, harmony, and timbre. In a visual instrument’s design, color alone requires addressing that many dimensions.

Thomas Wilfred’s Lumia Diagram
Circa 1940-50, Yale University Archives

I have written extensively about this part of the design, including in a 2000 Leonardo article “Color, Form, and Motion: Dimensions of a Musical Art of Light.”

The Lumi and the Chordset

Image Performer has two particularly important organizing objects. The first is the lumi (Logical Unit for Manipulating Images); the name is a nod to Wilfred’s LUMIA. As its name suggests, a lumi is a basic painting unit used to create dynamically changing images on screen. Each lumi defines a possibly changing form, with colors that can also be changing, moving along some path, at times rhythmically. The lumi’s colors, forms, and motions are continuously updated by oscillators that apply transformations to around a dozen features.

Lumis are collected into chords. The chords allow colors, forms, and motions to be arranged in easily assessed, logical patterns. The chords are, in turn, collected into chordsets. In the current implementation there are eight chords to a chordset. The chordset provides the imager’s voicing at any given moment. Chords and chordsets can both be changed, so an essentially unlimited collection of arrangements, or voicings, is available. As a practical matter, it is most common that one or a few will be used throughout a particular movement or composition.
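To make the containment relationships concrete, here is a minimal sketch of the hierarchy as plain Swift value types. The names and properties (Lumi, Chord, Chordset, hue, and so on) are hypothetical illustrations for this post, not Image Performer’s actual model:

```swift
import Foundation

// Hypothetical sketch of the containment hierarchy: a chordset holds
// eight chords, and each chord holds a group of lumis.
struct Lumi {
    var hue: Double        // stand-in for one of the dozen oscillator-driven features
    var formIndex: Int
    var pathPhase: Double
}

struct Chord {
    var lumis: [Lumi]
}

struct Chordset {
    static let chordCount = 8   // eight chords per chordset in the current design
    var name: String
    var chords: [Chord]

    init(name: String) {
        self.name = name
        // Start with eight empty chords.
        self.chords = Array(repeating: Chord(lumis: []), count: Self.chordCount)
    }
}

var warmSet = Chordset(name: "Warm")
warmSet.chords[0].lumis.append(Lumi(hue: 0.08, formIndex: 2, pathPhase: 0.0))
```

Because chords and chordsets are just containers, swapping voicings amounts to replacing one of these values with another.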

The Canvas

The canvas is a multilayered performance space. Lumis are released onto the canvas through strokes or taps on the canvas, keypresses, or other actions. Once present there, they can be further controlled using triggers, pads, sliders, pedals, and other controllers. They can float through the space, or leave traces in their wake. They can be dropped selectively or set to play out rhythms. The entire canvas can be faded out or erased. In short, control over what is happening on the canvas is like control over what comes out of a musical instrument.

Trigger pads

The trigger pads put immediate control of eight lumi groups under the player’s fingertips. This arrangement provides for note-by-note playing, to accompany musical scales, chords or runs. It also provides a straightforward interaction model for defining the behavior of individual lumis. Other triggering models are available, but this one seems to serve several important interaction roles.

The Chordset List

All of the chordsets you have collected or defined can be organized as lists. The chordset list view provides facilities for editing and rearranging chords and chordsets.

User Preferences

There are numerous options for configuring Image Performer’s arrangement and behaviors. As in many such programs, a preferences panel is provided to assist in doing this.

Getting Started

So, let’s get started looking at some code. In this post I describe the creation of the Image Performer app’s skeleton. We started by creating an iOS app named Image Performer. The Interface was SwiftUI, the Life Cycle was SwiftUI App, and the Language was Swift. The Use Core Data and Include Tests options were left unchecked. In the next panel, we left the Create Git repository on my Mac item checked.

In the ImagePerformerApp.swift file, we added one line, a modifier on the ContentView. As that modifier’s name suggests, it removes the status bar, making the iPad’s whole screen available to the instrument.

import SwiftUI
 
@main
struct ImagePerformerApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
                .statusBar(hidden: true)
        }
    }
}

Most of the work of this post is done by two structs, each occupying its own file. The first, which uses the ContentView that already exists, will manage the eight panels described above. The second, EdgeControls, which occupies a newly created file of the same name, provides eight buttons that toggle the visibility of the user interface elements. Let’s start with ContentView. At the top of ContentView.swift is the function setupShowPanels. It returns a dictionary whose keys are Strings naming the panels and whose values are Booleans recording each panel’s visibility.

func setupShowPanels() -> [String: Bool] {
    let initialPanelConfig: [String: Bool] = ["color": true, "form": true, "canvas": true, "motion": true, "chordset": true, "triggerpads": true, "settings": false, "chordsetList": false]

    // Fall back to the defaults if nothing has been saved yet, or if the
    // saved dictionary can't be read as [String: Bool].
    let theDictionary = UserDefaults.standard.dictionary(forKey: "showPanels") as? [String: Bool] ?? initialPanelConfig

    return theDictionary
}

The let for the constant initialPanelConfig sets up a default that will be used before any UserDefaults have been saved. The second tries to retrieve a dictionary for the key “showPanels” from UserDefaults. One or the other of those is then returned by the function.
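The persistence side of this pattern can be illustrated with a minimal round trip through UserDefaults. The key "showPanelsDemo" is made up for this sketch, to avoid colliding with the app’s real "showPanels" entry:

```swift
import Foundation

// Save a [String: Bool] dictionary, then read it back with a safe cast.
// The key "showPanelsDemo" is hypothetical, used only for this sketch.
let demoConfig: [String: Bool] = ["color": true, "settings": false]
UserDefaults.standard.set(demoConfig, forKey: "showPanelsDemo")

// dictionary(forKey:) returns [String: Any]?, so a conditional cast
// recovers the typed dictionary, with a fallback if the cast fails.
let restored = UserDefaults.standard.dictionary(forKey: "showPanelsDemo") as? [String: Bool] ?? [:]
```

This is the same retrieve-or-fall-back shape that setupShowPanels uses.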

The first line of the structure ContentView uses that setupShowPanels function, assigning its result to a variable with an @State designation. This is so that the resulting dictionary is monitored, both by this struct and, through an @Binding designator, by the EdgeControls struct we will describe later. When any of the Booleans in the dictionary changes, both of those views will be updated.
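Stripped of the panels, that monitoring arrangement reduces to a small sketch. Parent and Child here are generic placeholder names, not Image Performer types: the parent owns the dictionary with @State, the child receives it through @Binding, and a change made through either causes both bodies to be re-evaluated:

```swift
import SwiftUI

// Generic sketch of the pattern: the parent owns the source of truth,
// the child mutates it through a binding, and both views update.
struct Parent: View {
    @State private var flags: [String: Bool] = ["color": true]

    var body: some View {
        VStack {
            Text(flags["color"] == true ? "color shown" : "color hidden")
            Child(flags: $flags)   // pass a binding with the $ prefix
        }
    }
}

struct Child: View {
    @Binding var flags: [String: Bool]

    var body: some View {
        Button("toggle color") {
            flags["color"]?.toggle()   // writes back through the binding
        }
    }
}
```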

The body of ContentView consists of a ZStack that contains eight views, one for each panel. The GeometryReader is used to size and position the views. Here is some of the code. The excluded portion repeats the structure that is used for setting up the color panel, with parameters for each of the remaining seven panels.

struct ContentView: View {
    @State var showPanels: [String: Bool] = setupShowPanels()

    var body: some View {
        ZStack {
            GeometryReader { geo in
                if showPanels["canvas"] == true {
                    CanvasPanel()
                        .frame(width: canvasRect(showPanels, width: geo.size.width, height: geo.size.height).size.width, height: canvasRect(showPanels, width: geo.size.width, height: geo.size.height).size.height)
                        .foregroundColor(.white)
                        .background(Color.black)
                        .offset(x: canvasRect(showPanels, width: geo.size.width, height: geo.size.height).origin.x, y: canvasRect(showPanels, width: geo.size.width, height: geo.size.height).origin.y)
                }

                if showPanels["color"] == true {
                    ColorPanel()
                        .transition(.scale(scale: 0.01, anchor: UnitPoint(x: 0.5, y: 0.0)))
                        .frame(width: geo.size.width, height: geo.size.height * 2/12)
                        .foregroundColor(Color(red: 0.255, green: 0.275, blue: 0.302))
                        .background(Color(red: 0.804, green: 0.804, blue: 0.804))
                        .offset(x: 0, y: 0)
                }

// ...

            }

            EdgeControls(showPanels: $showPanels)
                .zIndex(3)
        }
    }
}
 

For each panel there are attributes defining its drawing level (.zIndex; 0 by default), how its transition should be handled when coming and going (.transition), what its size should be (.frame), how it should be colored (.foregroundColor and .background), and where it should be located (.offset). A couple of the “if showPanels” clauses also contain conditionals that affect the sizes and locations of other panels as well as their own.

The CanvasPanel is different from the others in that its size is dependent upon which other panels are showing. The function canvasRect is used to establish its dimensions and location.

func canvasRect(_ showPanels: [String: Bool], width: CGFloat, height: CGFloat) -> CGRect {
    var theRect: CGRect = CGRect()
    var bottomEdge: CGFloat = 12
    var topEdge: CGFloat = 0

    if showPanels["triggerpads"] == true { bottomEdge -= 4 }
    if showPanels["chordset"] == true { bottomEdge -= 2 }
    if showPanels["color"] == true { topEdge = 2 }
    if showPanels["motion"] == true || showPanels["form"] == true { topEdge = 6 }

    let availHeight: CGFloat = (bottomEdge - topEdge)/12 * height
    let availWidth: CGFloat = width

    if availHeight >= availWidth * 9/16 {
        theRect.size.height = availWidth * 9/16
        theRect.size.width = width
    }
    else {
        theRect.size.height = availHeight
        theRect.size.width = availHeight * 16/9
    }

    theRect.origin.x = (width / 2) - (theRect.size.width / 2)
    theRect.origin.y = (height * ((topEdge + ((bottomEdge - topEdge) * 0.5)) / 12)) - (theRect.size.height / 2)

    if theRect.size.width <= width * 4/12 {
        theRect.size.width = width * 4/12
        theRect.size.height = theRect.size.width * 9/16
        theRect.origin.x = width * 4/12
        theRect.origin.y = height * 2/12
    }

    return theRect
}
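To make the arithmetic concrete, here is a worked example of the calculation, assuming a hypothetical 1200 by 900 point display with the chordset and color panels showing and the trigger pads, motion, and form panels hidden:

```swift
import Foundation

// Worked example of the canvasRect arithmetic. The 1200 x 900 display
// size is arbitrary, chosen only for round numbers. With chordset showing,
// bottomEdge = 12 - 2 = 10; with color showing, topEdge = 2.
let width: CGFloat = 1200, height: CGFloat = 900
let bottomEdge: CGFloat = 10
let topEdge: CGFloat = 2

// 8 of the 12 rows remain for the canvas: 8/12 * 900 = 600 points.
let availHeight = (bottomEdge - topEdge)/12 * height

// availHeight (600) < width * 9/16 (675), so height limits the rect
// and the width follows from the 16:9 aspect ratio.
let rectHeight = availHeight                 // 600
let rectWidth = rectHeight * 16/9            // about 1066.7

// Centered horizontally, and vertically within the middle rows.
let originX = width/2 - rectWidth/2          // about 66.7
let originY = height * ((topEdge + (bottomEdge - topEdge) * 0.5) / 12) - rectHeight/2   // 150
```

Since the resulting width is well above the width * 4/12 minimum, the final clamping branch does not fire in this configuration.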
 

 

Once the eight panels have been defined, there is a call to draw the EdgeControls struct, which is contained in the EdgeControls.swift file. The EdgeControls struct places eight buttons around the edges of the display. These buttons will be used to toggle the visibility of various interface elements. The appearance and location of these buttons will likely change, but the current organization will serve as placeholders that let us get started on the prototyping-with-successive-refinement process. Here is the code that EdgeControls uses to do its work, again with the parts that repeat removed.

struct EdgeControls: View {
    @Binding var showPanels: [String: Bool]
    // Whether the edge buttons themselves are visible; toggled by the
    // bottom-center button below.
    @State private var showEdgeControls: Bool = true

    var body: some View {
        GeometryReader { geo in

            let diam: CGFloat = geo.size.width * 0.1

            ZStack {
                Button("", action: { withAnimation { togglePanel("form") }})
                    .buttonStyle(EdgeButton(diam: diam, theColor: Color(red: 0.482, green: 0, blue: 0.969)))
                    .offset(x: -(diam * 0.5), y: -(diam * 0.5))

// ...

            }

            Button("", action: { withAnimation {
                showEdgeControls.toggle()
            }})
                .buttonStyle(EdgeButton(diam: diam, theColor: Color(red: 0.502, green: 0.502, blue: 0.502)))
                .offset(x: (geo.size.width * 0.5) - (diam * 0.5), y: geo.size.height - (diam * 0.25))

        }
    }

    func togglePanel(_ panelName: String) {
        showPanels[panelName]!.toggle()
        UserDefaults.standard.set(showPanels, forKey: "showPanels")
    }

    struct EdgeButton: ButtonStyle {
        var diam: CGFloat
        var theColor: Color

        func makeBody(configuration: Self.Configuration) -> some View {
            return configuration.label
                .frame(width: diam, height: diam)
                .background(theColor)
                .clipShape(Circle())
        }
    }
}

As already mentioned, the showPanels dictionary is declared as an @Binding, so that it uses the dictionary that was set up by the ContentView. GeometryReader is used once again, this time to establish the diameter of the buttons as well as their locations around the screen’s edge. Seven of the eight buttons toggle the visibility of a panel through the function togglePanel, which then updates the dictionary stored in UserDefaults. The ButtonStyle for the buttons is established in the struct EdgeButton.

Finally, there is a file for each of the eight panels, with file and struct names ColorPanel, FormPanel, etc. At this point, these are simple, most containing a single Text view with a formatting attribute.

import SwiftUI
 
struct ColorPanel: View {
    var body: some View {
        VStack {
            Text("Color Panel")
                .fontWeight(.heavy).font(.title)
        }
    }
}

These files will grow more complex as we design Image Performer’s various functions. But with that, the structure is complete. When you compile and run the code, you’ll see this.

High level organization of Image Performer’s user interface

You can tap the buttons around the edges to show and hide various panels. Some sizes and locations will shift as you do so.

As of February 10, 2021.