Week 1: Soundtrap

The course began by looking at the online DAW (Digital Audio Workstation) Soundtrap. Although I am used to other DAWs, it was great to get a feel for what students might use and the tools they would have access to.

As part of the class, we discussed the different kinds of tracks available. In the screenshot above we can see examples of audio tracks (top five coloured lines), and MIDI (the bottom two lines). We also experimented with automation in Soundtrap; although there are relatively few options, such simplicity is a great starting point for students who are new to working in DAWs.

Soundtrap also has an in-built beat maker. Like Soundtrap’s automation, this feature is very simple, which is a positive: it is very intuitive and a great way to begin discussions with students around drum beats, though the lack of versatility could reduce its effectiveness in later stages. Nonetheless, having the beat maker as part of Soundtrap rather than a separate program allows for easy integration into student compositions and encourages students to think in a more holistic manner than they might with a stand-alone beat program.

Additive Synthesis VS The World

Before I say or do anything else, let me first apologise in advance. This entry is going to be a long one, and may feel like some bizarre kind of blog baseball game. Please bear with me though, because additive synthesis is just the beginning of something huge that could really change the way we approach the arts in stage 5.

Hold on tight, folks!

What is Additive Synthesis?

So, let’s take a moment to talk about additive synthesis. “Additive synthesis” is a rather complex-sounding term that I’m sure would make most music teachers scratch their heads in confusion. As usual, technologists and academics have found a rather complex term to describe a rather simple idea, so let’s break it down. “Synthesis” refers to a product made by bringing several different components or elements together. “Additive” simply means we are including, or “adding”, more things into this mixture. I realise that sounds like common sense, but it is important to understand. Basically, when we are talking about additive synthesis, we are adding different elements together to create a new, mixed product.

What are these elements we are adding together? Sound waves.

Additive synthesis starts with a single wave. This wave can be any of the simple shapes (sine, triangle, square, saw), at any frequency (which in the music classroom we would describe as pitch) and any amplitude (volume, or dynamic if we’re linking it to musical terminology). To create new sounds, we can change these values, add new waves, or affect the simple wave in new and different ways. I won’t go any further into additive synthesis as a concept here, but if you do want to explore the idea more, there will be some links at the end to set you off on your journey.
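For readers who like to see ideas written down precisely, here is a minimal sketch in Python of how those basic wave shapes can be described as functions of frequency, amplitude and time. This is purely illustrative (the function and names are my own invention, not part of any synth or DAW mentioned here):

```python
import math

SAMPLE_RATE = 44100  # samples per second, a common audio standard

def wave_sample(shape, freq, amplitude, t):
    """Return one sample of a basic waveform at time t (in seconds)."""
    phase = (freq * t) % 1.0  # position within the current cycle, from 0 to 1
    if shape == "sine":
        return amplitude * math.sin(2 * math.pi * phase)
    if shape == "square":
        return amplitude if phase < 0.5 else -amplitude
    if shape == "saw":
        return amplitude * (2 * phase - 1)
    if shape == "triangle":
        return amplitude * (4 * abs(phase - 0.5) - 1)
    raise ValueError(f"unknown shape: {shape}")

# One hundred samples of a 440 Hz (concert A) sine wave at full volume:
samples = [wave_sample("sine", 440, 1.0, n / SAMPLE_RATE) for n in range(100)]
```

Changing `freq` changes the pitch, changing `amplitude` changes the dynamic, and swapping `shape` changes the timbre: exactly the three parameters described above.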

Additive Synthesis in the classroom

I am just going to take a moment and discuss my experiences around additive synthesis, which can hopefully shed some light on how to bring these ideas into the classroom. I was first introduced to the idea of additive synthesis when undertaking my Bachelor of Music (Composition) degree. When we approached it in composition, it was very theoretical. It made sense, sure, but I didn’t understand how to USE it. To help us understand how to use it, we were instructed to experiment. Start with a wave, then change the parameters. Add filters. Affect the wave in different ways.

And I did.

I was able to make a few different sounds. It was interesting but not particularly useful. In the time it would have taken me to create something close to what I wanted using the concepts I had learnt, I could have found an online library, paid however much it cost to download, and used the closest preset I could find. What that meant was that my knowledge of the concept was entirely theoretical: I had no idea of its practical applications, or of how any of the filters or parameters I could change actually affected the sound.

I would like to now take you to the discussion of synthesisers in this current course. First, we put together a simple PHYSICAL synthesiser. The physical part is important, because if you got the order of the parts wrong, either no sound would come out or the different parts of your synthesiser would not work as intended. Furthermore, you can easily and visually add and remove different effects on the sound, starting from the bare minimum – power, oscillator and speaker – and progressively adding more effects. Such an experience teaches students in a practical and hands-on manner what a synthesiser is and how it works.

James then showed us a digital synthesiser. This is the part where everything clicked for me, and suddenly I saw WHY and HOW you would use additive synthesis. James started with a preset and, one by one, removed the effects. In front of our eyes, a complex, interesting synth was broken down to a simple sine wave, and our eyes were opened to all of the possibilities this way of approaching sound offers. See, additive synthesis is amazing because it gives you nearly limitless options to explore once you understand how it works. Each student can come up with their own unique sound, sparking new creative ideas. Students can become sculptors of sound, building both musical and theoretical knowledge and experience with sound, and thus deepening their understanding of the musical and even natural world around them.
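For the programmatically inclined, that “start from a sine wave and add more” idea can be sketched in a few lines of Python. This is a hypothetical helper of my own, not the synthesiser James used; it simply sums sine waves (partials) of different frequencies and amplitudes, which is additive synthesis at its most literal:

```python
import math

def additive_sample(partials, t):
    """Sum sine partials at time t (seconds); each partial is a (frequency_hz, amplitude) pair."""
    return sum(amp * math.sin(2 * math.pi * freq * t) for freq, amp in partials)

# Build a richer tone from the odd harmonics of A3 (220 Hz),
# each quieter than the last. One sine partial sounds "pure";
# each partial added makes the timbre more complex.
fundamental = 220
partials = [(fundamental * k, 1.0 / k) for k in (1, 3, 5, 7)]
tone = [additive_sample(partials, n / 44100) for n in range(441)]
```

Running James’s demonstration in reverse, a student could start with `partials = [(220, 1.0)]` and add one pair at a time, hearing (or plotting) how each addition changes the sound.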

Let’s Start Digging Deeper

Now we have additive synthesis in our toolbox as a composition and performance tool. But where can we find existing examples in the wider musical world? Why, in film and game music, of course! If we consider film composers such as Hans Zimmer (The Lion King, Dunkirk, Interstellar, Batman V Superman, etc.), it is nearly impossible to analyse this music in any meaningful way without understanding synthesisers. Nor is it possible for students to recreate sounds inspired by existing pieces when they are creating their own. The same can be said of music for video games, such as Mirror’s Edge (PEGI16+/M), Portal 2 (PG) or Terraria (PG).

Of course, games and films do not exclusively use synthesisers to create their music and sounds. This allows for comparison between sound approaches. Why did the composer use a synthesiser instead of an orchestra? What effect does the use of this particular synthesiser have on the scene? How would you describe the sound of the synthesiser? Can you describe the different components of the synthesiser? Or perhaps even: how does the synthesiser interact with the analog sounds present? Some of those questions are of course harder than others, but they give an idea of how we can approach synthesisers in a music class.

Cross-Curricular, You Say?

So we know what additive synthesis is, we know why we might want to use it, and we know some types of media that we can show our students as examples of how this approach is used in the real world. But it gets even better. Let’s take a sneak peek at the syllabus…

You can see in the cross-curriculum content section of the syllabus that by exploring game music we can easily cover at least one of the competencies: ICT. But it is not difficult to cover more.

Let’s consider games for a moment. What are the elements of a game? Well, there is obviously the soundtrack and sound effects, otherwise we wouldn’t be talking about them in a musical context. Next, we have graphics to consider, so let’s have a chat to the visual arts, visual design and/or photographic and digital media department. Oh, and games need code and logic in order to work, so let’s bring along the IST department as well. And games often need voice acting, for characters and the like, so I’m sure there is a place for the drama department too…

And suddenly, you have a project that crosses the boundaries of the arts. This would definitely not be something you would do in stage 4; in fact, I would only do this toward the end of stage 5. I am also well aware that implementing a project such as this in a school could be very difficult. But there are many benefits to this kind of project. Just some of the benefits off the top of my head include:

  • Preparing students for HSC projects. All of the stage 6 equivalents of the above subjects require the student to complete a major work. Why not give them the experience of creating a major work BEFORE year 12? That way, when students are asked to create something for their HSC, they know how to approach such a task and have developed the necessary skills such as project management. Speaking of project management…
  • Project management. Students will have to learn how to manage their time and resources in order to complete such a project. This will benefit them immensely when it comes to their final years in high school as they try to juggle the completion of their major works and their study.
  • Communication skills and industry simulation. Each student in the group will have to bring their own specialisation to the team. Students will have to learn to talk about their specialisation in a way those not as trained can understand, and learn how to fit their knowledge into that of a greater whole. This reflects how the real world works, giving these students a leg up when it comes to working on their own side projects, or entering into an industry setting.
  • Confidence. At the end of the project, students will have something they can present and be proud of. This can be great for students who are not as confident in their abilities, or those who are having trouble with self-belief. Students can also use the final product as something to show prospective employers alongside major work/s if they decide to follow the creative path through HSC and beyond.

As you can see, a simple yet complex-sounding idea such as additive synthesis can lead us to consider new ways of approaching content and learning in our classrooms and schools.

Oh, and students learnt a thing or two about physics along the way. YEAH, SCIENCE!

Useful Links:

If you are looking for the Australian age ratings for video games, television and film, use the Australian Government’s Australian Classification site.

If you want to find out more about wave-forms and additive synthesis, look through the following:

Writing it all down

Week 3’s class was all about notation software. There are a few things about notation software we should get out of the way before we jump head first into this. Firstly, in nearly every school, using notation software is synonymous with composition. Secondly, these programs look complicated and, more often than not, are complicated. Finally, most notation software is not built for the classroom setting, but rather for composers.

OK, so notation software. What have we got? What options are out there? James has kindly compiled a list of the notation software available:

  • Dorico
  • MuseScore
  • Sibelius
  • Staffpad
  • Noteflight
  • Flat.io
  • Notion
  • Finale

Yay, so now we have a list of options. That’s great, but how do you choose one to use? What should you use each one for? And should teachers be using different software from their students? All very complicated…

To make these decisions a little bit easier, I will be reviewing as many of the programs as I can over the next few weeks. It is important to note that I have more experience with some programs than others, but I will do my best to make my reviews as objective as possible.

Each program will be measured on 6 criteria:

  • Interface: How clear is the interface and how easy is it to use?
  • Speed: How fast can a mid-range user notate what they want?
  • Options: How much control do you have over your work in each program? Are any options missing?
  • Cost: How much does each program cost to purchase?
  • Utility for students: How useful is this program for students in different stages?
  • Utility for teachers: How useful is this program for teachers?

With this information, hopefully we can all make more informed decisions about which program we choose to use.

Reflection – Comp Ed Assessment 1

For the first of our assessments for Composition in Music Education, we were asked to create a mixed-bag arrangement in a way that essentially any teacher could pick up the arrangement and use it. For context, a mixed-bag arrangement (or open orchestration, free orchestration or similar term) is a piece that has been written or arranged to be used with any combination of instruments.

For this task, I decided to create an arrangement of France (The Medieval Era) from the video game Civilization VI. I chose this piece for 3 key reasons:

  1. The piece is contemporary, yet uses musical ideas from the medieval period. Students will normally engage better with content if they have some level of connection or interest in it, and video games are an effective and popular way to tap into this interest. This piece goes one step further than the average video game soundtrack, however, as the music is inspired directly by music written in the location and time period given in the title (Medieval France). The arrangement can then be used as a link between new and old, and discussions about context and musical techniques through the ages can take place.
  2. The melody has a limited range and is modal. This means the melody is easily approachable by a wide range of students, as proficiency on their instrument of choice is not as important. There is also the added benefit that the small range fits well onto Orff instruments, and the chosen key (D minor) requires only Bbs, which Orff instruments provide. Furthermore, the limited range and modal nature of the melody allow for greater ease in aural learning and reproduction by students, as well as making improvisation easier to approach.
  3. The piece is one in a series of works. France (The Medieval Period) is one work in two series: the France soundtrack series and the overall Civilization VI soundtrack series. This provides ample opportunity for discussion of cultural representation in media and music, how progress is portrayed musically in media, and how changes in musical style occur over time. All of these points allow for deeper discussions into the music and its place in the wider world, a vital aspect of music that is sadly often not given enough time in the classroom.

There were three main considerations I had in mind as I approached creating this arrangement: ability to learn parts aurally, opportunities for improvisation, and differentiation and accessibility.

  1. Aural Learning. As mentioned earlier, the piece is naturally suited to aural learning. This has a lot to do with the fact that it is based on French Medieval music, which was usually transmitted aurally. The simplicity of the chords again makes aural learning very approachable.
    The piece as written is already quite approachable to aural learning, as the bass and percussion lines remain nearly the same throughout the whole piece. I made a small change in keeping the same percussion and bass lines going through the B section, rather than changing to something new like in the original piece. This means that the bass and percussion players only need to learn 16 bars in order to perform the piece, making aural learning very approachable for these parts.
    When writing the harmony parts, I wanted to keep them simple to make aural learning as effective as possible. Again, this was helped by the limited chord changes, but by keeping rhythms repetitive and the harmony the same when each section returns, the amount of material students had to learn was reduced.
  2. Opportunities for improvisation. The most obvious opportunity for improvisation is the improvisation section at E. D natural minor and D minor pentatonic are recommended for this section, and the notes of these scales are provided for students. Of course, students do not have to use these scales, but they are there if needed. However, the part I am proudest of is the opportunities to add ornaments. In the original work the melody is often ornamented, but to notate these ornaments would add unnecessary complexity. I therefore simplified these small parts to their bare bones and notated an asterisk, giving the student the opportunity to add their own ornamentation. This is great in two ways. First, students who would like more creative freedom in the melody have the opportunity to take it without other students feeling pressured to do the same. Second, adding ornaments at the performer’s discretion is stylistic of music of this period, and again allows discussion around performance practice and culture to take place.
  3. Differentiation and accessibility. This is basically the key reason to create a mixed-bag arrangement. Differentiation is important as it allows all students to engage with the task at a challenge level appropriate to themselves (Standerfer, 2011). The arrangement was differentiated both between and within parts. For example, the A section uses 2 chords which change quite slowly, whereas the B section has 4 chords, usually changing every half bar. Simplified parts are provided for the melody and bass lines, as well as TAB for guitarists and bass guitarists, and chords for piano and guitar. The arrangement was also moved from A minor to D minor to allow notes to sit more comfortably in most instruments’ ranges. Where this was not ideal, such as for brass bass instruments, small notes have been added so that the part is still playable.

Above you can see the simplified score of the arrangement.

Reference List

Standerfer, S. (2011). Differentiation in the Music Classroom. Music Educators Journal, 97(4), 43–48. https://doi.org/10.1177/0027432111404078


This is just a quick post to update everybody on what is happening/going to be happening in the next few weeks on this blog.

First up, I’ve been a little slack on posting weekly updates, but those are coming. The one coming up next will be from week 3 about notation software. Following that will be some words about editing software.

After these two, I’ll be doing a post about our recording session. This was a very interesting session as I was suddenly made the director at the start of the lesson. I’ll discuss that experience, how I would have done things differently, and the different educational benefits that may come from placing a student in a similar situation.

There will also be a few other things popping up in the next few days/weeks. I’ll be posting an update on ideas and plans about the big semester project (!!!). I am also thinking of including some work I’ve been doing from another course, as well as some ideas that I have been working on inspired by that course.

Lastly, to expand upon that composition idea, I will attempt a series of reviews of different notation programs, covering their pros and cons and their applicability in the classroom and for practising music teachers.

That’s all for now!

Week 2: Recording

Week 2 saw the class coming together to begin recording a short music video. This is a vital skill for a music educator, as part of our job involves recording performances, helping with productions and (most likely) helping other teachers use the equipment.

The video and audio recording setup

For this task, we made use of several different video and audio recording devices. On the right of the photo you can see two large video cameras; the one in the centre of the image was our default camera, while the one on the right was used for close-ups. We also made use of a small hand-held video camera (you can see me holding it in the photo) and a phone camera. The reason for the different video approaches was to allow for different angles and cuts, as well as to gain experience using different technology. I found working with the different cameras really fascinating: although I have experience working with audio recorders such as the Zoom in the photo, this was my first time using much of the video recording tech.

Lights, Camera, Action!

It was also interesting seeing how much difference lighting makes to the image seen by the cameras, and just how different the image our eye sees is to that which the camera sees. The light in the above photo looks super extreme (as it did in person), but for the video cameras it looked just right. We will have to wait to see exactly how it affected the hand-held recordings, but I’m sure whatever the result it will be a great learning experience and influence future recording decisions.

Hi, I’m David!

Hi, I’m David, and I am a music educator, composer and violist. Through this blog I will be sharing my experiences and reflections from the technology course, part of the music education degree at the Sydney Conservatorium of Music.

To see some examples of some projects I have been involved in, have a look at Konzertprojekt’s Facebook page, and my own personal page.

Here is some music I like to listen to:

Also, here is a really cool video that can give some ideas for teaching rhythm in the classroom.
