Planet OpenNews


Pietro Passarelli

Thu Nov 09 2017 11:24:05 GMT+0000 (UTC)

This is an extract from the gitbook ‘How to tell compelling stories out of video interviews’.


There are different styles and approaches to making a documentary. This section aims to address questions that are often raised around the supposed objectivity or subjectivity of the medium and its limitations for knowledge discovery and knowledge sharing, and proposes some different perspectives from which to examine the problem.

What follows is the bare minimum (I think) you need to know about documentary film history and theory if you work with the audio/visual medium. I draw and expand on an essay I wrote during my MA in documentary films a while back.

I might be over-simplifying here, but I think there are three main concepts worth knowing about when it comes to documentary film history and theory, especially if you are working in documentary or factual production, or even in some other form of storytelling for that matter.

One is the American concept of Direct Cinema (equivalent to Free Cinema in the UK), the second is the French idea of Cinema Verite, and last but not least there is Werner Herzog’s concept of aesthetic truth.

Capturing Reality

But rather than dive straight into the key historic moments, it might be worth watching Capturing Reality from the NFBC beforehand to get a bit of context. Here’s the trailer, which lays out some of the key issues around documentary films, with interviews with established documentary film-makers.

Early Days

Magic Lanterns

Before we dive into it, it’s worth considering that in its infancy the documentary started as slide shows of travel expeditions, lectures and travelogues, where the explorer would narrate over their slide show in front of a live audience, crafting it into a compelling story as much as possible.

Magic Lantern movies

Magic lanterns were used as projectors to entertain audiences.

See how in this BBC documentary they are compared to a precursor of early cinema.

It’s interesting to see how this tradition is still very much alive in what are now called “digital stories” or “multimedia slide shows”, such as those made by MediaStorm; see One Man Brand or Remember These Days as examples. They still use a bit of video footage, but the audio interview used as narration and the stills are mostly what drives the narrative, as opposed to on-screen actuality.

Another example that resonates with this tradition, I think, is An Inconvenient Truth (trailer), where at its core a lecture by Al Gore is mixed with a voiceover to add a personal journey element to it.

Nanook of the North

Going back to our fast-track history of docs, in Nanook of the North Flaherty draws on the travelogue tradition. However, the narrative was highly crafted during production, staging scenes and sequences, but without voiceover narration, perhaps mostly due to limitations of the medium, and it was presented to the audience as if filmed in an observational, unobtrusive way. Some could argue that this created a precedent for most of the misconceptions about the camera’s supposed objectivity.

Dziga Vertov and Kino-Pravda (“Cinema Truth”)

With the 1920s influence of futurism we see Dziga Vertov’s “Man with a Movie Camera” drawing on the tradition of the city symphony and really pushing the creative possibilities of editing.

There’s much more to be said about Dziga Vertov and Kino-Pravda (“Cinema Truth”), but for now just watch it and see how it sustains your attention and creates a narrative without a voice over or dialogue…

Direct Cinema

In Direct Cinema, the feeling of being there captures the audience. There is no narration, presenter or film-maker intervening on screen.

I guess this is what most people think of when they think of “fly on the wall” or “pure obs doc”.

However, for it to work, they usually chose crisis situations where not only are the contributors so engrossed in what they are doing that they forget about the camera, but also some sort of story arc is guaranteed by the upcoming events that will unfold regardless of the filming.

Historically this comes from 1960s film-makers Robert Drew, Richard Leacock, D.A. Pennebaker, Albert and David Maysles, and Frederick Wiseman, who combined a journalistic approach with documentary film-making, making the most of equipment that was being re-engineered at the time to be more portable (Barsam 1992, p.305).

This allowed them

“to combine Flaherty’s engaging style – a marriage of cinematic narrative conventions to footage based on ‘discovery’ – with the unobtrusive recording method”. - Saunders 2007, p.9.

Cinema Verite

Cinema Verite, inspired by Vertov, combined interviews, the biographical format and portraits of city life, and the focus was on ordinary people in their daily life (Barsam 1992, p.301).

Major players were Pierre Perrault, Chris Marker, Mario Ruspoli, Jacques Rozier, and Jean Rouch. In the verite style the sound was synced, thanks to lightweight equipment. Editing was chronological rather than striving for a dramatic effect (Barsam 1992, p.303).

Jean-Pierre Beauviala invented crystal synchronization, freeing the camera from its connection to the tape recorder, and went on to create the Aaton cameras (Barsam 1992, p.302).

Jean Rouch uses the camera as a catalyst, incorporating interviews and improvisation as tools. To gain insight into hidden thought, the consciousness of the subject, interventions were employed to guide the participants into their inner world while also stimulating their curiosity about one another.

These are the main motivations that led Rouch and Morin to be on-camera participants while directing the film.

Here’s an extract from Chronicle of a Summer, and an interview with Jean Rouch, showing the opening scene where they discuss the implications of film-making with the main contributor Marceline.

“But I wonder if it is possible to film a conversation naturally with a camera present” - Jean Rouch, Chronicle of a summer.

The 1961 film “Chronique d’un été” (“Chronicle of a Summer”) started off with a camera that weighed 30 to 50 kilos, which restricted them to shooting from a fixed position. Halfway through filming, Michel Brault brought the first lapel mic and a 16mm handheld camera to Jean Rouch.

The best example of the freedom of movement allowed by this type of technology is the scene where Marceline is walking through the Place de la Concorde. She has a recorder and microphone under her coat, and although Rouch and Morin are not aware of what she is saying, they can decide to move the camera further away to increase the dramatic effect of the framing. Rouch’s car was used as a dolly.

However, this focus on the inner world of the subjects in cinema verite has been criticized as self-reflexive, in danger of implying that the only possible film was “the one about how that film was made” (Winston 2008, p.197).

As she so eloquently puts it in this interview, Jean Rouch’s underlying assumption about truth is that people reveal their “real” selves when performing in rituals, in this case the filming ritual done in the cinema verite way, as opposed to when they go about their daily life unaware of being filmed.

If you are interested in finding out more about these two movements, you should check out Cinema Verite: Defining the Moment.

The cinema vérité style finds its legacy in the more recent, more mainstream films by Michael Moore and Morgan Spurlock, but it is definitely worth knowing about the British NFTS tradition of Nick Broomfield, Molly Dineen, and Kim Longinotto.

I’d recommend watching: “Rough Aunties” (trailer) and “Sisters in Law” by Kim Longinotto; “The Heart of the Angel” and “Geri” by Molly Dineen; “Kurt & Courtney”, “Soldier Girls”, “A Time Comes”, and “Tracking Down Maggie” by Nick Broomfield. Check out this interview with the latter.

He makes a very interesting point, and you can see how it raises ethical questions and places ethical editorial responsibility on the film-maker. We’ll get back to this point, but for now let’s consider Michael Moore’s approach.

We like non-fiction and we live in fictitious times. - Michael Moore

To take this issue a step further, I’d strongly recommend watching “Manufacturing Dissent” (watch); see the trailer below, where film-maker Debbie Melnyk criticises Michael Moore for “bending” the facts in some of his docs.

However, before doing so, to avoid spoilers and to put the criticism into context, you should probably first watch “Roger and Me” (trailer), “Bowling for Columbine” (watch) and “Fahrenheit 9/11” (watch), if you haven’t already.

Werner Herzog’s concept of “Aesthetic truth”

In today’s documentaries most productions use a mixture of direct cinema and cinema verite, leaning more towards one or the other, and most often aim for interviews mixed with actuality. The most amazing combination of this I’ve seen on TV recently is the BBC Three Life and Death Row series. What I like about it is that it does not resort to narration-led voiceover; it strikes the right balance between interviews and actuality, using those to drive the narrative.

I think it then becomes very useful to understand Werner Herzog’s idea of aesthetic truth and why, for him, the boundaries between documentary and fiction are therefore very blurred.

But before we do so, let’s consider this interview I made for the BBC Academy College of Production with John Douglas, one of the directors of an episode of the Life and Death Row series I already mentioned.

The key takeaway here is the emphasis on the relationship with the contributors and on representing your understanding of their truth without making it about you.

Back to Herzog and his concept of aesthetic truth, which is something beyond the actuality and the factual description of the event: it is its manipulation to represent a higher truth of the moment, to shed light on elements that would otherwise be missed (Herzog & Cronin 2002, p.238).

In his words:

So for me, the boundary between fiction and ‘documentary’ simply does not exist; they are all just films. Both take ‘facts’, characters, stories and play with them in the same kind of way. I actually consider Fitzcarraldo my best ‘documentary’. So I fight against cinema verite because it reaches only the most banal level of understanding of everything around us. I know that by making a clear distinction between ‘fact’ and ‘truth’ in my films, I am able to penetrate into a deeper stratum of truth most films do not even notice. The deep inner truth inherent in cinema can be discovered only by not being bureaucratically, politically and mathematically correct. In other words, I start to invent and play with the ‘facts’ as we know them. Through invention, through imagination, through fabrication, I become more truthful than the little bureaucrats. This is an idea that will become clearer when we discuss some of the later films like Bells from the Deep and Lessons of Darkness. - Herzog & Cronin 2002, p.240

Talking about “Little Dieter Needs to Fly” (Herzog 2007)

Herzog says:

We were very careful about editing and stylizing Dieter’s reality. He had to become an actor playing himself. Everything in the film is authentic Dieter, but to intensify him it is all re-orchestrated, scripted and rehearsed. - (Herzog & Cronin 2002, p.265)

Personally I don’t find the re-orchestrating and scripting of events very interesting in itself, but the idea of representing a moment that is artificially reconstructed, to fabricate and reproduce emotions and feelings that belong to that moment, is nonetheless quite compelling.

Here’s an extract from an episode of the BBC Imagine (Alan Yentob) series, ‘Werner Herzog: Beyond Reason’.

Alan Yentob asking Herzog about the decision to subsequently make a feature film out of Dieter’s story, which was then called Rescue Dawn, raises the question of choosing documentary over fiction, which I find an interesting point for debate.

Facts and reality quite often are not enough. You need an enhancement, an intensification of it. Some sort of an essential version of things, to make things transparent. - Herzog

For instance, take a film like 127 Hours: it could have been a docudrama in the style of [Touching the Void](http://en.wikipedia.org/wiki/Touching_the_Void_(film)), but Danny Boyle would argue that an actor performing is best at re-creating the truth and getting the audience into the cinematic story, versus making a docu-drama.

I don’t think there’s a clear-cut answer to this question; it mostly depends on the story and on the film-maker’s skills and sensibility. Nonetheless, I agree with the point raised in Imagine that “Rescue Dawn” ended up being more of a documentary, because of how literally it reconstructed every detail without leaving room for imagination and interpretation, while “Little Dieter Needs to Fly” was more of a work of fiction (in a good way) because of the performance and re-enactment. I find that interesting food for thought, especially if you think about Jean Rouch’s idea, mentioned earlier, that performance in rituals is when people are their “real” selves.

The key point Herzog makes, however, is this:

It was my job as the director to translate and edit his thoughts into something profound and cinematic. (Herzog & Cronin 2002, p.265)

On the flip side, this also means that sometimes you’d avoid including material that could have great “dramatic impact”, because it is not the right thing to do, both for ethical considerations and for the story. See Werner Herzog explaining it in his own words in regard to the decision not to include the recording of the final moments of Timothy Treadwell and his partner being eaten by a bear in “Grizzly Man”.

It [the audio recording] was so terrifying and horrifying that it was immediately clear it was not going to be in the movie […] and of course there is a responsibility that as a film-maker you have, because you’d violate the rights and the dignity and death of these two individuals; you just don’t do it.

However, it is an important, pivotal moment in the story, and Herzog cleverly finds another way to include it indirectly, through an interview with the doctor who performed the autopsy, had listened to the tape, and can describe it, as well as by filming himself listening to it while talking to one of the key contributors.

I’ll close this section with a good interview with Werner Herzog about one of his latest films, “Into the Abyss”, where he talks about his process, coincidentally about death row as well.

You can also consider how this applies to other forms of documentaries.

Some more docs you should watch are:

Grizzly Man

Herzog’s “Grizzly Man”, definitely worth a watch if you haven’t seen it.

The Thin Blue Line

Errol Morris’s “The Thin Blue Line” (trailer), definitely worth a watch, as a great example of plot-driven narrative.

Touching the Void

“Touching the Void”. I saw this when I was 16 at the cinema, and it is what eventually got me into documentaries.

Searching for Sugar Man

“Searching for Sugar Man”

And this great interview with the film-maker, Malik Bendjelloul, about the behind-the-scenes making of the film, working with contributors and establishing trust and commitment. (Watch the film first, though.)

But I want to stress that it doesn’t always have to be some sort of reconstruction of something that happened in the past, as these last examples would suggest; for instance “Encounters at the End of the World” (trailer) or “Into the Abyss” (trailer) by Herzog deal with events unfolding in the present.

I chose these last examples because they are films where, taking for granted that the camera is not objective, the question shifts to what is the best way to use all the multi-sensory elements of the film-making medium to reconstruct the truth of the moment of the story you are trying to tell.

Lastly, I’d say that “Into the Abyss” by Herzog is in a more cinema verite style, in contrast with the BBC “Life and Death Row” series, which I personally think hits more of a sweet spot between the two styles (direct cinema / cinema verite).

It’s all about story

Nonetheless, it’s all about story; not as hard and fast rules, but as principles, which you need to know about to be able to break.

And the reason I say this is that this last point overshadows, to a certain extent, the previous schools of thought, in the sense that once you’ve figured out what the possible story is, then you can work out the best way to tell it.

Sure, it’s a given that storytelling and story crafting might come easier to some, but nonetheless it’s a craft, and as such it can be learned.

And with this thought in mind I’ll leave you with a clip from Adaptation where Robert McKee answers the question:

“What if the writer is attempting to create a story where nothing much happens?[…] more of a reflection of the real world?”

Aligning partially scripted speeches, R&D Notes

Pietro Passarelli

Thu Nov 09 2017 00:00:00 GMT+0000 (UTC)

Use case

When working with the Vox Product Team last year, as part of the Knight-Mozilla fellowship with OpenNews, while making autoEdit.io, I had a chance to talk with many video producers across all brands.

Vox.com video producers, when making explainer videos, often rely on scripted voiceover narration. This means that they write their voiceover (almost like an essay) and then add images, scenes, and interview sound bites to it in their video editing software of choice (or sometimes they paper-edit this in Google Docs).

When using autoEdit.io, a fast text-based video editing macOS app that leverages STT systems to edit video interviews, they pointed out that they would rather have the possibility to align the scripted voiceover narration (which on average is about 80% of the spoken words in a piece) and run speech-to-text recognition only on the remaining “off script” parts, e.g. interviews with contributors, rather than having to correct inaccurate speech-to-text on the voiceover script parts for which they already had accurate text.

This use case stood out and was left unresolved due to the complexity of the problem and the lack of available tooling for an efficient solution.

However, it’s a use case that will resonate with much of radio/TV/video production where you are looking to align partially scripted speech. In fact I was recently asked about this via email; my response was long, so I decided to convert it into this blog post to see what other people’s thoughts on this are.

Forced aligners: Aeneas Vs Gentle

I played around with both Aeneas and Gentle, which are two of the more popular open source forced aligner systems out there. If you know of others, please do let me know. I guess they have their pros and cons depending on what you are trying to do, but for now I am currently biased towards Aeneas.

That said, Aeneas is not able to recognise text that has not been provided to the system, while Gentle can do that to a certain extent.

The key difference is the underlying implementation.

You provide the text to align and the audio to both.

Gentle

Gentle could also work without the text input, as a speech-to-text service, which is how I’ve integrated it in autoEdit.io, a text-based video editing app I worked on last year.

Here are the details of the setup/integration between the two. Unfortunately Gentle is packaged as a server, so it needs to be opened as a separate app, although some people have had some luck running it with Docker.

However, as a forced aligner, Gentle first does STT using Kaldi, with a model created by the community around this tool, and then tries to match the generated text with the text you have provided.

This is time consuming because you have to wait for the STT recognition, which takes roughly the same length of time as the media; but on the bright side, it means it can recognise text that you did not provide.

However, out of the box, or with some more digging into the Gentle Python code, you could use it to figure out which sections are “off script” and which ones are not. With that distinction in place, you could then run the “off script” parts through a better quality STT system to get even better results.
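
To make that concrete, here is a minimal sketch of one way to do it in Python, assuming a local Gentle server running on its default port 8765 and the `requests` library. The gap-based heuristic for spotting “off script” sections and the `min_gap` threshold are my own simplifications, not something Gentle provides out of the box.

```python
import requests

GENTLE_URL = "http://localhost:8765/transcriptions?async=false"  # default local Gentle server

def align_with_gentle(audio_path, script_path):
    """Send audio + script to a local Gentle server and return its word-level alignment."""
    with open(audio_path, "rb") as audio, open(script_path, "rb") as transcript:
        response = requests.post(GENTLE_URL, files={"audio": audio, "transcript": transcript})
    response.raise_for_status()
    return response.json()["words"]

def off_script_gaps(words, min_gap=5.0):
    """Naive guess at "off script" sections: gaps longer than min_gap seconds
    between consecutive script words that Gentle aligned successfully."""
    aligned = [w for w in words if w.get("case") == "success"]
    gaps = []
    for prev, nxt in zip(aligned, aligned[1:]):
        if nxt["start"] - prev["end"] > min_gap:
            gaps.append((prev["end"], nxt["start"]))
    return gaps

# Example usage (hypothetical file names):
# words = align_with_gentle("full_mix.wav", "voiceover_script.txt")
# print(off_script_gaps(words))
```

The idea is that Gentle marks each script word with a “case” field, so long stretches of audio with no successfully aligned script words are good candidates for the unscripted parts.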

Gentle/Kaldi system language model

As a side note, Gentle’s STT model is not as great as it could be with more training etc., but one of the reasons why I integrated it into autoEdit.io as an open source option is that under the hood it uses Kaldi for the language model and STT. This is the same system used by BBC R&D for the BBC’s in-house STT, which has been fed media from the BBC Archive and, I’ve been told, gives pretty good results because of the extra training.

So with a better language model Gentle could be much better as well.

Aeneas

**Aeneas**, on the other hand, is very fast, because the underlying implementation is different, and it is also very well documented. It was only after a read through the documentation that I really understood some of the more interesting use cases, such as the possibility of word-level alignment, and how to improve performance by switching to a higher quality TTS.

Given the text and audio input, Aeneas converts the text to audio (doing TTS) and then compares the two waveforms, which is how it figures out the timecodes. Apparently this process is more accurate at line level than at word level.

If you knew which sections are “off script”, with Aeneas it is also possible to pass in a start and end timecode of the section you want to align, and only align that section, because under the hood it uses ffmpeg; see the sketch below.
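
For reference, a minimal sketch of driving Aeneas from Python, based on its documented task API; the `is_audio_file_head_length` / `is_audio_file_process_length` settings are, as far as I understand the docs, how you restrict alignment to a section of the audio, so treat the exact parameter names and values as something to double-check.

```python
from aeneas.executetask import ExecuteTask
from aeneas.task import Task

# Task configuration: plain text input, JSON sync map output.
# The head/process length values (in seconds) skip the first 120s and then
# align only the following 300s; omit them to align the whole file.
config = (
    u"task_language=eng"
    u"|is_text_type=plain"
    u"|os_task_file_format=json"
    u"|is_audio_file_head_length=120.0"
    u"|is_audio_file_process_length=300.0"
)

task = Task(config_string=config)
task.audio_file_path_absolute = u"/path/to/interview.wav"        # hypothetical paths
task.text_file_path_absolute = u"/path/to/voiceover_script.txt"
task.sync_map_file_path_absolute = u"/path/to/sync_map.json"

ExecuteTask(task).execute()   # run the alignment
task.output_sync_map_file()   # write sync_map.json with line-level timecodes
```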

(By the way, there’s also an easier, bundled-up way to install Aeneas and its dependencies for Mac and Windows, https://github.com/sillsdev/aeneas-installer/releases, which is not of much use if you are running it on a Linux server.)

Possible Implementation

A possible implementation could be in three parts, plus some optimisation tricks:

  • Part 1: identify which parts are “off script”.

  • Part 2: align the scripted parts with the original text.

  • Part 3: run STT on the “off script” parts.

For part one, you can use a Gentle type of approach, and identify the sections that are “off script” and those that are not.

With that info you can then do part two with Aeneas only on the “on script” sections, and for part three run the “off script” sections through STT (a high quality commercial one such as IBM Watson STT, Google Cloud Speech, Microsoft Bing, Speechmatics etc.), or just use the Gentle results if they are good enough for your use case.

Obviously I strongly suggest converting the original media you are trying to align to audio at the beginning of the process, because it then becomes a lot faster to do all this splitting etc.; a small sketch follows below.
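
For example, a one-off ffmpeg call (here wrapped in Python, with hypothetical file names) to extract a mono 16 kHz WAV, a format that STT services and aligners are generally happy with:

```python
import subprocess

def extract_audio(video_path, audio_path="audio.wav"):
    """Extract a mono 16 kHz WAV from the original media using ffmpeg."""
    subprocess.run(
        [
            "ffmpeg",
            "-y",              # overwrite the output file if it exists
            "-i", video_path,  # input media (video or audio)
            "-vn",             # drop the video stream
            "-ac", "1",        # mono
            "-ar", "16000",    # 16 kHz sample rate
            audio_path,
        ],
        check=True,
    )
    return audio_path

# extract_audio("interview.mp4")
```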

The transcriber module in autoEdit and some of its components could be of interest for some of these operations if you are working in Node.

Optimization 1 - Speed

To speed up parts one and three, as you can see in the autoEdit code, if you chunk the audio you want to align into 5 or 1 minute chunks when sending it to STT to get the transcription, you can considerably speed up the process. This is how I got autoEdit to always transcribe any length of media within a roughly 5-minute turnaround, because the chunks can be transcribed concurrently (see the sketch after the next paragraph). If speed is not an issue though, you could always optimise for this later.

This optimization was originally an idea from Sam Lavigne
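
A minimal sketch of that chunking idea, assuming ffmpeg is available and with `transcribe_chunk` left as a placeholder for whichever STT service you end up using; the 5-minute segment length is just the example from above.

```python
import glob
import subprocess
from concurrent.futures import ThreadPoolExecutor

def split_audio(audio_path, chunk_seconds=300, pattern="chunk_%03d.wav"):
    """Split the audio into fixed-length chunks using ffmpeg's segment muxer."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", audio_path,
         "-f", "segment", "-segment_time", str(chunk_seconds),
         "-c", "copy", pattern],
        check=True,
    )
    return sorted(glob.glob("chunk_*.wav"))

def transcribe_chunk(chunk_path):
    """Placeholder: call whichever STT service you prefer here."""
    raise NotImplementedError

def transcribe_concurrently(audio_path):
    chunks = split_audio(audio_path)
    # Transcribe all chunks in parallel, so total time is roughly one chunk's worth,
    # not the full length of the media.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(transcribe_chunk, chunks))
```

One thing to keep in mind (not shown here) is that the word timecodes coming back for each chunk need to be offset by that chunk’s start time before stitching the transcriptions back together.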

Optimization 2 - words

If you are worried about chunking in the middle of a word, you can use ffmpeg to detect silence and try to cut between words as much as possible, but I’d leave that for a later optimization; a rough sketch follows below. I think the FT Labs transcription service has played around with that idea, see the textAV presentation here.
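
Something along these lines, with the noise threshold and minimum silence duration as values you would need to tune for your material:

```python
import re
import subprocess

def detect_silences(audio_path, noise_db="-30dB", min_silence=0.5):
    """Return (start, end) pairs of silent stretches, parsed from ffmpeg's silencedetect output."""
    result = subprocess.run(
        ["ffmpeg", "-i", audio_path,
         "-af", f"silencedetect=noise={noise_db}:d={min_silence}",
         "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # silencedetect logs to stderr as "silence_start: X" / "silence_end: Y"
    starts = [float(m) for m in re.findall(r"silence_start: ([\d.]+)", result.stderr)]
    ends = [float(m) for m in re.findall(r"silence_end: ([\d.]+)", result.stderr)]
    return list(zip(starts, ends))

# silences = detect_silences("audio.wav")
# then pick chunk boundaries that fall inside (or closest to) these silent stretches
```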

Get in touch

Thoughts? Ideas? Alternatives? Have you tried this? Get in touch via email, on Twitter @pietropassarell, or in the Hyperaud.io Slack if you are a member.

BBC #newsHack '17 `@TranscriptionBot`

Pietro Passarelli

Thu Oct 26 2017 00:00:00 GMT+0000 (UTC)

Use case

A journalist working with audio interviews, recorded on their smartphone, for a text article.
Our team got an honorable mention at the BBC News Labs conversational user interface #newsHACK.

Prototype

A user uploads an audio file to a Slack channel; the Slack bot handles transcription and allows the user to query the transcription for questions, answers, both, and insights into the text.

Slackbot Screenshot

Presentation

See github repo README for more info.

That Frustrating Thing About Creating Data Vis in Newsrooms

Lisa Charlotte Rost

Tue Oct 24 2017 00:00:00 GMT+0000 (UTC)

or: Why I started to work at Datawrapper

tl;dr: I believe that it should be easier for journalists to create charts for their articles. The public will benefit from more context, and the Data Vis experts in their newsroom will have time to focus on bigger projects. Datawrapper is my best bet on a data vis tool for journalists.



T H E  P R O B L E M

In newsrooms, two kinds of people deal with data visualisation:

image

On one side are the Data Vis experts. They use their full eight hours per day to create data visualizations (or prepare for it) or to go to data vis conferences. They know everyone in the scene and follow their peers in other newsrooms on Twitter to learn from them, but also to see if they came up with a better election results visualization. And they know shit. They can design, code and analyze data like pros (that’s because they are). They want to use their tools and skills to make something great. Something big. Something grand. And award-worthy! Dare I say field-changing!! And then they get tapped on the shoulder by the journalists and get asked: “Can you make us a bar chart?”

Journalists are the ones on the other side. They’re excited about their ideas, their sources, their stories, their writing. They’re also excited about charts – at least some of them – but only if time allows and if they’re brave enough. Because brave they need to be: Too often they went to the Data Vis experts on the other side of the newsroom and asked them for something seemingly simple. “Oh my, that’s complicated.” was the answer. Or: “That will take at least five hours/two months/a year to make.” So for journalists, charts and maps don’t seem like something that can quickly enhance their articles. Charts seem like a huge deal; rarely worth the hassle.

That leaves us with both sides being frustrated: Journalists are dependent on the Data Vis experts even for the simplest charts. And Data Vis experts want to make something big, but get held back by the Journalists. 

image

The-world-is-not-black-and-white-note: The situation is not like that everywhere. I’m not even sure if the situation is like that in most newsrooms. In many newsrooms, Data Vis experts and journalists have found technical or organizational solutions (like internal charting tools) that work really well for them. But in too many newsrooms, I’ve experienced this kind of frustration first-hand. 

It pains me to see this frustration. 

I don’t want Data Vis experts to make simple bar charts. I want them to make something great, big, grand, award-worthy, dare I say field-changing. Changing the field is important. As a designer by training, I have been interested in finding innovative ways to visualize data since I started to visualize data. It’s just tons of fun. And I’m very happy with the community around data vis exactly because there’s always someone doing something freshly; challenging all the others to think about what we should do, or why, or how. And these people should continue to challenge. They should not be held back by making simple bar charts eight hours a day.

image

The kinds of data visualization I want Data Vis experts to work on. From the 2017 Short List of the Kantar Information is Beautiful Award


I also don’t want journalists to get less excited about charts and maps because it seems like a big hassle. I want journalists to make more charts, not less. This is actually really close to my heart: I want journalism to give us all fewer anecdotes, and more context. (I talked and wrote about this a year ago.) And I see Data Vis as the new kid in town, waving, shouting “Heeeey y’all, I’m perfect for context! I’m here to give overviews! Comparisons! Use me!”. And I see that kid being all disappointed every time an article states “x was done to people 8942 times” / “Our country now does x 3839284 times” without explaining what that means (with a simple bar chart).

image

The kinds of data visualization that should be so simple to make that journalists can work on them.



T H E  S O L U T I O N

So that’s where I see the solution: It should be damn simple to make that bar chart. For journalists to be independent, and for Data Vis experts to save time for the big stuff. No code. Just data in, bar chart out. 

And that’s where a charting tool called Datawrapper comes in, and my reason to start working on it. After trying out 24 data vis tools, plus Datawrapper, I found that Datawrapper is the best choice for exactly that use case. Datawrapper sucks at scientific visualizations (better use R, Python or Plotly for that). It’s not great for analyzing big data sets (I recommend R or Tableau). And you won’t be able to make innovative stuff with it (d3.js, NodeBox or Lyra are a good choice for that).

But Datawrapper makes it damn simple to create that bar chart. 
Data in, bar chart out. 

Datawrapper delivers so well because it is niche. From the beginning, the team behind it didn’t ask: “What do people out there need to create data vis?”, but: “What do journalists need to create simple charts for their web articles?” Turns out: Responsiveness! Done. Good chart design even if you have no clue about charts! Done. Interaction! Done. Maybe also a way to style it in the newsroom brand colors and fonts? Done. 

Datawrapper is not perfect yet. Sometimes the tooltips on the new map feature don’t show. Sometimes I change colors and nothing happens until I reload the page. There are smaller bugs here and there. But the team, their goal and how they get there impress me a lot. And their company goal aligns with my professional goal: To enable journalists to make more charts, better-looking and faster.

So that’s why I end my freelance life (mostly for the Berlin newspaper “Tagesspiegel” in the last six months) and start to work at Datawrapper. Let the journey begin

If you’re curious what I will do at Datawrapper, hop over to the Datawrapper Blog, where I wrote a little thing to answer that question.

Fact2 Transcriptions Text Editor

Pietro Passarelli

Wed Oct 11 2017 00:00:00 GMT+0000 (UTC)

Fact2 (pronounced Fast Squared) uses machine learning & neural networks to turn unstructured video, audio, text, PDFs to rich, searchable structured data.

I was contracted to create an open source transcription text editor for their pool of proofreaders to correct the automated transcriptions generated by their system using speech to text services.

For the GitHub repo, see here.

Fact2 Screenshot

Key features

  • supports HTML5 audio and video

  • hypertranscript: double-clicking a word takes you to the corresponding part in the media (audio/video).
    • with customizable interval, default at 3 seconds before
    • in-text indication of media time
  • Rollback button, with customizable rewind interval, default at 15 seconds
  • pause-while-typing toggle, with customizable interval, default at 3 sec.
  • hide video toggle (for when you are concentrating on transcribing/proofreading and the video gets distracting)
  • jump to time
  • usual play, pause, rewind, forward, progress bar etc…
  • Keyboard shortcuts, for all of the above

  • speed controls, and display of playback rate

  • shortcut to display keyboard shortcuts, and display of keyboard shortcuts alongside the interface

  • seamless/plain text editing experience.

  • support to
    • display confidence scores
    • display unaligned words
    • add and display speaker diarization info, with button and shortcut
    • add description info, e.g. [inaudible], with button and shortcut
  • mobile responsive (although double tapping on words is not enabled yet)

Keeping up with OpenNews and SRCCON:WORK

the OpenNews Team

Thu Oct 05 2017 09:00:00 GMT+0000 (UTC)

In our second stakeholder newsletter: details about SRCCON:WORK, a perspective from Brittany Mayes, and a look ahead to Source's 5-year anniversary and upcoming events.

SRCCON:WORK, space to care for and challenge one another

the OpenNews Team

Wed Oct 04 2017 09:00:00 GMT+0000 (UTC)

As we prepare for SRCCON:WORK, we want to share the confirmed sessions so far and details about the ticket process.