It’s the workflow, stupid!

On hold

Life has taken a turn for the busier over the last few years, so this project is on hold for now. I also had to take down the wiki because it had been overrun by spam and malware.

Resuming development

Over the past year or so, I have put filmmaking on hold. Part of the reason was a busy life, and part was a badly behaving computer that would overheat and shut down every time I tried to do any CPU-intensive work on it (such as rendering video). I recently built myself a new computer (4 TB of hard drive space, baby!) and am starting up again. With Blender 2.5 nearing beta, I will be switching to Python 3 and probably doing a significant rewrite of the entire project. I won’t be focusing as much on using .avs scripts within Blender, since that approach is limited to 32-bit Windows.

I will probably start BAAM over from scratch, since the Blender Python API has changed significantly. (This probably affects somewhere near zero people in the entire world.)

BlenderAVC 0.7 released, now including BAAM

I released version 0.7 of BlenderAVC yesterday, which has been updated with a few new features and now includes a Blender build based on the recent 2.48a source code.

The package now includes BAAM (BlenderAVC Asset Manager). This Python script runs within Blender and allows you to browse your video clips (processed by BlenderAVC) and easily add them to the VSE timeline.

Progress report on BAAM and release of all project sources

I’ve been working on BAAM (BlenderAVC Asset Manager), which is a simple asset manager integrated into Blender via a Python script. After a bit more testing I’ll release it and write some documentation.

I have also uploaded a source code package for all tools used in BlenderAVC in order to comply with the terms of the GPL.

BlenderAVC version 0.6 released

Version 0.6 of BlenderAVC features an improved GUI, (fairly) complete support for HDV and 24p (24 frames per second, progressive scan), and a number of bug fixes.

BlenderAVC version 0.5 released — HDV files needed!

I fixed some GUI issues in BlenderAVC and improved the “installation” process (fewer files to unzip). A build of Blender 2.47 is now included. If you have any raw HDV (.M2T) files you can make available, please let me know. I don’t have access to an HDV camera, and I’d love to have some clips for testing.

Released BlenderAVC version 0.2

I released a binary package of BlenderAVC version 0.2 today. Feedback is welcome!

Added HDV support to BlenderAVC

I found some HDV video from a camera I had borrowed a while back and added HDV support to BlenderAVC. It wasn’t too hard — DGMpgDec is (naturally) very similar to DGAvcDev. I also added a separate AVS script for audio only. While this may seem odd, it’s essential on a slower machine, since Blender’s proxy feature does not support audio.

Simple media management

I have been thinking (read: obsessing) over the idea of implementing a simple media management feature in Blender using a Python script. This morning I found a topic — prerendering for sequencer — which shows that the Python API covers at least part of the VSE. So it may be feasible.

Since the BlenderAVC clip meta info is contained in a clips.xml document in each “project” folder, the script could allow you to choose a set of these XML documents and then use them to show your available clips — size, length, thumbnail … ? You could then tell it to place a clip on the timeline and it could automatically create a meta strip containing the video and audio tracks. Hmmmm …
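As a rough sketch of the idea, reading a clips.xml document needs nothing beyond Python’s standard library. The element and attribute names below (clip, name, length, thumbnail) are assumptions for illustration, not the actual BlenderAVC schema:

```python
import xml.etree.ElementTree as ET

def load_clips(xml_text):
    """Parse a clips.xml document into a list of clip-metadata dicts.

    The schema here (clip elements with name/length/thumbnail
    attributes) is assumed -- adjust to match the real clips.xml.
    """
    root = ET.fromstring(xml_text)
    clips = []
    for clip in root.findall("clip"):
        clips.append({
            "name": clip.get("name"),
            "frames": int(clip.get("length", 0)),
            "thumbnail": clip.get("thumbnail"),
        })
    return clips

# Example document using the assumed schema:
sample = """<clips>
  <clip name="intro" length="240" thumbnail="intro.jpg"/>
  <clip name="interview" length="4800" thumbnail="interview.jpg"/>
</clips>"""

for c in load_clips(sample):
    print(c["name"], c["frames"])
```

A script like this could collect the dicts from several project folders and drive a clip-browser GUI from them.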

First project using Blender as an NLE

Yesterday I finished a rough draft of a 20-minute video for my father. It was the first project I’ve done using Blender’s Video Sequence Editor as an NLE, and I was fairly impressed. Using proxies, it is fast, even on my 4-year-old machine. It’s also stable — it didn’t crash once! I really like the meta strips; being able to “tab” into (and out of) them is extremely powerful.

The biggest drawbacks have to do with media management. Loading 40+ clips into the timeline was a little painful, especially since I had to load each audio track in separately and then sync it with the video. The reason: While you can automatically pull in video and audio using an AVS script, Blender cannot yet proxy the audio. Thus, when you play the clip, AviSynth is still attempting to decode the AVC video in real time. This makes for extreme stuttering on my machine.

I am thinking of trying to write a Python extension which would allow an entire set of BlenderAVC clips to be sucked in at once. It would lay them out end-to-end on the timeline, sync up the audio and automatically stuff ‘em into meta strips. It would also be slick to add a strip name from a clips.xml file attribute. Stay tuned!
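The layout part of that extension is simple to sketch in plain Python, independent of the Blender API. Assuming hypothetical (name, length-in-frames) records — which the real script would read from each project’s clips.xml — back-to-back placement is just a running frame counter:

```python
def layout_end_to_end(clips, start_frame=1):
    """Compute start/end frames for clips placed back-to-back.

    `clips` is a list of (name, length_in_frames) pairs. Returns
    (name, start, end) tuples where each clip begins right after the
    previous one ends. The video and audio strips for a clip would be
    given the same start frame so they stay in sync inside a meta strip.
    """
    placed = []
    frame = start_frame
    for name, length in clips:
        placed.append((name, frame, frame + length - 1))
        frame += length
    return placed

print(layout_end_to_end([("intro", 240), ("interview", 4800)]))
# -> [('intro', 1, 240), ('interview', 241, 5040)]
```

The actual strip creation and meta-strip grouping would still have to go through Blender’s Python API, but the bookkeeping above is the core of it.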