Recently I needed to fetch a few things from Dailymotion.
keepvid.com et al. are handy, but the fun for me was to write a command-line version. Here is a function for Bourne-style shells:
function dailygrab {
    durl=$1
    miniurl=$2
    fullurl=`curl -s "$durl" | grep Location | head -1 |
        perl -pe 's/.*flashvars=\"url=\&url=http/http/g;
                  s/&.*//g;
                  s/%([A-Fa-f0-9]{2})/pack("C", hex($1))/seg;'`
    if [ -z "$miniurl" ]; then
        miniurl=`echo "$fullurl" | sed 's/.*\///; s/[?].*//'`
    fi
    wget -c -O "$miniurl" "$fullurl"
}
The usage:
- go to your destination directory
- dailygrab the_dailymotion_url [optionaloutputfilename]
The URL does not have to be the permalink: a search-result URL is enough.
For example, I searched for the keyword "debian", copied the link of the first video (right-clicking in the browser), went to the console and typed:
% dailygrab http://www.dailymotion.com/relevance/\
search/debian/video/xizji_xgl-sous-debian-sid
...
10:26:44 (151.00 KB/s) - `885870.flv' saved [839032/839032]
What's cool, thanks to wget's -c flag, is that if the download fails you can resume it.
You need curl, wget, perl and a decent shell. And of course every script can be optimised, rewritten, etc. Feel free to criticise it!
PS: Any change in how Dailymotion renders its pages might break this function, of course.
(Edit, 22.04.2007, added an optional 2nd parameter for the output file)
(Edit, 17.07.2007, Added &url= in the perl parser)