Search Results

Search found 4010 results on 161 pages for 'audio fingerprinting'.

Page 148/161

  • How can I attach a file using Wordpress custom fields / meta boxes?

    - by shipshape
    I am using Wordpress's add_meta_box() function to add customized meta fields to the Add New Post page, like this. I want one of these fields to allow the user to upload a file, so that a single image, pdf, audio file, or video can be associated with the post. The closest example I've seen is this one. Unfortunately it does not suit my needs, as I want my file to be processed by Wordpress's Media Uploader - so it should appear in the Media Library afterwards, and thumbnails should be generated according to the Media settings. I think ideally there would be a way to tap into Wordpress's existing Add Media dialog, and simply output the URL of the uploaded file into a text box, but I don't see how to do that. This question is similar, but the answers are a little clunky - I would like to keep this super simple for my end users. How can I accomplish this? Please do not recommend plugins such as Flutter or Magic Fields - I have tried these and they do not suit my purposes (I want the images to be processed by Wordpress's Media Uploader). I am using Wordpress 3.0-alpha. Thanks!

    Read the article

  • Dynamically created iframe used to download file triggers onload with firebug but not without

    - by justkt
    EDIT: as this problem is now "solved" to the point of working, I am looking to have the information on why. For the fix, see my comment below. I have a web application which repeatedly downloads wav files dynamically (after a timeout or as instructed by the user) into an iframe in order to trigger the default audio player to play them. The application targets only FF 2 or 3. In order to determine when the file is downloaded completely, I am hoping to use the window.onload handler for the iframe. Based on this stackoverflow.com answer I am creating a new iframe each time. As long as firebug is enabled on the browser using the application, everything works great. Without firebug, the onload never fires. The version of firebug is 1.3.1, while I've tested Firefox 2.0.0.19 and 3.0.7. Any ideas how I can get the onload from the iframe to reliably trigger when the wav file has downloaded? Or is there another way to signal the completion of the download? Here's the pertinent code: HTML (hidden's only attribute is display:none;): <div id="audioContainer" class="hidden"> </div> JavaScript (could also use jQuery, but innerHTML is faster than html() from what I've read): waitingForFile = true; // (declared at the beginning of closure) $("#loading").removeClass("hidden"); var content = "<iframe id='audioPlayer' name='audioPlayer' src='" + /path/to/file.wav + "' onload='notifyLoaded()'></iframe>"; document.getElementById("audioContainer").innerHTML = content; And the content of notifyLoaded: function notifyLoaded() { waitingForFile = false; // (declared at beginning of the closure) $("#loading").addClass("hidden"); } I have also tried creating the iframe via document.createElement, but I found the same behavior. The onload triggered each time with firebug enabled and never without it. EDIT: Fixed the information on how the iframe is being declared and added the callback function code. No, no console.log calls here.

    Read the article

  • SoundManager / jQuery : Get SoundID sID

    - by j-man86
    So I am trying to access a jQuery SoundManager variable from one script (wpaudio.js – from the wp-audio plugin) inside of another (init.js – my own JavaScript). I am creating an alternate pause/play button higher up on the page and need to resume the current soundID, which is contained as part of a class name in the DOM. Here is the code that creates that class name in wpaudio.js: function wpaButtonCheck() { if (!this.playState || this.paused) jQuery('#' + this.sID + '_play').attr('src', wpa_url + '/wpa_play.png'); else jQuery('#' + this.sID + '_play').attr('src', wpa_url + '/wpa_pause.png'); } Here is the output: <img src="http://24.232.185.173/wordpress/wp-content/plugins/wpaudio-mp3-player/wpa_play.png" class="wpa_play" id="wpa0_play"> where wpa0 would be the sID of the sound I need. My current script in init.js is: $('.mixesSidebar #currentSong .playBtn').toggle(function() { soundManager.pauseAll(); $(this).addClass('paused'); }, function() { soundManager.resumeAll(); $(this).removeClass('paused'); }); I need to change resumeAll to "resume(this.sID)", but I need to somehow store the sID on click and call it in the above function. Alternatively, a regular expression could get the class name of the current play button and either parse the string up to the "_play" or use a trim function to get rid of "_play" – but I'm not sure how to do this. Thanks for your help!
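
    For the suffix-stripping itself, the pattern is just "drop a trailing _play"; a quick sketch of the idea in Python (the JavaScript equivalent would be id.replace(/_play$/, '')):

        import re

        element_id = "wpa0_play"                # id of the play button, as in the markup above
        sid = re.sub(r"_play$", "", element_id)
        print(sid)                              # -> "wpa0"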

    Read the article

  • C# StripStatusText Update Issue

    - by ikurtz
    I am here due to a strange behaviour in a Button_Click event. The code is attached. The issue is that the first StripStatus message is never displayed. Any ideas as to why? private void FireBtn_Click(object sender, EventArgs e) { // Control local controls for launching attack AwayTableLayoutPanel.Enabled = false; AwayCancelBtn.Enabled = false; FireBtn.Enabled = false; ////////////// Below statusBar message is never displayed but the following sound clip is. GameToolStripStatusLabel.Text = "(Home vs. Away)(Attack Coordinate: (" + GameModel.alphaCoords(GridLock.Column) + "," + GridLock.Row + "))(Action: Fire)"; //////////////////////////////////////////// if (audio) { SoundPlayer fire = new SoundPlayer(Properties.Resources.fire); fire.PlaySync(); fire.Dispose(); } // compile attack message XmlSerializer s; StringWriter w; FireGridUnit fireGridUnit = new FireGridUnit(); fireGridUnit.FireGridLocation = GridLock; s = new XmlSerializer(typeof(FireGridUnit)); w = new StringWriter(); s.Serialize(w, fireGridUnit); ////////////////////////////////////////////////////////// // send attack message GameMessage GameMessageAction = new GameMessage(); GameMessageAction.gameAction = GameMessage.GameAction.FireAttack; GameMessageAction.statusMessage = w.ToString(); s = new XmlSerializer(typeof(GameMessage)); w = new StringWriter(); s.Serialize(w, GameMessageAction); SendGameMsg(w.ToString()); GameToolStripStatusLabel.Text = "(Home vs. Away)(Attack Coordinate: (" + GameModel.alphaCoords(GridLock.Column) + "," + GridLock.Row + "))(Action: Awaiting Fire Result)"; } EDIT: if I put in a MessageBox after the StripStatus message, the status is updated.

    Read the article

  • process of connecting RTP with SIP via SDP & land lines

    - by TacB0sS
    Hello everyone, I have a problem with starting a media session and combining it with my SIP client. I've designed a recursive SIP client that reuses the same request template to send the next requests to the server, according to the acceptable sequences noted in the RFCs and the examples that I read. As far as I can tell the SIP part is working fine: it registers to the server, invites, and authenticates. I haven't completed any calls to clients yet because the content header needs to be filled in (which I haven't done yet, so I get a 503 from the server, which is OK I guess). For a long time I didn't know where to start with the media session; I slowly learned how to use JMF and have constructed an object that handles RTP transmission. Now I'm standing at a crossroads: on the one hand I have my SIP signaling, but it needs the SDP content header to complete the invite, and on the other I have the RTP side, which knows how to do peer-to-peer. To complete my design I need your help with the following questions: 1. Is there an easy/simple/already-implemented way to convert the Audio/Video format from JMF into SDP media headers? Or even a generator into which I would feed all the parameters for the content header and which would generate the content header quickly, or do I have to implement this myself? 2. Once I've finished constructing the SDK, the SIP is up and running, and I get an OK response from the server (after ringing and all), how do I start the media session? Do I connect peer-to-peer according to the caller details I sent in the SIP invite? 3. If 2 is correct, then how would a connection to land lines work? Do land lines know that once they send an OK back to the server they should listen for/start an RTP session on a specific port? Or did I get everything wrong? :-/ I really appreciate any help I can get; I looked everywhere for answers but they are not clear, and they ignore question 2 as if it were obvious, but for me it just isn't. Thanks in advance, Adam Zehavi.
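
    For what it's worth, the media description that goes into the INVITE body is only a handful of SDP lines; a minimal illustrative audio offer (address, port and codec are placeholders, not taken from the question) looks roughly like this:

        v=0
        o=- 0 0 IN IP4 192.0.2.10
        s=call
        c=IN IP4 192.0.2.10
        t=0 0
        m=audio 49170 RTP/AVP 0
        a=rtpmap:0 PCMU/8000

    The m= line advertises the local RTP port; the answering side returns its own SDP, and that offer/answer exchange (RFC 3264) is how each end, including a PSTN gateway terminating a land-line call, learns where to send RTP.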

    Read the article

  • Playing a .TS file on iOS

    - by Jonathan Grynspan
    We're working with some hardware that produces files in the .TS format, and we'd like to play them on an iOS device. (The files are internally consistent with what iOS supports: MPEG-4 video, AAC audio.) We've been investigating three options so far: 1. Roll our own integrated HTTP Live Streaming server and serve up a faux M3U8 playlist from within the app. This... doesn't seem to want to play nice, and we've had mixed luck actually getting the .TS files to play on devices. 2. Unwrap the MPEG-4 and AAC data from the TS file and re-wrap it as MP4. This, I'm told, is exceedingly difficult to do, and I haven't found anything useful online that could shed light on how to do it. We've got code in the pipeline to do it but it won't be ready until long after we need it. If we could do it, I could easily subclass NSURLProtocol and have it working within a matter of minutes. 3. Use FFmpeg to implement option #2. FFmpeg seems like a possible solution but it isn't configured to build for iOS and I don't have the background to get it working (whereas the rest of our engineers don't have the Apple background needed). I think #2 is our best bet, but as I don't know the ins and outs of MPEG-2 TS and MPEG-4, I don't have the ability to put it together myself. Does anybody have any insight into this problem? Perhaps some experience playing local TS files on iOS, or some tips on converting from TS to MP4?

    Read the article

  • How do I begin reading source code?

    - by anonnoir
    I understand the value of reading source code, and I am trying my best to read as much as I can. However, every time I try getting into a 'large' (i.e. complete) project of sorts, I am overwhelmed. For example, I use Anki a lot when revising languages. Also, I'm interested in getting to know how an audio player works (because I have some project ideas), hence quodlibet on Google Code. But whenever I open the source code folders for the above programs, there are just so many files that I don't know where or what to begin with. I think that I should start with files marked __init__.py but I can't see the logical structure of the programs, or what reasoning was applied when the original writer divided his modules the way he did. Hence, my questions: How/where should I begin reading source? Any general tips or ideas? How does a programmer keep in mind the overall structure and logic of the program, especially for large projects, and is it common not to document that structure? As an open source reader, must I look through all of the code and get a bird's eye view of the code and libraries, before even being able to proceed? Would an IDE like Eclipse SDK (with PyDev) help with code-reading? Thanks for the help; I really appreciate your helping me.

    Read the article

  • How can I filter these Django records?

    - by mipadi
    I have a set of Django models as shown in the following diagram (the names of the reverse relationships are shown in the yellow bubbles): In each relationship, a Person may have 0 or more of the items. Additionally, the slug field is (unfortunately) not unique; multiple Person records may have the same slug fields. Essentially these records are duplicates. I want to obtain a list of all records that meet the following criteria: All duplicate records (that is, having the same slug) with at least one Entry OR at least one Audio OR at least one Episode OR at least one Article. So far, I have the following query: Person.objects.values('slug').annotate(num_records=Count('slug')).filter(num_records__gt=1) This groups all records by slug, then adds a num_records attribute that says how many records have that slug, but the additional filtering is not performed (and I don't even know if this would work right anyway, since, given a set of duplicate records, one may have, e.g., an Entry and the other may have an Article). In a nutshell, I want to find all duplicate records and collapse them, along with their associated models, into one record. What's the best way to do this with Django?
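
    A rough sketch of one way to express that filter with the ORM (the reverse relation names entry, audio, episode and article are assumptions, since the real names are only in the missing diagram):

        from django.db.models import Count, Q

        # Person is the model from the question; first find slugs that occur more than once
        dupe_slugs = (Person.objects
                      .values('slug')
                      .annotate(num_records=Count('id'))
                      .filter(num_records__gt=1)
                      .values_list('slug', flat=True))

        # duplicates that have at least one related item of any of the four kinds
        duplicates = (Person.objects
                      .filter(slug__in=list(dupe_slugs))
                      .filter(Q(entry__isnull=False) | Q(audio__isnull=False) |
                              Q(episode__isnull=False) | Q(article__isnull=False))
                      .distinct())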

    Read the article

  • Converting UnicodeString to PAnsiChar in Delphi XE

    - by moodforaday
    In Delphi XE I am using the BASS audio library, which contains this function: function BASS_StreamCreateURL(url: PAnsiChar; offset: DWORD; flags: DWORD; proc: DOWNLOADPROC; user: Pointer):HSTREAM; stdcall; external bassdll; The 'url' parameter is of type PAnsiChar, so in my code I do a cast: FStreamHandle := BASS_StreamCreateURL(PAnsiChar( url ) [...] The compiler emits a warning on this line: "suspicious typecast of string to PAnsiChar". In trying to eliminate the warning, I found that the recommended way is to use a double cast: FStreamHandle := BASS_StreamCreateURL(PAnsiChar( AnsiString( url )) [...] This does eliminate the warning, but the BASS function now returns error code 2 ("cannot open file"), which tells me the URL string it receives is somehow broken. I cannot see what the bass DLL actually receives, but using a breakpoint in the debugger the string looks good: var s : PAnsiChar; begin s := PAnsiChar( AnsiString( url )); At this point string s appears fine, but the BASS function fails when I pass it. My initial code: PAnsiChar( url ) works well with BASS, but emits a warning. So what's the correct way of getting from UnicodeString to PAnsiChar without a warning?

    Read the article

  • PlaySound linker error in C++

    - by logic-unit
    Hello, I'm getting this error: [Linker error] undefined reference to 'PlaySoundA@12' ld returned 1 exit status From this code: // c++ program to generate a random sequence of numbers then play corresponding audio files #include <windows.h> #include <mmsystem.h> #include <iostream> #pragma comment(lib, "winmm.lib") using namespace std; int main() { int i; i = 0; // set the value of i while (i <= 11) // set the loop to run 11 times { int number; number = rand() % 10 + 1; // generate a random number sequence // cycling through the numbers to find the right wav and play it if (number == 0) { PlaySound("0.wav", NULL, SND_FILENAME); // play the random number } else if (number == 1) { PlaySound("1.wav", NULL, SND_FILENAME); // play the random number } //else ifs repeat to 11... i++; // increment i } return 0; } I've tried absolute and relative paths for the wavs, and the file size of each is under 1 MB too. I've read another thread here on the subject: http://stackoverflow.com/questions/1565439/how-to-playsound-in-c As you may well have guessed this is my first C++ program, so my knowledge is limited with where to go next. I've tried pretty much every page Google has on the subject, including the MSDN usage page. Any ideas? Thanks in advance...
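
    One thing worth knowing: the #pragma comment(lib, "winmm.lib") line is only honoured by the MSVC compiler. Toolchains that report errors through ld (MinGW and Dev-C++ style setups, which the "ld returned 1 exit status" message suggests) ignore it, so winmm has to be added to the link step explicitly, for example something like:

        g++ main.cpp -o play.exe -lwinmm

    (File names here are placeholders; in an IDE the equivalent is adding winmm to the project's linker options.)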

    Read the article

  • FFMpeg Error av_interleaved_write_frame():

    - by rajaneesh
    This is what I get after running my PHP code: FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al. configuration: --prefix=/usr --libdir=/usr/lib --shlibdir=/usr/lib --mandir=/usr/share/man --incdir=/usr/include --enable-libamr-nb --enable-libamr-wb --enable-libdirac --enable-libfaac --enable-libfaad --enable-libmp3lame --enable-libtheora --enable-libx264 --enable-gpl --enable-nonfree --enable-postproc --enable-pthreads --enable-shared --enable-swscale --enable-x11grab libavutil 49.15. 0 / 49.15. 0 libavcodec 52.20. 0 / 52.20. 0 libavformat 52.31. 0 / 52.31. 0 libavdevice 52. 1. 0 / 52. 1. 0 libswscale 0. 7. 1 / 0. 7. 1 libpostproc 51. 2. 0 / 51. 2. 0 built on Nov 6 2009 19:05:03, gcc: 4.1.2 20080704 (Red Hat 4.1.2-46) Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) - 25.00 (25/1) Input #0, flv, from 'demo.flv': Duration: 00:00:30.83, start: 0.000000, bitrate: 546 kb/s Stream #0.0: Video: h264, yuv420p, 640x360 [PAR 1:1 DAR 16:9], 546 kb/s, 25 tbr, 1k tbn, 50 tbc Stream #0.1: Audio: aac, 44100 Hz, stereo, s16 Output #0, image2, to 'demo.jpg': Stream #0.0: Video: mjpeg, yuvj420p, 640x360 [PAR 1:1 DAR 16:9], q=2-31, 200 kb/s, 90k tbn, 1 tbc Stream mapping: Stream #0.0 - #0.0 Press [q] to stop encoding av_interleaved_write_frame(): I/O error occurred Usually that means that input file is truncated and/or corrupted.

    Read the article

  • Array output for option of command in bash script

    - by dewaforex
    Hi, sorry for my bad English. I'm stuck figuring out my bash script, which builds an array of options for a command. The script extracts attachments from an MKV file and, at the end, merges those attachments back into the MKV after the video/audio has been encoded. This is the extraction part: #find the total of attachment A=$(mkvmerge -i input.mkv | grep -i attachment | awk '{printf $3 "\n"}' | sed 's;\:;;' | awk 'END { print NR }') #extract it for (( i=1; i<=$A; i++ )) do font[${i}]="$(mkvmerge -i input.mkv | grep -i attachment | awk '{for (i=11; i <= NF; i++) printf($i"%c" , (i==NF)?ORS:OFS) }' | sed "s/'//g" | awk "NR==$i")" mkvextract attachments input.mkv $i:"${font[${i}]}" done And now the part that merges the attachments back: for (( i=1; i<=$A; i++ )) do #search for spaces in the file name and put '\' before them, because some attachments have spaces in their filenames font1[${i}]=$(echo ${font[${i}]} | sed 's/ /\\ /g') #make the option for adding the attachment attachment[${i}]=$"--attach-file ${font1[${i}]}" done mkvmerge -o output.mkv -d 1 -S test.mp4 sub.ass ${attachment[*]} The problem: it still doesn't work for file names with spaces. When I echo ${attachment[*]}, it all seems right: --attach-file Beach.ttf --attach-file Candara.ttf --attach-file CASUCM.TTF --attach-file Complete\ in\ Him.ttf --attach-file CURLZ_.TTF --attach-file Frostys\ Winterland.TTF --attach-file stilltim.ttf But mkvmerge still only recognizes the first word of a file name that contains a space: mkvmerge v3.0.0 ('Hang up your Hang-Ups') built on Dec 6 2010 19:19:04 Automatic MIME type recognition for 'Beach.ttf': application/x-truetype-font Automatic MIME type recognition for 'Candara.ttf': application/x-truetype-font Automatic MIME type recognition for 'CASUCM.TTF': application/x-truetype-font Error: The file 'Complete\' cannot be attached because it does not exist or cannot be read. I hope somebody can help me. Thanks
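
    The escaped names look fine when echoed, but the backslashes are never re-interpreted when the unquoted ${attachment[*]} expands, so mkvmerge receives 'Complete\', 'in\' and 'Him.ttf' as three separate arguments. One way to sidestep shell word splitting entirely is to build the argument vector outside the shell; a rough Python sketch (file names are just the ones from the question):

        import subprocess

        fonts = ["Beach.ttf", "Candara.ttf", "CASUCM.TTF", "Complete in Him.ttf",
                 "CURLZ_.TTF", "Frostys Winterland.TTF", "stilltim.ttf"]

        cmd = ["mkvmerge", "-o", "output.mkv", "-d", "1", "-S", "test.mp4", "sub.ass"]
        for name in fonts:
            # each file name is passed as a single argument, so spaces need no escaping
            cmd += ["--attach-file", name]

        subprocess.run(cmd, check=True)

    (In bash itself the usual cure is the same idea: append the two words per attachment to an array, attachment+=("--attach-file" "${font[$i]}"), and expand it quoted as "${attachment[@]}".)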

    Read the article

  • Encoding h.264 with libavcodec/x264

    - by Leviathan
    I am attempting to encode video using libavcodec/libavformat. I'm trying to change the standard output-example.c from the ffmpeg source. The AVI file is created on the disk, but only the sound is encoded. I tried adding a lot of options for x264 from here. All the other codecs work fine: mpeg2, mpeg4, mjpeg, xvid. In addition to specifying the x264 parameters, I also set the codec in the AVOutputFormat structure. That's all I've done. AVOutputFormat *pOutFormat; // in header file av_register_all(); AVCodec *codec = avcodec_find_encoder_by_name("libx264"); pOutFormat = guess_format("avi", NULL, NULL); pOutFormat->video_codec = codec->id; The debug output of my application: Output #0, mp4, to 'D:\1.avi': Stream #0.0: Video: libx264, yuv420p, 320x240, q=10-51, 500 kb/s, 90k tbn, 25 tbc Stream #0.1: Audio: aac, 44100 Hz, 1 channels, s16, 128 kb/s [libx264 @ 0x694010]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0x694010]bitrate tolerance too small, using .01 [libx264 @ 0x694010]profile Main, level 2.0 [libx264 @ 0x694010]frame I:150 Avg QP:14.76 size: 2534 [libx264 @ 0x694010]mb I I16..4: 75.9% 0.0% 24.1% [libx264 @ 0x694010]final ratefactor: 17.57 [libx264 @ 0x694010]coded y,uvDC,uvAC intra: 42.7% 92.4% 47.4% [libx264 @ 0x694010]i16 v,h,dc,p: 11% 14% 2% 73% [libx264 @ 0x694010]i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 21% 18% 29% 5% 8% 10% 3% 3% 2% [libx264 @ 0x694010]kb/s:506.79

    Read the article

  • Getting nice sound from Java

    - by Peter Lang
    I managed to play midi files using Java, but it produces some distracting noise. I figured out that this is caused by the poor quality soundbank file shipped with Java 6 SDK/JRE. How can I improve that quality? Here is what I have so far: MidiNote example using a Receiver works fine (sounds the same as when playing midi files with other players), so it does not seem to use the Soundbank shipped with Java but the fallback mechanism that uses a hardware MIDI port. Using SimpleMidiPlayer example to play a Midi file works, but the quality is poor. When I delete lib/audio/soundbank.gm, the quality is not bad any more, so the fallback is used again. When I put soundbank-deluxe.gm into the same directory, it is used and produces much better sound. Messing with the clients soundbank file as described in the official Installation Instructions certainly isn't an option, so I tried to put the new soundbank-file into the jar-file and load it: Soundbank soundbank = MidiSystem.getSoundbank( getClass().getResourceAsStream("soundbank-deluxe.gm")); if(synthesizer.isSoundbankSupported(soundbank)) { System.out.println(synthesizer.loadAllInstruments(soundbank)); } This prints true, but the sound remains unchanged. What am I doing wrong loading the soundbank file? Can I force the hardware MIDI port to be used instead of the standard soundbank file?

    Read the article

  • Setting Position of source and listener has no effect

    - by Ben E
    Hi guys, this is the first time I've worked with OpenAL, and for the life of me I can't figure out why setting the position of the source doesn't have any effect on the sound. The sounds are in stereo format, I've made sure I set the listener position, the sound is not relative to the listener, and OpenAL isn't giving out any errors. Can anyone shed some light? Create Audio device: ALenum result; mDevice = alcOpenDevice(NULL); if((result = alGetError()) != AL_NO_ERROR) { std::cerr << "Failed to create Device. " << GetALError(result) << std::endl; return; } mContext = alcCreateContext(mDevice, NULL); if((result = alGetError()) != AL_NO_ERROR) { std::cerr << "Failed to create Context. " << GetALError(result) << std::endl; return; } alcMakeContextCurrent(mContext); SoundListener::SetListenerPosition(0.0f, 0.0f, 0.0f); SoundListener::SetListenerOrientation(0.0f, 0.0f, -1.0f); The two listener functions call: alListener3f(AL_POSITION, x, y, z); Real vec[6] = {x, y, z, 0.0f, 1.0f, 0.0f}; alListenerfv(AL_ORIENTATION, vec); I set the source's position to 1,0,0, which should be to the right of the listener, but it has no effect: alSource3f(mSourceHandle, AL_POSITION, x, y, z); Any guidance would be much appreciated

    Read the article

  • Linking AS code to symbols defined in an external SWC?

    - by Ender
    (apologies ahead of time, I only really know Flash; my Flex experience is basically nil. There may be a very standard and obvious workflow solution that Flex people know about) I have a number of UI elements that are graphically quite complex (they're not components, they're just Sprites). Since it takes a long time to compile them, I've been trying to move them into an external .swc. However, I want to associate some code with these classes, but I don't want to have to recompile the graphical assets every time I make a code change. At the moment I have it set up like this: UI elements are created in a separate FLA and exported to a SWC. In my primary FLA, I have actionscript classes that extend each of the graphical assets in the SWC. For example: external.swc: (some symbol defined in the Library and exported for actionscript in frame 1) class: com.foo.WidgetGraphic base: flash.display.Sprite main.fla: Widget.as: package com.foo { public class Widget extends WidgetGraphic { ... } } This works, but is time-consuming and prone to error. I'd rather be able to avoid having to inherit from each graphical asset, and just define them directly. Is there a better way to do what I'm trying to accomplish? Note: the main concern here is compile time. I don't have any movies or audio or fonts, just a lot of vector art assets that appear to be slowing down my compilation time significantly. When I'm debugging I'm only making code changes, and would rather not have to keep recompiling the art...

    Read the article

  • c# regex split and extract multiple parts from a string

    - by nLL
    Hi, I am trying to extract some parts of the "Video:" line from the text below. Seems stream 0 codec frame rate differs from container frame rate: 30000.00 (30000/1) - 14.93 (1000/67) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'C:\a.3gp': Metadata: major_brand : 3gp5 minor_version : 0 compatible_brands: 3gp5isom Duration: 00:00:45.82, start: 0.000000, bitrate: 357 kb/s Stream #0.0(und): Video: mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb/s, 14.93 fps, 14.93 tbr, 90k tbn, 30k tbc Stream #0.1(und): Audio: aac, 16000 Hz, mono, s16, 11 kb/s Stream #0.2(und): Data: mp4s / 0x7334706D, 0 kb/s Stream #0.3(und): Data: mp4s / 0x7334706D, 0 kb/s This is output from the ffmpeg command line, where I can get the Video: part with private string ExtractVideoFormat(string rawInfo) { string v = string.Empty; Regex re = new Regex("[V|v]ideo:.*", RegexOptions.Compiled); Match m = re.Match(rawInfo); if (m.Success) { v = m.Value; } return v; } and the result is mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb What I am trying to do is somehow split that line and get mpeg4 yuv420p 352x276 [PAR 1:1 DAR 88:69] 344 kb assigned to different string objects instead of a single one.
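
    A sketch of the splitting step, shown here in Python for brevity (.NET's Regex.Split accepts an equivalent pattern): split on the commas and on the space that precedes the bracketed aspect-ratio block.

        import re

        line = "mpeg4, yuv420p, 352x276 [PAR 1:1 DAR 88:69], 344 kb"
        parts = re.split(r",\s*|\s+(?=\[)", line)
        # parts -> ['mpeg4', 'yuv420p', '352x276', '[PAR 1:1 DAR 88:69]', '344 kb']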

    Read the article

  • Why is cell phone software still so primitive?

    - by Tomislav Nakic-Alfirevic
    I don't do mobile development, but it strikes me as odd that features like this aren't available by default on most phones: full text search: searches all address book contents, messages, anything else being a plus better call management: e.g. a rotating audio call log, meaning you always have the last N calls recorded for your listening pleasure later (your little girl just said her first "da-da" while you were on a business trip, you had a telephone job interview, you received complex instructions to do something etc.) bluetooth remote control (like e.g. anyRemote, but available by default on a bluetooth phone) no multitasking capabilities worth mentioning and in general no e.g. weekly software updates, making the phone much more usable (even if it had to be done over USB, rather than over the network). I'm sure I was dumbfounded by the lack or design of other features as well, but they don't come to mind right now. To clarify, I'm not talking about smartphones here: my plain, 2-year old phone has a CPU an order of magnitude faster than my first PC, about as much storage space and it's ridiculous how bad (slow, unwieldy) the software is and it's not one phone or one manufacturer. What keeps the (to me) obvious software functionality vacuum on a capable hardware platform from being filled up?

    Read the article

  • Best way to handle huge strings in c#

    - by srk
    I have to write the data below to a textfile after replacing two values with ##IP##, ##PORT##. what is the best way ? should i hold all in a string and use Replace and write to textfile ? Data : [APP] iVersion= 101 pcVersion=1.01a pcBuildDate=Mar 27 2009 [MAIN] iFirstSetup= 0 rcMain.rcLeft= 676 rcMain.rcTop= 378 rcMain.rcRight= 1004 rcMain.rcBottom= 672 iShowLog= 0 iMode= 1 [GENERAL] iTips= 1 iTrayAnimation= 1 iCheckColor= 1 iPriority= 1 iSsememcpy= 1 iAutoOpenRecv= 1 pcRecvPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\recv pcFileName=FantasyRemote iLanguage= 1 [SERVER] iAcceptVideo= 1 iAcceptAudio= 1 iAcceptInput= 1 iAutoAccept= 1 iAutoTray= 0 iConnectSound= 1 iEnablePassword= 0 pcPassword= pcPort=7902 [CLIENT] iAutoConnect= 0 pcPassword= pcDefaultPort=7902 [NETWORK] pcConnectAddr=##IP## pcPort=##Port## [VIDEO] iEnable= 1 pcFcc=AMV3 pcFccServer= pcDiscription= pcDiscriptionServer= iFps= 30 iMouse= 2 iHalfsize= 0 iCapturblt= 0 iShared= 0 iSharedTime= 5 iVsync= 1 iCodecSendState= 1 iCompress= 2 pcPlugin= iPluginScan= 0 iPluginAspectW= 16 iPluginAspectH= 9 iPluginMouse= 1 iActiveClient= 0 iDesktop1= 1 iDesktop2= 2 iDesktop3= 0 iDesktop4= 3 iScan= 1 iFixW= 16 iFixH= 8 [AUDIO] iEnable= 1 iFps= 30 iVolume= 6 iRecDevice= 0 iPlayDevice= 0 pcSamplesPerSec=44100Hz pcChannels=2ch:Stereo pcBitsPerSample=16bit iRecBuffNum= 150 iPlayBuffNum= 4 [INPUT] iEnable= 1 iFps= 30 iMoe= 0 iAtlTab= 1 [MENU] iAlwaysOnTop= 0 iWindowMode= 0 iFrameSize= 4 iSnap= 1 [HOTKEY] iEnable= 1 key_IDM_HELP=0x00000070 mod_IDM_HELP=0x00000000 key_IDM_ALWAYSONTOP=0x00000071 mod_IDM_ALWAYSONTOP=0x00000000 key_IDM_CONNECT=0x00000072 mod_IDM_CONNECT=0x00000000 key_IDM_DISCONNECT=0x00000073 mod_IDM_DISCONNECT=0x00000000 key_IDM_CONFIG=0x00000000 mod_IDM_CONFIG=0x00000000 key_IDM_CODEC_SELECT=0x00000000 mod_IDM_CODEC_SELECT=0x00000000 key_IDM_CODEC_CONFIG=0x00000000 mod_IDM_CODEC_CONFIG=0x00000000 key_IDM_SIZE_50=0x00000074 mod_IDM_SIZE_50=0x00000000 key_IDM_SIZE_100=0x00000075 mod_IDM_SIZE_100=0x00000000 key_IDM_SIZE_200=0x00000076 mod_IDM_SIZE_200=0x00000000 key_IDM_SIZE_300=0x00000000 mod_IDM_SIZE_300=0x00000000 key_IDM_SIZE_400=0x00000000 mod_IDM_SIZE_400=0x00000000 key_IDM_CAPTUREWINDOW=0x00000077 mod_IDM_CAPTUREWINDOW=0x00000004 key_IDM_REGION=0x00000077 mod_IDM_REGION=0x00000000 key_IDM_DESKTOP1=0x00000078 mod_IDM_DESKTOP1=0x00000000 key_IDM_ACTIVE_MENU=0x00000079 mod_IDM_ACTIVE_MENU=0x00000000 key_IDM_PLUGIN=0x0000007A mod_IDM_PLUGIN=0x00000000 key_IDM_PLUGIN_SCAN=0x00000000 mod_IDM_PLUGIN_SCAN=0x00000000 key_IDM_DESKTOP2=0x00000078 mod_IDM_DESKTOP2=0x00000004 key_IDM_DESKTOP3=0x00000079 mod_IDM_DESKTOP3=0x00000004 key_IDM_DESKTOP4=0x0000007A mod_IDM_DESKTOP4=0x00000004 key_IDM_WINDOW_NORMAL=0x0000000D mod_IDM_WINDOW_NORMAL=0x00000004 key_IDM_WINDOW_NOFRAME=0x0000000D mod_IDM_WINDOW_NOFRAME=0x00000002 key_IDM_WINDOW_FULLSCREEN=0x0000000D mod_IDM_WINDOW_FULLSCREEN=0x00000001 key_IDM_MINIMIZE=0x00000000 mod_IDM_MINIMIZE=0x00000000 key_IDM_MAXIMIZE=0x00000000 mod_IDM_MAXIMIZE=0x00000000 key_IDM_REC_START=0x00000000 mod_IDM_REC_START=0x00000000 key_IDM_REC_STOP=0x00000000 mod_IDM_REC_STOP=0x00000000 key_IDM_SCREENSHOT=0x0000002C mod_IDM_SCREENSHOT=0x00000002 key_IDM_AUDIO_MUTE=0x00000073 mod_IDM_AUDIO_MUTE=0x00000004 key_IDM_AUDIO_VOLUME_DOWN=0x00000074 mod_IDM_AUDIO_VOLUME_DOWN=0x00000004 key_IDM_AUDIO_VOLUME_UP=0x00000075 mod_IDM_AUDIO_VOLUME_UP=0x00000004 key_IDM_CTRLALTDEL=0x00000023 mod_IDM_CTRLALTDEL=0x00000003 key_IDM_QUIT=0x00000000 mod_IDM_QUIT=0x00000000 
key_IDM_MENU=0x0000007B mod_IDM_MENU=0x00000000 [OVERLAY] iIndicator= 1 iAlphaBlt= 1 iEnterHide= 0 pcFont=MS UI Gothic [AVI] iSound= 1 iFileSizeLimit= 100000 iPool= 4 iBuffSize= 32 iStartDiskSpaceCheck= 1 iStartDiskSpace= 1000 iRecDiskSpaceCheck= 1 iRecDiskSpace= 100 iCache= 0 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\avi [SCREENSHOT] iSound= 1 iAutoOpen= 1 pcPath=C:\Documents and Settings\karthikeyan\My Documents\Downloads\fremote101a\FantasyRemote101a\ss pcPlugin=BMP [CDLG_SERVER] mrcWnd.rcLeft= 667 mrcWnd.rcTop= 415 mrcWnd.rcRight= 1013 mrcWnd.rcBottom= 634 [CWND_CLIENT] miShowLog= 0 m_iOverlayLock= 0 [CDLG_CONFIG] mrcWnd.rcLeft= 467 mrcWnd.rcTop= 247 mrcWnd.rcRight= 1213 mrcWnd.rcBottom= 802 miTabConfigSel= 2
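
    The block is only a few kilobytes, so holding it in a single string and substituting the two placeholders before writing it out is cheap; a sketch of that approach in Python (file names are made up, and note that the data above uses ##Port## rather than ##PORT##):

        # assumes the block above has been saved verbatim as template.ini
        with open("template.ini", "r", encoding="utf-8") as f:
            template = f.read()

        config = template.replace("##IP##", "192.0.2.10").replace("##Port##", "7902")

        with open("FantasyRemote.ini", "w", encoding="utf-8") as f:
            f.write(config)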

    Read the article

  • Django: Serving Media Behind Custom URL

    - by TheLizardKing
    So I of course know that serving static files through Django will send you straight to hell, but I am confused about how to use a custom URL to mask the true location of the file using Django. I asked about this in http://stackoverflow.com/questions/2681338/django-serving-a-download-in-a-generic-view, but the answer I accepted seems to be the "wrong" way of doing things. urls.py: url(r'^song/(?P<song_id>\d+)/download/$', song_download, name='song_download'), views.py: def song_download(request, song_id): song = Song.objects.get(id=song_id) fsock = open(os.path.join(song.path, song.filename)) response = HttpResponse(fsock, mimetype='audio/mpeg') response['Content-Disposition'] = "attachment; filename=%s - %s.mp3" % (song.artist, song.title) return response This solution works perfectly, but not perfectly enough it turns out. How can I avoid having a direct link to the mp3 while still serving through nginx/apache? EDIT 1 - ADDITIONAL INFO Currently I can get my files by using an address such as: http://www.example.com/music/song/1692/download/ But the above-mentioned method is the devil's work. How can I accomplish what I get above while still making nginx/apache serve the media? Is this something that should be done at the webserver level? Some crazy mod_rewrite? http://static.example.com/music/Aphex%20Twin%20-%20Richard%20D.%20James%20(V0)/10%20Logon-Rock%20Witch.mp3
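
    One common pattern for exactly this: keep the Django view for the URL and permission logic, but hand the actual transfer back to the front-end server with an internal-redirect header. A rough sketch for nginx (the /protected_music/ location, the filesystem paths and the app module are assumptions; Apache's mod_xsendfile works the same way with an X-Sendfile header):

        from django.http import HttpResponse
        from django.shortcuts import get_object_or_404
        from music.models import Song   # app/module name assumed

        # Assumed nginx configuration (not part of the question):
        #   location /protected_music/ {
        #       internal;                 # not reachable via a direct URL
        #       alias /srv/media/music/;  # where the mp3 files actually live
        #   }

        def song_download(request, song_id):
            song = get_object_or_404(Song, id=song_id)
            response = HttpResponse(content_type='audio/mpeg')
            response['Content-Disposition'] = 'attachment; filename="%s - %s.mp3"' % (song.artist, song.title)
            # nginx picks this header up and streams the file itself; Django never opens it
            response['X-Accel-Redirect'] = '/protected_music/%s' % song.filename
            return response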

    Read the article

  • Ruby ICalendar Gem: How to get e-mail reminders working.

    - by Jenny
    I'm trying to work out how to use the icalendar ruby gem, found at: http://icalendar.rubyforge.org/ According to their tutorial, you do something like: cal.event.do # ...other event properties alarm do action "EMAIL" description "This is an event reminder" # email body (required) summary "Alarm notification" # email subject (required) attendees %w(mailto:[email protected] mailto:[email protected]) # one or more email recipients (required) add_attendee "mailto:[email protected]" remove_attendee "mailto:[email protected]" trigger "-PT15M" # 15 minutes before add_attach "ftp://host.com/novo-procs/felizano.exe", {"FMTTYPE" => "application/binary"} # email attachments (optional) end alarm do action "DISPLAY" # This line isn't necessary, it's the default summary "Alarm notification" trigger "-P1DT0H0M0S" # 1 day before end alarm do action "AUDIO" trigger "-PT15M" add_attach "Basso", {"VALUE" => ["URI"]} # only one attach allowed (optional) end So, I am doing something similar in my code. def schedule_event puts "Scheduling an event for " + self.title + " at " + self.start_time start = self.start_time endt = self.start_time title = self.title desc = self.description chan = self.channel.name # Create a calendar with an event (standard method) cal = Calendar.new cal.event do dtstart Program.convertToDate(start) dtend Program.convertToDate(endt) summary "Want to watch" + title + "on: " + chan + " at: " + start description desc klass "PRIVATE" alarm do action "EMAIL" description desc # email body (required) summary "Want to watch" + title + "on: " + chan + " at: " + start # email subject (required) attendees %w(mailto:[email protected]) # one or more email recipients (required) trigger "-PT25M" # 25 minutes before end end However, I never see any e-mail sent to my account... I have even tried hard coding the start times to be Time.now, and sending them out 0 minutes before, but no luck... Am I doing something glaringly wrong?

    Read the article

  • How to post a poll on the Facebook wall

    - by Bengt
    Hi, I'm trying to convert my poll app into a Facebook iframe app. My app is written in PHP and uses some Ajax calls to vote at a poll. In the application canvas everything is working fine, but of course I want to get the poll on the wall of a user too. Unfortunately I'm not able to find out how I can post a simple poll with some radio buttons for the options on the wall. I know how to publish images, text, audio files and links to the wall, but I have no idea how to publish my poll on the wall. And I don't just want to use links to vote, I want the user be able to choose a radio button. Does anyone have an idea how to do this or where to find information about doing this? I'm stuck there now for a while and it gets pretty frustrating. I'm using the new Graph API by the way. Or is this impossible? But I don't think so. Any help is appreciated. Bengt

    Read the article

  • DSP - Filtering frequencies using DFT

    - by Trap
    I'm trying to implement a DFT-based 8-band equalizer for the sole purpose of learning. To prove that my DFT implementation works, I fed it an audio signal, analyzed it and then resynthesized it again with no modifications made to the frequency spectrum. So far so good. I'm using the so-called 'standard way of calculating the DFT', which is by correlation. This method calculates the real and imaginary parts, both N/2 + 1 samples in length. To attenuate a frequency I'm just doing: float atnFactor = 0.6; Re[k] *= atnFactor; Im[k] *= atnFactor; where 'k' is an index in the range 0 to N/2, but what I get after resynthesis is a slightly distorted signal, especially at low frequencies. The input signal sample rate is 44.1 kHz and since I just want an 8-band equalizer I'm feeding the DFT 16 samples at a time, so I have 8 frequency bins to play with. Can someone show me what I'm doing wrong? I tried to find info on this subject on the internet but couldn't find any. Thanks in advance.
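
    A frequent source of exactly this kind of distortion is processing tiny, non-overlapping frames: with 16-sample frames each bin is about 2.8 kHz wide, and changing bin gains independently from frame to frame creates discontinuities at the frame boundaries. The usual remedy is longer frames with a window and overlap-add resynthesis. A small numpy sketch of the per-bin attenuation on one windowed frame (frame length, band indices and test signal are illustrative only):

        import numpy as np

        fs = 44100
        n = 1024                            # a much longer frame than 16 samples
        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 5000 * t)

        window = np.hanning(n)              # taper the frame to reduce leakage between bins
        spectrum = np.fft.rfft(x * window)  # n/2 + 1 complex bins, like Re[k] + j*Im[k]

        atn_factor = 0.6
        spectrum[100:200] *= atn_factor     # scale real and imaginary parts of one band together

        y = np.fft.irfft(spectrum, n)       # resynthesize the (windowed) frame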

    Read the article

  • INSERT INTO sql server error : invalid object name

    - by thormayer
    I have a problem with a statement on SQL Server; the error I get is invalid object name 'TBL_VIDEOS'. INSERT INTO TBL_VIDEOS ( TBL_VIDEOS.ID, TBL_VIDEOS.TITLE, TBL_VIDEOS.V_DESCRIPTION, TBL_VIDEOS.UPLOAD_DATE, TBL_VIDEOS.V_VIEWS, TBL_VIDEOS.USERNAME, TBL_VIDEOS.RATING, TBL_VIDEOS.V_SOURCE, TBL_VIDEOS.FLAG ) VALUES ('Z8MTRH3LmTVm', 'Why Creativity is the New Economy', 'Dr Richard Florida, one of the world&#39;s leading experts on economic competitiveness, demographic trends and cultural and technological innovation shows how developing the full human and creative capabilities of each individual, combined with institutional supports such as commercial innovation and new industry, will put us back on the path to economic and social prosperity. Listen to the podcast of the full event including audience Q&amp;A: http://www.thersa.org/events/audio-and-past-events/2012/why-creativity-is-the-new-economy Our events are made possible with the support of our Fellowship. Support us by donating or applying to become a Fellow. Donate: http://www.thersa.org/support-the-rsa Become a Fellow: http://www.thersa.org/fellowship/apply', CURRENT_TIMESTAMP, 0, 1, 0, 'http://www.youtube.com/watch?v=VPX7gowr2vE&feature=g-all-u' ,0) and I wonder what I've done wrong? (BTW, the error refers to line 1; I guess it's the table name, but it is correct!)

    Read the article
