Search Results

Search found 5434 results on 218 pages for 'digital audio'.


  • How can I synchronize text with audio/sound in XNA/XACT?

    - by Omkar
    Hello geeks, I want to display text while a sound is playing in the background. In short, if there is audio for "What is this", I want to display the text "What is this" in a text box in sync with it. Is this possible with XNA/XACT? And can I use the same approach in standard C#-based WPF or Silverlight applications? I appreciate your help.
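
    As far as I know, XACT has no built-in caption events, so the usual approach is to keep your own table of timestamped lines and drive it from the cue's play time each frame. A language-neutral sketch of that pattern (plain Python standing in for the C# game loop; the timings and text are hypothetical):

        import time

        # hypothetical caption track: (start_seconds, text_to_show)
        captions = [(0.0, "What"), (0.4, "What is"), (0.8, "What is this")]

        start = time.time()                  # in XNA, note the time you call cue.Play()
        shown = None
        while time.time() - start < 1.5:     # i.e. while the cue is still playing
            elapsed = time.time() - start
            # pick the last caption whose start time has already passed
            current = [t for s, t in captions if s <= elapsed][-1]
            if current != shown:
                shown = current
                print(shown)                 # in XNA, draw this with SpriteBatch instead
            time.sleep(0.05)

    The same polling pattern works in WPF or Silverlight with a DispatcherTimer driving the text box, since all it needs is a playback clock.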

    Read the article

  • How to stream audio from an ASP.NET MVC controller while it's still encoding?

    - by kyrisu
    Background: I have WAV files on my server that I want to stream. Because of their size, I want to encode them to MP3 on the fly. I've tried to use:

    - FileStreamResult, but it doesn't work because as soon as execution leaves the controller the stream is closed and I get "Cannot access a closed stream".
    - FileContentResult, but it's not a stream, so the user would have to wait for the encoding to finish.

    Question: Is there a way to stream audio from the controller while it's still encoding?
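
    The general pattern is to push encoded chunks to the response as the encoder produces them instead of buffering the whole file; in MVC that means writing to Response.OutputStream from a custom ActionResult with response buffering disabled. A sketch of the push-while-encoding idea, shown for brevity with Python/Flask and the lame command-line encoder (both stand-ins, not the asker's stack):

        import subprocess
        from flask import Flask, Response

        app = Flask(__name__)

        @app.route('/stream/<name>')
        def stream(name):
            # lame reads the wav and writes mp3 frames to stdout as it encodes ('-' = stdout)
            enc = subprocess.Popen(['lame', '--quiet', name + '.wav', '-'],
                                   stdout=subprocess.PIPE)

            def generate():
                while True:
                    chunk = enc.stdout.read(4096)   # forward data as soon as it appears
                    if not chunk:
                        break                       # encoder finished: end the response
                    yield chunk
            return Response(generate(), mimetype='audio/mpeg')

    The key design point is that the HTTP response stays open for the lifetime of the encoder process, so playback can begin while encoding is still running.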

    Read the article

  • Convert decrypted .vobs to .avi with ffmpeg on Ubuntu

    - by Arcath
    I have a .vob file that has been ripped from a DVD. When I watch the .vob, it has very good quality video and 5.1 English audio, but when I run it through ffmpeg I get rubbish video and mono French audio. That was using this command:

        ffmpeg -i /samba/ripping/vobs/12161840#2.vob -f avi /samba/ripping/avis/test.avi

    I've tried a few variations on that, but it never produces anything good, just bigger files with bad video and the wrong sound. I know the video is good and the correct audio streams exist, so how do I select a 5.1 track and get good video? ffmpeg reports the .vob's details as:

        Input #0, mpeg, from '/samba/ripping/vobs/12161840#2.vob':
          Duration: 00:42:05.56, start: 0.287267, bitrate: 5738 kb/s
            Stream #0.0[0x1e0]: Video: mpeg2video, yuv420p, 720x576 [PAR 64:45 DAR 16:9], 8436 kb/s, 25 fps, 25 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x80]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
            Stream #0.2[0x81]: Audio: ac3, 48000 Hz, 5.1, s16, 384 kb/s
            Stream #0.3[0x82]: Audio: ac3, 48000 Hz, mono, s16, 192 kb/s
        Output #0, avi, to '/samba/ripping/avis/test.avi':
          Metadata:
            ISFT : Lavf52.64.2
            Stream #0.0: Video: mpeg4, yuv420p, 720x576 [PAR 64:45 DAR 16:9], q=2-31, 200 kb/s, 25 tbn, 25 tbc
            Stream #0.1: Audio: mp2, 48000 Hz, mono, s16, 64 kb/s
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.3 -> #0.1
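
    The log shows why: ffmpeg's default stream selection mapped the mono French track (#0.3), and the output video fell back to the default 200 kb/s bitrate. Explicitly mapping the streams and switching to quality-based encoding should fix both. A sketch, assuming #0.1 is the English 5.1 track (this build's map syntax is the old dotted form; newer builds spell it -map 0:1):

        ffmpeg -i '/samba/ripping/vobs/12161840#2.vob' \
               -map 0.0 -map 0.1 \
               -vcodec mpeg4 -qscale 4 \
               -acodec copy \
               /samba/ripping/avis/test.avi

    Here -qscale 4 replaces the fixed bitrate with a constant-quality setting, and -acodec copy passes the AC3 5.1 track through untouched (AC3-in-AVI is widely supported).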

    Read the article

  • Find if a user is facing a certain location using the digital compass, by constructing a triangle and using barycentric coordinates

    - by Aidan
    Hi guys, I'm building a geolocation-based application and I'm trying to figure out a way for it to recognise when the user is facing the direction of a given location (a particular long/lat coordinate). I've done some Googling and checked the SDK but can't really find anything for this. Does anyone know of a way?

    To clarify: Android knows my location, the second location, and my orientation. What I want is a way for Android to recognise when my orientation is "facing" the second location (e.g. within 90 degrees or so). We can also assume the user is stationary and needs updates every second or so, which is why getBearing() (the direction of travel) is useless.

    Alright, so it has to be maths; there appears to be no simple SDK call we can use. I did some searching of my own and found barycentric coordinates (http://www.blackpawn.com/texts/pointinpoly/default.html). What I'm trying to do now is map the camera's field of view: if the person is facing a certain direction, the program constructs a triangle around that field of view, with one vertex at the phone's position and the other two at the ends of lines going out a set distance on either side. With that triangle I could apply barycentric coordinates to test whether the target point lies inside it. Ideas, anyone?

    For example, I could take my current orientation, add 45 degrees and go out distance X on one side, then subtract 45 degrees and go out distance X on the other side, to find the two other points. Though how would I make Android know which way it should go "out"? It knows its bearing at this stage, so I need it to take that bearing and go out in that direction.
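
    For the simple "am I facing it?" test, no triangle is actually needed: compute the great-circle bearing from your position to the target and compare it with the compass azimuth, wrapping the difference at 360 degrees. A sketch of the arithmetic (plain Python; it ports line-for-line to Java on Android):

        import math

        def bearing_to(lat1, lon1, lat2, lon2):
            """Initial great-circle bearing from point 1 to point 2, degrees from north."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dlon = math.radians(lon2 - lon1)
            y = math.sin(dlon) * math.cos(phi2)
            x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
            return math.degrees(math.atan2(y, x)) % 360.0

        def is_facing(azimuth, lat1, lon1, lat2, lon2, fov=90.0):
            """True if the compass azimuth points at the target within the field of view."""
            diff = abs(azimuth - bearing_to(lat1, lon1, lat2, lon2)) % 360.0
            diff = min(diff, 360.0 - diff)   # wrap-around: 350 and 10 degrees differ by 20
            return diff <= fov / 2.0

    The wrap-around step is the part that is easy to get wrong; everything else is one atan2 call, so this runs comfortably once a second.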

    Read the article

  • How do digital certificates prove the identity of a device?

    - by StackedCrooked
    I understand how the relation between issuer and subject certificates enables verification of the subject's authenticity. If I connect to a networked device, and it sends me its certificate to identify itself, then I can verify that it was issued by a trusted party and that it has not been tampered with in any way. However, suppose I simply upload this certificate onto another device. Then what prevents me from having this device identify itself with the copied certificate?
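
    Nothing in the certificate itself prevents copying: it is public data. What the copy cannot carry is the private key matching the certificate's public key, and protocols such as TLS make the device prove possession of that key by signing a fresh random challenge. A toy sketch of the idea, assuming a recent version of the Python cryptography package:

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # the device holds a private key; its certificate binds the matching public key
        device_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = device_key.public_key()      # this is what the certificate carries

        nonce = os.urandom(32)                    # verifier sends a fresh random challenge
        signature = device_key.sign(nonce, padding.PKCS1v15(), hashes.SHA256())

        # verify() raises cryptography.exceptions.InvalidSignature unless the
        # signer actually held the private key; a device with only the copied
        # certificate cannot produce this signature
        public_key.verify(signature, nonce, padding.PKCS1v15(), hashes.SHA256())
        print("challenge answered: the peer holds the certificate's private key")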

    Read the article

  • Is it possible to broadcast audio to a SHOUTcast / Icecast / other server from Flash Player?

    - by Jeffrey
    I am trying to create a Flash client that can stream audio to an online radio server. In theory, a user could enter the server info/login, connect, and start sending data to the server, which would then be broadcast to and heard by other clients. I don't think this should be very hard, but I'm unsure what data formats to use and which server is best for the job. I'd like to be able to use one of the most popular radio servers, such as SHOUTcast. Any ideas? Thanks in advance.

    Read the article

  • Linux application that bundles multiple incoming audio and video streams into one container file?

    - by StackedCrooked
    I've been assigned to implement a video-on-demand service for a local university. Different aspects of the lectures (video, audio, screencast, whiteboard) will be recorded, and during a lecture all these data streams arrive at one Linux server. This server should transcode and bundle all the streams into one container (Matroska) file. My options seem to be:

    - write a GStreamer application
    - do something with FFmpeg
    - do something with VLC
    - ...?

    Has anyone done something similar in the past? Can you recommend something?

    Edit: For those interested, here are a few of my findings:

    - Matroska is not a good format for streaming (it's possible, but that's not its primary intent).
    - For Flash streaming you can use MPEG-4.
    - If you want to combine different videos into one video where each subvideo occupies a rectangular portion of the screen, then this GStreamer script is useful (I found it on this blog post).
    - Desktop capture works fine with VLC.
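
    For reference, the muxing half of a GStreamer solution is a short pipeline around the matroskamux element. A sketch with test sources standing in for the real capture and decode elements (GStreamer 0.10-era syntax, matching the tools of the time):

        gst-launch-0.10 matroskamux name=mux ! filesink location=lecture.mkv \
            videotestsrc ! x264enc ! mux. \
            audiotestsrc ! audioconvert ! vorbisenc ! mux.

    Each real input (an RTP stream, a capture card, a file) just replaces one of the test sources in front of the same mux, which is what makes GStreamer attractive for bundling several live feeds into one container.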

    Read the article

  • VST plugin: how do you use an FFT on an audio input buffer of arbitrary size?

    - by Led
    I'm getting interested in programming a VST plugin, and I have a basic knowledge of audio DSP and FFTs. I'd like to use VST.Net, and I'm wondering how to implement an FFT-based effect. The processing code looks like this:

        public override void Process(VstAudioBuffer[] inChannels, VstAudioBuffer[] outChannels)

    If I'm correct, normally the FFT would be applied to the input, some processing would be done on the FFT'd data, and then an inverse FFT would produce the processed sound buffer. But since the FFT works on a specified buffer size that will most probably differ from the (arbitrary) number of input/output samples, how would you handle this?
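
    The standard answer is to decouple the two sizes with FIFOs: collect host samples until a full FFT frame is available, process frames at a fixed hop with overlap-add, and queue the output so you can always hand back exactly as many samples as you were given, at the cost of some latency. A sketch of the bookkeeping (numpy; a Hann analysis window at 50% overlap satisfies the overlap-add condition, so an untouched spectrum passes the input through essentially unchanged):

        import numpy as np

        FFT_SIZE, HOP = 1024, 512                 # 50% overlap
        window = np.hanning(FFT_SIZE)
        in_fifo = np.zeros(0)                     # samples waiting for a full frame
        out_fifo = np.zeros(FFT_SIZE)             # initial zeros = the plugin's latency
        overlap = np.zeros(FFT_SIZE)              # overlap-add accumulator

        def process(host_buffer):
            """Accepts any number of samples, returns exactly as many."""
            global in_fifo, out_fifo, overlap
            in_fifo = np.concatenate([in_fifo, host_buffer])
            while len(in_fifo) >= FFT_SIZE:
                spectrum = np.fft.rfft(in_fifo[:FFT_SIZE] * window)
                in_fifo = in_fifo[HOP:]           # slide forward by one hop
                # ... modify `spectrum` here: this is the actual effect ...
                overlap += np.fft.irfft(spectrum)
                out_fifo = np.concatenate([out_fifo, overlap[:HOP]])
                overlap = np.concatenate([overlap[HOP:], np.zeros(HOP)])
            n = len(host_buffer)
            result, out_fifo = out_fifo[:n], out_fifo[n:]
            return result

    Heavier spectral edits usually also want a synthesis window (e.g. square-root Hann applied on both the analysis and synthesis sides) to suppress frame-edge artifacts.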

    Read the article

  • Successfully concatenating multiple videos

    - by wiseguydigital
    My mission is to create videos out of old web slideshows. To start with, I have JPEGs and audio files that worked as Flash slideshows in an old system, structured like this:

    Audio structure:

        my_audio_1.mp3 (a 3-second MP3 of silence)
        my_audio_2.mp3
        my_audio_3.mp3
        my_audio_4.mp3
        ... roughly 30 MP3s per slideshow

    Image structure:

        my_image_1.jpg (this acts as the opening slide)
        my_image_2.jpg
        my_image_3.jpg
        my_image_4.jpg
        ... roughly 30 images per slideshow

    As there are almost 100 slideshows that must be converted to video, I have created a web-based PHP interface to automate the process; it sits on a local system and combines the files using shell_exec(). The workflow is:

    1. Loop through each slide and make an AVI or MPEG. For instance, my_mini_video_2.avi would be a video consisting of my_image_2.jpg with a soundtrack of my_audio_2.mp3, lasting the length of my_audio_2.mp3.
    2. Join / stitch / concat all of the mini videos to create the final video, using a combination of cat and either mencoder or ffmpeg (I have also tried avimerge, but to no avail).
    3. Transcode the new 'master' video to various formats such as FLV.

    I thought this would be simple and have been close on many occasions, but it still won't work. I can't get past stage 2, because I can't get a perfect master video. I have experimented with mencoder and ffmpeg and seem to have been through every combination I can think of, but the audio and video never sync, no matter what I try.

    I have even tried creating audio-less mini videos, joining the MP3s into one long MP3 using both cat and mp3wrap, and then assigning the new long MP3 as the audio track, but this always produces either a very short file or a badly slowed-down file and makes the female voiceover sound like a male boxer!!! There appear to be no problems at all with the original files.

    Does anybody have any experience producing a video successfully from the same kind of starting point? Or any ideas on what I may be doing wrong? As an example: if I create silent mini-videos and stitch them together into 'temp-master.mpg', then join the MP3s into a single MP3 called 'temp-master-audio.mp3', the audio file's duration is 09:10 while the video file's duration is 08:35. They should be the same, so the audio will seem sloooow. I haven't posted code, as I have written lots and lots of combinations.
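
    One way to avoid the cumulative drift is to make each segment internally consistent before concatenation: mux each jpeg/mp3 pair into an MPEG program stream whose video lasts exactly as long as its audio, and only then join the segments, since MPEG program streams can be concatenated byte-wise. A sketch using current ffmpeg syntax (older builds spell -loop 1 as -loop_input):

        # one slide: loop the still image, stop when the mp3 ends
        ffmpeg -loop 1 -i my_image_2.jpg -i my_audio_2.mp3 -shortest \
               -r 25 -vcodec mpeg2video -qscale 2 -acodec mp2 -ar 44100 slide_2.mpg

        # MPEG program streams may be joined byte-wise
        cat slide_*.mpg > master.mpg

    Because -shortest cuts the looped video at the audio's end, every segment's audio and video durations match exactly, so no per-segment error can accumulate into the 35-second drift described above.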

    Read the article

  • Reproduce PIPE functionality in IronPython

    - by Muppet Geoff
    Hi, I am hoping some genius out there can help me out with this... I am using sox to merge and resample a group of WAV files and pipe the output directly to the input of NeroAACEnc for encoding to AAC format. I originally ran the process in a script, which included:

        sox.exe d:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav - | neroAacEnc.exe -q 0.5 -if - -of test.m4a

    This worked as expected. The '-' in the command line means 'pipe/redirect input/output (stdin/stdout)': sox pipes to stdout, neroAacEnc reads from stdin, and the | joins them together. I then migrated the whole solution to Python, and the equivalent became:

        from subprocess import call, Popen, PIPE
        runwav = Popen(['sox.exe', r'd:\audio\1.wav', r'd:\audio\2.wav', r'd:\audio\3.wav',
                        '-c', '1', '-r', '22050', '-t', 'wav', '-'],
                       shell=False, stdout=PIPE)
        runm4b = call(['neroAacEnc.exe', '-q', '0.5', '-if', '-', '-of', 'test.m4a'],
                      shell=False, stdin=runwav.stdout)

    This also worked like a charm, exactly as expected. Slightly more convoluted, but hey :)

    Well, now I have to move it to IronPython, and the subprocess module isn't available (the partial implementation that exists doesn't have Popen/PIPE support, and it seems silly to add a custom library when there is probably a native alternative). I should mention here that I opted for IronPython over C# because I am comfortable with Python now; however, there is a chance of moving it to native C# later, and I am using IronPython to ease myself into it :) I have no C# or .NET experience. So far I have the following equivalent, which sets up the two processes:

        from System.Diagnostics import Process

        wav = Process()
        wav.StartInfo.UseShellExecute = False
        wav.StartInfo.RedirectStandardOutput = True
        wav.StartInfo.FileName = 'sox.exe'
        wav.StartInfo.Arguments = r'd:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav -'
        wav.Start()

        m4b = Process()
        m4b.StartInfo.UseShellExecute = False
        m4b.StartInfo.RedirectStandardInput = True
        m4b.StartInfo.FileName = 'neroAacEnc.exe'
        m4b.StartInfo.Arguments = '-q 0.5 -if - -of test.m4a'
        m4b.Start()

    I know that these two processes start (I can see Nero and sox in the task manager), but what I can't figure out for the life of me is how to string the two output/input streams together, as in the previous two solutions. I have searched and searched, so I thought I'd ask! If anyone knows either how to join the two streams with the same net result as the Python and command-line versions, or a better way to achieve what I am trying to do, I would be extremely grateful! Many thanks in advance, Geoff

    P.S. A code sample based on the above would be awesome :) or a specific code example of a similar process that I can easily translate... this has broked my brayne.
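
    The missing piece is a pump loop that reads raw bytes from sox's redirected stdout and writes them to the encoder's stdin, then closes the encoder's input to signal end-of-stream. A minimal sketch continuing from the two Process objects above (this is the plain .NET Stream API, so it should behave the same from C# later):

        from System import Array, Byte

        src = wav.StandardOutput.BaseStream      # raw bytes out of sox
        dst = m4b.StandardInput.BaseStream       # raw bytes into neroAacEnc

        buf = Array.CreateInstance(Byte, 32768)
        while True:
            n = src.Read(buf, 0, buf.Length)
            if n == 0:                           # sox finished and closed its stdout
                break
            dst.Write(buf, 0, n)

        dst.Flush()
        m4b.StandardInput.Close()                # EOF lets the encoder finalize the m4a
        wav.WaitForExit()
        m4b.WaitForExit()

    Using the BaseStream on both sides matters: the StreamReader/StreamWriter wrappers are for text and would mangle the binary WAV data.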

    Read the article

  • How to use infinite live streams with the JAVE library? (Java, ffmpeg)

    - by Ole Jak
    So I want to use JAVE to save an MP3 radio stream to my file system. I have this code for encoding a file, but what should I do to save a live stream (stopping on a timer, for example)?

        File source = new File("source.wav");
        File target = new File("target.mp3");

        AudioAttributes audio = new AudioAttributes();
        audio.setCodec("libmp3lame");
        audio.setBitRate(new Integer(128000));
        audio.setChannels(new Integer(2));
        audio.setSamplingRate(new Integer(44100));

        EncodingAttributes attrs = new EncodingAttributes();
        attrs.setFormat("mp3");
        attrs.setAudioAttributes(audio);

        Encoder encoder = new Encoder();
        encoder.encode(source, target, attrs);
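
    JAVE's Encoder.encode() appears to be strictly file-to-file, so an endless live stream never reaches the end-of-input it waits for. One workaround is to invoke the ffmpeg that JAVE wraps directly: ffmpeg accepts an HTTP stream URL as input, and -t bounds the recording time, giving the stop-on-timer behaviour. A hedged sketch (the stream URL is a placeholder):

        # record one hour of the stream, then exit
        ffmpeg -i http://radio.example.com/stream -t 3600 \
               -acodec libmp3lame -ab 128k -ac 2 -ar 44100 target.mp3

    From Java this can be launched with ProcessBuilder, or the timer can simply destroy the process when recording should stop.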

    Read the article

  • Trouble installing Ubuntu Server on VirtualBox (OS X)

    - by audio.zoom
    Hello all, I'm trying to install Lucid Lynx 10.04.2 Server in VirtualBox on Snow Leopard. I have two freshly downloaded server ISO files, one i386 and one 64-bit. When I try to start the virtual machine with either one attached as the CD drive, I get the same error:

        Failed to open a session for the virtual machine Ub.
        Failed to load VMMR0.r0 (VERR_SUPLIB_OWNER_NOT_ROOT).
        Unknown error creating VM (VERR_SUPLIB_OWNER_NOT_ROOT).

    I couldn't find anything about it on Google, so I'm trying to see if anyone else has dealt with this issue. Thanks much in advance!

    Edit: just downloaded the 32-bit desktop edition, to no avail.

    Edit 2: ran Disk Utility's 'repair permissions' and restarted. New error: VERR_SUPLIB_WORLD_WRITABLE (instead of VERR_SUPLIB_OWNER_NOT_ROOT).
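
    Both codes come from VirtualBox's support-library hardening check: VERR_SUPLIB_OWNER_NOT_ROOT and VERR_SUPLIB_WORLD_WRITABLE mean some directory on the path to the application is not root-owned, or is writable by group/others. A possible fix, assuming the default install location (inspect with ls -l before changing anything):

        sudo chown -R root:admin /Applications/VirtualBox.app
        sudo chmod -R go-w /Applications/VirtualBox.app

    If the error persists, the offending permissions may be on a parent directory such as /Applications itself rather than on the bundle.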

    Read the article

  • Does any economically feasible, publicly available software compare audio files to determine if they are dupes?

    - by drachenstern
    In the vein of this question (http://unix.stackexchange.com/questions/3037/is-there-an-easy-way-to-replace-duplicate-files-with-hardlinks): is there any software that will automatically parse my song library and find the tracks that really are duplicates, so that one copy can be eliminated? Here's an example: my brother used to be a huge fan of remixing CDs. He would take all of his favorite tracks and put them on one disc, then use my computer to read them in. So now I have something like six copies of Californication on my HDD, and they all differ by a few bytes overall. I have hundreds of songs in my library like this, and I want to trim them down to unique copies. They don't all have correct ID3 tags, so figuring out that Untitled(74).mp3 is the same as californication.mp3 is the same as whowrotethis.mp3 is tricky. I do NOT want a concert recording and a studio-album rip to be considered the same (if I just did artist/title matching I would end up with that scenario, which doesn't work for me). I use Windows (pick your platform) and will be getting an OS X box later in the year; I'll run Linux if that's what it takes to get this organized. I have unprotected AAC and MP3 files. Bonus points for handling WAV or MIDI, and bonus points for converting from those into MP3 (I can always use Audacity and LAME to convert later if I know they match, or to convert ahead of time if that will make things easier). Are there any suggestions, or do I need to go to Programmers or SO, build a list of requirements for comparing these things, and write the software myself?
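
    What this describes is acoustic fingerprinting, and free software exists for it: Chromaprint's fpcalc tool reduces the decoded audio (MP3, AAC, and WAV all work) to a fingerprint that survives re-encoding, so byte-different rips of the same track still match, while a live performance fingerprints differently from the studio version. A crude pairing sketch, assuming fpcalc is on the PATH:

        import subprocess

        def fingerprint(path):
            # fpcalc -raw prints lines like FINGERPRINT=123,456,...
            out = subprocess.check_output(['fpcalc', '-raw', path]).decode()
            for line in out.splitlines():
                if line.startswith('FINGERPRINT='):
                    return [int(v) for v in line.split('=', 1)[1].split(',')]

        def similar(a, b, bit_budget=4):
            # fingerprints are sequences of 32-bit words; count near-equal words.
            # Crude: assumes both rips start at the same point (no offset search).
            n = min(len(a), len(b))
            hits = sum(bin(a[i] ^ b[i]).count('1') <= bit_budget for i in range(n))
            return hits / float(n)

        if similar(fingerprint('Untitled(74).mp3'), fingerprint('californication.mp3')) > 0.9:
            print('probably the same recording')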

    Read the article

  • How does Amarok rip Audio CDs (in Ubuntu Lucid)?

    - by Hanno Fietz
    I'm in the process of moving my CD collection into my Amarok library. Mostly it works great; sometimes, however, the process just hangs forever. The problem seems to occur at random (i.e. often, but not always, at the same disc/track), and the consequences range from none (successful after cancel/retry) to Amarok's internal database becoming completely messed up. I would like to investigate and file a proper bug report, or find a fix or workaround, but I don't understand how Amarok does the ripping. When all is working, there's a lame process encoding to a temporary file, which appears in my collection once it's finished. When the process hangs, that lame command is still there, waiting forever for data on stdin, which seems to come from a third process. That appears to be kio_audiocd, but I don't know whether that's correct or what it's supposed to do. What's going on?

    Read the article

  • How to stream audio and video files, but use any media player, on Windows (without using Windows file sharing)?

    - by RamyenHead
    I want to access and play media files on machine S (Windows XP) from machine C (Windows XP). Using Windows file sharing ("share this folder" stuff), if it worked, I would share the folder containing the media files on machine S and would be able to play them, sitting in front of C, using any media player I want; Windows somehow ensures that the remote files behave like local files. But Windows file sharing won't work for me, so is there an alternative? If the two machines were both Linux, I would install an SSH server on S and use Nautilus from C to access and play the media files. The reason I can't use Windows file sharing is that my campus uses two different subnets, S and C are on different subnets, and the firewall governing the whole campus network apparently doesn't allow file sharing between subnets. I tried changing the Windows Firewall settings on S to allow C in, and it still wouldn't work, so it must be the other firewall.

    Read the article

  • How to split audio into multiple channels from optical S/PDIF or 1/8"?

    - by Josh M.
    I have a motherboard with an optical S/PDIF output (or a 1/8" output). I'd like to "split" that signal into the appropriate channels so that I can connect it to the wires behind my car's head unit, which in turn run to the amp. The factory Bose amp just takes a single connector with a million wires running out of it, which is why I need to separate the signal into individual channels. On the other end there are four RCA connectors: front left, front right, rear left, rear right. The subwoofer signal does not require an additional connection. Edit: revised to include S/PDIF or 1/8".

    Read the article

  • Quality-wise, is Windows Media Audio 10 Professional equivalent to WMA 9.2?

    - by Louis
    I noticed that for encoding CD rips, Zune still uses WMA 9.2 instead of WMA 10 Pro. For a given file, the highest-quality VBR setting looks like this: VBR Quality 98, 44 kHz, stereo, 1-pass VBR. If I encode the same file with WMA 10 Pro at the same settings, the resulting file is about 20% smaller. Using my ears, I'm unable to tell the difference, but I wonder whether that was the goal of WMA 10 Pro (to be as good as WMA 9.2 at a lower bitrate). Is the quality of a WMA 10 Pro file equal to that of a WMA 9.2 file encoded with the same settings?

    Read the article
