Search Results

Search found 7178 results on 288 pages for 'audio playing'.

Page 71 of 288.

  • Loop OpenAL source with offset

    - by ressaw
    The OpenAL API states that setting an offset on a looping source still causes the sound to loop back to zero. But is there a way to loop and still have an offset somehow? I have an mp3, and since it contains headers with information at the start of the file, there's a small but noticeable delay when it rewinds to loop. If not, are there any other compressed formats that don't contain these empty headers?
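
    A workaround, assuming the delay really is the encoder/ID3 padding that MP3 carries at the start: decode the file to PCM yourself, skip the leading silent samples, and hand only the trimmed data to OpenAL, so the buffer loops exactly where you want. (Ogg Vorbis is the usual alternative format here, since it loops gaplessly.) A minimal C sketch, where decodeMp3ToPcm and the leadIn count are hypothetical stand-ins for your decoder and your measured padding:

        /* Hypothetical decoder call: pcm holds decoded 16-bit mono samples. */
        ALsizei totalSamples;
        short *pcm = decodeMp3ToPcm("sound.mp3", &totalSamples);
        ALsizei leadIn = 1152;              /* e.g. one MP3 frame of encoder padding */

        ALuint buffer, source;
        alGenBuffers(1, &buffer);
        alGenSources(1, &source);
        /* Upload only the samples after the silent lead-in. */
        alBufferData(buffer, AL_FORMAT_MONO16, pcm + leadIn,
                     (totalSamples - leadIn) * sizeof(short), 44100);
        alSourcei(source, AL_BUFFER, buffer);
        alSourcei(source, AL_LOOPING, AL_TRUE);
        alSourcePlay(source);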

    Read the article

  • concatenating mp3 files or joining mp3 files using java

    - by Sukhhhh
    We would like to concatenate/merge/join mp3 files seamlessly using Java in any environment. We are trying the following options at the moment (please let us know of any others):

    1. Using JMF -- ruled out, as it is supported only on Windows: http://java.sun.com/javase/technologies/desktop/media/jmf/reference/faqs/index.html
    2. Using Tritonus, JLayer and LAME in combination.

    Please share any thoughts, and links that discuss concatenating/merging/joining mp3 files using option 2.
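
    One more data point, hedged because truly gapless joins depend on each file's encoder delay: MPEG audio frames are self-contained, so when the inputs share a sample rate and channel layout, plain byte-level appending (ideally after stripping ID3 tags) produces a file that most decoders play back-to-back. A sketch of that naive approach in plain Java:

        import java.io.*;

        public class Mp3Concat {
            // Appends the raw bytes of each input to out. ID3 tag stripping is
            // omitted; mid-stream tags are skipped by most decoders anyway.
            public static void concat(File out, File... inputs) throws IOException {
                try (OutputStream os = new BufferedOutputStream(new FileOutputStream(out))) {
                    byte[] buf = new byte[8192];
                    for (File in : inputs) {
                        try (InputStream is = new BufferedInputStream(new FileInputStream(in))) {
                            int n;
                            while ((n = is.read(buf)) != -1) {
                                os.write(buf, 0, n);
                            }
                        }
                    }
                }
            }
        }

    For sample-accurate, gap-free joins you would still decode with JLayer and re-encode with LAME, which is what option 2 amounts to.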

    Read the article

  • can not get jplayer plugin to work

    - by Richard
    Hello, I hope somebody has some experience with the jPlayer plugin. I have been staring at the source code of the demos and looking in Firebug, but I can't see why it is not showing at all. It also tries to use the flash file, but in other examples the embed code does not show up in the container div either. How could I get this to work, or debug it?

        $(document).ready(function () {
            $("#jpId").jPlayer({
                ready: function () {
                    this.element.jPlayer("setFile", "/mp3/nobodymove.mp3"); // Defines the mp3
                }
            });
        });

    thanks, Richard
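
    One guess worth ruling out, since jPlayer falls back to its SWF for mp3 playback: the plugin needs to be told where Jplayer.swf lives via the swfPath option, and a wrong path tends to fail quietly. A sketch with the path made explicit (the /js location is an assumption; point it at wherever the SWF actually sits):

        $(document).ready(function () {
            $("#jpId").jPlayer({
                ready: function () {
                    this.element.jPlayer("setFile", "/mp3/nobodymove.mp3");
                },
                swfPath: "/js" // hypothetical location of Jplayer.swf
            });
        });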

    Read the article

  • VB FFT - stuck understanding relationship of results to frequency

    - by WaveyDavey
    Trying to understand an FFT (Fast Fourier Transform) routine I'm using (stealing)(recycling). Input is an array of 512 data points which are a sampled waveform; test data is generated into this array, and the FFT transforms it into the frequency domain. I'm trying to understand the relationship between frequency, period, sample rate and position in the FFT array. I'll illustrate with examples:

    Example 1: sample rate is 1000 samples/s; I generate a set of samples at 10Hz. The input array has peak values at arr(28), arr(128), arr(228) ... so the period is 100 sample points. The peak value in the FFT array is at index 6 (excluding a huge value at 0).

    Example 2: sample rate is 8000 samples/s; I generate a set of samples at 440Hz. Input array peak values include arr(7), arr(25), arr(43), arr(61) ... so the period is 18 sample points. The peak value in the FFT array is at index 29 (excluding a huge value at 0).

    How do I relate the index of the peak in the FFT array to frequency?
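
    The standard relationship, checked against both examples: bin k of an N-point FFT (k zero-based) corresponds to k * sampleRate / N Hz, so adjacent bins are sampleRate / N Hz apart. Both observed peaks land one index above the predicted bin, which suggests this routine's VB arrays are 1-based:

        Example 1: 10 Hz  / (1000/512 Hz per bin) = bin 5.12  ->  bin 5, i.e. 1-based index 6
        Example 2: 440 Hz / (8000/512 Hz per bin) = bin 28.2  ->  bin 28, i.e. 1-based index 29

    So here frequency = (index - 1) * sampleRate / 512, assuming the 1-based indexing holds: (6 - 1) * 1000/512 = 9.8 Hz and (29 - 1) * 8000/512 = 437.5 Hz, the nearest bins to 10 Hz and 440 Hz.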

    Read the article

  • NAudio Mp3 Playback in Console

    - by Kurru
    Hi, I'm trying to make a helper DLL that will simplify the NAudio framework into a subset of functions I'm likely to need, but I've hit a stumbling block right off the bat. I'm trying to use the following code to play an mp3, but I'm not hearing anything at all. Any help would be appreciated!

        static WaveOut waveout;
        static WaveStream playback;
        static System.Threading.ManualResetEvent wait = new System.Threading.ManualResetEvent(false);

        static void Main(string[] args)
        {
            System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(PlaySong));
            t.Start();
            wait.WaitOne();
            System.Threading.Thread.Sleep(2 * 1000);
            waveout.Stop();
            waveout.Dispose();
            playback.Dispose();
        }

        static void PlaySong()
        {
            waveout = new WaveOut();
            playback = OpenMp3Stream(@"songname.mp3");
            waveout.Init(playback);
            waveout.Play();
            Console.WriteLine("Started");
            wait.Set();
        }

        private static WaveChannel32 OpenMp3Stream(string fileName)
        {
            WaveChannel32 inputStream;
            WaveStream mp3Reader = new Mp3FileReader(fileName);
            WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
            WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
            inputStream = new WaveChannel32(blockAlignedStream);
            return inputStream;
        }
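
    A likely culprit, offered as a guess rather than a confirmed diagnosis: the parameterless WaveOut constructor uses a window callback, which needs a Windows message pump that neither a console app nor this worker thread has, so playback never advances. The usual console fix is a function callback:

        // Function callbacks are serviced without a message pump,
        // so they work in console apps and on plain worker threads.
        waveout = new WaveOut(WaveCallbackInfo.FunctionCallback());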

    Read the article

  • Sound/Silence in a wav file.

    - by Vivek
    Hi, I am searching for a utility/code that could detect and tell me whether my 1-minute wav file contains sound or not. Alternatively, if it could detect the duration of silence (if any exists) at any position in the wav file, that would also serve the purpose. Does SoX support a command for that? I tried Java, but didn't find anything in JMF. Thanks, Vivek
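
    SoX can cover both cases; a sketch, with the 1% threshold and 0.5 s duration as tunable guesses:

        # Print amplitude statistics (to stderr); a near-zero RMS amplitude
        # means the file is effectively silent.
        sox input.wav -n stat

        # Strip silence longer than 0.5 s anywhere in the file; comparing the
        # output duration to the input's tells you how much silence there was.
        sox input.wav trimmed.wav silence 1 0.5 1% -1 0.5 1%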

    Read the article

  • (Android SDk 2.1) Getting error when I use setAudioSource and setVideoSource

    - by Rainfer
    I got the following error when I call setAudioSource and setVideoSource:

        03-16 10:26:25.302: ERROR/audio_input(52): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
        03-16 10:26:25.302: ERROR/audio_input(52): VerifyAndSetParameter failed
        03-16 10:26:25.302: ERROR/CameraInput(52): Unsupported parameter(x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value)
        03-16 10:26:25.302: ERROR/CameraInput(52): VerifiyAndSetParameter failed on parameter #0

    This error happens on both the emulator and the device (I am using a Google Nexus One). I have set the CAMERA and RECORD_AUDIO user permissions already. I have spent many days on this but still cannot figure out the cause of this runtime error.
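
    For what it's worth, those x-pvmf lines come from the OpenCORE media engine and are often logged even on setups that go on to record fine, so the real failure may be elsewhere; MediaRecorder is also strict about call order. The documented sequence looks roughly like this (the output path and preview surface are placeholders):

        MediaRecorder recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);
        recorder.setOutputFile("/sdcard/recording.3gp");    // hypothetical path
        recorder.setPreviewDisplay(holder.getSurface());    // your SurfaceHolder
        recorder.prepare();
        recorder.start();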

    Read the article

  • Detect and record a sound with python

    - by Jean-Pierre
    I'm using this program to record a sound in Python:

        import pyaudio
        import wave
        import sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = "output.wav"

        p = pyaudio.PyAudio()
        stream = p.open(format = FORMAT,
                        channels = CHANNELS,
                        rate = RATE,
                        input = True,
                        frames_per_buffer = chunk)

        print "* recording"
        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        print "* done recording"

        stream.close()
        p.terminate()

        # write data to WAVE file
        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    I want to change the program to start recording when sound is detected by the sound card input. I should probably compare the input sound level in each chunk, but how do I do this?
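
    One way to do that comparison, sketched with the standard-library audioop module (the threshold of 500 is an arbitrary starting point to tune by experiment):

        import audioop

        THRESHOLD = 500  # tune by experiment

        def wait_for_sound(stream, chunk):
            # Block until a chunk's RMS level crosses the threshold, then
            # return that chunk so the triggering audio isn't lost.
            while True:
                data = stream.read(chunk)
                if audioop.rms(data, 2) > THRESHOLD:  # width=2 bytes for paInt16
                    return data

    Calling wait_for_sound(stream, chunk) before the recording loop, and prepending its return value to all, would start the capture on the first audible chunk.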

    Read the article

  • J2ME Camera and Sound Recorder Access On A Windows Mobile

    - by Steven Knox
    I'm currently involved in a research project that requires me to access a Windows Mobile phone's camera and sound recorder with J2ME to, well, take pictures and record sound. The phone has to be Windows Mobile for reasons that have nothing to do with me, and the software has to be written in Java, also not my decision. So I need to find a phone that supports this (if one exists); has anyone found one? Thank you for your help. (Note that a phone supporting MMAPI (JSR 135) does not imply that you can use the camera and sound recorder; our current phone supports MMAPI but has no such access.)

    Read the article

  • Record AVAudioPlayer output using AVAudioRecorder

    - by Kieran
    In my app the user plays a sound by pressing a button. There are several buttons, which can be played simultaneously. The sounds are played using AVAudioPlayer instances, and I want to record the output of these instances using AVAudioRecorder. I have set it all up, and a file is created and records, but when I play it back there is no sound; it is just a silent file the length of the recording. Does anyone know if there is a setting I am missing with AVAudioPlayer or AVAudioRecorder? Thanks
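
    Two hedged observations rather than a definite fix: AVAudioRecorder only captures the hardware input (the microphone), never the app's own output mix, so recording the players directly would need a lower-level route such as Audio Units. And for the mic to stay live while players are running, the audio session generally has to be in the play-and-record category:

        // Without PlayAndRecord, activating the recorder can silence the
        // input while playback is active (or vice versa).
        NSError *error = nil;
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                               error:&error];
        [[AVAudioSession sharedInstance] setActive:YES error:&error];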

    Read the article

  • NAudio playback won't stop successfully

    - by Kurru
    Hi, when using NAudio to play back an mp3 in the console, I can't figure out how to stop the playback. When I call waveout.Stop() the code just stops running and waveout.Dispose() never gets called. Is it something to do with the function callback? I don't know how to fix it if so.

        static string MP3 = @"song.mp3";
        static WaveOut waveout;
        static WaveStream playback;

        static void Main(string[] args)
        {
            waveout = new WaveOut(WaveCallbackInfo.FunctionCallback());
            playback = OpenMp3Stream(MP3);
            waveout.Init(playback);
            waveout.Play();
            Console.WriteLine("Started");
            Thread.Sleep(2 * 1000);
            Console.WriteLine("Ending");
            if (waveout.PlaybackState != PlaybackState.Stopped)
                waveout.Stop();
            Console.WriteLine("Stopped");
            waveout.Dispose();
            Console.WriteLine("1st dispose");
            playback.Dispose();
            Console.WriteLine("2nd dispose");
        }

        private static WaveChannel32 OpenMp3Stream(string fileName)
        {
            WaveChannel32 inputStream;
            WaveStream mp3Reader = new Mp3FileReader(fileName);
            WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
            WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
            inputStream = new WaveChannel32(blockAlignedStream);
            return inputStream;
        }
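
    A guess about the hang, based on how function callbacks are serviced: Stop() waits for the device to return its buffers, and with WaveCallbackInfo.FunctionCallback() that wait can deadlock against the thread delivering the callbacks. If a newer NAudio is an option, WaveOutEvent sidesteps callbacks entirely by polling from its own background thread:

        // WaveOutEvent drives playback with event handles and a background
        // thread, so Stop() doesn't depend on a callback being serviced.
        var waveout = new WaveOutEvent();
        waveout.Init(playback);
        waveout.Play();
        // ... later:
        waveout.Stop();
        waveout.Dispose();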

    Read the article

  • Looping music with intro in XNA using SoundEffect

    - by Jordan Roher
    I have two sound files: Sound A is an 18 second intro designed to be played once Sound B is a 1 minute looping track I'd like to play Sound A once, then once Sound A is done, immediately play Sound B and keep looping Sound B until I tell it to stop. This is supposed to be looping town music in an RPG. I've tried doing this in code using just SoundEffect, but there's a tiny yet noticeable gap between the end of Sound A and the beginning of Sound B. Even if I put monitoring code watching Sound A's SoundEffectInstance.State in the Update() function, I haven't been able to start Sound B exactly when Sound A finishes so that it's seamless. I'd prefer to use SoundEffect because I can load WMA files rather than being stuck with WAVs in XACT.
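
    One way to get a seamless handoff without polling, sketched under the assumption that both tracks can be loaded as raw 16-bit PCM at the same sample rate and channel count: DynamicSoundEffectInstance (XNA 4.0) lets you submit buffers yourself, so the loop's first buffer follows the intro's last with no gap:

        // introPcm and loopPcm: raw 16-bit PCM byte arrays in the same format.
        var music = new DynamicSoundEffectInstance(44100, AudioChannels.Stereo);
        music.BufferNeeded += (sender, e) =>
        {
            // Once the intro drains, this keeps re-feeding the loop forever.
            music.SubmitBuffer(loopPcm);
        };
        music.SubmitBuffer(introPcm);
        music.Play();

    Stopping is then just music.Stop(). The trade-off is that you need the PCM bytes yourself (e.g. by parsing the .wav data chunk), which gives up some of the WMA convenience.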

    Read the article

  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is X axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:

        if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
        {
            if (fftBufferManager->HasNewAudioData())
            {
                if (fftBufferManager->ComputeFFT(l_fftData))
                    [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
                else
                    hasNewFFTData = NO;
            }
            if (hasNewFFTData)
            {
                int y, maxY;
                maxY = drawBufferLen;
                for (y = 0; y < maxY; y++)
                {
                    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                    CGFloat fftIdx = yFract * ((CGFloat)fftLength);
                    double fftIdx_i, fftIdx_f;
                    fftIdx_f = modf(fftIdx, &fftIdx_i);
                    SInt8 fft_l, fft_r;
                    CGFloat fft_l_fl, fft_r_fl;
                    CGFloat interpVal;
                    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
                    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
                    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
                    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
                    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                    interpVal = CLAMP(0., interpVal, 1.);
                    drawBuffers[0][y] = (interpVal * 120);
                }
                cycleOscilloscopeLines();
            }
        }

    From my understanding, this part of the code is what is used to decide which magnitude to draw for each frequency in the UI. My question is how I can determine what frequency each iteration (or y value) represents inside the for loop. For example, if I want to know what the magnitude is for 6kHz, I'm thinking of adding a line similar to the following:

        if (yValueRepresentskHz(y, 6))
            NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));

    Please note that although they chose to use the variable name y, from what I understand it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of drawBuffers[0][y] represents the y-axis.
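
    The mapping, assuming the FFT was fed audio at the session's hardware sample rate (44.1 kHz unless configured otherwise): fftLength here is half the frame count, so adjacent FFT bins are sampleRate / (2 * fftLength) Hz apart, and the frequency drawn at a given y is fftIdx times that spacing. Inside the loop that could look like the following, where sampleRate is a placeholder you would pull from your audio session setup:

        CGFloat binWidth = sampleRate / (2.0 * (CGFloat)fftLength); // Hz between FFT bins
        CGFloat freqForY = fftIdx * binWidth;                       // frequency at this y
        if (fabs(freqForY - 6000.0) < binWidth / 2.0)
            NSLog(@"The magnitude for ~6kHz is %f", interpVal * 120);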

    Read the article

  • Text to speech on iPhone

    - by lostInTransit
    Hi, is there any way to convert text to speech in an iPhone app? Is it possible using the SDK? Are there any third-party TTS engines available for the iPhone? (AFAIK Acapela's is not yet released.) Thanks

    Read the article

  • AudioQueue in-memory playback example

    - by Jonesy
    Does anybody know of any examples using AudioQueue that play from an in-memory source? All the examples I can find play from files (using AudioFileReadPackets) but in my particular case I am generating the data myself in realtime so ideally, I want to enqueue the data myself rather than sucking it out of a file using the callback. Any help much appreciated.
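
    Nothing in the API forces the callback to read from a file; it only has to fill buffer->mAudioData before enqueueing. A compressed-to-the-basics sketch for 16-bit mono PCM held in (or generated into) memory; error handling is omitted and names like MemorySource are my own:

        #include <AudioToolbox/AudioToolbox.h>
        #include <string.h>

        typedef struct {
            const UInt8 *data;   /* source bytes; or synthesize into the buffer instead */
            size_t length;
            size_t offset;
        } MemorySource;

        static void FillBuffer(void *userData, AudioQueueRef queue, AudioQueueBufferRef buf) {
            MemorySource *src = (MemorySource *)userData;
            size_t remaining = src->length - src->offset;
            size_t n = buf->mAudioDataBytesCapacity < remaining
                     ? buf->mAudioDataBytesCapacity : remaining;
            if (n == 0) { AudioQueueStop(queue, false); return; }
            memcpy(buf->mAudioData, src->data + src->offset, n); /* or generate samples here */
            buf->mAudioDataByteSize = (UInt32)n;
            src->offset += n;
            AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
        }

        void PlayFromMemory(MemorySource *src) {
            AudioStreamBasicDescription fmt = {0};
            fmt.mSampleRate       = 44100;
            fmt.mFormatID         = kAudioFormatLinearPCM;
            fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
            fmt.mChannelsPerFrame = 1;
            fmt.mBitsPerChannel   = 16;
            fmt.mBytesPerFrame    = 2;
            fmt.mFramesPerPacket  = 1;
            fmt.mBytesPerPacket   = 2;

            AudioQueueRef queue;
            AudioQueueNewOutput(&fmt, FillBuffer, src, NULL, NULL, 0, &queue);
            for (int i = 0; i < 3; i++) {          /* prime three buffers */
                AudioQueueBufferRef buf;
                AudioQueueAllocateBuffer(queue, 16384, &buf);
                FillBuffer(src, queue, buf);
            }
            AudioQueueStart(queue, NULL);
        }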

    Read the article

  • loading mp3 from file using random access to flash.media.Sound

    - by Irfan Mulic
    We are migrating an application from Delphi to Flex (AIR) that plays mp3 files stored in one big random-access file; it keeps positions and sizes for extracting each mp3's data into a FileStream/MemoryStream, and then we use bass.dll to play it from the memory stream. Now I have to play those same mp3s in Flex, but I am not sure how. I was reading something similar about reading/writing data using ByteArray here, but how do I apply it to flash.media.Sound? http://livedocs.adobe.com/flex/3/html/help.html?content=ByteArrays_2.html Any help?
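
    If targeting AIR 3 / Flash Player 11 or later is acceptable, Sound.loadCompressedDataFromByteArray() takes mp3 bytes straight from memory, which matches this use case exactly. A sketch, where extractMp3Bytes stands in for the existing position/size extraction logic:

        import flash.media.Sound;
        import flash.utils.ByteArray;

        // Hypothetical: one complete mp3 clip sliced out of the big file
        // using the existing offset/size bookkeeping.
        var bytes:ByteArray = extractMp3Bytes(position, size);
        var sound:Sound = new Sound();
        sound.loadCompressedDataFromByteArray(bytes, bytes.length);
        sound.play();

    On older runtimes the usual fallback was wrapping the mp3 bytes in a dynamically built SWF, which is considerably more work.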

    Read the article

  • If I want to play the same sound 10 times per second, must I have 10 copies of that sound in memory?

    - by mystify
    I have a sound that needs to be played 10 times per second. The sound is 1 second long, so up to 10 instances overlap. However, as far as I understand the Finch sound library, I would need 10 different instances of the sound in place to play it 10 times at almost the same time. When I have just one instance, the sound stops and plays from the beginning on every trigger, rather than overlapping with itself. How do I do this?
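
    The general answer, hedged since it describes OpenAL (which Finch wraps) rather than Finch's own API: you do need multiple simultaneous voices, but not multiple copies of the sample data, because many sources can share one buffer. A round-robin voice pool in plain OpenAL looks like:

        #define VOICES 10
        ALuint buffer;            /* filled once with the decoded sample */
        ALuint sources[VOICES];
        int next = 0;

        alGenSources(VOICES, sources);
        for (int i = 0; i < VOICES; i++)
            alSourcei(sources[i], AL_BUFFER, buffer);  /* all share one buffer */

        /* On each trigger: play the next voice, overlapping earlier ones. */
        alSourcePlay(sources[next]);
        next = (next + 1) % VOICES;

    So the PCM data exists once in memory; only the ten lightweight source objects are duplicated.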

    Read the article

  • How to sound audible bell from crontab

    - by user1526251
    The command line

        /bin/echo -e "\007"

    in bash will ring the bell. With the line

        /bin/echo -e "\007"

    in my crontab I expected the bell to ring every minute, but it's silent. I know crontab is working, because the line

        /bin/touch $HOME/jkjkjk

    updates the file jkjkjk every minute as it should. I found a posting from some years ago suggesting that standard output should be directed to /dev/tty1 in crontab. But the line

        /bin/echo "\007" /dev/tty1

    still fails. What should I try next?
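
    Two things stand out, the second being the likely fix: cron jobs run with no controlling terminal, so their standard output goes nowhere audible, and the last line above passes /dev/tty1 to echo as literal text because it lacks a redirection operator. With an explicit redirect (schedule fields shown as every-minute for illustration) the bell should reach the console:

        * * * * * /bin/echo -e "\007" > /dev/tty1

    (/bin/echo with -e is fine here; printf '\a' > /dev/tty1 is an equivalent.)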

    Read the article
