Search Results

Search found 7178 results on 288 pages for 'audio playing'.

  • No mic activity with setLoopBack set to false - AS3

    - by Franky
    Trying to figure out why setLoopBack needs to be set to true for microphone activity to be detected. The problem is the echo feedback when using a MacBook with a built-in mic. If anyone has ideas about this, let me know. Right now I'm experimenting with toggling gain depending on activity to simulate echo reduction, which is not optimal. @lessfame

  • Piping SoX in Python - subprocess alternative?

    - by Cochise Ruhulessin
    I use SoX in an application. The application uses it to apply various operations to audio files, such as trimming. This works fine:

        from subprocess import Popen, PIPE
        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        pipe = Popen(['sox', '-t', 'mp3', '-', 'test.mp3', 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate(input=open('test.mp3', 'rb').read())
        if errors:
            raise RuntimeError(errors)

    This causes problems on large files, however, since read() loads the complete file into memory, which is slow and may cause the pipe's buffer to overflow. A workaround exists:

        from subprocess import Popen, PIPE
        import tempfile
        import uuid
        import shutil
        import os

        kwargs = {'stdin': PIPE, 'stdout': PIPE, 'stderr': PIPE}
        tmp = os.path.join(tempfile.gettempdir(), uuid.uuid1().hex + '.mp3')
        pipe = Popen(['sox', 'test.mp3', tmp, 'trim', '0', '15'], **kwargs)
        output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)
        shutil.copy2(tmp, 'test.mp3')
        os.remove(tmp)

    So the question stands: are there any alternatives to this approach, aside from writing a Python extension to the SoX C API?
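    One alternative that avoids both the full read() and the temporary file is to hand the open file object to the child process as its stdin, so the operating system streams the data; a minimal sketch, reusing the same SoX arguments as above:

        from subprocess import Popen, PIPE

        # let the OS stream the file into sox instead of buffering it in Python
        with open('test.mp3', 'rb') as src:
            pipe = Popen(['sox', '-t', 'mp3', '-', 'out.mp3', 'trim', '0', '15'],
                         stdin=src, stdout=PIPE, stderr=PIPE)
            output, errors = pipe.communicate()
        if errors:
            raise RuntimeError(errors)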

  • What is the best way to merge mp3 files?

    - by Dan Williams
    I've got many, many MP3 files that I would like to merge into a single file. I've used the command-line method

        copy /b 1.mp3+2.mp3 3.mp3

    but it's a pain when there are a lot of them and their names are inconsistent. The reported duration never seems to come out right either.
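    If installing SoX with MP3 support is an option, re-encoding in one pass both concatenates the files and writes a correct duration header, which the raw byte copy does not; a hedged sketch (file names assumed):

        import glob
        import subprocess

        # sorted() imposes a deterministic order on inconsistently named files;
        # given several inputs and one output, sox concatenates the inputs
        files = sorted(glob.glob('*.mp3'))
        subprocess.check_call(['sox'] + files + ['merged.mp3'])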

  • Extracting note onset from MIDI

    - by Dolphin
    Hi, I need to extract musical features (note details: pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (it has two staves, treble and bass; the bass may also have chords). I'm using the jMusic API to extract these details. My approach is to go through each score, into parts, then phrases, and finally notes, and extract the details. With this approach it reads all the treble notes first and then the bass notes, but chords are not captured (only a single note of each chord is taken), and I cannot identify from which point onwards the bass notes start. So I tried to get the note onsets (the start time of each note being played), since the start times of the treble and bass notes at the beginning of the piece should be the same, but I cannot extract the note onset using the jMusic API; each time it shows 0.0. Is there any way I can identify the voice (treble or bass) of a note, and also all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance.
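    For what it's worth, a MIDI file does not store an onset per note: each event carries a delta time in ticks from the previous event on its track, a chord is simply several note-on events with zero delta between them, and treble and bass typically live on separate tracks or channels. A sketch that recovers absolute onsets with mido, a Python MIDI library, shown here purely for illustration ('piece.mid' is a placeholder):

        import mido

        mid = mido.MidiFile('piece.mid')
        for i, track in enumerate(mid.tracks):
            ticks = 0
            for msg in track:
                ticks += msg.time  # delta time in ticks since the previous event
                if msg.type == 'note_on' and msg.velocity > 0:
                    # members of a chord share the same absolute onset tick
                    print(i, msg.channel, msg.note, ticks)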

  • Pitch detection and change in Java

    - by omegas27
    Hello, I'm French, so I'm sorry if some of my sentences are hard to understand. Anyway, I saw in some topics that the pitch can be detected thanks to the Fourier transform, but I didn't really understand how to implement it. Moreover, I didn't find how to change the pitch of a WAV file and, if possible, an MP3 file. I am playing the music using JavaSound for the WAV and JLayer for the MP3. Thanks.
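    As a starting point, the crudest FFT-based pitch estimate just picks the largest peak of the magnitude spectrum; a sketch with NumPy, illustrative only (serious pitch detectors use autocorrelation or harmonic analysis, and pitch shifting additionally needs resampling or a phase vocoder):

        import numpy as np

        def estimate_pitch(samples, rate):
            """Naive pitch estimate: the frequency of the largest spectral peak."""
            windowed = samples * np.hanning(len(samples))
            spectrum = np.abs(np.fft.rfft(windowed))
            freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
            return freqs[np.argmax(spectrum)]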

  • Getting the following exception: javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz not supported

    - by angelina
    Dear All, I tried to play and get the duration of a wave file using the code below, but got the following exception. Please help me resolve it. I am using a µ-law wave file.

        URL url = new URL("foo.wav");
        Clip clip = AudioSystem.getClip();
        AudioInputStream ais = AudioSystem.getAudioInputStream(url);
        clip.open(ais);
        System.out.println(clip.getMicrosecondLength());

        javax.sound.sampled.LineUnavailableException: line with format ULAW 8000.0 Hz, 8 bit, mono, 1 bytes/frame, not supported.

  • SoundPlayer causing Memory Leaks?

    - by Nick Udell
    I'm writing a basic writing app in C# and I wanted the program to make typewriter sounds as you type. I've hooked the KeyPress event on my RichTextBox to a function that uses a SoundPlayer to play a short WAV file every time a key is pressed. However, I've noticed that after a while my computer slows to a crawl, and checking my processes, audiodg.exe was using 5 gigabytes of RAM. The code I'm using is as follows. I initialise the SoundPlayer as a global variable on program start with

        SoundPlayer sp = new SoundPlayer("typewriter.wav");

    Then on the KeyPress event I simply call

        sp.Play();

    Does anybody know what's causing the heavy memory usage? The file is less than a second long, so it shouldn't be clogging things up this much.

  • Android PCM Bytes

    - by Pintac
    Hi, I am using the AudioRecord class to analyse raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM values into decibels. I have a formula that takes sound pressure in Pa into dB:

        db = 20 * log10(Pa / ref_Pa)

    So the question is: what do the bytes I am getting from the AudioRecord buffer actually represent? Amplitude? Sound pressure in pascals? Something else? I tried putting the values into the formula, but it comes back with a very high dB, so I do not think that's right. Thanks.
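    The values AudioRecord returns are raw, dimensionless amplitudes (for 16-bit PCM, integers in -32768..32767), not pascals; without calibrating the microphone against a known source you can only compute dB relative to full scale (dBFS), not absolute SPL. A sketch of that calculation, in Python for illustration:

        import math

        def dbfs(samples):
            """dB relative to full scale for a chunk of 16-bit PCM samples."""
            rms = math.sqrt(sum(s * s for s in samples) / len(samples))
            return 20 * math.log10(rms / 32768.0) if rms > 0 else float('-inf')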

  • How can I get latency info from Android's AudioTrack class?

    - by Ryan
    I've noticed that the C++ classes underlying the AudioTrack and AudioRecord APIs in Android both have a latency() method that is not exposed via JNI. As far as I can see, the latency() method in AudioRecord still does not take into account the hardware latency (they have a TODO comment for that), but the latency() method in AudioTrack does add in the hardware latency. I absolutely need to get this latency value from AudioTrack. Is there any possible way I can do this? I don't care what kind of crazy hack is needed as long as it doesn't require a rooted phone (the resulting code must still be packaged as an app on the market).

  • Open Source sound engine

    - by Steph Thirion
    When I started using SoundEngine (from CrashLanding and TouchFighter), I had read about a few people recommending not to use it, for it was, according to them, not stable enough. Still it was the only solution I knew of to play sounds with pitch and position control without learning C++ and OpenAL, so I ignored the warnings and went on with it. But now I'm starting to worry. The 2.2 SDK introduced AVFoundation. Using both SoundEngine from CrashLanding (for sounds) and AVAudioPlayer (for music), I found out SoundEngine behaves strangely when the only existing AVAudioPlayer is released (all sounds stop until a new AVAudioPlayer is initiated). Around the same time as the 2.2 SDK came out, the CrashLanding sample code was mysteriously removed from the ADC site. I'm worried there are more bad surprises to come. My question is, is anyone aware of an Open Source alternative to SoundEngine? Maybe even a C++ library that uses OpenAL?

  • A way to enable a LaunchDaemon to output sound?

    - by Varun Mehta
    I have a small Foundation application that checks a website and plays a sound if it sees a certain value. This application successfully plays a sound when I run it as my user from the Terminal. I've configured this app to run as a LaunchDaemon, with the following plist:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>
            <string>org.myorg.appidentifier</string>
            <key>ProgramArguments</key>
            <array>
                <string>/Users/varunm/path/to/cli/application</string>
            </array>
            <key>KeepAlive</key>
            <true/>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    When I have this service launched I can see it successfully read in and log values from the website, but it never generates any sound. The sound files are located in the same directory as the binary, and I use the following code:

        NSSound *soundToPlay = [[NSSound alloc] initWithContentsOfFile:@"sound.wav" byReference:NO];
        [soundToPlay setDelegate:stopper];
        [soundToPlay play];
        while (g_keepRunning) {
            [[NSRunLoop currentRunLoop] runUntilDate:[NSDate dateWithTimeIntervalSinceNow:1.0]];
        }
        [soundToPlay setCurrentTime:0.0];

    Is there any way to get my LaunchDaemon application to play sound? This machine gets run by different people, and sometimes has no one logged in, which is why I have to configure it as a LaunchDaemon.

  • How to record sound from a microphone in VB6?

    - by Clay Nichols
    We've been recording sound for over a decade using what seems like a very clunky method: the Winmm.dll API via mciSendString. I've read that this doesn't set the recording-quality value correctly (not sure if that article was ever true, or is still true). I was wondering if there is any better way to record sound.

  • Correct way to convert 16-bit PCM wave data to float

    - by fredley
    I have a wave file in 16-bit PCM form. I've got the raw data in a byte[] and a method for extracting samples, and I need them in float format, i.e. a float[], to do a Fourier transform. Here's my code; does this look right? I'm working on Android, so javax.sound.sampled etc. is not available.

        private static short getSample(byte[] buffer, int position) {
            return (short) (((buffer[position + 1] & 0xff) << 8) | (buffer[position] & 0xff));
        }
        ...
        float[] samples = new float[samplesLength];
        // iterate over every byte pair: the bound must be input.length,
        // not input.length / 2, or only half the data gets converted
        for (int i = 0; i + 1 < input.length; i += 2) {
            samples[i / 2] = (float) getSample(input, i) / (float) Short.MAX_VALUE;
        }
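    For comparison, the same little-endian 16-bit-to-float conversion as a NumPy sketch (not from the original post):

        import numpy as np

        def pcm16_to_float(raw_bytes):
            # '<i2' = little-endian signed 16-bit; scale into roughly -1.0..1.0
            return np.frombuffer(raw_bytes, dtype='<i2').astype(np.float32) / 32768.0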

  • Can the Python wave module accept a StringIO object?

    - by user368005
    I'm trying to use the wave module to read WAV files in Python. What's not typical about my application is that I'm NOT using a file or a filename to read the WAV file; instead I have the WAV file in a buffer. Here's what I'm doing:

        import StringIO
        buffer = StringIO.StringIO()
        buffer.output(wav_buffer)
        file = wave.open(buffer, 'r')

    but I'm getting an EOFError when I run it:

        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 493, in open
            return Wave_read(f)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 163, in __init__
            self.initfp(f)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/wave.py", line 128, in initfp
            self._file = Chunk(file, bigendian = 0)
        File "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5/chunk.py", line 63, in __init__
            raise EOFError

    I know the StringIO approach works for the creation of a WAV file; I tried the following and it works:

        import StringIO
        buffer = StringIO.StringIO()
        audio_out = wave.open(buffer, 'w')
        audio_out.setframerate(m.getRate())
        audio_out.setsampwidth(2)
        audio_out.setcomptype('NONE', 'not compressed')
        audio_out.setnchannels(1)
        audio_out.writeframes(raw_audio)
        audio_out.close()
        buffer.flush()

        # these lines do not work...
        # buffer.output(wav_buffer)
        # file = wave.open(buffer, 'r')

        # this file plays out fine in VLC
        file = open(FILE_NAME + ".wav", 'w')
        file.write(buffer.getvalue())
        file.close()
        buffer.close()
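    wave.open does accept any file-like object; the EOFError above most likely means the buffer's read position was left at the end of the data after writing (and StringIO has no output() method; write() was presumably intended). A sketch of the pattern that works, using Python 3's io.BytesIO for illustration (with Python 2's StringIO, buffer.write(wav_buffer) followed by buffer.seek(0) is the equivalent):

        import io
        import wave

        buf = io.BytesIO()
        out = wave.open(buf, 'wb')
        out.setnchannels(1)
        out.setsampwidth(2)
        out.setframerate(8000)
        out.writeframes(b'\x00\x00' * 8000)  # one second of silence
        out.close()

        buf.seek(0)  # rewind; without this, wave sees EOF immediately
        inp = wave.open(buf, 'rb')
        print(inp.getnframes())  # 8000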

  • Why do calls to waveOutGetPosition hang?

    - by MusiGenesis
    I'm using the winmm.dll API method waveOutGetPosition to get the current position of the playback of a WAV file. Sometimes this works as expected for me, but eventually one of the calls never returns and my application locks up. I found this thread with a few users who have experienced the same problem: http://social.msdn.microsoft.com/Forums/en-US/windowsgeneraldevelopmentissues/thread/c6a1e80e-4a18-47e7-af11-56a89f638ad7 but no solution. Has anyone run into this problem before?

  • Rapid calls to fread crash the application

    - by Slynk
    I'm writing a function to load a wave file and, in the process, split the data into two separate buffers if it's stereo. The program gets to i = 18 and crashes during the left-channel fread pass. (You can ignore the couts; they are just there for debugging.) Maybe I should load the file in one pass and use memmove to fill the buffers?

        if (params.channels == 2) {
            params.leftChannelData = new unsigned char[params.dataSize / 2];
            params.rightChannelData = new unsigned char[params.dataSize / 2];
            bool isLeft = true;
            int offset = 0;
            const int stride = sizeof(BYTE) * (params.bitsPerSample / 8);
            for (int i = 0; i < params.dataSize; i += stride) {
                std::cout << "i = " << i << " ";
                if (isLeft) {
                    std::cout << "Before Left Channel, ";
                    // note: "file + i" performs pointer arithmetic on the FILE*
                    // itself rather than advancing the stream position (fseek
                    // would do that), which is the likely cause of the crash
                    fread(params.leftChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Left Channel, ";
                } else {
                    std::cout << "Before Right Channel, ";
                    fread(params.rightChannelData + offset, sizeof(BYTE), stride, file + i);
                    std::cout << "After Right Channel, ";
                    offset += stride;
                    std::cout << "After offset incr.\n";
                }
                isLeft != isLeft;  // note: this is a comparison; "isLeft = !isLeft;" was probably intended
            }
        } else {
            params.leftChannelData = new unsigned char[params.dataSize];
            fread(params.leftChannelData, sizeof(BYTE), params.dataSize, file);
        }

  • What does the LAME text do in an MP3 file?

    - by Dims
    I see here http://en.wikipedia.org/wiki/MP3 that an MP3 file consists of MP3 headers interleaved with MP3 data, and that an MP3 header is only a few bytes. But in my MP3 file dump, with the ID3 tag cut off and the header highlighted in blue, you can see the text "LAME3.96" highlighted in green. What is it doing there? Is it part of the MP3 elementary stream, or is it part of some header I didn't mark?

  • What exactly does raw microphone data represent?

    - by esperantist
    I'm using PyAudio, a PortAudio wrapper for Python, and I'm getting data from a microphone: a continuous stream of bytes divided into chunks (of a size determined by me). I've tried to plot the signal, assuming the bytes represent the current signal amplitude, but I get an interesting image that I can't easily describe: it seems to be composed of two waves, one shifted from the other. What exactly do the particular bytes represent, and how does this change when I'm recording only one channel instead of two? Any explanations, suggestions, code snippets, anything would be very welcome! (I'm new at this.) Thanks!
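    Each sample usually spans more than one byte: with the common paInt16 format the stream is little-endian signed 16-bit, and with two channels the frames are interleaved (left, right, left, right), which is exactly what a byte-by-byte plot showing two offset waves suggests. A decoding sketch (NumPy for illustration; the 16-bit assumption must match the format the stream was opened with):

        import numpy as np

        def decode_chunk(raw_bytes, channels=2):
            """Turn an interleaved 16-bit PCM chunk into one column per channel."""
            samples = np.frombuffer(raw_bytes, dtype='<i2')
            return samples.reshape(-1, channels)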

  • How to play extracted wave file byte array in C#?

    - by user261924
    At the moment I have managed to separate the left and right channels of a WAVE file and have included the header in a byte[] array. My next step is to be able to play both channels. How can this be done? Here is a code snippet:

        byte[] song_left = new byte[fa.Length];
        byte[] song_right = new byte[fa.Length];
        int p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_left[p] = header[c];
            p++;
        }
        int q = 0;
        for (s = startByte; s < length; s = s + 3)
        {
            song_left[s] = sLeft[q];
            q++;
            s++;
            song_left[s] = sLeft[q];
            q++;
        }
        p = 0;
        for (int c = 0; c < 43; c++)
        {
            song_right[p] = header[c];
            p++;
        }

    This part reads the header and data for both the right and left channels, which were saved to the arrays sLeft[] and sRight[], and it is working perfectly. Once I obtained the byte arrays, I did the following:

        System.IO.File.WriteAllBytes("c:\\left.wav", song_left);
        System.IO.File.WriteAllBytes("c:\\right.wav", song_right);

    I added a button to play the saved wave file:

        private void button2_Click(object sender, EventArgs e)
        {
            spWave = new SoundPlayer("c:\\left.wav");
            spWave.Play();
        }

    Once I hit the play button, this error appears:

        An unhandled exception of type 'System.InvalidOperationException' occurred in System.dll
        Additional information: The wave header is corrupt.

    Any ideas?

  • How to start writing out an existing AudioQueue in response to an event?

    - by Halle
    Hello, I am writing a class that opens an AudioQueue and analyzes its characteristics, and then, under certain conditions, can begin or end writing out a file from that already-instantiated AudioQueue. This is my code (entirely based on SpeakHere) that opens the AudioQueue without writing anything out to tmp:

        void AQRecorder::StartListen() {
            int i, bufferByteSize;
            UInt32 size;
            try {
                SetupAudioFormat(kAudioFormatLinearPCM);
                XThrowIfError(AudioQueueNewInput(&mRecordFormat, MyInputBufferHandler, this, NULL, NULL, 0, &mQueue),
                              "AudioQueueNewInput failed");
                mRecordPacket = 0;
                size = sizeof(mRecordFormat);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription, &mRecordFormat, &size),
                              "couldn't get queue's format");
                bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);
                for (i = 0; i < kNumberRecordBuffers; ++i) {
                    XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                                  "AudioQueueAllocateBuffer failed");
                    XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                                  "AudioQueueEnqueueBuffer failed");
                }
                mIsRunning = true;
                XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
            }
            catch (CAXException &e) {
                char buf[256];
                fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
            }
            catch (...) {
                fprintf(stderr, "An unknown error occurred\n");
            }
        }

    But I'm a little unclear on how to write a function that will tell this queue, "from now until the stop signal, start writing out this queue to tmp as a file". I understand how to tell an AudioQueue to write out as a file at the time it's created, how to set file formats, etc., but not how to tell it to start and stop mid-stream. Much appreciative of any pointers, thanks.

  • byte[] to wav file

    - by John
    Hi, it would be great if you could tell me how to save a byte[] to a WAV file. Sometimes I need to set a different sample rate, number of bits, and number of channels. Thanks for your help.
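    The question doesn't name a language, so for illustration here is a sketch in Python, whose wave module builds the RIFF header from the parameters; in other environments the equivalent is writing the 44-byte RIFF/fmt/data header in front of the raw sample bytes:

        import wave

        def save_wav(path, pcm_bytes, rate=44100, sampwidth=2, channels=1):
            with wave.open(path, 'wb') as w:
                w.setnchannels(channels)
                w.setsampwidth(sampwidth)  # bytes per sample: 2 -> 16-bit
                w.setframerate(rate)
                w.writeframes(pcm_bytes)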

  • DSP - Filter sweep effect

    - by Trap
    I'm implementing a 'filter sweep' effect (I don't know if that's what it's called). What I do is basically create a low-pass filter and make it 'move' along a certain frequency range. To calculate the filter cut-off frequency at a given moment I use a user-provided linear function, which yields values between 0 and 1. My first attempt was to map the values returned by the linear function directly to the range of frequencies, as in cf = freqRange * lf(x). Although it worked OK, it sounded as if the sweep ran much faster when moving through the low frequencies and then slowed down on its way to the high-frequency zone. I'm not sure why this is, but I guess it has something to do with human hearing perceiving changes in frequency in a non-linear manner. My next attempt was to move the filter's cut-off frequency logarithmically. It works much better now, but I still feel that the filter doesn't move at a constant perceived speed through the range of frequencies. How should I divide the frequency space to obtain a constant perceived sweep speed? Thanks in advance.
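    Since pitch perception is roughly logarithmic (each octave is a doubling of frequency), a sweep tends to feel constant when the cutoff moves at a constant number of octaves per second, i.e. an exponential mapping of the 0..1 control value; a minimal sketch:

        def sweep_cutoff(x, f_min=20.0, f_max=20000.0):
            """Map a 0..1 control value to a cutoff moving at a constant octave rate."""
            return f_min * (f_max / f_min) ** x

    For example, with f_min = 20 Hz and f_max = 20 kHz, x = 0.5 lands on about 632 Hz, which is halfway in octaves rather than halfway in hertz.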
