Search Results

Search found 3136 results on 126 pages for 'buffer overrun'.

Page 79/126

  • Android library to get pitch from WAV file

    - by Sakura
    I have a list of sampled data from a WAV file. I would like to pass these values into a library and get back the frequency of the music played in the WAV file. For now there is only one frequency in the WAV file, and I would like to find a library that is compatible with Android. I understand that I need an FFT to get into the frequency domain. Are there any good libraries for that? I found that KissFFT is quite popular, but I am not very sure how compatible it is with Android. Is there an easier, good library that can perform the task I want?

    EDIT: I tried to use JTransforms to take the FFT of the WAV file, but I always failed at getting the correct frequency. Currently the WAV file contains a sine wave at 440 Hz, the musical note A4, but I got 441 as the result. Then I tried to get the frequency of G4 and got 882 Hz, which is incorrect. The frequency of G4 is supposed to be 783Hz. Could it be due to not having enough samples? If yes, how many samples should I take?

        // DFT via JTransforms
        DoubleFFT_1D fft = new DoubleFFT_1D(numOfFrames);
        double max_fftval = -1;
        int max_i = -1;
        double[] fftData = new double[numOfFrames * 2];
        for (int i = 0; i < numOfFrames; i++) {
            // copy the audio samples into the FFT buffer; the imaginary part is 0
            fftData[2 * i] = buffer[i];
            fftData[2 * i + 1] = 0;
        }
        fft.complexForward(fftData);
        // only search the first half of the bins: for real input the upper half
        // mirrors the lower half (negative frequencies)
        for (int i = 0; i < fftData.length / 2; i += 2) {
            // magnitude of the complex bin: sqrt(re^2 + im^2)
            double vlen = Math.sqrt((fftData[i] * fftData[i]) + (fftData[i + 1] * fftData[i + 1]));
            if (max_fftval < vlen) {
                // keep the strongest bin seen so far
                max_fftval = vlen;
                max_i = i;
            }
        }
        // bin index (max_i / 2) times the bin width sampleRate / numOfFrames
        double dominantFreq = (max_i / 2.0) * sampleRate / numOfFrames;
        fd.append(Double.toString(dominantFreq));

    Can someone help me out?

    EDIT2: I managed to fix the problem mentioned above by increasing the number of samples to 100000; however, sometimes I am getting an overtone as the frequency. Any idea how to fix that? Should I use the Harmonic Product Spectrum or an autocorrelation algorithm?

    Read the article

  • Modify audio pitch of recorded clip (m4v)

    - by devcube
    I'm writing an app in which I'm trying to change the pitch of the audio while I'm recording a movie (.m4v), or by modifying the audio pitch of the movie afterwards. I want the end result to be a movie (.m4v) with the original length (i.e. the same visuals as the original) but with a modified sound pitch, e.g. a "chipmunk voice". A realtime conversion is preferable, if possible.

    I've read a lot about changing audio pitch on iOS, but most examples focus on playback, i.e. playing the sound with a different pitch. In my app I'm recording a movie (.m4v / AVFileTypeQuickTimeMovie) and saving it using a standard AVAssetWriter. When saving the movie I have access to the following elements, in which I've tried to manipulate the audio (e.g. modify the pitch):

      - audio buffer (CMSampleBufferRef)
      - audio input writer (AVAssetWriterAudioInput)
      - audio input writer options (e.g. AVNumberOfChannelsKey, AVSampleRateKey, AVChannelLayoutKey)
      - asset writer (AVAssetWriter)

    I've tried to hook into the above objects to modify the audio pitch, but without success. I've also tried:

      - Dirac, as described here: Real Time Pitch Change In iPhone Using Dirac
      - OpenAL with AL_PITCH, as described here: Piping output from OpenAL into a buffer
      - the "BASS" library from un4seen: Change Pitch/Tempo In Realtime

    I haven't found success with any of the above libraries, most likely because I don't really know how to use them or where to hook them into the audio-saving code. There seem to be a lot of libraries with similar effects, but they focus on playback or on custom recording code. I want to manipulate the audio stream I already have (AVAssetWriterAudioInput) or modify the saved movie clip (.m4v). I want the video to be visually unmodified, i.e. played at the same speed, but I want the audio to go faster (like a chipmunk) or slower (like a ... monster? :)).

    Do you have any suggestions for how I can modify the pitch, either in real time (while recording the movie) or afterwards by converting the entire movie (.m4v file)? Should I look further into Dirac, OpenAL, SoundTouch, BASS or some other library? I want to be able to share the movie with others with modified audio; that's the reason I can't rely on modifying the pitch for playback only. Any help is appreciated, thanks!

    Read the article

  • Help with \0 terminated strings in C#

    - by Joshua
    I'm using a low-level native API to which I pass an unsafe byte buffer pointer in order to get a C-string value. So it gives me:

        // using byte[255] c_str
        string s = new string(Encoding.ASCII.GetChars(c_str));
        // now s == "heresastring\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0(etc)"

    So obviously I'm not doing it right; how do I get rid of the excess?

    Read the article

  • How to enable MALLOC_PROTECT_BEFORE in Xcode?

    - by Daniel S.
    After switching on some debug options in Xcode, it now tells me the following in the output:

        GuardMalloc[Roadcast-4010]: free: magic is 0x0000090b, not 0xdeadbeef.
        GuardMalloc[Roadcast-4010]: free: header magic value at 0x43f49bf0, for block 0x43f49c00-0x43f50000, has been trashed by a buffer underrun.
        GuardMalloc[Roadcast-4010]: Try running with MALLOC_PROTECT_BEFORE to catch this error immediately as it happens.

    How do I switch on MALLOC_PROTECT_BEFORE?

    Read the article

  • HttpWebRequest ReadWriteTimeout ignored in .NET; works in Mono

    - by jimvfr
    When writing data to a web server, my tests show that HttpWebRequest.ReadWriteTimeout is ignored, contrary to the MSDN spec. For example, if I set ReadWriteTimeout to 1 (i.e. 1 msec) and call myRequestStream.Write() passing in a buffer that takes 10 seconds to transfer, it transfers successfully and never times out under .NET 3.5 SP1. The same test running on Mono 2.6 times out immediately, as expected. What could be wrong?

    Read the article

  • C++ memcpy problem :(

    - by Simon
    Hey all :) I have a problem: the src pointer of my memcpy points to the wrong place. unsigned char* lpBuffer is a buffer that contains my bytes; I checked it with OllyDbg. The code:

        IMAGE_DOS_HEADER iDOSh;
        memcpy(&iDOSh, lpBuffer, sizeof(iDOSh));

    The problem is that lpBuffer points to the wrong place. The output from the debugger is:

        dest = 002859E8  (correct)
        src  = 000001D8  (wrong)

    src is pointing somewhere invalid :( and I have no idea why. Thanks for reading.
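
    A src value as small as 000001D8 usually means the pointer variable itself holds something that is not an address (for example a file offset), rather than memcpy misbehaving. Below is a minimal, hedged sketch of a defensive version of that copy; ReadDosHeader is a hypothetical helper and only standard Win32 definitions are assumed:

        #include <windows.h>
        #include <cstring>

        // Copy the DOS header out of a byte buffer, but only after checking that
        // the pointer and size are sane. If src shows up as a tiny value such as
        // 000001D8 in the debugger, lpBuffer was never assigned a real address.
        bool ReadDosHeader(const unsigned char* lpBuffer, size_t bufferSize, IMAGE_DOS_HEADER* out)
        {
            if (lpBuffer == NULL || out == NULL || bufferSize < sizeof(IMAGE_DOS_HEADER))
                return false;                      // nothing sensible to copy

            std::memcpy(out, lpBuffer, sizeof(IMAGE_DOS_HEADER));

            // A real DOS header starts with the 'MZ' signature.
            return out->e_magic == IMAGE_DOS_SIGNATURE;
        }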

    Read the article

  • How to load image data from resource bitmap file for directshow filter ?

    - by Forrest
    I need to put one bitmap image into my DirectShow filter, so the user can use this bitmap image without caring where it is stored. First, I import the bitmap file into the resource bundle and get an IDB_BITMAP1. Then I need to read this IDB_BITMAP1, using OpenCV's cvLoadImage or some Windows image API, to load the image into a buffer. So the question is: how do I do this? Is it even possible? Thanks
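
    A minimal sketch of one possible approach (not filter-specific; it assumes IDB_BITMAP1 is defined in the module's resource.h and that hInstance is the filter DLL's module handle): load the resource as a DIB section so the raw pixels are directly addressable in memory, then hand that buffer to OpenCV or anything else.

        #include <windows.h>

        // Load IDB_BITMAP1 from this module's resources as a DIB section and
        // return a pointer to its pixel bits. The HBITMAP stays alive behind the
        // returned pointer; call DeleteObject on it once the pixels are no longer needed.
        unsigned char* LoadBitmapResource(HINSTANCE hInstance, int* width, int* height,
                                          int* stride, HBITMAP* outBmp)
        {
            HBITMAP hBmp = (HBITMAP)LoadImage(hInstance, MAKEINTRESOURCE(IDB_BITMAP1),
                                              IMAGE_BITMAP, 0, 0, LR_CREATEDIBSECTION);
            if (hBmp == NULL)
                return NULL;

            DIBSECTION ds = { 0 };
            if (GetObject(hBmp, sizeof(ds), &ds) == 0) {
                DeleteObject(hBmp);
                return NULL;
            }

            *width  = ds.dsBm.bmWidth;
            *height = ds.dsBm.bmHeight;
            *stride = ds.dsBm.bmWidthBytes;
            *outBmp = hBmp;
            return (unsigned char*)ds.dsBm.bmBits;   // raw pixel data of the DIB section
        }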

    Read the article

  • emacs split into 3 even windows

    - by Michael
    Hi all, quick question: how do I specify the number of characters in a split window? C-x 3 splits my window into two windows evenly, but a subsequent split will split one of those windows in half. I'd like 3 equal-sized windows. The documentation says that I should be able to specify the number of characters for the left buffer as a parameter, but I can't seem to get that to work. Any ideas for the syntax? Thanks.

    Read the article

  • What are possible causes of IDirect3DVertexBuffer9::Lock failing?

    - by Suma
    In error reports I have quite often seen the following behaviour: IDirect3DVertexBuffer9::Lock fails, and the returned error code is D3DERR_NOTAVAILABLE. Once this happens, it is quite frequently (but not always) followed by CreateTexture or CreateVertexBuffer failing with D3DERR_OUTOFVIDEOMEMORY. What are the possible reasons for a vertex buffer lock failure? Could the virtual address space be exhausted, or is it something else?
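
    For context, a hedged sketch (plain D3D9; LockOrDiagnose is just an illustrative helper, and pDevice/pVB are assumed to be valid) of how the failure can be probed when it happens, since both video-memory pressure and a lost device can surface this way:

        #include <d3d9.h>

        // Try to lock the vertex buffer; on failure, distinguish memory pressure
        // from a lost device and attempt one recovery step.
        void* LockOrDiagnose(IDirect3DDevice9* pDevice, IDirect3DVertexBuffer9* pVB, UINT size)
        {
            void* pData = NULL;
            HRESULT hr = pVB->Lock(0, size, &pData, 0);
            if (SUCCEEDED(hr))
                return pData;

            if (hr == D3DERR_NOTAVAILABLE || hr == D3DERR_OUTOFVIDEOMEMORY) {
                // Memory pressure: evict managed resources and retry once.
                pDevice->EvictManagedResources();
                if (SUCCEEDED(pVB->Lock(0, size, &pData, 0)))
                    return pData;
            }

            if (pDevice->TestCooperativeLevel() == D3DERR_DEVICELOST) {
                // Lost device: resource calls keep failing until the device is Reset().
            }
            return NULL;
        }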

    Read the article

  • how to deal with the position in a c# stream

    - by CapsicumDreams
    The (entire) documentation for the Position property on a stream says:

        When overridden in a derived class, gets or sets the position within the current stream.
        The Position property does not keep track of the number of bytes from the stream that have been consumed, skipped, or both.

    That's it. OK, so we're fairly clear on what it doesn't tell us, but I'd really like to know what it in fact does stand for. What is 'the position' for? Why would we want to alter or read it? If we change it, what happens?

    As a practical example, I have a stream that periodically gets written to, and I have a thread that attempts to read from it (ideally ASAP). From reading many SO issues, I reset the Position field to zero to start my reading. Once this is done:

      - Does this affect where the writer to this stream is going to attempt to put the data? Do I need to keep track of the last write position myself? (i.e. if I set the position to zero to read, does the writer begin to overwrite everything from the first byte?)
      - If so, do I need a semaphore/lock around this 'position' field (subclassing, perhaps?) due to my two threads accessing it?
      - If I don't handle this property, does the writer just overflow the buffer?

    Perhaps I don't understand the Stream itself - I'm regarding it as a FIFO pipe: shove data in at one end, and suck it out at the other. If it's not like this, then do I have to keep copying the data past my last read (i.e. from position 0x84 on) back to the start of my buffer?

    I've seriously tried to research all of this for quite some time, but I'm new to .NET. Perhaps the Streams have a long, proud (undocumented) history that everyone else implicitly understands. But for a newcomer, it's like reading the manual for your car and finding out: "The accelerator pedal affects the volume of fuel and air sent to the fuel injectors. It does not affect the volume of the entertainment system, or the air pressure in any of the tires, if fitted." Technically true, but seriously, what we want to know is that if you mash it to the floor, you go faster.

    Read the article

  • How do you properly use WideCharToMultiByte

    - by Obediah Stane
    I've read the documentation here: http://msdn.microsoft.com/en-us/library/ms776420(VS.85).aspx

    I'm stuck on this parameter:

        lpMultiByteStr [out]
        Pointer to a buffer that receives the converted string.

    I'm not quite sure how to properly initialize the variable and feed it into the function.
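
    A minimal sketch of the usual two-call pattern (UTF-8 is assumed as the target code page; WideToUtf8 is just an illustrative helper name): the first call asks how many bytes the converted string needs, and the second call writes into a buffer of exactly that size, which is what lpMultiByteStr points to.

        #include <windows.h>
        #include <string>

        std::string WideToUtf8(const std::wstring& wide)
        {
            if (wide.empty())
                return std::string();

            // First call: cbMultiByte == 0 means "only report the required size in bytes".
            int needed = WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                                             NULL, 0, NULL, NULL);
            if (needed <= 0)
                return std::string();

            // Second call: &out[0] is the lpMultiByteStr output buffer.
            std::string out(needed, '\0');
            WideCharToMultiByte(CP_UTF8, 0, wide.c_str(), (int)wide.size(),
                                &out[0], needed, NULL, NULL);
            return out;
        }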

    Read the article

  • Is it possible to develop a remote desktop server application?

    - by Heshan Perera
    I just want to know whether it is possible to develop an Android application that allows remotely controlling an Android phone, in the same way that Remote Desktop or TeamViewer allows control over desktop operating systems. Is it possible on an unrooted phone? The basic functionality required to accomplish this would be the ability to capture the frame buffer and to programmatically invoke touches on the device. Any feedback on this matter would be highly appreciated.

    Read the article

  • Why might my Emacs use spaces instead of tabs?

    - by Fletcher Moore
    I am trying to diagnose this problem: TAB inserts 4 spaces instead of the 4-column tab character I want. But I don't think it should, because C-h v indent-tabs-mode on the buffer in question says it is set to t. When I check my keybindings, TAB is bound to c-indent-line-or-region. Does this function ignore my indent-tabs-mode?

    Read the article

  • Handling of data truncation in FUSE

    - by Vi
    I expect that any good program should do all of its reads and writes in a loop, until all the data has been written or read, rather than relying on a single write() handling everything (even with regular files). Am I right?

    I implemented a simple FUSE filesystem that only allows reading and writing with small buffers, so it very often reports that fewer bytes were written than were in the buffer (using -o direct_io). Some programs work with it, some do not. Are they buggy, or is it that programs should not have to expect truncated reads and writes from regular files?
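
    For reference, a minimal sketch of the "loop until everything is written" pattern described above (plain POSIX write(); write_all is just an illustrative name), which is how robust programs cope with short writes:

        #include <errno.h>
        #include <unistd.h>

        // Keep calling write() until the whole buffer has been accepted,
        // retrying on EINTR and treating a short write as a normal event.
        ssize_t write_all(int fd, const void* buf, size_t count)
        {
            const char* p = (const char*)buf;
            size_t left = count;

            while (left > 0) {
                ssize_t n = write(fd, p, left);
                if (n < 0) {
                    if (errno == EINTR)
                        continue;          // interrupted: retry the same chunk
                    return -1;             // real error
                }
                p += n;                    // short write: advance and keep going
                left -= (size_t)n;
            }
            return (ssize_t)count;
        }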

    Read the article

  • Akka framework support for finding duplicate messages

    - by scala_is_awesome
    I'm trying to build a high-performance distributed system with Akka and Scala. If a message requesting an expensive (and side-effect-free) computation arrives, and the exact same computation has already been requested before, I want to avoid computing the result again. If the previously requested computation has already completed and the result is available, I can cache it and re-use it.

    However, the time window in which a duplicate computation can be requested may be arbitrarily small; e.g. I could get a thousand or a million messages requesting the same expensive computation at, for all practical purposes, the same instant. There is a commercial product called Gigaspaces that supposedly handles this situation. However, there seems to be no framework support for dealing with duplicate work requests in Akka at the moment. Given that the Akka framework already has access to all the messages being routed through it, a framework solution could make a lot of sense here.

    Here is what I am proposing for the Akka framework to do:

      1. Create a trait to indicate a type of messages (say, "ExpensiveComputation" or something similar) that are to be subject to the following caching approach.
      2. Smartly (hashing etc.) identify identical messages received by (the same or different) actors within a user-configurable time window. Other options: select a maximum buffer size of memory to be used for this purpose, subject to (say, LRU) replacement, etc. Akka could also choose to cache only the results of messages that were expensive to process; messages that took very little time to process can be re-processed if needed, with no need to waste precious buffer space caching them and their results.
      3. When identical messages (received within that time window, possibly "at the same instant") are identified, avoid unnecessary duplicate computations. The framework would do this automatically, and essentially the duplicate messages would never be received by a new actor for processing; they would silently vanish, and the result from processing the message once (whether that computation was already done in the past or is ongoing right then) would be sent to all appropriate recipients (immediately if already available, and upon completion of the computation if not).

    Note that messages should be considered identical even if their "reply" fields differ, as long as the semantics/computations they represent are identical in every other respect. Also note that the computation should be purely functional, i.e. free from side effects, for the suggested caching optimization to work without changing the program semantics at all.

    If what I am suggesting is not compatible with the Akka way of doing things, and/or if you see some strong reasons why this is a very bad idea, please let me know.

    Thanks, Is Awesome, Scala

    Read the article

  • How can I background the R process in ESS / Emacs?

    - by Conor
    I often run long R scripts when I start my R environment. I would like to be able to load / run the R script in Emacs / ESS and continue other work in another buffer. When I press C-g or C-c C-c the process is interrupted, and I must restart the script. What is the best way to background the R process in ESS / Emacs?

    Read the article

  • c++ meaning of the use of const in the signature

    - by jbu
    Please help me understand the following signature:

        err_type funcName(const Type& buffer) const;

    For the first const, does that mean the contents of Type cannot change, or that the reference cannot change? Secondly, what does the second const mean? I don't really even have a hint. Thanks in advance, jbu
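
    A minimal sketch (with a hypothetical Type and class) of what each const forbids: the first applies to the referred-to object that is passed in, the second to the object the member function is called on.

        #include <cstddef>
        #include <string>

        struct Type {
            std::string data;
        };

        class Reader {
            int calls_ = 0;
        public:
            // 1st const: 'buffer' refers to a Type this function promises not to
            //            modify (buffer.data = "x"; would not compile here).
            // 2nd const: the member function promises not to modify *this
            //            (++calls_; would not compile here).
            std::size_t peek(const Type& buffer) const {
                return buffer.data.size();      // reading is fine
            }

            // Non-const counterpart: may modify both the argument and the object.
            std::size_t consume(Type& buffer) {
                ++calls_;
                std::size_t n = buffer.data.size();
                buffer.data.clear();
                return n;
            }
        };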

    Read the article

  • converting webpage into jpeg image using java

    - by ravi
    I am building a web application in Java, where I want a full screenshot of a webpage, given the URL of the webpage as input. The basic idea I have is to capture the display buffer of the rendering component. I have no idea how to do it. Please help.

    Read the article
