Search Results

Search found 3993 results on 160 pages for 'audio'.


  • How to extract semi-precise frequencies from a WAV file using Fourier Transforms

    - by Seisatsu
    Let us say that I have a WAV file. In this file is a series of sine tones at precise 1-second intervals. I want to use the FFTW library to extract these tones in sequence. Is this particularly hard to do? How would I go about it? Also, what is the best way to write tones of this kind into a WAV file? I assume I would only need a simple audio library for the output. My language of choice is C.
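
    The question targets C and FFTW, but both halves are simple to sketch. For analysis: take each 1-second block of samples, run a real-to-complex transform (fftw_plan_dft_r2c_1d in FFTW) and pick the bin with the largest magnitude; bin k corresponds to roughly k * sampleRate / N Hz. For the synthesis half, here is a minimal sketch that writes 1-second sine tones to a 16-bit mono WAV, shown in Java for illustration rather than C; the frequencies and file name are placeholders.

        import javax.sound.sampled.*;
        import java.io.*;

        public class ToneWriter {
            // Writes one 1-second sine tone per entry in freqsHz to a 16-bit mono WAV.
            public static void writeTones(double[] freqsHz, File out) throws IOException {
                float sampleRate = 44100f;
                byte[] pcm = new byte[(int) sampleRate * 2 * freqsHz.length];
                int i = 0;
                for (double f : freqsHz) {
                    for (int n = 0; n < (int) sampleRate; n++) {
                        short s = (short) (Math.sin(2 * Math.PI * f * n / sampleRate) * 0.8 * Short.MAX_VALUE);
                        pcm[i++] = (byte) (s & 0xff);        // little-endian: low byte first
                        pcm[i++] = (byte) ((s >> 8) & 0xff); // then high byte
                    }
                }
                AudioFormat fmt = new AudioFormat(sampleRate, 16, 1, true, false);
                try (AudioInputStream ais = new AudioInputStream(
                        new ByteArrayInputStream(pcm), fmt, pcm.length / fmt.getFrameSize())) {
                    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
                }
            }

            public static void main(String[] args) throws IOException {
                writeTones(new double[] {440.0, 880.0, 1320.0}, new File("tones.wav"));
            }
        }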

    Read the article

  • Write wave files to memory in Java

    - by Cliff
    I'm trying to figure out why my servlet code creates wave files with improper headers. I use: AudioSystem.write( new AudioInputStream( new ByteArrayInputStream(memoryBytes), new AudioFormat(22000, 16, 1, true, false), memoryBytes.length ), AudioFileFormat.Type.WAVE, servletOutputStream ); taking a byte array from memory containing raw PCM samples and a servlet output stream that gets returned to the client. The result is a normal wave file, but with zeros in the chunk size fields. Is the API broken? I would think that the size could be filled in using the length passed to the AudioInputStream. But now, after typing this out, I'm thinking it's not making this info available to the outer write() method on AudioSystem. It seems like the AudioSystem.write call needs a size parameter unless it is able to pull the size from the stream... which wouldn't work with an arbitrarily sized stream. Does anyone know how to make this example work?
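
    One thing worth double-checking (a guess, not a confirmed diagnosis): the third argument of that AudioInputStream constructor is a length in frames, not bytes, so for 16-bit mono it should be memoryBytes.length / 2. The sketch below applies that fix and renders the whole WAV into memory first, which also lets the servlet set an accurate Content-Length before streaming it out; the class and method names are illustrative.

        import javax.sound.sampled.*;
        import java.io.*;

        public class PcmToWav {
            // Hedged sketch: report the stream length in frames (bytes / frameSize)
            // and build the complete WAV in memory before sending it to the client.
            public static byte[] toWav(byte[] pcm) throws IOException {
                AudioFormat format = new AudioFormat(22000f, 16, 1, true, false);
                long frames = pcm.length / format.getFrameSize(); // 2 bytes per frame here
                ByteArrayOutputStream out = new ByteArrayOutputStream();
                try (AudioInputStream ais = new AudioInputStream(
                        new ByteArrayInputStream(pcm), format, frames)) {
                    AudioSystem.write(ais, AudioFileFormat.Type.WAVE, out);
                }
                return out.toByteArray(); // call response.setContentLength(...) with this length
            }
        }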

    Read the article

  • Why doesn't R.raw.'songname' work on Android devices?

    - by James Rattray
    I have some media (audio tracks) in an app, with the file path 'R.raw.test'. I use some code to load it into a MediaPlayer: MediaPlayer.create(Textbox.this, R.raw.fly); And it works PERFECTLY on the Android emulator (it plays the track on a button click). Can someone explain why, when I put it on my Archos (5 IT), it doesn't work at all? As soon as the button is clicked, it crashes. Do you have to do something to the file paths, or what? Please help... Thanks a lot... James
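
    Two things are commonly behind this: real hardware is pickier than the emulator about codecs, sample rates and bitrates, and logcat on the device will show the actual exception. Below is a hedged sketch of a defensive version of the call, so a device-side failure gets logged instead of crashing; Textbox and R.raw.fly come from the question, while the log tag and listener are illustrative.

        import android.media.MediaPlayer;
        import android.util.Log;

        // Inside the Textbox activity:
        private void playClip() {
            MediaPlayer player = MediaPlayer.create(Textbox.this, R.raw.fly);
            if (player == null) {
                Log.e("RawAudio", "create() failed: is this file's format supported on the device?");
                return;
            }
            player.setOnErrorListener(new MediaPlayer.OnErrorListener() {
                @Override
                public boolean onError(MediaPlayer mp, int what, int extra) {
                    Log.e("RawAudio", "MediaPlayer error what=" + what + " extra=" + extra);
                    return true; // handled, don't crash
                }
            });
            player.start();
        }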

    Read the article

  • Create mp3 previews from wav and aiff files

    - by August Lilleaas
    I would like to create a program that makes MP3s of the first 30 seconds of an AIFF or WAV file. I would also like to be able to choose the location and length, such as the audio between 2:12 and 2:42. Are there any tools that let me do this? Shelling out is OK. The application will run on a Linux server, so it would have to be a tool that works on Linux. I don't mind doing it in two steps, i.e. a tool that first creates the cut-out of the AIFF/WAV and then passes it to an MP3 encoder.
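
    ffmpeg can do the cut and the MP3 encode in one pass, provided it was built with libmp3lame; sox piped into lame is the classic two-step alternative. Here is a hedged sketch of shelling out to ffmpeg from Java; the flags follow current ffmpeg conventions (older builds spell them -acodec and -ab), and the paths and bitrate are placeholders.

        import java.io.File;
        import java.io.IOException;

        public class PreviewCutter {
            // Hedged sketch: assumes ffmpeg is on the PATH and built with libmp3lame.
            // Cuts `seconds` of audio starting at `startSeconds` and encodes it as MP3.
            public static void cutPreview(File input, File outputMp3,
                                          double startSeconds, double seconds)
                    throws IOException, InterruptedException {
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg", "-y",
                        "-i", input.getAbsolutePath(),
                        "-ss", String.valueOf(startSeconds), // e.g. 132 for 2:12
                        "-t", String.valueOf(seconds),       // e.g. 30
                        "-codec:a", "libmp3lame", "-b:a", "128k",
                        outputMp3.getAbsolutePath());
                pb.redirectErrorStream(true);
                Process p = pb.start();
                p.getInputStream().transferTo(System.out); // drain so ffmpeg can't block on a full pipe
                if (p.waitFor() != 0) {
                    throw new IOException("ffmpeg exited with an error");
                }
            }
        }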

    Read the article

  • Architecture of chatroulette

    - by user317163
    Could somebody explain to me the architecture behind Chatroulette? I was thinking about a similar project that would only implement audio support (for starters). Is the best way to set this up a Flash server? If so, how should I go about getting into Flash? Will I need Flex 4? I have some beginner experience with C++, C# and Java, but I have never developed anything for the web. I was also wondering how the randomizer matches up the participants. How would you code something like this? I'm obviously pretty clueless here and I'd greatly appreciate some advice regarding this problem -- I don't expect copy-and-paste solutions. It would just be nice to hear how you guys would tackle it. Thank you very much.

    Read the article

  • Java and gstreamer-java initialisation error

    - by Mark
    I am building a small app which will play streaming audio from the internet in java (mainly internet radio stations). I have decided to use the gstreamer-java library for the sound, which uses JNA. I would like to include a check in the code, to see whether the gstreamer library has been initialised. When I have left the "Gst.init()" code out (to mimic when the library has not been initialised correctly), the application throws out the following messages: (process:21888): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.22.3/gobject/gtype.c:2458: initialization assertion failed, use IA__g_type_init() prior to this function (process:21888): GLib-CRITICAL **: g_once_init_leave: assertion `initialization_value != 0' failed The app calls the gstreamer-java library. The error messages appear but the thread continues to run, hogging the CPU. Is there any way to catch the error or to add a check to prevent it from happening? An alternative would be to put the "Gst.init()" in the main class, but I am not sure if this would always guarantee the gstreamer library is initialised.
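
    Rather than detecting after the fact whether Gst.init() has been called, it is usually simpler to funnel every entry point through a small guard that initialises the library exactly once before anything else touches gstreamer-java. A hedged sketch, assuming gstreamer-java's Gst.init(String, String[]) overload; the class name and program name are illustrative.

        import org.gstreamer.Gst;

        public final class GstBootstrap {
            private static volatile boolean initialized = false;

            private GstBootstrap() {}

            // Call this before constructing any pipeline, playbin, etc.
            public static synchronized void ensureInit() {
                if (!initialized) {
                    Gst.init("radio-player", new String[0]);
                    initialized = true;
                }
            }
        }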

    Read the article

  • Detecting when Bluetooth is disabled on iOS5

    - by Non Umemoto
    I'm developing a blog speaker app. I want to pause the audio when Bluetooth is disabled, like the iPod app does. I thought it was not possible without using a private API after reading this: Check if Bluetooth is Enabled? But my customer told me that the Rhapsody and DI Radio apps both support it. Then I found that iOS 5 has the Core Bluetooth framework. https://developer.apple.com/library/ios/documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/CoreBluetooth_Framework.pdf The CBCentralManagerStatePoweredOff status seems like the one. But the description says this API only supports Bluetooth 4.0 low-energy devices. Did anyone try doing the same thing? I want to support currently popular Bluetooth headsets, or a Bluetooth-enabled steering wheel in a car. I don't know if it's worth trying when it only supports brand-new Bluetooth hardware.

    Read the article

  • About data size filled in the buffer

    - by Bohan Lu
    I need low-latency audio in my project, and I know Android 2.3 supports OpenSL ES. I have read the documents and sample code, and I have decided to use the Android simple buffer queue for playback and recording. I am now trying to write a simple application to test this. However, I have some questions about recording. If I stop the recorder while it is recording, how do I know the exact number of bytes filled in the last buffer if it is not filled up? In the 1.1 version, the callback function has some parameters about the buffer and its filled data, but there are no such parameters in version 1.0.1. Is there any way to get this information? Any suggestion would be greatly appreciated!

    Read the article

  • Making a "Babbelbox" that guests can speak into at parties

    - by Spidfire
    I've got a project to make for a party; in Holland it's called a "Babbelbox". It's a computer with a webcam and microphone that can be used to make a kind of video log of everyone who wants to say something about the party. The problem is that I don't know where to start. I've made a kind of video show system in C, but I can't save the data in a good format that won't fill my hard disk within an hour. Requirements: record video + audio; recording has to start after pressing a button; good compression of the recorded videos (even better if they can be read by Final Cut Pro or Premiere Pro); a lightweight program would be nice, but I could scale up the computer power.

    Read the article

  • For the iPad or iPhone, how do you control the system volume? For example, have a button that mutes all audio

    - by SolidSnake4444
    I would like to make a button in my iPad app (it will probably be similar for iPhone apps) that, when pushed, mutes all audio, even after you exit the app. I don't see any way to control the volume, although I'm sure other apps I have seen in the App Store for the iPhone do this. I also read in some places that doing this would get you rejected from the App Store. How could I go about lowering or raising the iPad's volume from an app, in a way that works even when the app closes? Thank you!

    Read the article

  • "Winamp style" spectrum analyzer

    - by cvb
    I have a program that plots the spectrum analysis (amplitude/frequency) of a signal, which is pretty much the DFT converted to polar form. However, this is not exactly the sort of graph that, say, Winamp (right at the top-left corner), or effectively any other audio software, plots. I am not really sure what this sort of graph is called (if it has a distinct name at all), so I am not sure what to look for. I am pretty positive that the frequency axis is exponential (base 2); the amplitude axis puzzles me, though. Any pointers?
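
    This style of display is usually just called a spectrum analyzer with logarithmically spaced frequency bands; the amplitude axis is normally logarithmic as well, i.e. decibels (20 * log10 of the magnitude), often with some per-frame smoothing or decay so the bars fall gracefully. A hedged sketch of the binning step, starting from the DFT magnitudes the program already computes; the 20 Hz lower edge, the band count and the peak-per-band choice are all adjustable assumptions.

        // magnitudes[k] is assumed to hold |X[k]| for k = 0 .. N/2 from the existing DFT.
        public static double[] toLogBandsDb(double[] magnitudes, double sampleRate, int bands) {
            double binHz = (sampleRate / 2.0) / magnitudes.length; // Hz covered by one DFT bin
            double fMin = 20.0;                                    // lowest band edge
            double fMax = sampleRate / 2.0;                        // Nyquist
            double[] out = new double[bands];
            for (int b = 0; b < bands; b++) {
                // Band edges grow exponentially: equal width on a log (base-2) axis.
                double lo = fMin * Math.pow(fMax / fMin, (double) b / bands);
                double hi = fMin * Math.pow(fMax / fMin, (double) (b + 1) / bands);
                int kLo = Math.max(1, (int) (lo / binHz));
                int kHi = Math.min(magnitudes.length - 1, Math.max(kLo, (int) (hi / binHz)));
                double peak = 0.0;
                for (int k = kLo; k <= kHi; k++) {
                    peak = Math.max(peak, magnitudes[k]);
                }
                out[b] = 20.0 * Math.log10(peak + 1e-12); // amplitude in decibels
            }
            return out;
        }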

    Read the article

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording and save the file as a .wav, or even use the LAME DLL to turn it into an MP3. The problem is, I don't know how to talk to that microphone. Do I use a library that can detect the device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose I'll need to read all the data into a byte array and then serialize that into a WAV file, but I'm not sure. Can I get some pointers on this subject?
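
    You don't read raw bytes off a port: the operating system exposes the microphone as a capture device, and the audio API hands you a stream to read from; in C#, libraries such as NAudio wrap exactly this. Below is a hedged sketch of the same capture-to-WAV flow, written in Java purely for illustration; the format and file name are placeholders.

        import javax.sound.sampled.*;
        import java.io.File;
        import java.io.IOException;

        public class MicCapture {
            // Records from the default capture device straight to a WAV file.
            // Stop by calling line.stop()/line.close() from another thread.
            public static void recordToWav(File out) throws LineUnavailableException, IOException {
                AudioFormat fmt = new AudioFormat(44100f, 16, 1, true, false);
                TargetDataLine line = AudioSystem.getTargetDataLine(fmt); // default microphone
                line.open(fmt);
                line.start();
                // Blocks until the line is stopped/closed; the writer fills in the WAV header sizes.
                AudioSystem.write(new AudioInputStream(line), AudioFileFormat.Type.WAVE, out);
            }
        }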

    Read the article

  • Java stop MIDI playback

    - by user456268
    Hi, I have a Java application which plays MIDI messages from a sequence. I'm doing this using the JFugue library. The problem is that when I try to stop playback with the stop button (which calls sequencer.stop() and sequencer.close()), the last played note keeps sounding for the rest of the time, and I can't stop it. So I'm asking for a solution that stops all audio (MIDI included) playback from the Java application. Note: if you want to propose just muting the volume, keep in mind that the end user should be able to press the play button again and hear the sound, so muting the volume is not a solution (or please explain why it would be). Thank you!
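
    The hanging note is usually a note-on whose matching note-off never arrives once the sequencer stops mid-note. The standard cure is to send "All Sound Off" / "All Notes Off" (MIDI controllers 120 and 123) to every channel when stopping, and not to close the sequencer if the user may press play again. A hedged sketch, assuming JFugue plays through the default Java synthesizer; if it routes to some other Receiver, the same controller messages go there instead.

        import javax.sound.midi.*;

        public class MidiStopper {
            public static void stopPlayback(Sequencer sequencer) throws MidiUnavailableException {
                sequencer.stop(); // keep it open so start() works again later
                Synthesizer synth = MidiSystem.getSynthesizer();
                synth.open();
                for (MidiChannel channel : synth.getChannels()) {
                    if (channel != null) {
                        channel.allSoundOff(); // silence immediately (CC 120)
                        channel.allNotesOff(); // release any stuck note-ons (CC 123)
                    }
                }
            }
        }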

    Read the article

  • SoundManager + FFMPEG causing loud popping sound when streaming MP3s?

    - by David
    Hi there, I built an application that plays both uploaded original mp3 files, and copies that have been converted with FFMPEG. I am finding that in some cases the FFMPEG files have a horrible popping/clicking/screeching sound for a split second at startup (hear below). But when I analyze the file in an audio editor there is nothing there, so it seems to be either the browser or soundManager reacting badly to something in that file. Wondering if there is any way I can fix this either by adjusting FFMPEG settings, soundManager settings, or..... Any suggestions? I've uploaded the offending sound in the link below (before the music starts playing). Thanks for your help! Hear sound

    Read the article

  • Python frequency detection

    - by Tsuki
    OK, what I'm trying to do is a kind of audio-processing program that can detect a prevalent frequency; if the frequency is played for long enough (a few ms), I know I have a positive match. I know I would need to use an FFT or something similar, but I'm weak in this field of math; I searched the internet but did not find code that does only this. The goal I'm trying to achieve is to make myself a custom protocol to send data through sound. I need a very low bitrate per second, but I'm also very limited on the transmitting end, so the receiving software will need to be fully custom (I can't use an actual hardware/software modem). I also want this to be software only (no additional hardware except the sound card). Thanks a lot for the help.
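
    For detecting the presence of a handful of known frequencies you don't need a full FFT; the Goertzel algorithm measures the energy at one target frequency over a short block of samples, which is exactly what a "was tone X played for a few milliseconds" check needs. A hedged sketch, written in Java for illustration (the loop is a few lines of Python); the block length and detection threshold are tuning knobs, not fixed values.

        // Returns the signal power at targetHz over one block of samples.
        // samples can be floats in -1..1 or raw PCM values; the scale only shifts the threshold.
        public static double goertzelPower(double[] samples, double targetHz, double sampleRate) {
            double omega = 2.0 * Math.PI * targetHz / sampleRate;
            double coeff = 2.0 * Math.cos(omega);
            double sPrev = 0.0, sPrev2 = 0.0;
            for (double x : samples) {
                double s = x + coeff * sPrev - sPrev2;
                sPrev2 = sPrev;
                sPrev = s;
            }
            return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
        }

    Run this over short blocks (say 10 to 20 ms) for every frequency in your alphabet; a tone counts as detected when its power stays clearly above the other candidates for a few consecutive blocks.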

    Read the article

  • iPhone SDK SDL_openAudio with Multitasking Support

    - by brokedid
    Hello, I'm playing audio from an online live RTSP stream with ffmpeg (because Apple doesn't support RTSP live streaming). Now I would like to play my stream in the background. I started a thread in the background and registered the audio for background support. When the application enters the background the NSThread is paused, and it resumes after returning from the background. If I start playing music (an MP3 stream) in an application that uses the official Apple frameworks, then when that app enters the background both streams keep playing. What can I do to fix this?

    Read the article

  • What Is Causing The Humming Sound On My Website?

    - by Draven Vestatt
    I've noticed this on a handful of websites on the web. Sometimes there will be a low humming sound that doesn't increase or decrease with the volume. I've searched the web, and I can't find anything addressing it. The website I've been working on (still under construction): http://nottheactualaddress.com Do you hear a low humming sound? The audio is low even if you turn up your volume. If so, what do you think is causing it? It's driving me crazy...

    Read the article

  • Can I play any Buffer only once at a given time?

    - by mystify
    From the OpenAL documentation: The basic OpenAL objects are a Listener, a Source, and a Buffer. There can be a large number of Buffers, which contain audio data. Each buffer can be attached to one or more Sources. My problem is that I have one sound file which I need to play multiple times per second, at the same time. The sound is 2 seconds long, so the playbacks will overlap. Would I need multiple filled buffers for this (i.e. multiple copies of that sound in memory)? If I attach one Buffer to multiple Sources, would I be able to play the sound 10 times, overlapping itself, with just one copy in memory? Or would I still have to deal with 10 copies of that sound in memory?

    Read the article

  • Struggling with webkit-transition in JavaScript

    - by Mungbeans
    I've tried a few variations of using webkit-transition that I've found from googling, but I've not been able to get any of them to work. I have some audio controls that I make appear on a click event; they appear suddenly and jerkily, so I want to fade them in. The target browser is iOS, so I am trying the WebKit extensions. This is what I currently have:

        <div id = "controls"> <audio id = "audio" controls></audio> </div>

        #controls { position:absolute; top: 35px; left:73px; height: 20px; width: 180px; display:none; }
        #audio { opacity:0.0; }

        audio.src = clip;
        audio.addEventListener('pause', onPauseOrStop, false);
        audio.addEventListener('ended', onPauseOrStop, false);
        audio.play();
        audioControls.style.display = 'block';
        audio.style.setProperty("-webkit-transition", "opacity 0.4s");
        audio.style.opacity = 0.7;

    The documentation for webkit-transition says it takes effect on a change in the property, so I was assuming that changing style.opacity in the last line would kick it off. The controls appear with an opacity of 0.7, but I want them to fade in, and that animation isn't happening. I also tried this:

        #audio {
            opacity:0.0;
            -webkit-transition-property: opacity;
            -webkit-transition-duration: 1s;
            -webkit-timing-function: ease-in;
        }

    I also tried audio.style.webkitTransition = "opacity 1.4s"; from this posting: How to set CSS3 transition using javascript? I can't get anything to work. I'm testing on iOS, Safari desktop and Chrome. Same non-result on all of them.

    Read the article

  • Core Audio - CARingBuffer

    - by tech74
    Hi, I'm looking at using the CARingBuffer in iPhone SDK 3.1, Developer\Extras\CoreAudio\PublicUtility, but I was a little puzzled by some of its methods. (This will really only make sense to anyone who has used this class.) For example, the GetTimeBounds, SetTimeBounds and ClipTimeBounds functions: what are these actually doing? Also, when using it I get crashes caused, for example, by this call in the main Fetch method: ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);

        CARingBufferError CARingBuffer::Fetch(AudioBufferList *abl, UInt32 nFrames, SampleTime startRead)
        {
            SampleTime endRead = startRead + nFrames;
            SampleTime startRead0 = startRead;
            SampleTime endRead0 = endRead;
            SampleTime size;

            CARingBufferError err = ClipTimeBounds(startRead, endRead);
            if (err) return err;
            size = endRead - startRead;

            SInt32 destStartOffset = startRead - startRead0;
            if (destStartOffset > 0) {
                ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);
            }
            ...

    Here destStartOffset has become larger than the size of the abl buffer list, so when the memset is done it writes past the boundaries of the abl buffer list, causing the crash. Why doesn't this class have checks in place to prevent this?

    Read the article

  • Audio Recording and Playback

    - by Siva
    Hi, I am new to iPhone development. In my app, I want to record a voice and play the recorded voice back. I am currently trying to do this via the SpeakHere sample code, but I find it too hard to understand with the AudioToolbox framework. Some people say the AudioToolbox framework is too difficult to work with. Is there another sample that uses something other than the AudioToolbox framework, or which way is best to do this? Please help me!

    Read the article

  • Audio looping in Objective-C/iPhone

    - by Neurofluxation
    So, I'm finishing up an iPhone app. I have the following code in place to play the file:

        while(![player isPlaying]) {
            totalSoundDuration = soundDuration + 0.5; // gives a half-second break between sounds
            sleep(totalSoundDuration);                // don't play the next sound until the previous sound has finished
            [player play];                            // play sound
            NSLog(@" \n Sound Finished Playing \n");  // output to console
        }

    For some reason, the sound plays once, then the code loops and outputs the following:

        Sound Finished Playing
        Sound Finished Playing
        Sound Finished Playing
        etc...

    This just repeats forever. I don't suppose any of you lovely people can fathom what the problem could be? Cheers!

    Read the article

  • detecting pauses in a spoken word audio file using pymad, pcm, vad, etc

    - by james
    First I am going to state broadly what I'm trying to do and ask for advice. Then I will explain my current approach and ask for answers to my current problems. Problem: I have an MP3 file of a person speaking. I'd like to split it up into segments roughly corresponding to a sentence or phrase. (I'd do it manually, but we are talking hours of data.) If you have advice on how to do this programmatically, or on existing utilities, I'd love to hear it. (I'm aware of voice activity detection and I've looked into it a bit, but I didn't see any freely available utilities.) Current approach: I thought the simplest thing would be to scan the MP3 at certain intervals and identify places where the average volume is below some threshold, then use some existing utility to cut up the MP3 at those locations. I've been playing around with pymad and I believe I've successfully extracted the PCM (pulse code modulation) data for each frame of the MP3. Now I am stuck because I can't really wrap my head around how the PCM data translates to relative volume. I'm also aware of other complicating factors like multiple channels, big-endian vs. little-endian, etc. Advice on how to map a group of PCM samples to relative volume would be key. Thanks!
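
    The usual mapping from PCM samples to "relative volume" is the root mean square (RMS) of a short window, expressed in decibels relative to full scale. A hedged sketch of that calculation, written in Java for illustration (the same arithmetic is a few lines of Python over pymad's decoded buffers); it assumes 16-bit little-endian samples and, for stereo, simply folds the interleaved channels into one figure, which is fine for a silence detector.

        // Relative volume of one block of 16-bit little-endian PCM, in dBFS (0 = full scale).
        public static double rmsDbfs(byte[] pcm) {
            long sumSquares = 0;
            int samples = pcm.length / 2;
            for (int i = 0; i < samples; i++) {
                // low byte unsigned, high byte sign-extended, little-endian order
                int sample = (pcm[2 * i] & 0xff) | (pcm[2 * i + 1] << 8);
                sumSquares += (long) sample * sample;
            }
            double rms = Math.sqrt((double) sumSquares / Math.max(1, samples));
            return 20.0 * Math.log10(rms / 32768.0 + 1e-12);
        }

    Windows of 20 to 50 ms whose RMS stays below something like -45 dBFS for a few hundred milliseconds are reasonable pause candidates; both numbers are starting points to tune, not constants.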

    Read the article

  • Audio Reminders

    - by abhishek mishra
    Hi, I am developing a reminder application. Part of it is to have voice notes as reminders. On a click of the voice-notes button I want to start the built-in voice recorder. How do I go about that? Also, once the recording is done I want to retrieve the path where it gets stored, so that it can be played automatically when the reminder's time is reached. Is that possible?
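
    One low-effort route is to hand the job to whatever app on the device handles the system "record sound" intent and read back the Uri of the saved clip. A hedged sketch follows; not every device ships an activity for this intent (hence the resolveActivity check), and REQUEST_VOICE_NOTE is an arbitrary request code for this example. If that turns out to be too fragile across devices, recording inside the app with MediaRecorder gives full control over the output path.

        import android.app.Activity;
        import android.content.Intent;
        import android.net.Uri;
        import android.provider.MediaStore;

        public class VoiceNoteActivity extends Activity {
            private static final int REQUEST_VOICE_NOTE = 42;

            void startVoiceNote() {
                Intent intent = new Intent(MediaStore.Audio.Media.RECORD_SOUND_ACTION);
                if (intent.resolveActivity(getPackageManager()) != null) {
                    startActivityForResult(intent, REQUEST_VOICE_NOTE);
                }
            }

            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                super.onActivityResult(requestCode, resultCode, data);
                if (requestCode == REQUEST_VOICE_NOTE && resultCode == RESULT_OK && data != null) {
                    Uri clip = data.getData(); // persist this Uri alongside the reminder
                    // When the reminder fires: MediaPlayer.create(this, clip).start();
                }
            }
        }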

    Read the article

  • HTML5 iPhone Safari Mobile: visualize something rather than the QuickTime symbol when creating an audio tag

    - by Antonio Murgia
    I'm writing a very simple HTML5 web page for the iPhone (the page is this one, the "Not Working" link). Everything works, but on the page from the iPhone I see the QuickTime logo with a slash through it, and if I tap on it the player shows the play button with the QuickTime logo in the background. Is it possible to replace the logos with a personal image? Thank you in advance.

    Read the article
