Search Results

Search found 5322 results on 213 pages for 'audio chat'.

Page 70/213

  • Alternate play / pause button for WordPress wpaudio soundmanager plugin

    - by j-man86
    Hello! I am using the wpaudio plugin to convert MP3 links into a JavaScript/Flash audio player. My problem is that I use this plugin in two areas of my site: one on a black background, and one on a white background. I need to use an alternate set of play/pause buttons for each page (white buttons for the black background and vice versa). I am at a total loss on how to do this. I need to somehow incorporate an "if page is..." check into wpaudio.js, but I don't know how to do this with jQuery. Can anyone help? Thanks so much!

    Read the article

  • Manipulating multi-track OGG files programmatically

    - by Chad Birch
    I'm planning to create a program for manipulating multi-track OGG files, but I don't have any experience with the relevant libraries, so I'm looking for recommendations about which language/library to use. I don't really have any preference for the language; I'll happily code it in C, C#, Python, whatever makes things easiest (or even possible). Perhaps it's even a possibility to automate Audacity somehow? In terms of requirements, I'm not looking for anything particularly fancy. It will probably be a command-line program; I don't need to be able to play the audio, draw image representations of the waveforms, etc. The program will basically be used as a converter, but I need to do some processing before outputting. That is, I need the ability to programmatically remove some tracks, set panning per track, change track volumes, etc. Nothing too complex, just some basic processing, and then output the result in either MP3 or a format easily converted to MP3, such as WAV. Any suggestions or general information would be appreciated, thanks.
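
    One possible shape for such a converter, sketched in Python by shelling out to ffmpeg. This is only a sketch: it assumes ffmpeg is on the PATH and was built with Vorbis support, and the file names, stream index, and gain are purely illustrative.

        import subprocess

        def extract_track(src, stream_index, gain, out_wav):
            """Pull one audio stream out of a multi-track OGG, scale its
            volume, and write it as WAV (ready for an MP3 encoder)."""
            subprocess.check_call([
                "ffmpeg", "-y", "-i", src,
                "-map", f"0:a:{stream_index}",   # pick the Nth audio stream
                "-filter:a", f"volume={gain}",   # per-track volume
                out_wav,
            ])

        extract_track("session.ogg", 1, 0.8, "track1.wav")  # hypothetical names

    Per-track panning can be handled the same way with ffmpeg's pan audio filter, and the resulting WAVs mixed back together before MP3 encoding.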

    Read the article

  • AudioQueueOfflineRender returning empty data

    - by hyn
    I'm having problems using AudioQueueOfflineRender to decode AAC data. When I examine the buffer after the call, it is always filled with empty data. I made sure the input buffer is valid and that packet descriptions are provided. I searched and found that a few others have had the same problem: http://lists.apple.com/archives/Coreaudio-api/2008/Jul/msg00119.html Also, the inTimestamp argument doesn't make sense to me: why should the renderer care which point in the audio the beginning of the buffer corresponds to? The function throws an error if I pass NULL, so I pass in the timestamp anyway.

    Read the article

  • How to extract semi-precise frequencies from a WAV file using Fourier Transforms

    - by Seisatsu
    Let us say that I have a WAV file. In this file is a series of sine tones at precise 1-second intervals. I want to use the FFTW library to extract these tones in sequence. Is this particularly hard to do? How would I go about it? Also, what is the best way to write tones of this kind into a WAV file? I assume I would only need a simple audio library for the output. My language of choice is C.
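
    For orientation, here is the approach sketched in Python with numpy's FFT; the same steps translate to C with FFTW (plan a real-to-complex transform per one-second window and scan the magnitudes). The file name is hypothetical.

        import numpy as np
        from scipy.io import wavfile

        rate, data = wavfile.read("tones.wav")      # hypothetical input file
        if data.ndim > 1:
            data = data.mean(axis=1)                # fold stereo to mono

        for sec in range(len(data) // rate):
            window = data[sec * rate:(sec + 1) * rate].astype(np.float64)
            spectrum = np.abs(np.fft.rfft(window))
            peak_bin = int(np.argmax(spectrum))
            freq = peak_bin * rate / len(window)    # bin index -> Hz
            print(f"second {sec}: ~{freq:.1f} Hz")

    Writing the tones in the first place is equally small: build one second of np.sin(2 * np.pi * f * np.arange(rate) / rate), scale it to int16, and hand the concatenated array to wavfile.write.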

    Read the article

  • Write wave files to memory in Java

    - by Cliff
    I'm trying to figure out why my servlet code creates wave files with improper headers. I use:

        AudioSystem.write(
            new AudioInputStream(
                new ByteArrayInputStream(memoryBytes),
                new AudioFormat(22000, 16, 1, true, false),
                memoryBytes.length),
            AudioFileFormat.Type.WAVE,
            servletOutputStream);

    taking a byte array from memory containing raw PCM samples and a servlet output stream that gets returned to the client. The result is a normal wave file, but with zeros in the chunk-size fields. Is the API broken? I would think that the size could be filled in from the length passed to the audio input stream. But now, after typing this out, I'm thinking the stream isn't making this info available to the outer write() call on AudioSystem. It seems like the AudioSystem.write call needs a size parameter unless it is able to pull the size from the stream... which wouldn't work with an arbitrarily sized stream. Does anyone know how to make this example work?

    Read the article

  • Why doesn't R.raw.'songname' work on Android devices?

    - by James Rattray
    I have some media (audio tracks) in an app, with the file path 'R.raw.test'. I use some code to get it into a MediaPlayer: MediaPlayer.create(Textbox.this, R.raw.fly); And it works PERFECTLY on the Android emulator (plays the track on a button click). Can someone explain why, when I put it on my Archos (5 IT), it doesn't work at all? As soon as the button is clicked, it crashes... Do you have to do something to the file paths, or what? Please help... Thanks a lot... James

    Read the article

  • Create mp3 previews from wav and aiff files

    - by August Lilleaas
    I would like to create a program that makes MP3s of the first 30 seconds of an AIFF or WAV file. I would also like to be able to choose the location and length, such as the audio between 2:12 and 2:42. Are there any tools that let me do this? Shelling out is OK. The application will run on a Linux server, so it would have to be a tool that works on Linux. I don't mind doing it in two steps, i.e. a tool that first creates the cutout of the AIFF/WAV, then passes it to an MP3 encoder.
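
    A hedged two-in-one sketch: ffmpeg can do the cutting and the MP3 encoding in a single call, using -ss for the start position and -t for the length. This assumes an ffmpeg build with libmp3lame; the file names and bitrate are illustrative.

        import subprocess

        def mp3_preview(src, start, duration, dest):
            """Cut `duration` seconds starting at `start` out of a WAV/AIFF
            file and encode it as MP3 in one ffmpeg invocation."""
            subprocess.check_call([
                "ffmpeg", "-y",
                "-ss", str(start),      # where the preview begins, e.g. "132" or "2:12"
                "-t", str(duration),    # preview length in seconds
                "-i", src,
                "-b:a", "128k",         # MP3 bitrate (needs libmp3lame)
                dest,
            ])

        mp3_preview("song.aiff", "2:12", 30, "preview.mp3")  # hypothetical names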

    Read the article

  • Java and gstreamer-java initialisation error

    - by Mark
    I am building a small app which will play streaming audio from the internet in Java (mainly internet radio stations). I have decided to use the gstreamer-java library for the sound, which uses JNA. I would like to include a check in the code to see whether the GStreamer library has been initialised. When I leave the Gst.init() call out (to mimic the library not being initialised correctly), the application throws out the following messages:

        (process:21888): GLib-GObject-CRITICAL **: /build/buildd/glib2.0-2.22.3/gobject/gtype.c:2458: initialization assertion failed, use IA__g_type_init() prior to this function
        (process:21888): GLib-CRITICAL **: g_once_init_leave: assertion `initialization_value != 0' failed

    The app calls the gstreamer-java library; the error messages appear, but the thread continues to run, hogging the CPU. Is there any way to catch the error, or to add a check to prevent it from happening? An alternative would be to put the Gst.init() call in the main class, but I am not sure this would always guarantee that the GStreamer library is initialised.

    Read the article

  • Detecting when Bluetooth is disabled on iOS5

    - by Non Umemoto
    I'm developing a blog-speaker app. I want to pause the audio when Bluetooth is disabled, like the iPod app does. I thought this was not possible without using a private API after reading this: Check if Bluetooth is Enabled? But my customer told me that the Rhapsody and DI Radio apps both support it. Then I found that iOS 5 has the Core Bluetooth framework. https://developer.apple.com/library/ios/documentation/CoreBluetooth/Reference/CoreBluetooth_Framework/CoreBluetooth_Framework.pdf The CBCentralManagerStatePoweredOff status seems like the one. But the description says this API only supports Bluetooth 4.0 Low Energy devices. Has anyone tried doing the same thing? I want to support currently popular Bluetooth headsets, or a Bluetooth-enabled steering wheel in a car. I don't know if it's worth trying when it only supports some brand-new Bluetooth devices.

    Read the article

  • About the data size filled in the buffer

    - by Bohan Lu
    I need low-latency audio in my project, and I know Android 2.3 supports OpenSL ES. I have read the documents and sample code, and I have decided to use the Android simple buffer queue for playback and recording. I am now writing a simple application to test this. However, I have a question about recording: if I stop the recorder while it is recording, how do I know the exact number of bytes filled into the last buffer if it is not filled up? In version 1.1, the callback function has parameters describing the buffer and its filled size, but there are no such parameters in version 1.0.1. Is there any way to get this information? Any suggestion would be greatly appreciated!

    Read the article

  • Making of a "Babbelbox" where you can speak to for partys

    - by Spidfire
    I've got a project to make for a party; in Holland it's called a "Babbelbox". It's a computer with a webcam and microphone that can be used to make a kind of video log of everyone who wants to say something about the party. But the problem is that I don't know where to start. I've made a kind of video capture system in C, but I can't save the data in a decent format, so it fills my hard disk within an hour. Requirements:

        - Record video + audio
        - Recording has to start after pressing a button
        - Good compression of the recorded videos (even better if they can be read by Final Cut Pro or Premiere Pro)
        - A lightweight program would be nice, but I could scale up the computer power
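
    As a possible starting point, and only a sketch: a thin Python wrapper around ffmpeg gives compressed, editor-friendly H.264/AAC files with one process per take. This assumes a Linux box with a V4L2 webcam and an ALSA microphone; the device names and output path will vary.

        import subprocess, time

        def record_take(out_path):
            """Start capturing webcam video + mic audio; returns the ffmpeg
            process. Terminate it to end the take."""
            return subprocess.Popen([
                "ffmpeg",
                "-f", "v4l2", "-i", "/dev/video0",   # webcam (Linux V4L2)
                "-f", "alsa", "-i", "default",       # microphone (ALSA)
                "-c:v", "libx264", "-preset", "fast",
                "-c:a", "aac",
                out_path,
            ])

        proc = record_take(f"take_{int(time.time())}.mp4")
        time.sleep(10)     # guest talks; in the real app this is the button logic
        proc.terminate()   # first SIGTERM should make ffmpeg finish the file cleanly

    In the real app, the button press maps to record_take() and the stop (or a timeout) maps to terminate(); each take is a separate, modestly sized file.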

    Read the article

  • For the iPad or iPhone, how do you control the system Volume? For example, have a button that mutes

    - by SolidSnake4444
    I would like to make a button in my iPad app (this will probably be similar for iPhone apps) such that when I push this button, all audio is muted, even when you exit the app. I don't see any way to control the volume, although I'm sure other apps I have seen in the App Store for the iPhone do this. I have also read in some places that doing this would get you rejected from the App Store. How could I go about lowering or raising the volume of the iPad from an app, in a way that works even after the app closes? Thank you!

    Read the article

  • "Winamp style" spectrum analyzer

    - by cvb
    I have a program that plots the spectrum analysis (amplitude/frequency) of a signal, which is pretty much the DFT converted to polar form. However, this is not exactly the sort of graph that, say, Winamp (right at the top-left corner), or effectively any other audio software, plots. I am not really sure what this sort of graph is called (if it has a distinct name at all), so I am not sure what to look for. I am pretty positive about the frequency axis being base-two exponential; the amplitude axis puzzles me, though. Any pointers?
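
    For what it's worth, a plausible reconstruction of that kind of display is magnitude in decibels (a logarithmic amplitude axis) summed into logarithmically spaced frequency bands. A sketch in Python/numpy; the band count and edge frequencies are arbitrary choices.

        import numpy as np

        def analyzer_bars(frame, rate, n_bands=16):
            """Turn one audio frame into 'analyzer-style' bar heights:
            dB magnitudes summed into log-spaced frequency bands."""
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
            freqs = np.fft.rfftfreq(len(frame), 1.0 / rate)
            edges = np.geomspace(20, rate / 2, n_bands + 1)  # log-spaced edges
            bars = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                band = spectrum[(freqs >= lo) & (freqs < hi)]
                power = np.sum(band ** 2) + 1e-12            # avoid log(0)
                bars.append(10 * np.log10(power))            # amplitude in dB
            return bars

    The dB conversion is what compresses the huge dynamic range of raw DFT magnitudes into bars that move visibly at both quiet and loud passages.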

    Read the article

  • How to receive a datastream from a device on your computer, in C#

    - by WebDevHobo
    I plan to build a small audio-recorder app in C#. My laptop has a built-in microphone that's always active, so I want to use that as an early-stage test. I would simply start recording, save the file as a .wav, or even use the LAME DLL to make it into an MP3. The problem is, I don't know how to contact that microphone. Do I use a library that can detect the device, or do I just catch a stream of bytes from the port that the device is on? I don't have any experience with receiving data from connected devices. I suppose that I'll need to read all the data into a byte array and then serialize that into a WAV file, but I'm not sure. Can I get some pointers on this subject?
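
    Conceptually, you don't read raw bytes off a port: a capture API hands you buffers of PCM samples, and a writer wraps them in the RIFF/WAVE header (in C#, an audio library such as NAudio fills this role). The same flow, sketched in Python with the sounddevice package purely for illustration:

        import sounddevice as sd
        from scipy.io import wavfile

        RATE = 44100
        SECONDS = 5

        # The capture API fills a buffer of PCM samples -- no port reading.
        samples = sd.rec(int(SECONDS * RATE), samplerate=RATE,
                         channels=1, dtype="int16")
        sd.wait()                             # block until recording finishes

        # wavfile.write prepends the RIFF/WAVE header around the raw PCM.
        wavfile.write("capture.wav", RATE, samples)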

    Read the article

  • Java stop MIDI playback

    - by user456268
    Hi, I have a Java application which plays MIDI messages from a sequence; I'm doing this using the jfugue library. The problem is that when I try to stop playback with the stop button (which calls sequencer.stop() and sequencer.close()), the last played note keeps sounding for the rest of the time, and I can't stop it. So I'm asking for a solution for stopping all audio (MIDI included) playback from a Java application. Note: if you want to propose just muting the volume, be aware that I want the end user to be able to press the play button again and hear the sound again, so muting the volume is not a solution; or please explain why it would be. Thank you!

    Read the article

  • SoundManager + FFMPEG causing loud popping sound when streaming MP3s?

    - by David
    Hi there, I built an application that plays both uploaded original MP3 files and copies that have been converted with FFMPEG. I am finding that in some cases the FFMPEG files have a horrible popping/clicking/screeching sound for a split second at startup (hear below). But when I analyze the file in an audio editor there is nothing there, so it seems to be either the browser or soundManager reacting badly to something in that file. I'm wondering if there is any way I can fix this, either by adjusting FFMPEG settings, soundManager settings, or..... Any suggestions? I've uploaded the offending sound in the link below (it occurs before the music starts playing). Thanks for your help! Hear sound

    Read the article

  • Python frequency detection

    - by Tsuki
    OK, what I'm trying to do is a kind of audio-processing program that can detect a prevalent frequency; if the frequency is played for long enough (a few ms), I know I have a positive match. I know I would need to use an FFT or something similar, but I'm weak in this field of math; I searched the internet but did not find code that does only this. The goal I'm trying to achieve is a custom protocol to send data through sound. I need only a very low bitrate per second, but I'm also very limited on the transmitting end, so the receiving software will need to be fully custom (I can't use an actual hardware/software modem). I also want this to be software only (no additional hardware except the sound card). Thanks a lot for the help.
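
    A minimal sketch of one way to do the detection in Python, assuming the captured samples are already in a mono numpy array; the block size, tolerance, and thresholds are arbitrary and would need tuning.

        import numpy as np

        RATE = 44100
        BLOCK = 2048                 # ~46 ms per block at 44.1 kHz

        def dominant_freq(block):
            """Return (frequency, strength) of the strongest bin in one block."""
            spectrum = np.abs(np.fft.rfft(block * np.hanning(len(block))))
            k = int(np.argmax(spectrum[1:])) + 1     # skip the DC bin
            strength = spectrum[k] / (np.mean(spectrum) + 1e-12)
            return k * RATE / len(block), strength

        def detect(samples, target_hz, tol_hz=20, min_blocks=3, min_strength=10):
            """True if target_hz dominates min_blocks consecutive blocks."""
            run = 0
            for i in range(0, len(samples) - BLOCK, BLOCK):
                block = samples[i:i + BLOCK].astype(np.float64)
                freq, strength = dominant_freq(block)
                hit = abs(freq - target_hz) < tol_hz and strength > min_strength
                run = run + 1 if hit else 0
                if run >= min_blocks:
                    return True
            return False

    If the set of expected tones is fixed, a Goertzel filter per tone is cheaper than a full FFT and is the classic choice for this kind of signalling.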

    Read the article

  • iPhone SDK SDL_openAudio with Multitasking Support

    - by brokedid
    Hello, I'm playing audio from an online live RTSP stream with ffmpeg (because Apple doesn't support RTSP live streaming). Now I would like to play my stream in the background. I started a thread in the background and registered the audio for background support. When the application enters the background the NSThread is paused, and it resumes after returning from the background. If I start playing music (an MP3 stream) in an application that uses the official Apple frameworks, then when the app enters the background both streams keep playing. What can I do to fix this?

    Read the article

  • Can I play any Buffer only once at a given time?

    - by mystify
    From the OpenAL documentation: The basic OpenAL objects are a Listener, a Source, and a Buffer. There can be a large number of Buffers, which contain audio data. Each buffer can be attached to one or more Sources. My problem is that I have one sound file which I need to play multiple times per second, at the same time. The sound is 2 seconds long, so it will overlap. Would I need multiple filled buffers for this (= multiple copies of that sound in memory)? If I attached one Buffer to multiple Sources, would I be able to play the sound 10 times, overlapping itself, with just one copy in memory? Or would I still have to deal with 10 copies of that sound in memory?

    Read the article

  • What Is Causing The Humming Sound On My Website?

    - by Draven Vestatt
    I've noticed this on a handful of websites on the web. Sometimes there will be a low humming sound that doesn't increase or decrease with volume. I've searched the web, and I can't find anything addressing it. My website that I've been working on (still under construction): http://nottheactualaddress.com Do you hear a low humming sound? The audio is low even if you turn up your volume. If so, what do you think is causing it? It's driving me crazy...

    Read the article

  • Struggling with webkit-transition in JavaScript

    - by Mungbeans
    I've tried a few variations of using -webkit-transition that I've found from googling, but I've not been able to get any of them to work. I have some audio controls that I make appear on a click event; they appear suddenly and jerkily, so I want to fade them in. The target browser is iOS, so I am trying the WebKit extensions. This is what I currently have:

        <div id="controls">
            <audio id="audio" controls></audio>
        </div>

        #controls {
            position: absolute;
            top: 35px;
            left: 73px;
            height: 20px;
            width: 180px;
            display: none;
        }
        #audio {
            opacity: 0.0;
        }

        audio.src = clip;
        audio.addEventListener('pause', onPauseOrStop, false);
        audio.addEventListener('ended', onPauseOrStop, false);
        audio.play();
        audioControls.style.display = 'block';
        audio.style.setProperty("-webkit-transition", "opacity 0.4s");
        audio.style.opacity = 0.7;

    The documentation for -webkit-transition says it takes effect on a change in the property, so I was assuming that changing style.opacity in the last line would kick it off. The controls appear with an opacity of 0.7, but I want them to fade in, and that animation isn't happening. I also tried this:

        #audio {
            opacity: 0.0;
            -webkit-transition-property: opacity;
            -webkit-transition-duration: 1s;
            -webkit-timing-function: ease-in;
        }

    I also tried audio.style.webkitTransition = "opacity 1.4s"; from this posting: How to set CSS3 transition using javascript? I can't get anything to work. I'm testing on iOS, Safari desktop, and Chrome, with the same non-result on all of them.

    Read the article

  • Core Audio - CARingBuffer

    - by tech74
    Hi, I'm looking at using the CARingBuffer in iPhone SDK 3.1 Developer\Extras\CoreAudio\PublicUtility; however, I was a little puzzled by some of its methods. (This will really only make sense to anyone who has used this class.) For example, the GetTimeBounds, SetTimeBounds, and ClipTimeBounds functions: what are these actually doing? Also, when using it, I get crashes caused by, for example, this call in the main Fetch method: ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);

        CARingBufferError CARingBuffer::Fetch(AudioBufferList *abl, UInt32 nFrames, SampleTime startRead)
        {
            SampleTime endRead = startRead + nFrames;
            SampleTime startRead0 = startRead;
            SampleTime endRead0 = endRead;
            SampleTime size;

            CARingBufferError err = ClipTimeBounds(startRead, endRead);
            if (err) return err;
            size = endRead - startRead;

            SInt32 destStartOffset = startRead - startRead0;
            if (destStartOffset > 0) {
                ZeroABL(abl, 0, destStartOffset * mBytesPerFrame);
            }

    Here destStartOffset has become larger than the size of the abl buffer list, so when the memset is done it exceeds the boundaries of the abl buffer list, causing the crash. Why hasn't this class got checks in place to prevent this?

    Read the article

  • Audio Recording and Playback

    - by Siva
    Hi, I am new to iPhone development. In my app, I want to record a voice and play back the recording. I am trying to do this via the SpeakHere sample code, but I find it too hard to understand with the AudioToolbox framework. Some people say the AudioToolbox framework is too difficult to work with. Is there any other sample that uses something other than the AudioToolbox framework, or which way is best to do this? Please help me!

    Read the article

  • Audio looping in Objective-C/iPhone

    - by Neurofluxation
    So, I'm finishing up an iPhone app. I have the following code in place to play the file:

        while (![player isPlaying]) {
            totalSoundDuration = soundDuration + 0.5; // Gives a half-second break between sounds
            sleep(totalSoundDuration);                // Don't play the next sound until the previous sound has finished
            [player play];                            // Play sound
            NSLog(@" \n Sound Finished Playing \n");  // Output to console
        }

    For some reason, the sound plays once, then the code loops and it outputs the following:

        Sound Finished Playing
        Sound Finished Playing
        Sound Finished Playing
        etc...

    This just repeats forever. I don't suppose any of you lovely people can fathom what could be the boggle? Cheers!

    Read the article

  • Detecting pauses in a spoken-word audio file using pymad, PCM, VAD, etc.

    - by james
    First I am going to broadly state what I'm trying to do and ask for advice. Then I will explain my current approach and ask for answers to my current problems. Problem: I have an MP3 file of a person speaking. I'd like to split it up into segments roughly corresponding to a sentence or phrase. (I'd do it manually, but we are talking hours of data.) If you have advice on how to do this programmatically, or on some existing utilities, I'd love to hear it. (I'm aware of voice activity detection and I've looked into it a bit, but I didn't see any freely available utilities.) Current approach: I thought the simplest thing would be to scan the MP3 at certain intervals and identify places where the average volume was below some threshold, then use some existing utility to cut up the MP3 at those locations. I've been playing around with pymad, and I believe that I've successfully extracted the PCM (pulse-code modulation) data for each frame of the MP3. Now I am stuck, because I can't really seem to wrap my head around how the PCM data translates to relative volume. I'm also aware of other complicating factors like multiple channels, big-endian vs. little-endian, etc. Advice on how to map a group of PCM samples to relative volume would be key. Thanks!
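
    For the volume-mapping step, the usual measure is the RMS of each chunk of samples, expressed in dB relative to full scale. A sketch assuming 16-bit little-endian signed PCM, interleaved if stereo; the silence threshold is arbitrary and needs tuning by ear.

        import numpy as np

        def frame_db(pcm_bytes, channels=2):
            """RMS level of one frame of 16-bit little-endian PCM, in dBFS.
            Interleaved channels are folded to mono before measuring."""
            x = np.frombuffer(pcm_bytes, dtype="<i2").astype(np.float64)
            if channels > 1:
                x = x.reshape(-1, channels).mean(axis=1)   # average the channels
            rms = np.sqrt(np.mean(x * x)) + 1e-12          # avoid log(0) on silence
            return 20 * np.log10(rms / 32768.0)            # 0 dBFS = full scale

        SILENCE_DB = -45.0     # threshold to tune against the actual recording
        is_pause = frame_db(b"\x00\x00" * 1152) < SILENCE_DB

    A run of consecutive frames below the threshold then marks a candidate cut point, and its duration distinguishes sentence gaps from ordinary word gaps.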

    Read the article
