Search Results

Search found 9901 results on 397 pages for 'audio processing'.

Page 14 of 397

  • Java algorithm for normalizing audio

    - by Marty Pitt
    I'm trying to normalize an audio file of speech. Specifically, where an audio file contains peaks in volume, I'm trying to level it out, so the quiet sections are louder, and the peaks are quieter. I know very little about audio manipulation, beyond what I've learnt from working on this task. Also, my math is embarrassingly weak. I've done some research, and the Xuggle site provides a sample which shows reducing the volume using the following code: (full version here) @Override public void onAudioSamples(IAudioSamplesEvent event) { // get the raw audio bytes and adjust its value ShortBuffer buffer = event.getAudioSamples().getByteBuffer().asShortBuffer(); for (int i = 0; i < buffer.limit(); ++i) buffer.put(i, (short)(buffer.get(i) * mVolume)); super.onAudioSamples(event); } Here, they modify the bytes in getAudioSamples() by a constant of mVolume. Building on this approach, I've attempted a normalisation that modifies the bytes in getAudioSamples() to a normalised value, considering the max/min in the file. (See below for details.) I have a simple filter to leave "silence" alone (i.e., anything below a certain value). I'm finding that the output file is very noisy (i.e., the quality is seriously degraded). I assume that the error is either in my normalisation algorithm, or the way I manipulate the bytes. However, I'm unsure of where to go next. Here's an abridged version of what I'm currently doing. Step 1: Find peaks in file: Reads the full audio file, and finds the highest and lowest values of buffer.get() for all AudioSamples @Override public void onAudioSamples(IAudioSamplesEvent event) { IAudioSamples audioSamples = event.getAudioSamples(); ShortBuffer buffer = audioSamples.getByteBuffer().asShortBuffer(); short min = Short.MAX_VALUE; short max = Short.MIN_VALUE; for (int i = 0; i < buffer.limit(); ++i) { short value = buffer.get(i); min = (short) Math.min(min, value); max = (short) Math.max(max, value); } // assignment of min/max omitted for brevity. super.onAudioSamples(event); } Step 2: Normalize all values: In a loop similar to step 1, replace the buffer with normalized values, calling: buffer.put(i, normalize(buffer.get(i))); public short normalize(short value) { if (isBackgroundNoise(value)) return value; short rawMin = // min from step 1 short rawMax = // max from step 1 short targetRangeMin = 1000; short targetRangeMax = 8000; int abs = Math.abs(value); double a = (abs - rawMin) * (targetRangeMax - targetRangeMin); double b = (rawMax - rawMin); double result = targetRangeMin + ( a/b ); // Copy the sign of value to result. result = Math.copySign(result, value); return (short) result; } Questions: Is this a valid approach for attempting to normalize an audio file? Is my math in normalize() valid? Why would this cause the file to become noisy, where a similar approach in the demo code doesn't?
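
    For contrast, here is a minimal sketch of plain peak normalisation (PeakNormalizer is a hypothetical helper, not part of the Xuggle API): it measures the loudest sample in the file and then applies one constant gain to every sample, exactly like the mVolume example, so the waveform shape is preserved. Remapping each sample's magnitude into a new [min, max] range changes the shape of the waveform, which is heard as distortion. Note that this scales the whole file rather than levelling out peaks, so it is only a starting point for the goal described above.

        import java.nio.ShortBuffer;

        // Hypothetical helper, for illustration only: peak normalisation applies one
        // constant gain to the whole file, so relative sample values are preserved.
        public class PeakNormalizer {
            private int peak = 1;                       // avoid divide-by-zero on silence

            // Pass 1: call for every buffer in the file to find the loudest sample.
            public void measure(ShortBuffer buffer) {
                for (int i = 0; i < buffer.limit(); ++i)
                    peak = Math.max(peak, Math.abs(buffer.get(i)));
            }

            // Pass 2: scale every sample by the same factor, like the mVolume example,
            // instead of remapping each sample into a new min/max range.
            public void apply(ShortBuffer buffer) {
                double gain = 0.9 * Short.MAX_VALUE / peak;   // leave a little headroom
                for (int i = 0; i < buffer.limit(); ++i) {
                    double scaled = buffer.get(i) * gain;
                    scaled = Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, scaled));
                    buffer.put(i, (short) scaled);
                }
            }
        }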

    Read the article

  • How can I determine what codec is being used?

    - by jldugger
    This forum comment and this superuser answer suggest that the audio compression contributes to loss of quality. I've noticed that music played over my BT setup sometimes pitch-bends in ways I don't remember the original doing, and I'm wondering if SBC has something to do with it. I'm using Ubuntu 10.10 on a Mac Pro, connecting to a pair of Sony DR-BT50s. Is there a way to inspect which Bluetooth codec PulseAudio is using, and what codecs both ends of the Bluetooth link support?

    Read the article

  • edit sound files

    - by doug
    I have an audio file with a very bad recording (recording was made in very noisy conditions, at a conference). My recorder was too far from the speakers. Do you have any idea about how to improve the sound quality? Do you know any good resource for that?

    Read the article

  • How do audio based games such as Audiosurf and Beat Hazard work?

    - by The Communist Duck
    Note: I am not asking how to make a clone of one of these. I am asking about how they work. I'm sure everyone's seen the games where you use your own music files (or provided ones) and the games produce levels based on them, such as Audiosurf and Beat Hazard. Here is a video of Audiosurf in action, to show what I mean. If you provide a heavy metal song, you would get a completely different set of obstacles, enemies, and game experience from something like Vivaldi. What does interest me is how these games work. I do not know much about audio (well, the data side), but how do they process the song to understand when it is settling down or when it's speeding up? I guess they could just feed the pitch values (assuming those sorts of things exist in audio files) to form a level, but it wouldn't fully explain it. I'm either looking for an explanation, some links to articles about this sort of thing (I'm sure there's a term or terms for it), or even an open-source implementation of this kind of thing ;-) EDIT: After some searching and a little help, I found out about the FFT (Fast Fourier Transform). This may be a step in the right direction, but it is something that does not yet make sense to me, or fit with my physics knowledge of waves.
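
    As an illustration of the simplest form of this idea (not how Audiosurf actually works, which is not public), here is a rough energy-based beat-detection sketch: split the samples into short frames, compute each frame's energy, and flag a frame as a beat when it is noticeably louder than the recent average. The frame size, history length and the 1.3 threshold are made-up illustrative values. An FFT refines this by letting you run the same comparison per frequency band, so a bass-drum hit and a hi-hat can drive different parts of the level.

        // Rough sketch: energy-based beat detection over normalised samples in [-1, 1].
        public class BeatSketch {
            public static boolean[] detectBeats(float[] samples, int frameSize) {
                int frames = samples.length / frameSize;
                float[] energy = new float[frames];
                for (int f = 0; f < frames; f++) {
                    float sum = 0;
                    for (int i = 0; i < frameSize; i++) {
                        float s = samples[f * frameSize + i];
                        sum += s * s;                      // accumulate instantaneous energy
                    }
                    energy[f] = sum / frameSize;           // average energy of this frame
                }
                boolean[] beat = new boolean[frames];
                int history = 43;                          // roughly 1 s of frames at 1024 samples / 44.1 kHz
                for (int f = history; f < frames; f++) {
                    float avg = 0;
                    for (int h = 1; h <= history; h++) avg += energy[f - h];
                    avg /= history;
                    beat[f] = energy[f] > 1.3f * avg;      // louder than the recent average => beat
                }
                return beat;
            }
        }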

    Read the article

  • Is it possible for a faulty processor to cause audio static/noise?

    - by Tom
    I have a Core 2 Extreme processor I received from a friend and have set up an XBMC box using it. However, I constantly get audio static whenever playing any music or videos. Here is a video of the sound: http://www.youtube.com/watch?v=SqKQkxYRVA4 I have tried replacing everything short of the case and the processor, including cables, audio interfaces, operating systems, ram, etc, leading me to think it might be either the case shorting out the motherboards I have tried or a faulty processor. Is it possible for a faulty processor to cause audio static/noise? Any feedback would be appreciated. Edit - Here's a list of things I have tried: Reinstalling OS Installing/upgrading/repairing PulseAudio/Alsa Installing alternate OSes, straight Ubuntu, Lubuntu, Xubuntu, Arch, Mint, Windows 7 Switching audio from the external card to internal Optical, audio out through HDMI, audio out through headphones Different ports on receiver (my main desktop sounds fine on the same sound system) Different optical cables Unplugging everything unnecessary from the motherboard (1 HD, 1 Stick of Ram, 1 Keyboard) Swapping out ram Swapping out the motherboard Replacing the Graphics Card (was replaced due to fan being noisy, not specifically for this problem) Different harddrives Swapping power supply Disabling onboard audio Switching Power Cable Plugging in through surge protector Plugging into different outlet on separate circuit

    Read the article

  • [SOLVED] How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice, and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. Update: Here's a sample of how I hear other people's voices: Audio Sample. After some tests, the desktop has the same problem too. (I tried TeamSpeak 3.) Here are some details on my laptop and desktop. Laptop: Dell Studio 1555, Core 2 Duo P8600 2.4GHz, 4GB RAM Dual Channel, Ati HD 4570 512MB dedicated (up to 2048), IDT High Definition Audio. Desktop: Asus P5KPL-AM motherboard, Dual Core CPU E5200 2.50GHz, 2x2GB PC6400 Dual Channel, Ati Radeon HD 4650 512MB, VIA High Definition Audio. Both computers have Windows 7 Professional 64-bit. So how do I restore my audio? SOLVED: The problem was in the router firmware; a bug recognized VoIP traffic as a DoS attack and the router garbled every packet. I installed the newest firmware and everything is fine :)

    Read the article

  • Looping HTML5 audio on the iPhone

    - by Peeps
    I'm trying to make an HTML5 web app that simply plays a sound over and over and over again, on my iPhone. I don't know any Obj-C, so I can't do it natively. What I have works fine, but the sound only plays once: <!DOCTYPE html> <html> <head> <title>noisemaker!</title> <meta http-equiv="content-type" content="text/html; charset=utf-8" /> <meta name="viewport" content="maximum-scale=1, minimum-scale=1, width=device-width, user-scalable=no" /> <meta name="apple-mobile-web-app-capable" content="yes" /> </head> <body> <audio src="noise.mp3" autoplay controls loop></audio> </body> </html> Is there a way to either bypass the QuickTime audio screen and loop it in the webpage, or get the QuickTime audio screen to loop the sound?

    Read the article

  • Virtual audio driver (microphone)

    - by Dalamber
    Hello guys, I want to develop a virtual microphone driver. Please, do not say anything about DirectShow - that's not "the way". I need a solution that will work with any software, including Skype and MSN, and DirectShow doesn't fit these requirements. I found the AVStream Filter-Centric Simulated Capture Driver (avssamp.sys) in the Windows 7 WDK. What I need is the audio part of it. By default it reads avssamp.wav and plays it. But this driver is registered as a WDM Streaming Capture device, and I want it registered as an Audio Capture Device. There are some posts on the web, but they are all the same: http://www.tech-archive.net/Archive/Development/microsoft.public.development.device.drivers/2005-05/msg00124.html http://www.winvistatips.com/problem-installing-avssamp-audio-capture-sources-category-t184898.html I think registering this filter-driver as an audio capture device will make Skype recognize it as a microphone, and therefore I will be able to push any PCM file as if it came from a mic. If someone has already faced this problem before, please help. Thanks in advance.

    Read the article

  • Processing a tab delimited file with shell script processing

    - by Lilly Tooner
    Hello, normally I would use Python/Perl for this procedure, but I find myself (for political reasons) having to pull this off using a bash shell. I have a large tab-delimited file that contains six columns, and the second column is integers. I need to shell-script a solution that would verify that the file indeed has six columns and that the second column indeed contains integers. I am assuming that I would need to use sed/awk here somewhere. The problem is that I'm not that familiar with sed/awk. Any advice would be appreciated. Many thanks! Lilly

    Read the article

  • processing: convert int to byte

    - by inspectorG4dget
    Hello SO, I'm trying to convert an int into a byte in Processing 1.0.9. This is the snippet of code that I have been working with: byte xByte = byte(mouseX); byte yByte = byte(mouseY); byte setFirst = byte(128); byte resetFirst = byte(127); xByte = xByte | setFirst; yByte = yByte >> 1; port.write(xByte); port.write(yByte); According to the Processing API, this should work, but I keep getting an error at xByte = xByte | setFirst; that says: cannot convert from int to byte I have tried converting 128 and 127 to their respective hex values (0x80 and 0x7F), but that didn't work either. I have tried everything mentioned in the API as well as some other blogs, but I feel like I'm missing something very trivial. I would appreciate any help. Thank you.
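
    The error is a plain Java rule rather than a Processing quirk: the | and >> operators promote byte operands to int, so the int result has to be cast back to byte before it can be stored in a byte variable. A minimal sketch of the same snippet with the casts added (assuming port is the sketch's serial port, as in the question):

        byte xByte = byte(mouseX);
        byte yByte = byte(mouseY);
        byte setFirst = byte(128);

        xByte = (byte) (xByte | setFirst);   // | yields an int; cast back to byte
        yByte = (byte) (yByte >> 1);         // >> also yields an int

        port.write(xByte);
        port.write(yByte);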

    Read the article

  • Image Viewer application, Image processing with Display Data.

    - by Harsha
    Hello All, I am working on an Image Viewer application and planning to build it in WPF. My images are usually larger than 3000x3500. After searching for a week, I got sample code from MSDN, but it is written in ATL COM. So I am planning to build the Image Viewer as follows: after reading the image, I will scale it down to my viewer size; the viewer is around 1000x1000. Let's call this image data the Display Data. Once this data is displayed, I will work only with this Display Data. For all image processing operations, I will use this Display Data, and when the user chooses to save the image, I will apply all the operations to the original image data. My question is: is it OK to use the Display Data for showing and for the initial image processing operations?

    Read the article

  • Asynchronous Processing in JBoss 6 ("Comet")

    - by chris_l
    edit: Retagged as tomcat, since this is really a question about the Tomcat embedded inside JBoss 6, rather than JBoss itself I have an extremely simple servlet, which works on Glassfish v3. It uses Servlet 3.0 Asynchronous Processing. Here's a simplified version (which doesn't do much): @WebServlet(asyncSupported=true) public class SimpleServlet extends HttpServlet { @Override protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { final AsyncContext ac = request.startAsync(); ac.setTimeout(3000); } } On JBoss 6.0.0 (Milestone 2), I get the following Exception: java.lang.IllegalStateException: The servlet or filters that are being used by this request do not support async operation at org.apache.catalina.connector.Request.startAsync(Request.java:3096) at org.apache.catalina.connector.Request.startAsync(Request.java:3090) at org.apache.catalina.connector.RequestFacade.startAsync(RequestFacade.java:990) at playcomet.SimpleServlet.doGet(SimpleServlet.java:18) at javax.servlet.http.HttpServlet.service(HttpServlet.java:734) ... Do I have to do anything special to enable Asynchronous Processing in JBoss 6? Or do I need an additional deployment descriptor? ...
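
    One common cause of this exact exception (not confirmed as the problem here) is a filter sitting in front of the servlet that does not declare async support; Tomcat requires every servlet and filter in the request chain to opt in. A minimal sketch of an async-aware pass-through filter, using a hypothetical PassThroughFilter name:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.annotation.WebFilter;

        // Every filter in front of an async servlet must also be marked async-capable,
        // otherwise startAsync() throws the IllegalStateException shown above.
        @WebFilter(urlPatterns = "/*", asyncSupported = true)
        public class PassThroughFilter implements Filter {
            public void init(FilterConfig config) {}
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                chain.doFilter(req, res);    // just pass the request along
            }
            public void destroy() {}
        }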

    Read the article

  • Best way to remove an object from an array in Processing

    - by cmal
    I really wish Processing had push and pop methods for working with Arrays, but since it does not I'm left trying to figure out the best way to remove an object at a specific position in an array. I'm sure this is as basic as it gets for many people, but I could use some help with it, and I haven't been able to figure much out by browsing the Processing reference. I don't think it matters, but for your reference here is the code I used to add the objects initially: Flower[] flowers = new Flower[0]; for (int i=0; i < 20; i++) { Flower fl = new Flower(); flowers = (Flower[]) expand(flowers, flowers.length + 1); flowers[flowers.length - 1] = fl; } For the sake of this question, let's assume I want to remove an object from position 15. Thanks, guys.
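
    A minimal sketch of one way to do it with plain Java array copying (removeFlower is a hypothetical helper, not a Processing function); the other common answer is to hold the objects in an ArrayList<Flower>, whose add() and remove(index) methods give you the push/pop-style behaviour Processing's bare arrays lack:

        // Returns a new array with the element at `index` removed, by copying
        // everything before it and everything after it.
        Flower[] removeFlower(Flower[] source, int index) {
          Flower[] result = new Flower[source.length - 1];
          System.arraycopy(source, 0, result, 0, index);
          System.arraycopy(source, index + 1, result, index, source.length - index - 1);
          return result;
        }

        // Usage: drop the object at position 15.
        flowers = removeFlower(flowers, 15);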

    Read the article

  • Play 2.0 RESTful request post-processing

    - by virtualeyes
    In regard to this question, I am curious how one can do post-request REST processing, a la (crude): def postProcessor[T](content: T) = { request match { case Accepts.Json() => asJson(content) case Accepts.Xml() => asXml(content) case _ => content } } Overriding onRouteRequest in the Global config does not appear to provide access to the body of the response, so it would seem that Action composition is the way to go to intercept the response and do post-processing task(s). Question: is this a good idea, or is it better to do content-type casting directly within a controller (or other class) method where the type to cast to is known? Currently I'm doing this kind of thing everywhere: toJson( i18n("account not found") ) toJson( Map('orderNum-> orderNum) ) while I'd like the toJson/toXml conversion to happen based on the Accept header, post-request.

    Read the article

  • Create a Sin wave line with Processing

    - by Nintari
    Hey everybody, first post here, and probably an easy one. I've got the code from Processing's reference site: float a = 0.0; float inc = TWO_PI/25.0; for(int i=0; i<100; i=i+4) { line(i, 50, i, 50+sin(a)*40.0); a = a + inc; } http://processing.org/reference/sin_.html However, what I need is a line that follows the curve of a Sin wave, not lines representing points along the curve and ending at the 0 axis. So basically I need to draw an "S" shape with a sin wave equation. Can someone run me through how to do this? Thank you in advance, -Askee
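
    A minimal sketch of one way to get a continuous curve: keep the reference code's loop, but instead of drawing a vertical line from each sample down to y = 50, connect consecutive points on the curve with vertex() inside beginShape()/endShape():

        float a = 0.0;
        float inc = TWO_PI / 25.0;
        noFill();                          // draw an open curve, not a filled shape
        beginShape();
        for (int i = 0; i < 100; i = i + 4) {
          vertex(i, 50 + sin(a) * 40.0);   // a point on the curve itself
          a = a + inc;
        }
        endShape();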

    Read the article

  • How do I restore my audio after uninstalling Ventrilo?

    - by Marcx
    Hi, I have a Dell Studio 1555, bought in September, with Windows 7 64-bit Professional on it. The audio device works properly while listening to audio content (from disk or the internet). When I use Ventrilo, the audio from other people sounds good and I hear their voices clearly. When I use any other VoIP program, like TeamSpeak 3, MSN or Skype, I hear a distorted voice, and it's impossible to understand anything... Anyway, everything worked fine until I installed Ventrilo, but removing it didn't solve my problem. So how do I restore my audio?

    Read the article

  • How do I use different audio devices for different apps in Windows 8?

    - by Eclipse
    Besides switching the default audio device, how can I send the audio from one app (say Xbox Music) to one audio device, and the audio from another app (say the Video app) to another audio device? Edit: Looking further, I found this: http://channel9.msdn.com/Events/BUILD/BUILD2011/APP-408T At 16:16, he demonstrates exactly what I want to do, but when I go to the Devices charm, I get a message: "You don't have any devices that can receive content from Music".

    Read the article

  • Best way to learn iphone audio queue services, step by step tutorial

    - by optician
    Hi Everyone, I'm trying to learn how to handle audio at a fairly low level with Audio Queue Services. I have been programming in memory-managed languages for quite a while, and have just completed the C programming tutorial by VTC (2007). This has left me comfortable with my understanding of pointers and memory allocation, but the Apple documentation still leaves me wanting a simpler implementation and explanation. Maybe I need to learn Objective-C and Cocoa better. I have heard that this book is good: Cocoa(R) Programming for Mac(R) OS X (3rd Edition). Could someone suggest a learning path that will help me get a better understanding of working with audio and an iPhone? I want to be able to play MP3 files and also alter their pitch as they are playing. I am prepared for the possibility that I may have to temporarily convert the MP3 files into PCM files to do things like that to them. Thanks everyone.

    Read the article

  • Which audio library to use?

    - by Jeb
    I want to build a .Net application for processing audio, and distribute it using ClickOnce deployment. I need access to a raw audio pipeline. Which audio library should I be using? I've heard the managed libraries for DirectSound are a dead end. I need as little as possible to be installed on the client's machine. Anything outside of the ClickOnce process isn't going to work. NAudio might be a possibility, but isn't there potentially a separate driver install? There's also SlimDX. It's a shame -- the managed DirectX libraries seem to work nicely and from what I've read, DirectX can be included in the ClickOnce install.

    Read the article

  • Unexpected behavior with AudioQueueServices callback while recording audio

    - by rcw3
    I'm recording a continuous stream of data using AudioQueueServices. It is my understanding that the callback will only be called when the buffer fills with data. In practice, the first callback has a full buffer, the 2nd callback is 3/4 full, the 3rd callback is full, the 4th is 3/4 full, and so on. These buffers are 8000 packets (recording 8 kHz audio), so I should be getting back 1 s of audio in each callback. I've confirmed that my audio queue buffer size is correct (and it is somewhat confirmed by the behavior). What am I doing wrong? Should I be doing something in AudioQueueNewInput with a different RunLoop? I tried, but this didn't seem to make a difference... By the way, if I run in the debugger, each callback is full with 8000 samples - making me think this is a threading / timing thing.

    Read the article
