Search Results

Search found 3993 results on 160 pages for 'audio'.

Page 60/160

  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play NSF files (NES console music) in FMOD. It didn't get any results, but since then I've made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right, and making sure all my parameters are correct. Here are the facts so far:

    1. The NSF dll is dealing with shorts, so the data is PCM16.
    2. The sample NSF I'm using has a playback rate of 60 Hz.
    3. Just for playing around now, I'm using a frequency of 48000.
    4. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60 Hz = 800. This means it will render 800 shorts worth of buffer for every simulated NES frame.

    I've so far got my C# code to play the NSF, at the correct pitch and tempo, but it's very grainy/fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code:

        uint channels = 1, frequency = 48000;
        FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL);
        FMOD.Sound sound = new FMOD.Sound();

        FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO();
        ex.cbsize = Marshal.SizeOf(ex);
        ex.fileoffset = 0;
        ex.format = FMOD.SOUND_FORMAT.PCM16;
        // does this even matter? It doesn't change my results as long as it's long enough for one update
        ex.length = frequency;
        ex.numchannels = (int)channels;
        ex.defaultfrequency = (int)frequency;
        ex.pcmreadcallback = pcmreadcallback;
        ex.dlsname = null;
        // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now
        ex.decodebuffersize = 800;

        // from the dll
        load_nsf_file("file.nsf", 8, (int)frequency); // 8 is the track number to play

        var result = system.createSound((string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound);
        channel = new FMOD.Channel();
        result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

        private FMOD.RESULT pcmreadcallback(IntPtr soundraw, IntPtr data, uint datalen)
        {
            // from the dll
            process_buffer(data, 800); // if I use datalen, it usually crashes (I can't get datalen to = 800 safely)
            return FMOD.RESULT.OK;
        }

    So here are some of my questions:

    1. What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from.
    2. Is datalen in the callback referring to a number of bytes, or shorts? The process_buffer function takes a short array and its length. I would expect FMOD is talking about shorts as well, because I told it PCM16.
    3. Maybe my playback quality is bad for some totally different reason. If so, I have no idea where to begin solving that. Any ideas there?
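
    A note offered as a best guess rather than a verified answer: in FMOD Ex, the datalen handed to a PCM read callback counts bytes, while decodebuffersize counts PCM samples, so 800 PCM16 mono samples arrive as 1600 bytes (and 3200 if two channels end up in play). Under that assumption, the callback should fill datalen / 2 shorts instead of a hard-coded 800:

        private FMOD.RESULT pcmreadcallback(IntPtr soundraw, IntPtr data, uint datalen)
        {
            // Assumption: datalen is a byte count, so PCM16 means datalen / 2 shorts.
            int shortsToRender = (int)(datalen / 2);

            // process_buffer is the dll export from the question; if it can only
            // render one NES frame (800 shorts) per call, loop over it in chunks.
            process_buffer(data, shortsToRender);
            return FMOD.RESULT.OK;
        }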

  • NAudio Mp3 Playback in Console

    - by Kurru
    Hi, I'm trying to make a helper dll that will simplify the NAudio framework into a subset of functions I'm likely to need, but I've hit a stumbling block right off the bat. I'm trying to use the following code to play an mp3, but I'm not hearing anything at all. Any help would be appreciated!

        static WaveOut waveout;
        static WaveStream playback;
        static System.Threading.ManualResetEvent wait = new System.Threading.ManualResetEvent(false);

        static void Main(string[] args)
        {
            System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(PlaySong));
            t.Start();
            wait.WaitOne();
            System.Threading.Thread.Sleep(2 * 1000);
            waveout.Stop();
            waveout.Dispose();
            playback.Dispose();
        }

        static void PlaySong()
        {
            waveout = new WaveOut();
            playback = OpenMp3Stream(@"songname.mp3");
            waveout.Init(playback);
            waveout.Play();
            Console.WriteLine("Started");
            wait.Set();
        }

        private static WaveChannel32 OpenMp3Stream(string fileName)
        {
            WaveChannel32 inputStream;
            WaveStream mp3Reader = new Mp3FileReader(fileName);
            WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
            WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
            inputStream = new WaveChannel32(blockAlignedStream);
            return inputStream;
        }
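
    One likely culprit, offered as a guess rather than a confirmed diagnosis: the parameterless WaveOut constructor uses window-message callbacks, which a console app never pumps, so playback silently stalls. A minimal sketch using the function-callback mode instead (the same constructor the "NAudio playback won't stop successfully" question further down this page uses):

        static void Main(string[] args)
        {
            // Function callbacks avoid the Windows message pump a console app lacks.
            using (WaveOut waveout = new WaveOut(WaveCallbackInfo.FunctionCallback()))
            using (WaveStream playback = OpenMp3Stream(@"songname.mp3")) // helper from the question
            {
                waveout.Init(playback);
                waveout.Play();
                Console.WriteLine("Started");
                System.Threading.Thread.Sleep(2 * 1000); // keep the process alive while audio plays
                waveout.Stop();
            }
        }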

  • Sound/Silence in a wav file.

    - by Vivek
    Hi, I am searching for a utility or code that could detect whether my 1-minute wav file contains sound or not. Alternatively, if it could detect the duration of the silence (if any exists) at any position in the wav file, that would also serve the purpose. Does SoX support a command for that? I tried with Java, but didn't find anything in JMF. Thanks, Vivek
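
    For what it's worth, SoX does ship a silence effect that can trim or detect silent stretches (its exact options are worth checking against the SoX documentation). The underlying idea is language-agnostic: walk the PCM samples in small windows and compare each window's level to a threshold. A rough sketch in C#, with the 16-bit mono format and the threshold value both being assumptions:

        // Returns true if any 100 ms window of 16-bit mono PCM rises above the threshold.
        static bool ContainsSound(short[] samples, int sampleRate, double threshold)
        {
            int window = sampleRate / 10; // 100 ms of samples
            for (int start = 0; start < samples.Length; start += window)
            {
                int end = Math.Min(start + window, samples.Length);
                double sum = 0;
                for (int i = start; i < end; i++)
                {
                    double s = samples[i] / 32768.0; // normalize to -1..1
                    sum += s * s;
                }
                double rms = Math.Sqrt(sum / (end - start));
                if (rms > threshold) // e.g. 0.02 for a quiet room
                    return true;
            }
            return false;
        }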

  • SoundManager2 has irregular latency

    - by Stefan Monov
    I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring irregular effect. How do I fix it? Note: I'm OK with some latency, just as long as it's consistent. Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now. For an example of an app with zero audible latency, see the Flash-based ToneMatrix. Testcase (see it here live or get it in a zip):

        <head>
          <title></title>
          <script type="text/javascript" src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
          </script>
          <script type="text/javascript">
            soundManager.url = '.'
            soundManager.flashVersion = 9
            soundManager.useHighPerformance = true
            soundManager.useFastPolling = true
            soundManager.autoLoad = true

            function recur(func, delay) {
              window.setTimeout(function() { recur(func, delay); func(); }, delay)
            }

            soundManager.onload = function() {
              var sound = soundManager.createSound("test", "test.mp3")
              recur(function() { sound.play() }, 300)
            }
          </script>
        </head>
        <body>
        </body>
        </html>

  • How do I play back a WAV in ActionScript?

    - by Jeremy White
    Please see the class I have created at http://textsnip.com/51013f for parsing a WAVE file in ActionScript 3.0. This class correctly pulls apart info from the file header and fmt chunks, isolates the data chunk, and creates a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:

        [Embed(source="some_sound.wav", mimeType="application/octet-stream")]
        public var sound_class:Class;
        public var wave:WaveFile = new WaveFile(new sound_class());

    After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc. Here are the two documents I'm using as a reference for the WAVE file format:

    http://www.lightlink.com/tjweber/StripWav/Canon.html
    https://ccrma.stanford.edu/courses/422/projects/WaveFormat/

    Right now, the file IS playing back! In real time, even! But... the sound is really distorted. What's going on?

    Read the article

  • (Android SDK 2.1) Getting an error when I use setAudioSource and setVideoSource

    - by Rainfer
    I get the following error when I run setAudioSource and setVideoSource:

        03-16 10:26:25.302: ERROR/audio_input(52): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
        03-16 10:26:25.302: ERROR/audio_input(52): VerifyAndSetParameter failed
        03-16 10:26:25.302: ERROR/CameraInput(52): Unsupported parameter(x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value)
        03-16 10:26:25.302: ERROR/CameraInput(52): VerifiyAndSetParameter failed on parameter #0

    This error happens on both the emulator and the device (I am using a Google Nexus One). I have already set the CAMERA and RECORD_AUDIO user permissions. I have spent many days on this, but I still cannot figure out the cause of this runtime error.

    Read the article

  • Detect and record a sound with python

    - by Jean-Pierre
    I'm using this program to record a sound in Python:

        import pyaudio
        import wave
        import sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = "output.wav"

        p = pyaudio.PyAudio()
        stream = p.open(format = FORMAT,
                        channels = CHANNELS,
                        rate = RATE,
                        input = True,
                        frames_per_buffer = chunk)

        print "* recording"
        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        print "* done recording"

        stream.close()
        p.terminate()

        # write data to WAVE file
        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    I want to change the program to start recording when sound is detected by the sound card input. I should probably compare the input sound level in each chunk, but how do I do this?

    Read the article

  • Playing sounds in iPhone SDK?

    - by seanny94
    Does anyone have a snippet that uses the AudioToolbox framework to play a short sound? I would be grateful if you shared it with me and the rest of the community. Everywhere else I have looked, the code doesn't seem to be too clear. Thanks!

    Read the article

  • playing only part of a sound using FMOD

    - by carneades
    I'm trying to play only part of a sound using FMOD, say frames 50000-100000 of a 200000-frame file. I have found a couple of ways to seek forward (i.e. to start playback at frame 50000), but I have not found a way to make sure the sound stops playing at 100000. Is there any way FMOD can natively do this without having to add libsndfile or the like into the picture? I should also mention that I am using the streaming option. I have to assume that these sounds are arbitrarily large and cannot be comfortably/quickly loaded into memory.
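
    One workaround, sketched against the FMOD Ex C# wrapper as an assumption rather than a documented recipe: seek the channel to the start frame, then poll its PCM position from the regular update loop and stop (or pause) once it passes the end frame:

        uint startFrame = 50000, endFrame = 100000;
        channel.setPosition(startFrame, FMOD.TIMEUNIT.PCM);

        // Called from the application's per-frame update:
        void Update()
        {
            uint pos = 0;
            channel.getPosition(ref pos, FMOD.TIMEUNIT.PCM);
            if (pos >= endFrame)
                channel.stop(); // or channel.setPaused(true) to allow resuming
        }

    Polling granularity means the cut lands near, not exactly on, frame 100000; a sample-exact cut would need something like a sync-point callback or pre-trimming the data.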

    Read the article

  • J2ME Camera and Sound Recorder Access On A Windows Mobile

    - by Steven Knox
    I'm currently involved in a research project that requires me to access a Windows Mobile camera and sound recorder with J2ME to, well, take pictures and record sound. The phone has to be a Windows Mobile device for reasons that have nothing to do with me, and the software has to be written in Java, also not my decision. So I need to find a phone that supports this (if one exists), and I'd like to know if anyone has found one. Thank you for your help. (Note that a phone supporting MMAPI (JSR 135) does not imply that you can use the camera and sound recorder; our current phone supports MMAPI but has no such access.)

    Read the article

  • Record AVAudioPlayer output using AVAudioRecorder

    - by Kieran
    In my app the user plays a sound by pressing a button. There are several buttons, which can be played simultaneously. The sounds are played using AVAudioPlayer instances. I want to record the output of these instances using AVAudioRecorder. I have set it all up: a file is created and records, but when I play it back it does not contain any sound; it is just a silent file the length of the recording. Does anyone know if there is a setting I am missing with AVAudioPlayer or AVAudioRecorder? Thanks

    Read the article

  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.

    It's working quite well in Managed DirectX. To get the precisely timed tone I create 1 sec of sine wave in a secondary buffer, then to play a tone of a certain duration I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load. It takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, and again seeking forward leaving the desired length of tone remaining in the stream, then playing. This worked OK on the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that for every tone burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream then call Play() over and over, so I don't think it's a problem with orphaned buffers or the like piling up till it's choked.

    I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
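
    One way to sidestep start/stop jitter entirely, offered as a sketch rather than a known-good recipe: render the whole tone/silence sequence into a single sample-accurate PCM buffer up front, so there is exactly one Play() and every duration is exact to the sample rather than to the scheduler:

        using System;
        using System.Collections.Generic;

        static class MorseRenderer
        {
            const int SampleRate = 44100;

            // Appends exactly durationMs worth of samples: a sine burst,
            // or silence when frequency is zero.
            static void Append(List<short> buffer, double frequency, int durationMs)
            {
                int count = SampleRate * durationMs / 1000;
                for (int n = 0; n < count; n++)
                {
                    double v = frequency > 0
                        ? Math.Sin(2 * Math.PI * frequency * n / SampleRate)
                        : 0.0;
                    buffer.Add((short)(v * short.MaxValue * 0.8));
                }
            }

            // "di-dah" at 700 Hz: 40 ms dot, 40 ms gap, 120 ms dash.
            static short[] RenderExample()
            {
                List<short> buffer = new List<short>();
                Append(buffer, 700, 40);
                Append(buffer, 0, 40);
                Append(buffer, 700, 120);
                return buffer.ToArray();
            }
        }

    The rendered array can then be wrapped in a memory stream and handed to whichever playback class wins out, with no Sleep()-based timing anywhere in the signal path.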

    Read the article

  • How can a Silverlight app download and play an mp3 file from a URL?

    - by Edward Tanguay
    I have a small Silverlight app which downloads all of the images and text it needs from a URL, like this:

        if (dataItem.Kind == DataItemKind.BitmapImage)
        {
            WebClient webClientBitmapImageLoader = new WebClient();
            webClientBitmapImageLoader.OpenReadCompleted += new OpenReadCompletedEventHandler(webClientBitmapImageLoader_OpenReadCompleted);
            webClientBitmapImageLoader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
        }
        else if (dataItem.Kind == DataItemKind.TextFile)
        {
            WebClient webClientTextFileLoader = new WebClient();
            webClientTextFileLoader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClientTextFileLoader_DownloadStringCompleted);
            webClientTextFileLoader.DownloadStringAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
        }

    and:

        void webClientBitmapImageLoader_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            BitmapImage bitmapImage = new BitmapImage();
            bitmapImage.SetSource(e.Result);
            DataItem dataItem = e.UserState as DataItem;
            CompleteItemLoadedProcess(dataItem, bitmapImage);
        }

        void webClientTextFileLoader_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            DataItem dataItem = e.UserState as DataItem;
            string textFileContent = e.Result.ForceWindowLineBreaks();
            CompleteItemLoadedProcess(dataItem, textFileContent);
        }

    Each of the images and text files is then put in a dictionary so that the application has access to them at any time. This works well. Now I want to do the same with mp3 files, but all the information I find on the web about playing mp3 files in Silverlight shows how to embed them in the .xap file, which I don't want to do, since then I wouldn't be able to download them dynamically as I do above. How can I download and play mp3 files in Silverlight the way I download and show images and text?
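
    One approach that mirrors the image/text pattern above, hedged as an untested sketch: download the mp3 with OpenReadAsync exactly as for the bitmaps, then hand the resulting stream to a MediaElement via SetSource instead of to a BitmapImage:

        WebClient webClientMp3Loader = new WebClient();
        webClientMp3Loader.OpenReadCompleted += new OpenReadCompletedEventHandler(webClientMp3Loader_OpenReadCompleted);
        webClientMp3Loader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);

        void webClientMp3Loader_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            // mediaElement is assumed to be a MediaElement declared in the XAML;
            // with AutoPlay left at its default of true, it starts as soon as
            // the stream is set. For the dictionary, the stream could instead
            // be copied into a byte[] first and re-wrapped on each play.
            mediaElement.SetSource(e.Result);
            DataItem dataItem = e.UserState as DataItem;
            CompleteItemLoadedProcess(dataItem, mediaElement);
        }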

  • NAudio playback won't stop successfully

    - by Kurru
    Hi. When using NAudio to play back an mp3 in the console, I can't figure out how to stop the playback. When I call waveout.Stop(), the code just stops running and waveout.Dispose() never gets called. Is it something to do with the function callback? I don't know how to fix it if it is.

        static string MP3 = @"song.mp3";
        static WaveOut waveout;
        static WaveStream playback;

        static void Main(string[] args)
        {
            waveout = new WaveOut(WaveCallbackInfo.FunctionCallback());
            playback = OpenMp3Stream(MP3);
            waveout.Init(playback);
            waveout.Play();
            Console.WriteLine("Started");
            Thread.Sleep(2 * 1000);
            Console.WriteLine("Ending");
            if (waveout.PlaybackState != PlaybackState.Stopped)
                waveout.Stop();
            Console.WriteLine("Stopped");
            waveout.Dispose();
            Console.WriteLine("1st dispose");
            playback.Dispose();
            Console.WriteLine("2nd dispose");
        }

        private static WaveChannel32 OpenMp3Stream(string fileName)
        {
            WaveChannel32 inputStream;
            WaveStream mp3Reader = new Mp3FileReader(fileName);
            WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
            WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
            inputStream = new WaveChannel32(blockAlignedStream);
            return inputStream;
        }
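
    One guess at the cause, not a confirmed diagnosis: with WaveCallbackInfo.FunctionCallback(), the waveOut driver raises its callback on a driver thread, and Stop() can end up waiting on that callback while holding NAudio's internal lock, which would read exactly like "the code just stops running". Newer NAudio releases add WaveOutEvent, which drives playback from an event on a background thread and behaves better in console apps; a sketch assuming one of those versions:

        using (WaveOutEvent waveout = new WaveOutEvent())
        using (WaveStream playback = OpenMp3Stream(@"song.mp3")) // helper from the question
        {
            waveout.Init(playback);
            waveout.Play();
            System.Threading.Thread.Sleep(2 * 1000);
            waveout.Stop(); // no window handle or driver callback to deadlock on
        }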

  • Looping music with intro in XNA using SoundEffect

    - by Jordan Roher
    I have two sound files:

    • Sound A is an 18-second intro designed to be played once
    • Sound B is a 1-minute looping track

    I'd like to play Sound A once, then, as soon as Sound A is done, immediately play Sound B and keep looping Sound B until I tell it to stop. This is supposed to be looping town music in an RPG. I've tried doing this in code using just SoundEffect, but there's a tiny yet noticeable gap between the end of Sound A and the beginning of Sound B. Even with monitoring code watching Sound A's SoundEffectInstance.State in the Update() function, I haven't been able to start Sound B exactly when Sound A finishes so that it's seamless. I'd prefer to use SoundEffect because I can load WMA files rather than being stuck with WAVs in XACT.
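
    One technique that closes the gap, sketched on the assumption of XNA 4.0's DynamicSoundEffectInstance: submit the intro's PCM once, then keep feeding the loop's PCM from the BufferNeeded event, so the hand-off happens inside one continuously fed voice instead of between two SoundEffectInstances:

        // intro and loop are raw 16-bit PCM byte arrays at the same sample rate.
        DynamicSoundEffectInstance voice =
            new DynamicSoundEffectInstance(44100, AudioChannels.Stereo);

        voice.BufferNeeded += (sender, e) =>
        {
            // Once the intro drains, every refill is the loop body: no gap.
            voice.SubmitBuffer(loop);
        };

        voice.SubmitBuffer(intro);
        voice.Play();

    The trade-off is supplying raw PCM (decoded up front) rather than letting SoundEffect load the WMA files directly.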

  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is x-axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, in the function - (void)drawOscilloscope at line 652, it has the following code:

        if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
        {
            if (fftBufferManager->HasNewAudioData())
            {
                if (fftBufferManager->ComputeFFT(l_fftData))
                    [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
                else
                    hasNewFFTData = NO;
            }

            if (hasNewFFTData)
            {
                int y, maxY;
                maxY = drawBufferLen;

                for (y=0; y<maxY; y++)
                {
                    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                    CGFloat fftIdx = yFract * ((CGFloat)fftLength);

                    double fftIdx_i, fftIdx_f;
                    fftIdx_f = modf(fftIdx, &fftIdx_i);

                    SInt8 fft_l, fft_r;
                    CGFloat fft_l_fl, fft_r_fl;
                    CGFloat interpVal;

                    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
                    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
                    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
                    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
                    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                    interpVal = CLAMP(0., interpVal, 1.);

                    drawBuffers[0][y] = (interpVal * 120);
                }
                cycleOscilloscopeLines();
            }
        }

    From my understanding, this part of the code decides which magnitude to draw for each frequency in the UI. My question is: how can I determine what frequency each iteration (or y value) represents inside the for loop? For example, if I want to know the magnitude at 6 kHz, I'm thinking of adding a line similar to the following:

        if (yValueRepresentskHz(y, 6))
            NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));

    Please note that although they chose to use the variable name y, from what I understand it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of drawBuffers[0][y] represents the y-axis.
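
    For reference, and assuming the remote I/O unit is running at the iPhone's usual 44.1 kHz default: bin k of an N-point FFT sits at k * sampleRate / N, and fftLength here is N/2, so those fftLength bins span 0 to sampleRate/2. A hypothetical helper mirroring the loop's indexing (written in C# purely for illustration):

        // maxY pixels cover fftLength bins, which cover 0 .. sampleRate/2 Hz.
        static double FrequencyForY(int y, int maxY, double sampleRate)
        {
            double yFract = (double)y / (maxY - 1);
            return yFract * (sampleRate / 2.0);
        }
        // e.g. 6 kHz lands near y = (int)(6000.0 / (sampleRate / 2.0) * (maxY - 1))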

  • Text to speech on iPhone

    - by lostInTransit
    Hi, is there any way to convert text to speech in an iPhone app? Is it possible using the SDK? Are there any third-party TTS engines available for the iPhone? (AFAIK Acapela has not yet been released.) Thanks

  • How to open wav file with Lua

    - by Pete Webbo
    Hello, I am trying to do some wav processing using Lua, but have fallen at the first hurdle! I cannot find a function or library that will allow me to load a wav file and access the raw data. There is one library, but it only allows playing of wavs, not access to the raw data. Are there any out there? Cheers, Pete.

  • AudioQueue in-memory playback example

    - by Jonesy
    Does anybody know of any examples using AudioQueue that play from an in-memory source? All the examples I can find play from files (using AudioFileReadPackets), but in my particular case I am generating the data myself in real time, so ideally I want to enqueue the data myself rather than sucking it out of a file in the callback. Any help much appreciated.

  • loading mp3 from file using random access to flash.media.Sound

    - by Irfan Mulic
    We are migrating an application from Delphi to Flex (Air) that plays mp3 files out of a big random-access file. It has positions and sizes to extract mp3 data through a FileStream into a MemoryStream, and then we use bass.dll to play it from the memory stream. Now I have to play those same mp3s in Flex, but I am not sure how... I was reading something similar about reading/writing data using ByteArray here, but how do I apply it to flash.media.Sound? http://livedocs.adobe.com/flex/3/html/help.html?content=ByteArrays_2.html Any help?
