Search Results

Search found 5073 results on 203 pages for 'audio jack'.

Page 61/203

  • How can I find the song position of a song being played with XACT?

    - by DJ SymBiotiX
    So I'm making a game in XNA, and I need to use XACT for my songs (rather than the media player). I need XACT because each song will have multiple layers (bass, lead, drums, etc.) that combine when played at the same time; I can't use the media player because it can only play one song at a time. Anyway, let's say I have a song playing with XACT in my project with the following code:

      public SongController()
      {
          audioEngine = new AudioEngine(@"Content\Song1\Song1.xgs");
          waveBank = new WaveBank(audioEngine, @"Content\Song1\Layers.xwb");
          soundBank = new SoundBank(audioEngine, @"Content\Song1\SongLayers.xsb");
          songTime = new PlayTime();

          Vox = soundBank.GetCue("Vox");
          BG = soundBank.GetCue("BG");
          Bass = soundBank.GetCue("Bass");
          Lead = soundBank.GetCue("Lead");
          Other = soundBank.GetCue("Other");

          Vox.SetVariable("CueVolume", 100.0f);
          BG.SetVariable("CueVolume", 100.0f);
          Bass.SetVariable("CueVolume", 100.0f);
          Lead.SetVariable("CueVolume", 100.0f);
          Other.SetVariable("CueVolume", 100.0f);

          _bassVol = 100.0f;
          _voxVol = 100.0f;
          _leadVol = 100.0f;
          _otherVol = 100.0f;

          Vox.Play();
          BG.Play();
          Bass.Play();
          Lead.Play();
          Other.Play();
      }

    When I look at the variables in Vox or BG (they are Cues, by the way), I can't seem to find any play position in them. So I guess the question is: is there a variable I can query to find that data, or do I need to make my own class that starts counting up from the time I start the song? Thanks
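
    As far as I can tell, XNA's Cue type exposes no playback-position property (it has IsPlaying, IsPaused, and friends, but nothing positional), so the second option — timing the song yourself — is the usual route. A minimal sketch of such a tracker, assuming you call its methods alongside the corresponding Cue calls:

      using System;
      using System.Diagnostics;

      public class SongPosition
      {
          private readonly Stopwatch clock = new Stopwatch();

          public void Start()  { clock.Start(); }  // call right after the Cue.Play() calls
          public void Pause()  { clock.Stop(); }   // mirror Cue.Pause() on every layer
          public void Resume() { clock.Start(); }  // mirror Cue.Resume()

          // Current play position of the layered song.
          public TimeSpan Position { get { return clock.Elapsed; } }
      }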


  • kAudioSessionProperty_CurrentHardwareSampleRate input/output

    - by iter
    kAudioSessionProperty_CurrentHardwareSampleRate seems to describe the input sampling rate. I wonder if there is a way to determine the available output sampling rate on an iPhone/iPad (the iPhone supports 44.1 kHz; the iPad, 48 kHz). http://developer.apple.com/iphone/library/documentation/AudioToolbox/Reference/AudioSessionServicesReference/Reference/reference.html#//apple_ref/doc/c_ref/kAudioSessionProperty_CurrentHardwareSampleRate


  • Loop OpenAL source with offset

    - by ressaw
    The OpenAL API states that setting an offset still causes the sound to loop back to zero for looping sources. But is there a way to loop and still have an offset somehow? I have an MP3, and since it contains headers with information at the start of the file, there's a small but noticeable delay when it rewinds to loop. If not, are there any other compressed formats that don't contain these empty headers?


  • Cannot get the jPlayer plugin to work

    - by Richard
    Hello, I hope somebody has some experience with the jPlayer plugin. I have been staring at the source code of the demos and looking in Firebug, but I can't see why it is not showing at all. It also tries to use the Flash file, but in other examples the embed code does not show up in the container div either. How can I get this to work, or debug it?

      $(document).ready(function(){
        $("#jpId").jPlayer({
          ready: function () {
            this.element.jPlayer("setFile", "/mp3/nobodymove.mp3"); // defines the mp3
          }
        });
      });

    thanks, Richard


  • Concatenating or joining MP3 files using Java

    - by Sukhhhh
    We would like to concatenate/merge/join MP3 files seamlessly using Java in any environment. We are trying the following options at the moment (please let us know of any others):

    1. Using JMF -- ruled out, as it is supported only on Windows: http://java.sun.com/javase/technologies/desktop/media/jmf/reference/faqs/index.html
    2. Using a combination of Tritonus, JLayer, and LAME.

    Please share your thoughts, and any links that describe concatenating/merging/joining MP3 files using option 2.


  • VB FFT - stuck understanding relationship of results to frequency

    - by WaveyDavey
    I'm trying to understand an FFT (Fast Fourier Transform) routine I'm using (stealing)(recycling). Input is an array of 512 data points which are a sample waveform. Test data is generated into this array. The FFT transforms this array into the frequency domain. I'm trying to understand the relationship between frequency, period, sample rate, and position in the FFT array. I'll illustrate with examples:

    Example 1: Sample rate is 1000 samples/s, and I generate a set of samples at 10 Hz. The input array has peak values at arr(28), arr(128), arr(228), ... so the period is 100 sample points. The peak value in the FFT array is at index 6 (excluding a huge value at 0).

    Example 2: Sample rate is 8000 samples/s, and I generate a set of samples at 440 Hz. Input array peak values include arr(7), arr(25), arr(43), arr(61), ... so the period is 18 sample points. The peak value in the FFT array is at index 29 (excluding a huge value at 0).

    How do I relate the index of the peak in the FFT array to frequency?
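
    For reference, the standard relation is that bin k of an N-point FFT is centred on k × sampleRate / N Hz, so each bin spans sampleRate / N Hz. A one-line helper (sketched in C# rather than VB, but the arithmetic is the point), with the two examples above checked in the comments:

      // Frequency represented by bin k of an N-point FFT at the given sample rate.
      static double BinToFrequency(int bin, int fftSize, double sampleRate)
      {
          return bin * sampleRate / fftSize;
      }

      // Checking against the examples above (N = 512):
      //   BinToFrequency(6, 512, 1000)  = 11.7 Hz  (bin 5 = 9.8 Hz is the nearest to 10 Hz)
      //   BinToFrequency(29, 512, 8000) = 453.1 Hz (bin 28 = 437.5 Hz is the nearest to 440 Hz)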


  • FMOD.net streaming, callback and exinfo parameters

    - by Tesserex
    I posted a question on gamedev about how to play NSF files (NES console music) in FMOD. It didn't get any results, but since then I have made some progress. I decided that the easiest method was just to compile an existing player into a dll and then call it from C# to populate my buffer. The problem now is getting it to sound right and making sure all my parameters are correct. Here are the facts so far:

    1. The NSF dll is dealing with shorts, so the data is PCM16.
    2. The sample NSF I'm using has a playback rate of 60 Hz.
    3. Just for playing around now, I'm using a frequency of 48000.
    4. Based on 2 and 3, the dll calculates a necessary buffer size of 48000 / 60 Hz = 800. This means it will render 800 shorts' worth of buffer for every simulated NES frame.

    I've so far got my C# code to play the NSF at the correct pitch and tempo, but it's very grainy/fuzzy, which I'm attributing to the fact that the FMOD read callback is giving a data length of 1600, whereas I should be expecting 800. I've tried playing around with all the numbers, and it either crashes, or the music changes pitch, tempo, or both. Here's some of my C# code:

      uint channels = 1, frequency = 48000;
      FMOD.MODE mode = (FMOD.MODE.DEFAULT | FMOD.MODE.OPENUSER | FMOD.MODE.LOOP_NORMAL);
      FMOD.Sound sound = new FMOD.Sound();
      FMOD.CREATESOUNDEXINFO ex = new FMOD.CREATESOUNDEXINFO();
      ex.cbsize = Marshal.SizeOf(ex);
      ex.fileoffset = 0;
      ex.format = FMOD.SOUND_FORMAT.PCM16;
      // does this even matter? It doesn't change my results as long as it's long enough for one update
      ex.length = frequency;
      ex.numchannels = (int)channels;
      ex.defaultfrequency = (int)frequency;
      ex.pcmreadcallback = pcmreadcallback;
      ex.dlsname = null;
      // eventually I will calculate this with frequency / nsf hz, but I'm just testing for now
      ex.decodebuffersize = 800;

      // from the dll; 8 is the track number to play
      load_nsf_file("file.nsf", 8, (int)frequency);

      var result = system.createSound((string)null, (mode | FMOD.MODE.CREATESTREAM), ref ex, ref sound);
      channel = new FMOD.Channel();
      result = system.playSound(FMOD.CHANNELINDEX.FREE, sound, false, ref channel);

      private FMOD.RESULT PCMREADCALLBACK(IntPtr soundraw, IntPtr data, uint datalen)
      {
          // from the dll; if I use datalen, it usually crashes (I can't get datalen to = 800 safely)
          process_buffer(data, (int)800);
          return FMOD.RESULT.OK;
      }

    So here are some of my questions:

    1. What is the relationship between exinfo.decodebuffersize, frequency, and the datalen parameter of the read callback? With this code sample, it's coming in as 3200. I don't know where that factor of 4 between it and the decodebuffersize comes from.
    2. Is datalen in the callback referring to a number of bytes, or of shorts? The process_buffer function takes a short array and its length. I would expect FMOD to be talking about shorts as well, because I told it PCM16.
    3. Maybe my playback quality is bad for some totally different reason. If so, I have no idea where to begin solving that. Any ideas there?
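
    On question 2: the FMOD Ex documentation describes the read callback's datalen as a byte count, which would explain the factor (800 samples × 2 bytes = 1600 bytes, and twice that again if FMOD requests two decode blocks at once). Under that assumption, a sketch of a callback that always feeds the dll in 800-short chunks (process_buffer and the 800-shorts-per-NES-frame figure come from the question):

      private FMOD.RESULT PcmReadCallback(IntPtr soundraw, IntPtr data, uint datalen)
      {
          // Assumption: datalen is in bytes, so PCM16 mono means datalen / 2 shorts in total.
          int totalShorts = (int)(datalen / sizeof(short));
          int offset = 0;
          while (offset < totalShorts)
          {
              int chunk = Math.Min(800, totalShorts - offset); // one NES frame = 800 shorts
              IntPtr dest = new IntPtr(data.ToInt64() + offset * sizeof(short));
              process_buffer(dest, chunk);
              offset += chunk;
          }
          return FMOD.RESULT.OK;
      }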


  • NAudio Mp3 Playback in Console

    - by Kurru
    Hi, I'm trying to make a helper dll that will simplify the NAudio framework into the subset of functions I'm likely to need, but I've hit a stumbling block right off the bat. I'm trying to use the following code to play an MP3, but I'm not hearing anything at all. Any help would be appreciated!

      static WaveOut waveout;
      static WaveStream playback;
      static System.Threading.ManualResetEvent wait = new System.Threading.ManualResetEvent(false);

      static void Main(string[] args)
      {
          System.Threading.Thread t = new System.Threading.Thread(new System.Threading.ThreadStart(PlaySong));
          t.Start();
          wait.WaitOne();
          System.Threading.Thread.Sleep(2 * 1000);
          waveout.Stop();
          waveout.Dispose();
          playback.Dispose();
      }

      static void PlaySong()
      {
          waveout = new WaveOut();
          playback = OpenMp3Stream(@"songname.mp3");
          waveout.Init(playback);
          waveout.Play();
          Console.WriteLine("Started");
          wait.Set();
      }

      private static WaveChannel32 OpenMp3Stream(string fileName)
      {
          WaveChannel32 inputStream;
          WaveStream mp3Reader = new Mp3FileReader(fileName);
          WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
          WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
          inputStream = new WaveChannel32(blockAlignedStream);
          return inputStream;
      }
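
    One likely culprit, assuming NAudio's usual behaviour: WaveOut's default constructor uses windowed callbacks, which need a Windows message pump that a console app doesn't have, and the playback thread here also exits as soon as PlaySong returns. A sketch of the same playback using function callbacks instead (if your NAudio version provides WaveCallbackInfo), driven from Main directly:

      waveout = new WaveOut(WaveCallbackInfo.FunctionCallback()); // no message pump required
      playback = OpenMp3Stream(@"songname.mp3");
      waveout.Init(playback);
      waveout.Play();
      // keep the process alive while the song plays, rather than a fixed 2 s sleep
      while (waveout.PlaybackState == PlaybackState.Playing)
          System.Threading.Thread.Sleep(100);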


  • Sound/Silence in a wav file.

    - by Vivek
    Hi, I am searching for a utility or code that could detect whether my 1-minute WAV file contains any sound. Alternatively, if it could detect the duration of the silence (if any exists) at any position in the WAV file, that would also serve the purpose. Does SoX support a command for that? I tried Java, but didn't find anything in JMF. Thanks, Vivek
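
    SoX does ship a silence effect that trims or detects quiet passages (see its man page for the parameters). If you would rather do it in code, here is a rough sketch of a windowed RMS scan, under the assumption of a canonical 44-byte-header, 16-bit mono PCM WAV:

      using System;
      using System.IO;

      class SilenceScan
      {
          static void Main(string[] args)
          {
              const double threshold = 0.01; // RMS below this counts as silence (tune to taste)
              const int window = 4410;       // 100 ms at 44.1 kHz

              byte[] bytes = File.ReadAllBytes(args[0]);
              int sampleCount = (bytes.Length - 44) / 2; // assumes a 44-byte canonical header
              double sum = 0;
              int n = 0, windowStart = 0;

              for (int i = 0; i < sampleCount; i++)
              {
                  short s = BitConverter.ToInt16(bytes, 44 + 2 * i);
                  double v = s / 32768.0;
                  sum += v * v;
                  if (++n == window)
                  {
                      double rms = Math.Sqrt(sum / n);
                      Console.WriteLine("samples {0}-{1}: {2}", windowStart, i,
                                        rms < threshold ? "silence" : "sound");
                      sum = 0; n = 0; windowStart = i + 1;
                  }
              }
          }
      }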


  • SoundManager2 has irregular latency

    - by Stefan Monov
    I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring, irregular effect. How do I fix it? Note: I'm OK with some latency, just as long as it's consistent. Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now. For an example of an app with zero audible latency, see the Flash-based ToneMatrix. Testcase (see it here live or get it in a zip):

      <html>
      <head>
        <title></title>
        <script type="text/javascript"
                src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
        </script>
        <script type="text/javascript">
          soundManager.url = '.'
          soundManager.flashVersion = 9
          soundManager.useHighPerformance = true
          soundManager.useFastPolling = true
          soundManager.autoLoad = true

          function recur(func, delay) {
            window.setTimeout(function() { recur(func, delay); func(); }, delay)
          }

          soundManager.onload = function() {
            var sound = soundManager.createSound("test", "test.mp3")
            recur(function() { sound.play() }, 300)
          }
        </script>
      </head>
      <body>
      </body>
      </html>


  • How do I play back a WAV in ActionScript?

    - by Jeremy White
    Please see the class I have created at http://textsnip.com/51013f for parsing a WAVE file in ActionScript 3.0. This class is correctly pulling apart info from the file header & fmt chunks, isolating the data chunk, and creating a new ByteArray to store the data chunk. It takes in an uncompressed WAVE file with a format tag of 1. The WAVE file is embedded into my SWF with the following Flex embed tag:

      [Embed(source="some_sound.wav", mimeType="application/octet-stream")]
      public var sound_class:Class;
      public var wave:WaveFile = new WaveFile(new sound_class());

    After the data chunk is separated, the class attempts to make a Sound object that can stream the samples from the data chunk. I'm having issues with the streaming process, probably because I'm not good at math and don't really know what's happening with the bits/bytes, etc. Here are the two documents I'm using as a reference for the WAVE file format:

    http://www.lightlink.com/tjweber/StripWav/Canon.html
    https://ccrma.stanford.edu/courses/422/projects/WaveFormat/

    Right now, the file IS playing back! In real time, even! But...the sound is really distorted. What's going on?


  • (Android SDK 2.1) Getting an error when I use setAudioSource and setVideoSource

    - by Rainfer
    I get the following error when I run setAudioSource and setVideoSource:

      03-16 10:26:25.302: ERROR/audio_input(52): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
      03-16 10:26:25.302: ERROR/audio_input(52): VerifyAndSetParameter failed
      03-16 10:26:25.302: ERROR/CameraInput(52): Unsupported parameter(x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value)
      03-16 10:26:25.302: ERROR/CameraInput(52): VerifiyAndSetParameter failed on parameter #0

    This error happens on both the emulator and the device (I am using a Google Nexus One). I have already set the CAMERA and RECORD_AUDIO user permissions. I have spent many days on this, but I still cannot figure out the cause of this runtime error.


  • Detect and record a sound with python

    - by Jean-Pierre
    I'm using this program to record a sound in Python:

      import pyaudio
      import wave
      import sys

      chunk = 1024
      FORMAT = pyaudio.paInt16
      CHANNELS = 1
      RATE = 44100
      RECORD_SECONDS = 5
      WAVE_OUTPUT_FILENAME = "output.wav"

      p = pyaudio.PyAudio()
      stream = p.open(format = FORMAT,
                      channels = CHANNELS,
                      rate = RATE,
                      input = True,
                      frames_per_buffer = chunk)

      print "* recording"
      all = []
      for i in range(0, RATE / chunk * RECORD_SECONDS):
          data = stream.read(chunk)
          all.append(data)
      print "* done recording"

      stream.close()
      p.terminate()

      # write data to WAVE file
      data = ''.join(all)
      wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
      wf.setnchannels(CHANNELS)
      wf.setsampwidth(p.get_sample_size(FORMAT))
      wf.setframerate(RATE)
      wf.writeframes(data)
      wf.close()

    I want to change the program to start recording when sound is detected by the sound card input. I should probably compare the input sound level in each chunk, but how do I do this?


  • Playing sounds in iPhone SDK?

    - by seanny94
    Does anyone have a snippet that uses the AudioToolbox framework to play a short sound? I would be grateful if you shared it with me and the rest of the community. Everywhere else I have looked, the code doesn't seem to be too clear. Thanks!


  • playing only part of a sound using FMOD

    - by carneades
    I'm trying to play only part of a sound using FMOD, say frames 50000-100000 of a 200000-frame file. I have found a couple of ways to seek forward (i.e. to start playback at frame 50000), but I have not found a way to make sure the sound stops playing at frame 100000. Is there any way FMOD can natively do this without having to add libsndfile or the like into the picture? I should also mention that I am using the streaming option. I have to assume that these sounds are arbitrarily large and cannot be comfortably/quickly loaded into memory.
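
    One approach, sketched against the FMOD Ex C# wrapper (the frame numbers are the ones from the question, and "system" is assumed to be an already-initialized FMOD.System): start the channel paused, seek with setPosition, then poll getPosition from your update loop and stop once playback passes the end frame. This keeps everything inside FMOD, at the cost of stopping within one poll interval of the target rather than sample-exactly:

      FMOD.Sound sound = new FMOD.Sound();
      FMOD.Channel channel = new FMOD.Channel();
      system.createStream("song.mp3", FMOD.MODE.DEFAULT, ref sound);

      // start paused, seek to the start frame, then let it run
      system.playSound(FMOD.CHANNELINDEX.FREE, sound, true, ref channel);
      channel.setPosition(50000, FMOD.TIMEUNIT.PCM);
      channel.setPaused(false);

      // in the update loop: stop once playback passes the end frame
      uint pos = 0;
      channel.getPosition(ref pos, FMOD.TIMEUNIT.PCM);
      if (pos >= 100000)
          channel.stop();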


  • J2ME Camera and Sound Recorder Access On A Windows Mobile

    - by Steven Knox
    I'm currently involved in a research project that requires me to access a Windows Mobile camera and sound recorder with J2ME to, well, take pictures and record sound... The phone has to be a Windows Mobile device for some reason that has nothing to do with me, and the software has to be written in Java, also not my decision. So I need to try to find a phone that supports this (if one exists), and I'd like to know if anyone has found one. Thank you for your help. (Note: a phone supporting MMAPI (JSR 135) does not imply that you can use the camera and sound recorder; our current phone supports it and has no such access.)


  • How can a silverlight app download and play an mp3 file from a URL?

    - by Edward Tanguay
    I have a small Silverlight app which downloads all of the images and text it needs from a URL, like this:

      if (dataItem.Kind == DataItemKind.BitmapImage)
      {
          WebClient webClientBitmapImageLoader = new WebClient();
          webClientBitmapImageLoader.OpenReadCompleted += new OpenReadCompletedEventHandler(webClientBitmapImageLoader_OpenReadCompleted);
          webClientBitmapImageLoader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
      }
      else if (dataItem.Kind == DataItemKind.TextFile)
      {
          WebClient webClientTextFileLoader = new WebClient();
          webClientTextFileLoader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClientTextFileLoader_DownloadStringCompleted);
          webClientTextFileLoader.DownloadStringAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
      }

    and:

      void webClientBitmapImageLoader_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
      {
          BitmapImage bitmapImage = new BitmapImage();
          bitmapImage.SetSource(e.Result);
          DataItem dataItem = e.UserState as DataItem;
          CompleteItemLoadedProcess(dataItem, bitmapImage);
      }

      void webClientTextFileLoader_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
      {
          DataItem dataItem = e.UserState as DataItem;
          string textFileContent = e.Result.ForceWindowLineBreaks();
          CompleteItemLoadedProcess(dataItem, textFileContent);
      }

    Each of the images and text files is then put in a dictionary so that the application has access to them at any time. This works well. Now I want to do the same with MP3 files, but all the information I find on the web about playing MP3 files in Silverlight shows how to embed them in the .xap file, which I don't want to do, since then I wouldn't be able to download them dynamically as I do above. How can I download and play MP3 files in Silverlight the way I download and show images and text?
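
    A sketch of one way to do it, assuming a MediaElement named mediaElement already exists in the page's XAML: MediaElement can take a downloaded stream directly via SetSource, so it fits the same OpenReadAsync pattern used for the images (DataItemKind.Mp3File below is a hypothetical enum value following the pattern above):

      else if (dataItem.Kind == DataItemKind.Mp3File) // hypothetical enum value
      {
          WebClient webClientMp3Loader = new WebClient();
          webClientMp3Loader.OpenReadCompleted += (sender, e) =>
          {
              // stream the downloaded MP3 straight into the MediaElement and play it
              mediaElement.SetSource(e.Result);
              mediaElement.Play();
          };
          webClientMp3Loader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute));
      }

    One design point worth noting: unlike BitmapImage, a MediaElement has to live in the visual tree to play, so for a dictionary of sounds you would keep the downloaded streams and set one as the source whenever you want to play it.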


  • Record AVAudioPlayer output using AVAudioRecorder

    - by Kieran
    In my app the user plays a sound by pressing a button. There are several buttons, which can be played simultaneously. The sounds are played using AVAudioPlayer instances. I want to record the output of these instances using AVAudioRecorder. I have set it all up, and a file is created and records, but when I play it back there is no sound; it is just a silent file the length of the recording. Does anyone know if there is a setting I am missing with AVAudioPlayer or AVAudioRecorder? Thanks


  • How to produce precisely-timed tone and silence in C#

    - by Bob Denny
    I have a C# project that plays Morse code for RSS feeds. I wrote it using Managed DirectX, only to discover that Managed DirectX is old and deprecated. The task I have is to play pure sine-wave bursts interspersed with silence periods (the code) which are precisely timed as to their duration. I need to be able to call a function which plays a pure tone for so many milliseconds, then Thread.Sleep(), then play another, etc. At its fastest, the tones and spaces can be as short as 40 ms.

    It's working quite well in Managed DirectX. To get the precisely timed tone, I create 1 second of sine wave in a secondary buffer; then, to play a tone of a certain duration, I seek forward to within x milliseconds of the end of the buffer and play.

    I've tried System.Media.SoundPlayer. It's a loser because you have to Play(), Sleep(), then Stop() for arbitrary tone lengths. The result is a tone that is too long, variable by CPU load, and it takes an indeterminate amount of time to actually stop the tone.

    I then embarked on a lengthy attempt to use NAudio 1.3. I ended up with a memory-resident stream providing the tone data, and again seeking forward, leaving the desired length of tone remaining in the stream, then playing. This worked OK with the DirectSoundOut class for a while (see below), but the WaveOut class quickly dies with an internal assert saying that buffers are still on the queue despite PlayerStopped = true. This is odd, since I play to the end and then put a wait of the same duration between the end of the tone and the start of the next. You'd think that 80 ms after starting Play of a 40 ms tone it wouldn't have buffers on the queue.

    DirectSoundOut works well for a while, but its problem is that for every tone-burst Play() it spins off a separate thread. Eventually (5 min or so) it just stops working. You can see thread after thread after thread exiting in the Output window while running the project in the VS2008 IDE. I don't create new objects during playing; I just Seek() the tone stream and then call Play() over and over, so I don't think it's a problem with orphaned buffers or the like piling up till it's choked.

    I'm out of patience on this one, so I'm asking in the hopes that someone here has faced a similar requirement and can steer me in a direction with a likely solution. Thanks in advance...
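
    One direction that sidesteps playback-API timing entirely, sketched under the assumption that each message (or a usable stretch of it) can be rendered ahead of time: generate the whole tone/silence sequence into a single in-memory WAV and play it in one call, so the timing is baked into the sample data and jitter in Play()/Stop() no longer matters. A minimal sketch using only System.Media:

      using System;
      using System.IO;
      using System.Media;

      class MorseTone
      {
          const int Rate = 44100;

          // Append ms milliseconds of sine wave (or silence when freq == 0).
          // A real generator would also ramp the envelope to avoid clicks at the edges.
          static void Append(BinaryWriter w, double freq, int ms)
          {
              int n = Rate * ms / 1000;
              for (int i = 0; i < n; i++)
              {
                  double v = freq > 0 ? Math.Sin(2 * Math.PI * freq * i / Rate) : 0.0;
                  w.Write((short)(v * 20000));
              }
          }

          static void Main()
          {
              var data = new MemoryStream();
              var w = new BinaryWriter(data);
              // the letter "A" in Morse at speed: dit (40 ms), gap (40 ms), dah (120 ms)
              Append(w, 700, 40); Append(w, 0, 40); Append(w, 700, 120);

              // wrap the raw 16-bit mono samples in a minimal RIFF/WAVE header
              var wav = new MemoryStream();
              var h = new BinaryWriter(wav);
              int bytes = (int)data.Length;
              h.Write(new[] { 'R', 'I', 'F', 'F' }); h.Write(36 + bytes);
              h.Write(new[] { 'W', 'A', 'V', 'E' }); h.Write(new[] { 'f', 'm', 't', ' ' });
              h.Write(16); h.Write((short)1); h.Write((short)1);  // PCM, mono
              h.Write(Rate); h.Write(Rate * 2);                   // sample rate, byte rate
              h.Write((short)2); h.Write((short)16);              // block align, bits per sample
              h.Write(new[] { 'd', 'a', 't', 'a' }); h.Write(bytes);
              data.WriteTo(wav);
              wav.Position = 0;

              new SoundPlayer(wav).PlaySync(); // timing lives in the samples, not in the API calls
          }
      }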

