Search Results

Search found 496 results on 20 pages for 'wav'.

Page 9/20

  • iPhone AVAudioPlayer failed to find codec

    - by Anthony
    Hello, I am writing an app that downloads a wav file from a server and needs to play that file. The files use the mulaw codec with 2:1 compression. These wav files are dynamically created by a separate process, so there is no way for me to pre-convert the files to a different format or codec; I need to be able to play them as is. I am using an AVAudioPlayer instance initialized as follows:

        NSURL *audioURL = [[NSURL alloc] initWithString:@"http://xxx.../file.wav"];
        NSData *audioData = [[NSData alloc] initWithContentsOfURL:audioURL];
        AVAudioPlayer *audio = [[AVAudioPlayer alloc] initWithData:audioData error:nil];
        [audio play];

    However, when the play method executes, I get the following console output when executing on the Simulator:

        AudioQueue codec policy 1: failed to find a codec of the requested type

    I also tried saving the downloaded data to a local file and using a file URL, but that yields the same result. The downloaded file plays fine on both Mac and Windows based desktop media players. The SDK docs state that the mulaw codec is supported on the iPhone, so I am unsure why it is failing to find it. Any assistance would be greatly appreciated. Thanks.
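
    One hedged workaround, if AVAudioPlayer keeps refusing the codec, is to expand the mu-law bytes to linear 16-bit PCM yourself and hand the player plain PCM instead. G.711 mu-law expansion is a small, fixed algorithm; a minimal sketch (shown in Java, the language used most on this page; the WAV header parsing around it is omitted):

        // G.711 mu-law decoder: expands one 8-bit mu-law byte into a
        // 16-bit linear PCM sample. Standard algorithm; WAV header
        // parsing and playback are out of scope here.
        public final class MuLaw {
            private static final int BIAS = 0x84; // mu-law encoding bias (132)

            public static short decode(byte mulaw) {
                int u = ~mulaw & 0xFF;              // stored bytes are complemented
                int sign = u & 0x80;
                int exponent = (u >> 4) & 0x07;
                int mantissa = u & 0x0F;
                int magnitude = (((mantissa << 3) + BIAS) << exponent) - BIAS;
                return (short) (sign != 0 ? -magnitude : magnitude);
            }

            // Expands a whole buffer of mu-law bytes to 16-bit PCM samples.
            public static short[] decode(byte[] in) {
                short[] out = new short[in.length];
                for (int i = 0; i < in.length; i++) out[i] = decode(in[i]);
                return out;
            }
        }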

  • SWT file dialog: too many files selected?

    - by InsertNickHere
    Hi there, the SWT file dialog gives me an empty result array if I select too many files (approx. 2500 files). The listing below shows how I use the dialog. If I select too many sound files, the final println shows 0; debugging tells me that the files array is empty in this case. Is there any way to get this to work?

        FileDialog fileDialog = new FileDialog(mainView.getShell(), SWT.MULTI);
        fileDialog.setText("Choose sound files");
        fileDialog.setFilterExtensions(new String[] { "*.wav" });
        Vector<String> result = new Vector<String>();
        fileDialog.open();
        String[] files = fileDialog.getFileNames();
        for (int i = 0, n = files.length; i < n; i++) {
            if (!files[i].contains(".wav")) {
                System.out.println(files[i]);
            }
            StringBuffer stringBuffer = new StringBuffer();
            stringBuffer.append(fileDialog.getFilterPath());
            if (stringBuffer.charAt(stringBuffer.length() - 1) != File.separatorChar) {
                stringBuffer.append(File.separatorChar);
            }
            stringBuffer.append(files[i]);
            String finalName = stringBuffer.toString();
            if (!finalName.contains(".wav")) {
                System.out.println(finalName);
            }
            result.add(finalName);
        }
        System.out.println(result.size());
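
    This looks like the native dialog's multi-selection buffer limit: the underlying Win32 file dialog returns all selected paths in one fixed-size buffer, and a few thousand file names can overflow it, at which point SWT hands back an empty array. One hedged workaround is to let the user pick the folder instead and enumerate the .wav files yourself; a sketch (DirectoryDialog and FilenameFilter are standard SWT/JDK APIs, the method name is illustrative):

        import java.io.File;
        import java.io.FilenameFilter;
        import org.eclipse.swt.widgets.DirectoryDialog;
        import org.eclipse.swt.widgets.Shell;

        // Sidesteps the multi-select buffer: pick a directory, then list
        // the *.wav files in it ourselves.
        public static File[] chooseWavFiles(Shell shell) {
            DirectoryDialog dialog = new DirectoryDialog(shell);
            dialog.setText("Choose a folder of sound files");
            String dir = dialog.open();          // null if the user cancelled
            if (dir == null) return new File[0];
            return new File(dir).listFiles(new FilenameFilter() {
                public boolean accept(File parent, String name) {
                    return name.toLowerCase().endsWith(".wav");
                }
            });
        }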

  • waveInProc / Windows audio question...

    - by BTR
    I'm using the Windows API to get audio input. I've followed all the steps on MSDN and managed to record audio to a WAV file. No problem. I'm using multiple buffers and all that. I'd like to do more with the buffers than simply write to a file, so now I've got a callback set up. It works great and I'm getting the data, but I'm not sure what to do with it once I have it. Here's my callback... everything here works:

        // Media API callback
        void CALLBACK AudioRecorder::waveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD dwInstance,
                                                DWORD dwParam1, DWORD dwParam2)
        {
            // Data received
            if (uMsg == WIM_DATA) {
                // Get wav header
                LPWAVEHDR mBuffer = (WAVEHDR *)dwParam1;

                // Now what?
                for (unsigned i = 0; i != mBuffer->dwBytesRecorded; ++i) {
                    // I can see the chars; how do I get them into my file and audio buffers?
                    cout << mBuffer->lpData[i] << "\n";
                }

                // Re-use buffer
                mResultHnd = waveInAddBuffer(hWaveIn, mBuffer, sizeof(mInputBuffer[0])); // mInputBuffer is a const WAVEHDR *
            }
        }

        // waveInOpen cannot use an instance method as its callback,
        // so we create a static method which calls the instance version
        void CALLBACK AudioRecorder::staticWaveInProc(HWAVEIN hWaveIn, UINT uMsg, DWORD_PTR dwInstance,
                                                      DWORD_PTR dwParam1, DWORD_PTR dwParam2)
        {
            // Call instance version of method. The instance pointer travels in
            // dwInstance (dwParam1 is the WAVEHDR, so casting dwParam1 here was a bug).
            reinterpret_cast<AudioRecorder *>(dwInstance)->waveInProc(hWaveIn, uMsg, dwInstance, dwParam1, dwParam2);
        }

    Like I said, it works great, but I'm trying to do the following:

    - Convert the data to short and copy it into an array
    - Convert the data to float and copy it into an array
    - Copy the data to a larger char array which I'll write into a WAV
    - Relay the data to an arbitrary output device

    I've worked with FMOD a lot and I'm familiar with interleaving and all that. But FMOD dishes everything out as floats. In this case, I'm going the other way. I guess I'm basically just looking for resources on how to go from LPSTR to short, float, and unsigned char. Thanks much in advance!
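
    For the first two items, the bytes in lpData are (for a 16-bit format) interleaved little-endian samples, so the conversion is just byte pairing and scaling. The layout is language-independent; a sketch in Java (the language used most on this page), assuming 16-bit little-endian PCM:

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;

        // Converts a recorded byte buffer (16-bit little-endian PCM, as found in
        // WAVEHDR.lpData) into short samples, then normalizes to floats in [-1, 1].
        public static float[] toFloatSamples(byte[] raw, int bytesRecorded) {
            ByteBuffer bb = ByteBuffer.wrap(raw, 0, bytesRecorded)
                                      .order(ByteOrder.LITTLE_ENDIAN);
            int n = bytesRecorded / 2;          // two bytes per 16-bit sample
            short[] pcm = new short[n];
            bb.asShortBuffer().get(pcm);
            float[] out = new float[n];
            for (int i = 0; i < n; i++) out[i] = pcm[i] / 32768f;
            return out;
        }

    In C++ the same thing is a reinterpret_cast of lpData to short* followed by the divide; the 32768 scale factor maps the short range onto [-1, 1).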

  • Filter design for an audio signal

    - by beanyblue
    What I am trying to do is simple. I have a few .wav files. I want to remove noise and filter out specific frequencies. I don't have MATLAB and I intend to write my own code for all the filters. Right now, I have a way to read the .wav file and dump the structure out into a text file. My questions are the following:

    1. Can I directly apply the digital filters to this sampled data? (i.e., can I directly do a convolution between my input samples and h(n) for the filter function that I choose?)
    2. How do I choose the number of coefficients for the window function?

    I have Octave, so if someone can point me to anything that gives me some idea of how to process the .wav file using Octave, that would be great too. I want to be able to filter out the frequency and then listen to the sound again. Is this possible with Octave? I'm just a beginner with these kinds of things, so please bear with me if my questions are too naive. Any help will be great.
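
    To the first question: yes, once the samples are in memory you can convolve them directly with an FIR impulse response. For the second, the tap count trades transition-band width against cost; more taps give a sharper cutoff. A hedged sketch (in Java rather than Octave; the cutoff and tap count are whatever the application needs):

        // Direct-form FIR filtering of raw samples (floats in [-1, 1]).
        // Builds a low-pass windowed-sinc impulse response; taps (odd)
        // controls the transition width: more taps -> sharper cutoff.
        public static float[] lowPassSinc(double cutoffHz, double sampleRateHz, int taps) {
            float[] h = new float[taps];
            double fc = cutoffHz / sampleRateHz;   // normalized cutoff, 0..0.5
            int mid = taps / 2;
            double sum = 0;
            for (int i = 0; i < taps; i++) {
                int k = i - mid;
                double sinc = (k == 0) ? 2 * Math.PI * fc
                                       : Math.sin(2 * Math.PI * fc * k) / k;
                // Hamming window tames the ripple from truncating the sinc
                double w = 0.54 - 0.46 * Math.cos(2 * Math.PI * i / (taps - 1));
                h[i] = (float) (sinc * w);
                sum += h[i];
            }
            for (int i = 0; i < taps; i++) h[i] /= sum;  // unity gain at DC
            return h;
        }

        // y = x * h, the convolution the first question asks about.
        public static float[] convolve(float[] x, float[] h) {
            float[] y = new float[x.length + h.length - 1];
            for (int n = 0; n < y.length; n++) {
                float acc = 0;
                int lo = Math.max(0, n - x.length + 1);
                int hi = Math.min(n, h.length - 1);
                for (int k = lo; k <= hi; k++) acc += h[k] * x[n - k];
                y[n] = acc;
            }
            return y;
        }

    In Octave the equivalents are wavread, fir1 (from the signal package) with filter or conv, and wavwrite, which also answers the listen-again question.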

  • PHP/HTML play button [migrated]

    - by Marian
    I want to make my own small webpage. I've got a domain, Saoo.eu. As you see, there is a small play button in the corner which plays a playlist. Is there any way to have that play button on each page I'd add in the future, without the audio resetting every time the page changes? Am I forced to use iframes for that? This is my player code:

        <button id="audioControl" style="width:30px;height:25px;"></button>
        <audio id="aud" src="" autoplay autobuffer />

    Script:

        $(document).ready(function() {
            $('#audioControl').html('II');
            if (Modernizr.audio && Modernizr.audio.mp3) {
                audio.setAttribute("src", 'http://daokun.webs.com/play0.mp3');
            } else if (Modernizr.audio && Modernizr.audio.wav) {
                audio.setAttribute("src", 'http://daokun.webs.com/play0.ogg');
            }
        });

        var audio = document.getElementById('aud'), count = 0;

        $('#audioControl').toggle(
            function () {
                audio.pause();
                $('#audioControl').html('>');
            },
            function () {
                audio.play();
                $('#audioControl').html('II');
            }
        );

        audio.addEventListener("ended", function() {
            count++;
            if (count == 4) { count = 0; }
            if (Modernizr.audio && Modernizr.audio.mp3) {
                audio.setAttribute("src", 'http://daokun.webs.com/play' + count + '.mp3');
            } else if (Modernizr.audio && Modernizr.audio.wav) {
                audio.setAttribute("src", 'http://daokun.webs.com/play' + count + '.ogg');
            }
            audio.load();
        });

  • Change Audio title from English to Sinhalese using ffmpeg

    - by user330461
    I inserted an extra sound track into my video file and it works well:

        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 1 \
            -vcodec libx264 -preset fast -b 512k -minrate 512k -maxrate 512k \
            -bufsize 512k -threads 0 -f mp4 -an -y /dev/null && \
        ffmpeg -i news.mov -i news.wav -map 0:0 -map 0:1 -map 1:0 -pass 2 \
            -acodec libfaac -ab 128k -ac 2 -vcodec libx264 -preset fast -b 512k \
            -minrate 512k -maxrate 512k -bufsize 512k -threads 0 -f mp4 news.mp4

    The default audio track comes with the label "English" and I would like to give it the label "Sinhalese". The second audio track comes up without a label, as "track #1", and I would like to give it the label "Tamil". How do I do that?

  • merging video and audio with custom panning

    - by cherouvim
    I have:

    1. a video which has mono audio inside
    2. an audio file (mono)

    I'd like to merge those two into a single video file containing: the video from #1, with the audio from #1 panned full left and the audio from #2 panned full right. Is this possible in ffmpeg using one command? I've tried the following, which almost does it, but the video/audio gets out of sync:

        ffmpeg -i video.mp4 -filter_complex "amovie=audio.wav [r] ; [r] amerge" output.mp4 -y

    I've managed to do it with multiple commands:

        # 1. create right-panned audio
        ffmpeg -i audio.wav -ac 2 -vbr 5 audio-stereo.mp3 -y
        ffmpeg -i audio-stereo.mp3 -af pan=stereo:c1=c1 audio-right.mp3 -y

        # 2. create left-panned video
        ffmpeg -i video.mp4 -af pan=stereo:c0=c0 video-left.mp4 -y

        # 3. merge the two
        ffmpeg -i video-left.mp4 -i audio-right.mp3 -filter_complex "amix=inputs=2" video-mixed.mp4 -y

    It does the job, but is it possible with one command?

  • Notification area balloon tip pop sound in Windows 7

    - by Worm Regards
    When I was using Windows XP, there was a distinct sound when an application showed a balloon tip in the notification area (aka system tray). Unfortunately, I didn't look any deeper into it. Now Windows 7 has this behavior disabled by default, and I do not know how to configure it. I discovered the name of the sound file used to accompany tray balloon tips in Windows XP: "Windows XP Balloon.wav". More clues: the interesting registry key is

        HKEY_USERS\XP Registry Hive\AppEvents\Schemes\Apps\.Default\SystemNotification\.Default

    whose default value is %SystemRoot%\media\Windows XP Balloon.wav. So the System Notification event label appears to be correct, but tray balloons are silenced elsewhere.

  • No audio in Google Chrome

    - by Z9iT
    I started with Ubuntu 12.04 Minimal, then installed only these packages:

        sudo apt-get install xorg xinit google-chrome-stable alsa-base alsa-utils alsa-oss

    I have added google-chrome to the .xinitrc file. I used sudo alsamixer to unmute everything using M. I am also able to hear sound when I run this independently in a terminal:

        sudo aplay /usr/share/sounds/alsa/Front_Center.wav

    However, Google Chrome gives no sound output, whether on YouTube or on the same file (/usr/share/sounds/alsa/Front_Center.wav) opened by browsing in Chrome.

    UPDATE: The moment I install some desktop (display) manager like GNOME or LXDE and launch Chrome, the audio works perfectly. However, if I kill the X session and the desktop manager AND then start by loading only Chrome (without a DM), I lose the sound again. This makes me wonder whether something is preventing the sound from being loaded into Chrome directly, while once a session like LXDE loads, it works flawlessly. Perhaps I should rather ask: how do I authorize google-chrome to use the sound system?

    Miscellaneous: I am surprised that I cannot start google-chrome with sudo (it asks to be run as a normal user) and that I cannot start alsamixer as a normal user (I must use sudo alsamixer). Can someone please help me figure out what I need to do so that Google Chrome speaks?

  • Prevent Windows Explorer from extracting metadata

    - by olafure
    Windows Explorer (Windows 7 x64) crashes when it sees allegedly corrupted .wav files. I'm dealing with this problem and the hotfix doesn't work for me: http://support.microsoft.com/kb/976417/en-us

    The hotfix page says this happens if the .wav file is corrupt (which, by the way, I don't think it is). What makes this even worse is that I can't access the file in any program! As soon as an Open dialog sees the file, Windows tries its metadata-extraction trick and explorer.exe halts. So my question: can I by any means tell Windows to stop this "metadata extraction" action? (I have seen multiple problems associated with it in the past.)

  • How can I use the Band Pass Filter in GarageBand?

    - by Another Registered User
    What I want to do: I have a music WAV file and want to put a band-pass filter over it, to filter out annoying high frequencies. I was reading on the net that there is an "AuBandPass" plugin in Mac OS X; I just can't figure out how I could use it in GarageBand. I can't find the effects at all. I created a new GarageBand file and dropped the WAV file in there, and now I can play that song in GarageBand. What must I do next?

  • How do I get an mp3 file's total time in Java?

    - by Tom Brito
    The answers provided in "How do I get a sound file's total time in Java?" work well for wav files, but not for mp3 files. They are (given a file):

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
        AudioFormat format = audioInputStream.getFormat();
        long frames = audioInputStream.getFrameLength();
        double durationInSeconds = (frames + 0.0) / format.getFrameRate();

    and:

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(file);
        AudioFormat format = audioInputStream.getFormat();
        long audioFileLength = file.length();
        int frameSize = format.getFrameSize();
        float frameRate = format.getFrameRate();
        float durationInSeconds = (audioFileLength / (frameSize * frameRate));

    They give the same correct result for wav files, but wrong and different results for mp3 files. Any idea what I have to do to get the mp3 file's duration?
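
    Out of the box, javax.sound.sampled has no MP3 decoder, so frame length and frame size both come back meaningless for mp3 files. One hedged approach, assuming the JavaZoom mp3spi library (with JLayer and tritonus-share) is on the classpath: its AudioFileFormat exposes a "duration" property in microseconds:

        import java.io.File;
        import java.util.Map;
        import javax.sound.sampled.AudioFileFormat;
        import javax.sound.sampled.AudioSystem;

        // Reads an MP3 duration via the mp3spi service provider (assumption:
        // the mp3spi, jlayer and tritonus-share jars are on the classpath).
        public static double mp3DurationSeconds(File file) throws Exception {
            AudioFileFormat format = AudioSystem.getAudioFileFormat(file);
            Map<String, Object> props = format.properties();
            Long micros = (Long) props.get("duration");  // microseconds, per mp3spi
            if (micros == null)
                throw new IllegalStateException("no duration property; is mp3spi installed?");
            return micros / 1000000.0;
        }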

  • SoundPlayer causing Memory Leaks?

    - by Nick Udell
    I'm writing a basic writing app in C# and I wanted the program to make typewriter sounds as you type. I've hooked the KeyPress event on my RichTextBox to a function that uses a SoundPlayer to play a short wav file every time a key is pressed. However, I've noticed that after a while my computer slows to a crawl, and checking my processes, audiodg.exe was using 5 GIGABYTES of RAM. The code I'm using is as follows: I initialize the SoundPlayer as a global variable on program start with

        SoundPlayer sp = new SoundPlayer("typewriter.wav");

    Then on the KeyPress event I simply call

        sp.Play();

    Does anybody know what's causing the heavy memory usage? The file is less than a second long, so it shouldn't be clogging things up this much.

  • Pipe ffmpeg to oggenc(2) with .NET

    - by acidzombie24
    I am trying to encode ogg files at -q 6/192k. ffmpeg doesn't seem to be listening to my command:

        ffmpeg -i blah -acodec vorbis -ab 192k -y out.ogg

    So I would like to use the recommended ogg encoder. It says the input must be wav or similar, so I would like to pipe a wav from ffmpeg to oggenc2. However, not only am I unsure whether oggenc2 will accept input from stdin, I have no idea how to pipe one process to another inside .NET using the Process class.
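
    (Incidentally, ffmpeg's built-in vorbis encoder is experimental; -acodec libvorbis is usually the one that honors quality and bitrate settings.) As for the piping, the shape of it is: start both processes, redirect ffmpeg's stdout and the encoder's stdin, and copy bytes between them. A sketch in Java (the .NET Process wiring with RedirectStandardOutput/RedirectStandardInput is analogous); it assumes oggenc2 reads stdin when given "-", like the reference oggenc:

        import java.io.InputStream;
        import java.io.OutputStream;

        // Pipes ffmpeg's WAV output straight into the ogg encoder, no temp file.
        public static void transcode(String input, String output) throws Exception {
            Process ffmpeg = new ProcessBuilder("ffmpeg", "-i", input, "-f", "wav", "-")
                    .redirectError(ProcessBuilder.Redirect.INHERIT)
                    .start();
            Process oggenc = new ProcessBuilder("oggenc2", "-q", "6", "-o", output, "-")
                    .redirectError(ProcessBuilder.Redirect.INHERIT)
                    .start();
            try (InputStream wav = ffmpeg.getInputStream();
                 OutputStream enc = oggenc.getOutputStream()) {
                byte[] buf = new byte[64 * 1024];
                for (int n; (n = wav.read(buf)) != -1; )
                    enc.write(buf, 0, n);       // hand each WAV chunk to the encoder
            }                                    // closing stdin signals EOF to the encoder
            ffmpeg.waitFor();
            oggenc.waitFor();
        }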

  • [[NSURL alloc] initFileURLWithPath:(NSString)] returns null

    - by Ajay Pandey
        NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"Opening" ofType:@"wav"];
        NSURL *fileURL = [[NSURL alloc] initFileURLWithPath:soundFilePath];
        NSLog(@"@ajay");
        AVAudioPlayer *audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:fileURL error:nil];
        [fileURL release];
        [audioPlayer play];

    I have added a wav file to my project, but [[NSURL alloc] initFileURLWithPath:soundFilePath] returns NULL, the console prints an error, and the application is killed. Can someone help me?

  • How does one capture H.323 voice traffic on a VOIP network?

    - by Chris Holmes
    What I am trying to do is capture the WAV data of a phone conversation on a VoIP network using SharpPcap/Pcap.Net. We are using the H.323 recommendation, and my understanding is that the voice data is located in the RTP packets. However, there is no way to heuristically determine whether a UDP packet is an RTP packet, so we have to do more work before we can capture the data. The H.323 recommendation apparently uses a lot of traffic on specific TCP ports to negotiate the call before the WAV data is sent via RTP. However, I am having very little luck determining what data is actually sent on those TCP ports, when it is sent, what the packets look like, how to handle it, etc. If anyone has any information on how to go about this, I'd really appreciate it. My Google-Fu seems to be failing me on this one.
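
    For what it's worth, RTP can be identified heuristically with reasonable confidence even though it has no magic number: RFC 3550 fixes the version bits at 2, and voice traffic uses a known range of payload types. A hedged sketch of such a check (in Java; porting it to SharpPcap's byte arrays is mechanical):

        // Heuristic test of whether a UDP payload looks like an RTP packet
        // (RFC 3550 fixed header). Not definitive, but version == 2 plus a
        // plausible payload type filters out most non-RTP traffic.
        public static boolean looksLikeRtp(byte[] udpPayload) {
            if (udpPayload.length < 12) return false;    // fixed RTP header is 12 bytes
            int version = (udpPayload[0] >> 6) & 0x03;   // top two bits of byte 0
            if (version != 2) return false;
            int payloadType = udpPayload[1] & 0x7F;      // low seven bits of byte 1
            // 0 = PCMU (G.711 mu-law), 8 = PCMA; 96-127 are dynamic types.
            return payloadType < 35 || (payloadType >= 96 && payloadType <= 127);
        }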

  • feature extraction from acoustic signals

    - by Dolphin
    Hi everyone, it's been a while. I found APIs in Java for extracting features from acoustic audio files and from symbolic files separately. But now I have a problem mapping from low-level wav audio features to high-level MIDI features, i.e. I need to write the extracted wav audio features out in MIDI form, and I cannot think of anything even close to it. Can someone please give me some insight into how I can approach this? Your responses are greatly appreciated. Thanks in advance.
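
    One small concrete piece of that mapping: a detected pitch in Hz converts to a MIDI note number with a fixed formula (A4 = 440 Hz = note 69, one semitone per factor of 2^(1/12)), which at least bridges the lowest-level audio feature to a symbolic one. A sketch:

        // Maps a detected fundamental frequency (Hz) to the nearest MIDI note.
        public static int frequencyToMidiNote(double frequencyHz) {
            double note = 69 + 12 * (Math.log(frequencyHz / 440.0) / Math.log(2));
            return (int) Math.round(note);
        }

        // The inverse, handy for checking round trips: MIDI note -> Hz.
        public static double midiNoteToFrequency(int midiNote) {
            return 440.0 * Math.pow(2, (midiNote - 69) / 12.0);
        }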

  • Problem when trying to connect to a desktop server from Android over WiFi

    - by thiagolee
    Hello, I am trying to send a file from a phone running Android 1.5 to a server on a desktop. I wrote some code which works on the emulator, but on the phone it doesn't. I'm connecting to the network through WiFi: that works, I can access the internet through my phone, and I've configured my router. The application stops when I'm trying to connect. I have the permissions. Does someone have any ideas? My code is below.

    Running on Android:

        package br.ufs.reconhecimento;

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.File;
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.net.Socket;

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.content.DialogInterface;
        import android.content.Intent;
        import android.os.Bundle;
        import android.util.Log;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.EditText;
        import android.widget.ImageButton;

        /**
         * Sample code that invokes the speech recognition intent API.
         */
        public class Reconhecimento extends Activity implements OnClickListener {

            static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;
            static final String LOG_VOZ = "UFS-Reconhecimento";
            final int INICIAR_GRAVACAO = 01;
            int porta = 5158;       // port defined on the server
            int tempoEspera = 1000;
            String ipConexao = "172.20.0.189";
            EditText ipEdit;

            /** Called when the activity is first created. */
            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);

                // Inflate our UI from its XML layout description.
                setContentView(R.layout.main);

                // Get display items for later interaction
                ImageButton speakButton = (ImageButton) findViewById(R.id.btn_speak);
                speakButton.setPadding(10, 10, 10, 10);
                speakButton.setOnClickListener(this);

                // Dialog for the IP address
                AlertDialog.Builder alerta = new AlertDialog.Builder(this);
                alerta.setTitle("IP"); // +mainWifi.getWifiState()
                ipEdit = new EditText(this);
                ipEdit.setText(ipConexao);
                alerta.setView(ipEdit);
                alerta.setMessage("Please confirm the IP address.");
                alerta.setPositiveButton("OK", new DialogInterface.OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        ipConexao = ipEdit.getText().toString();
                        Log.d(LOG_VOZ, "New IP address: " + ipConexao);
                    }
                });
                alerta.create();
                alerta.show();
            }

            /** Handle the click on the start recognition button. */
            public void onClick(View v) {
                if (v.getId() == R.id.btn_speak) {
                    // startVoiceRecognitionActivity();
                    Log.d(LOG_VOZ, "Starting the next screen");
                    Intent recordIntent = new Intent(this, GravacaoAtivity.class);
                    Log.d(LOG_VOZ, "Screen starting (instance created)");
                    startActivityForResult(recordIntent, INICIAR_GRAVACAO);
                    Log.d(LOG_VOZ, "Recording started ...");
                }
            }

            /** Handle the results from the recognition activity. */
            @Override
            protected void onActivityResult(int requestCode, int resultCode, Intent data) {
                Log.d(LOG_VOZ, "Starting onActivityResult()");
                if (requestCode == INICIAR_GRAVACAO && resultCode == RESULT_OK) {
                    String path = data.getStringExtra(GravacaoAtivity.RETORNO);
                    conexaoSocket(path);
                } else {
                    Log.e(LOG_VOZ, "Unexpected result ...");
                }
            }

            private void conexaoSocket(String path) {
                Socket socket = SocketOpener.openSocket(ipConexao, porta, tempoEspera);
                if (socket == null)
                    return;
                try {
                    DataOutputStream conexao = new DataOutputStream(socket.getOutputStream());
                    Log.d(LOG_VOZ, "Opening file ...");
                    File file = new File(path);
                    DataInputStream arquivo = new DataInputStream(new FileInputStream(file));
                    Log.d(LOG_VOZ, "Starting transmission ...");
                    conexao.writeLong(file.length());
                    for (int i = 0; i < file.length(); i++)
                        conexao.writeByte(arquivo.readByte());
                    Log.d(LOG_VOZ, "Transmission completed successfully...");
                    Log.d(LOG_VOZ, "Closing the connection...");
                    conexao.close();
                    socket.close();
                    Log.d(LOG_VOZ, "============ Process finished successfully ==============");
                } catch (IOException e) {
                    Log.e(LOG_VOZ, "Error during the socket connection. " + e.getMessage());
                }
            }
        }

        class SocketOpener implements Runnable {

            private String host;
            private int porta;
            private Socket socket;

            public SocketOpener(String host, int porta) {
                this.host = host;
                this.porta = porta;
                socket = null;
            }

            // Opens the socket on a helper thread so the connect attempt can be
            // abandoned after timeOut milliseconds.
            public static Socket openSocket(String host, int porta, int timeOut) {
                SocketOpener opener = new SocketOpener(host, porta);
                Thread t = new Thread(opener);
                t.start();
                try {
                    t.join(timeOut);
                } catch (InterruptedException e) {
                    Log.e(Reconhecimento.LOG_VOZ, "Error joining the socket thread. " + e.getMessage());
                    return null;
                }
                return opener.getSocket();
            }

            public void run() {
                try {
                    socket = new Socket(host, porta);
                } catch (IOException e) {
                    Log.e(Reconhecimento.LOG_VOZ, "Error creating the socket. " + e.getMessage());
                }
            }

            public Socket getSocket() {
                return socket;
            }
        }

    Running on the desktop (Java):

        import java.io.BufferedReader;
        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ServidorArquivo {

            private static int porta = 5158;
            static String ARQUIVO = "voz.amr";
            /** Path where the audio file will be written. */
            static String PATH = "/home/iade/Trabalho/lib/";

            public static void main(String[] args) {
                int i = 1;
                try {
                    System.out.println("Starting the socket server - Android.");
                    ServerSocket s = new ServerSocket(porta);
                    System.out.println("Server started successfully...");
                    System.out.println("Waiting for connections on port: " + porta);
                    while (true) {
                        Socket recebendo = s.accept();
                        System.out.println("Accepting connection no. " + i);
                        new ThreadedHandler(recebendo).start();
                        i++;
                    }
                } catch (Exception e) {
                    System.out.println("Error: " + e.getMessage());
                    e.printStackTrace();
                }
            }
        }

        class ThreadedHandler extends Thread {

            private Socket socket;

            public ThreadedHandler(Socket so) {
                socket = so;
            }

            public void run() {
                DataInputStream entrada = null;
                DataOutputStream arquivo = null;
                try {
                    entrada = new DataInputStream(socket.getInputStream());
                    System.out.println("========== Starting to read data via sockets ==========");
                    long tamanho = entrada.readLong();
                    System.out.println("Data size: " + tamanho);
                    File file = new File(ServidorArquivo.PATH + ServidorArquivo.ARQUIVO);
                    if (!file.exists())
                        file.createNewFile();
                    arquivo = new DataOutputStream(new FileOutputStream(file));
                    for (int j = 0; j < tamanho; j++) {
                        arquivo.write(entrada.readByte());
                    }
                    System.out.println("========== Data received successfully ==========");
                } catch (Exception e) {
                    System.out.println("Error handling the socket: " + e.getMessage());
                    e.printStackTrace();
                } finally {
                    System.out.println("**** Closing the connections ****");
                    try {
                        entrada.close();
                        socket.close();
                        arquivo.close();
                    } catch (IOException e) {
                        System.out.println("Error closing connections: " + e.getMessage());
                        e.printStackTrace();
                    }
                }
                System.out.println("============= End of recording ===========");

                // post-process the received file
                String cmd1 = "ffmpeg -i voz.amr -ab 12288 -ar 16000 voz.wav";
                String cmd2 = "soundstretch voz.wav voz2.wav -tempo=100";
                String dir = "/home/iade/Trabalho/lib";
                File workDir = new File(dir);
                File f1 = new File(dir + "/voz.wav");
                File f2 = new File(dir + "/voz2.wav");
                f1.delete();
                f2.delete();
                try {
                    executeCommand(cmd1, workDir);
                    System.out.println("ran cmd1");
                    executeCommand(cmd2, workDir);
                    System.out.println("ran cmd2");
                } catch (IOException e) {
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }

            private void executeCommand(String cmd, File workDir) throws IOException, InterruptedException {
                String s;
                Process p = Runtime.getRuntime().exec(cmd, null, workDir);
                int i = p.waitFor();
                if (i == 0) {
                    // read the normal output from the command
                    BufferedReader stdInput = new BufferedReader(new InputStreamReader(p.getInputStream()));
                    while ((s = stdInput.readLine()) != null) {
                        System.out.println(s);
                    }
                } else {
                    // read the error output from the command
                    BufferedReader stdErr = new BufferedReader(new InputStreamReader(p.getErrorStream()));
                    while ((s = stdErr.readLine()) != null) {
                        System.out.println(s);
                    }
                }
            }
        }

    Thanks in advance.

  • Simple sound effect loop using AudioToolKit

    - by Typeoneerror
    I've created a few sounds for use in my game. I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle;
        mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);

        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism which allows you to shake the device to activate some functionality. I've got the shaking code done, so I get a "shaking started" and a "shaking ended" message in my game. What I need to happen is to start playing "shake.wav" when shaking starts and loop it until it stops. Is there a way to do this with AudioToolbox/AudioServices? How could I do this if not?

  • C++ Microsoft SAPI: How to set Windows text-to-speech output to a memory buffer?

    - by Vladimir
    Hi all, I have been trying to figure out how to "speak" a text into a memory buffer using Windows SAPI 5.1, but so far no success, even though it seems it should be quite simple. There is an example of streaming the synthesized speech into a .wav file, but no examples of how to stream it to a memory buffer. In the end I need to have the synthesized speech in a char* array in 16 kHz 16-bit little-endian PCM format. Currently I create a temp .wav file, redirect the speech output there, then read it back, but that seems to be a rather stupid solution. Does anyone know how to do this? Thanks!

  • SoundPool sample not ready

    - by SteD
    I have a .wav file that I'd like to use across my game. Currently I am loading the sound in onCreate() of each activity in the game:

        soundCount = soundpool.load(this, R.raw.count, 1);

    The sound is played once the activity starts:

        soundpool.play(soundCount, 0.9f, 0.9f, 1, -1, 1f);

    The problem is that at times I hit the error "sample x not ready". Is it possible to load the .wav file once upon starting the game, keep it in memory, and use it later across the game? Or is it possible to wait 1-2 seconds for the sound to finish loading?
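
    Loading once and sharing the pool is the usual pattern, and SoundPool.setOnLoadCompleteListener (API level 8+) lets playback wait for the sample itself instead of a fixed delay. A sketch, assuming the object below is created once (for example in a custom Application) and shared across activities:

        import android.content.Context;
        import android.media.AudioManager;
        import android.media.SoundPool;

        // Loads the sample once and only plays it after loading completes,
        // which avoids the "sample x not ready" error.
        public class GameSounds {
            private final SoundPool pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
            private final int countId;
            private volatile boolean ready = false;

            public GameSounds(Context context) {
                pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
                    public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
                        ready = (status == 0);   // 0 means the sample decoded fine
                    }
                });
                countId = pool.load(context, R.raw.count, 1); // R.raw.count from the question
            }

            public void playCount() {
                if (ready)
                    pool.play(countId, 0.9f, 0.9f, 1, -1, 1f);
            }
        }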

  • Nyquist won't play audio

    - by erjiang
    I downloaded Nyquist and am having trouble playing sounds from it. If I run it normally, I get:

        Nyquist -- A Language for Sound Synthesis and Composition
        Copyright (c) 1991,1992,1995 by Roger B. Dannenberg
        Version 2.29
        > (play (osc 60))
        Saving sound file to ./eric-temp.wav
        error: snd_save -- could not open audio output
        >

    If I wrap it by running padsp ny, the sound plays fine for about half a second, and then I get garbage fed to my speakers. Any solutions?

  • I'm using OpenAL, trying to load a .ogg file and having .dll troubles

    - by Brendan Webster
    I'm using OpenAL for my game's music. It loads .wav files by default, but to load Ogg files I had to download and set up a few .dlls and lib files. I have fixed all the errors with the dlls except for this: I need vorbis.dll, and it says it's missing vorbis_window. I just can't find a dll anywhere online that includes vorbis_window. Does anyone have suggestions on how I should fix this problem with my dll?

  • ALSA doesn't work in VLC

    - by freebird
    ALSA audio output works fine from the terminal: aplay /usr/share/sounds/alsa/Noise.wav. But I have to change from Default to ALSA audio output in VLC (found under Tools > Preferences > Audio > Output). The issue is that when I change it to ALSA, I lose all sound; yet when I leave it at Default, I get an annoying audio delay of 200 ms or 500 ms, and from what I have found you have to use ALSA audio output to fix that issue.

  • Sound Channel map incorrect on an ION 330 ASRock

    - by math
    I have an ION 330 ASRock just updated to Precise, and the sound channel mapping is incorrect. The channel map is set as follows:

        Front Left  = Front Left
        Center      = Rear Left
        Front Right = Front Right
        Rear Right  = LFE
        Rear Left   = Center
        LFE         = Rear Right

    I am testing the sound channels using speaker-test -c 2 -t wav. Can someone tell me how I can remap them easily? I have tried several possible changes to /etc/pulse/default.pa, unsuccessfully.
