Search Results

Search found 3942 results on 158 pages for 'no sound'.

Page 58 of 158

  • Want to build an alarm app for iPhone.

    - by Sumit Kr Singh
    Hi, I want to build an alarm application for iPhone. I want to ignore the device's state and the state of the volume buttons: the sound should always play at full volume, and the user should not be able to change the volume with the hardware buttons while it is playing. Does anybody know how to implement this? Please post the code here. Thanks in advance.

    Read the article

  • How to program a real-time accurate audio sequencer on the iphone?

    - by Walchy
    Hi... I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried all the audio techniques available on the iPhone, from AudioServicesPlaySystemSound and AVAudioPlayer through OpenAL to AudioQueues. In my last attempt I tried the CocosDenshion sound engine, which uses OpenAL and allows sounds to be loaded into multiple buffers and then played whenever needed. Here is the basic code:

    Init:

        int channelGroups[1];
        channelGroups[0] = 8;
        soundEngine = [[CDSoundEngine alloc] init:channelGroups channelGroupTotal:1];

        int i = 0;
        for (NSString *soundName in [NSArray arrayWithObjects:@"base1", @"snare1", @"hihat1", @"dit", @"snare", nil]) {
            [soundEngine loadBuffer:i fileName:soundName fileType:@"wav"];
            i++;
        }

        [NSTimer scheduledTimerWithTimeInterval:0.14 target:self selector:@selector(drumLoop:) userInfo:nil repeats:YES];

    In the initialisation I create the sound engine, load some sounds into different buffers and then establish the sequencer loop with NSTimer.

    Audio loop:

        - (void)drumLoop:(NSTimer *)timer {
            for (int track = 0; track < 4; track++) {
                unsigned char note = pattern[track][step];
                if (note)
                    [soundEngine playSound:note-1 channelGroupId:0 pitch:1.0f pan:.5 gain:1.0 loop:NO];
            }
            if (++step >= 16) step = 0;
        }

    That's it, and it works as it should, BUT the timing is shaky and unstable. As soon as something else happens (e.g. drawing in a view) it goes out of sync. As I understand the sound engine and OpenAL, the buffers are loaded (in the init code) and are then ready to start immediately with alSourcePlay(source) - so the problem may be with NSTimer? There are dozens of sound sequencer apps in the App Store and they have accurate timing; e.g. "idrum" keeps a perfectly stable beat even at 180 bpm while zooming and drawing is going on. So there must be a solution. Does anybody have any idea? Thanks for any help in advance! Best regards, Walchy

    Read the article

  • Playing an arbitrary tone with Android.

    - by fiXedd
    Is there any way to make Android emit a sound of arbitrary frequency (meaning, I don't want to have pre-recorded sound files)? I've looked around and ToneGenerator was the only thing I was able to find that was even close, but it seems to only be capable of outputting the standard DTMF tones. Any ideas?
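
    Not part of the original question, but a minimal sketch of one common approach: synthesize the samples yourself and hand them to android.media.AudioTrack. The playTone helper name, the 44.1 kHz sample rate and the STATIC mode are assumptions for illustration only.

        import android.media.AudioFormat;
        import android.media.AudioManager;
        import android.media.AudioTrack;

        public class ToneSketch {
            // Plays a sine tone of the given frequency/duration by generating PCM samples on the fly.
            public static void playTone(double freqHz, double durationSec) {
                final int sampleRate = 44100;
                int numSamples = (int) (durationSec * sampleRate);
                short[] samples = new short[numSamples];
                for (int i = 0; i < numSamples; i++) {
                    double t = (double) i / sampleRate;
                    samples[i] = (short) (Math.sin(2 * Math.PI * freqHz * t) * Short.MAX_VALUE);
                }
                AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                        numSamples * 2, AudioTrack.MODE_STATIC);   // buffer size in bytes (16-bit mono)
                track.write(samples, 0, numSamples);               // load the synthesized buffer
                track.play();                                      // start playback
            }
        }

    For example, playTone(440.0, 1.0) would play a one-second 440 Hz tone; the same loop works for any frequency without pre-recorded files.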

    Read the article

  • List input and output audio devices in Applet

    - by Jhonny Everson
    I am running a signed applet that needs to let the user select the input and output audio devices (similar to what Skype provides). I borrowed the following code from another thread:

        import javax.sound.sampled.*;

        public class SoundAudit {
            public static void main(String[] args) {
                try {
                    System.out.println("OS: " + System.getProperty("os.name") + " "
                            + System.getProperty("os.version") + "/" + System.getProperty("os.arch")
                            + "\nJava: " + System.getProperty("java.version")
                            + " (" + System.getProperty("java.vendor") + ")\n");
                    for (Mixer.Info thisMixerInfo : AudioSystem.getMixerInfo()) {
                        System.out.println("Mixer: " + thisMixerInfo.getDescription()
                                + " [" + thisMixerInfo.getName() + "]");
                        Mixer thisMixer = AudioSystem.getMixer(thisMixerInfo);
                        for (Line.Info thisLineInfo : thisMixer.getSourceLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Source Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                        for (Line.Info thisLineInfo : thisMixer.getTargetLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Target Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static String AnalyzeControl(Control thisControl) {
                String type = thisControl.getType().toString();
                if (thisControl instanceof BooleanControl) {
                    return "    Control: " + type + " (boolean)";
                }
                if (thisControl instanceof CompoundControl) {
                    System.out.println("    Control: " + type + " (compound - values below)");
                    String toReturn = "";
                    for (Control children : ((CompoundControl) thisControl).getMemberControls()) {
                        toReturn += "  " + AnalyzeControl(children) + "\n";
                    }
                    return toReturn.substring(0, toReturn.length() - 1);
                }
                if (thisControl instanceof EnumControl) {
                    return "    Control: " + type + " (enum: " + thisControl.toString() + ")";
                }
                if (thisControl instanceof FloatControl) {
                    return "    Control: " + type + " (float: from "
                            + ((FloatControl) thisControl).getMinimum() + " to "
                            + ((FloatControl) thisControl).getMaximum() + ")";
                }
                return "    Control: unknown type";
            }
        }

    But what I get is:

        Mixer: Software mixer and synthesizer [Java Sound Audio Engine]
        Mixer: No details available [Microphone (Pink Front)]

    I was expecting to get the real list of my devices (my preferences panel shows 3 output devices and 1 microphone). I am running on Mac OS X 10.6.7. Is there another way to get that information from Java?
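
    Not from the original post: a minimal sketch of an alternative enumeration that asks each mixer whether it supports actual capture or playback data lines (TargetDataLine / SourceDataLine) instead of only Port lines, which is often closer to the device list an application wants to offer. Class and variable names are illustrative.

        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Line;
        import javax.sound.sampled.Mixer;
        import javax.sound.sampled.SourceDataLine;
        import javax.sound.sampled.TargetDataLine;

        public class DeviceList {
            public static void main(String[] args) {
                Line.Info playback = new Line.Info(SourceDataLine.class); // output (speakers, headphones)
                Line.Info capture  = new Line.Info(TargetDataLine.class); // input (microphones)
                for (Mixer.Info info : AudioSystem.getMixerInfo()) {
                    Mixer mixer = AudioSystem.getMixer(info);
                    if (mixer.isLineSupported(playback)) {
                        System.out.println("Output device: " + info.getName());
                    }
                    if (mixer.isLineSupported(capture)) {
                        System.out.println("Input device:  " + info.getName());
                    }
                }
            }
        }

    Whether this surfaces every device on Mac OS X still depends on the installed Java Sound implementation, so treat it as a starting point rather than a guaranteed fix.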

    Read the article

  • Triggering event when Button is pressed down in Android

    - by Cody
    I have the following code for Android, which works fine to play a sound once a button is clicked:

        Button SoundButton2 = (Button) findViewById(R.id.sound2);
        SoundButton2.setOnClickListener(new OnClickListener() {
            public void onClick(View v) {
                mSoundManager.playSound(2);
            }
        });

    My problem is that I want the sound to play immediately upon pressing the button (touch down), not when it is released (touch up). Any ideas on how I can accomplish this?
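
    Not from the original question: a minimal sketch of one common approach, an OnTouchListener that reacts to MotionEvent.ACTION_DOWN. SoundActivity, R.layout.main and the SoundManager field are placeholders that mirror the snippet above.

        import android.app.Activity;
        import android.os.Bundle;
        import android.view.MotionEvent;
        import android.view.View;
        import android.widget.Button;

        public class SoundActivity extends Activity {
            private SoundManager mSoundManager; // same helper object used in the snippet above

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);  // layout containing R.id.sound2

                Button soundButton2 = (Button) findViewById(R.id.sound2);
                soundButton2.setOnTouchListener(new View.OnTouchListener() {
                    @Override
                    public boolean onTouch(View v, MotionEvent event) {
                        if (event.getAction() == MotionEvent.ACTION_DOWN) {
                            mSoundManager.playSound(2);  // fires on touch down, not on release
                        }
                        return false;  // return false so any click handling still runs
                    }
                });
            }
        }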

    Read the article

  • SoundMixer.computeSpectrum with microphone

    - by paleozogt
    Flex has the SoundMixer.computeSpectrum function that lets you compute an FFT from the currently playing sound. What I'd like to do is compute an FFT without playing the sound. Since Flash 10.1 lets us access the microphone bytes directly, it seems like we should be able to compute the FFT directly off of what the user is speaking.

    Read the article

  • Creating simple waveforms with CoreAudio

    - by Koning Baard
    I am new to CoreAudio, and I would like to output a simple sine wave and square wave with a given frequency and amplitude through the speakers using CA. I don't want to use sound files as I want to synthesize the sound. What do I need to do this? And can you give me an example or tutorial? Thanks.
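
    The per-sample math is independent of CoreAudio, so as an illustration only, here is the same idea sketched in Java with javax.sound.sampled (not CoreAudio): fill a 16-bit PCM buffer from sin(2*pi*f*t) for the sine wave, and take its sign for the square wave. The 440 Hz frequency, 0.5 amplitude and 2-second duration are arbitrary example values.

        import javax.sound.sampled.AudioFormat;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.SourceDataLine;

        public class WaveSketch {
            public static void main(String[] args) throws Exception {
                float sampleRate = 44100f;
                double freq = 440.0, amplitude = 0.5, seconds = 2.0;
                byte[] buffer = new byte[(int) (sampleRate * seconds) * 2]; // 16-bit mono

                for (int i = 0; i < buffer.length / 2; i++) {
                    double t = i / sampleRate;
                    double sine = Math.sin(2 * Math.PI * freq * t);  // sine wave
                    double square = Math.signum(sine);               // square wave: sign of the sine
                    short sample = (short) (amplitude * sine * Short.MAX_VALUE); // use `square` here to hear the square wave
                    buffer[2 * i]     = (byte) (sample & 0xff);         // little-endian low byte
                    buffer[2 * i + 1] = (byte) ((sample >> 8) & 0xff);  // high byte
                }

                AudioFormat format = new AudioFormat(sampleRate, 16, 1, true, false);
                SourceDataLine line = AudioSystem.getSourceDataLine(format);
                line.open(format);
                line.start();
                line.write(buffer, 0, buffer.length);
                line.drain();
                line.close();
            }
        }

    In CoreAudio the equivalent step would be filling the buffers handed to you by an output AudioUnit render callback with the same sample values.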

    Read the article

  • Is the Finch audio library for iPhone capable of doing this?

    - by mystify
    I need to:

      - start / stop sounds with lengths between 0.1 and 10 seconds
      - change the playback volume

    It would also be nice to be able to:

      - change the playback speed
      - change the playback pitch / frequency
      - pause a sound and resume playing it later
      - play a sound backwards

    Is Finch my best friend here?

    Read the article

  • Webcam capture and convert to avi

    - by Spidfire
    I'm trying to make a program that captures video from the webcam and sound from the microphone, but I'm getting stuck at the part where I try to make a movie out of still images. I've heard you need to use DirectShow, but it doesn't work for me yet. Does anyone know a good piece of example code that captures video and sound and can encode it to a file (DivX or something like that)? Or some suggestions on where to look so I can build it myself? (If another programming language is better for this, I'm happy to know it early.)

    Read the article

  • No Audio from Windows XP after formatting.

    - by karthik
    Hello folks, I have reinstalled Windows XP on my machine. Since the reinstall, the audio has stopped working. I have installed all the device drivers for my motherboard, and the Realtek sound driver is also installed. Note: sound works when I run the surround-settings test in the Realtek program, but I am unable to play any other audio; I have tried playing files both from my local machine and from the internet.

    Read the article

  • Stop mp3 playback echoing, AS3

    - by pixelGreaser
    Hitting play more than once causes an echo, and I can't stop my mp3 player. What's the best practice for mp3 playback?

        var mySound:Sound = new Sound();
        playButton.addEventListener(MouseEvent.CLICK, myPlayButtonHandler);
        var myChannel:SoundChannel = new SoundChannel();

        function myPlayButtonHandler(e:MouseEvent):void {
            myChannel = mySound.play();
        }

        stopButton.addEventListener(MouseEvent.CLICK, onClickStop);
        function onClickStop(e:MouseEvent):void {
            myChannel.stop();
        }
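
    Not from the original post: a common fix is to keep a single playback handle and stop/rewind it before starting it again, instead of starting a new overlapping playback on every click. As an illustration only, here is that stop-before-replay pattern in Java with javax.sound.sampled, assuming a local file named beep.wav exists.

        import java.io.File;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        public class ReplaySketch {
            public static void main(String[] args) throws Exception {
                // One Clip is reused for every "play" request instead of creating a new one per click.
                Clip clip = AudioSystem.getClip();
                clip.open(AudioSystem.getAudioInputStream(new File("beep.wav")));

                for (int i = 0; i < 3; i++) {      // simulate three rapid play clicks
                    if (clip.isRunning()) {
                        clip.stop();               // stop the previous playback so sounds don't overlap
                    }
                    clip.setFramePosition(0);      // rewind to the beginning
                    clip.start();
                    Thread.sleep(500);
                }
                clip.drain();
                clip.close();
            }
        }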

    Read the article

  • Difference in Django object creation call

    - by PhilGo20
    I'd like to know if there's a difference between the following two calls to create an object in Django:

        Animal.objects.create(name="cat", sound="meow")

    and

        Animal(name="cat", sound="meow")

    I see both in test cases and I want to make sure I am not missing something. Thanks.

    Read the article

  • What things must I know about OpenAL memory management?

    - by mystify
    I am playing sound with OpenAL, and it seems to increase the memory footprint dramatically for every little sound I play. It seems that OpenAL never frees memory itself and that playing a Source causes the memory footprint to grow. I couldn't find any good resources about OpenAL memory management, but I bet I have to do a lot of the work myself. Maybe someone knows a resource for that?

    Read the article

  • Adobe Flash and mp3 licence

    - by Dovyski
    When I publish a Flash file that contains any sound (such as a WAV file), I can choose the sound compression method (MP3, raw, ADPCM, etc.). My question is about the mp3 compression and its licence. Flash gives me the option to compress a WAV file as mp3, but is the licence to use the mp3 format included? I have paid for a Flash licence; does it give me the right to use mp3 in my SWF files freely, or do I have to pay royalties to someone else?

    Read the article

  • How do I link ASP.NET membership/role users to tables in the DB?

    - by SnapConfig.com
    I am going to use forms authentication, but I want to be able to link the ASP.NET users to some tables in the database. For example, if I have a class and students (as roles), I'll have a class-students table. I'm planning to add a Users table containing a simple int userid and the ASP.NET username, and to use userid wherever I want to link the users. Does that sound good? Are there any other options for modeling this? It does sound a bit convoluted.

    Read the article

  • Webcam capture with c# and convert to avi

    - by Spidfire
    I'm trying to make a program that captures video from the webcam and sound from the microphone, but I'm getting stuck at the part where I try to make a movie out of still images. I've heard you need to use DirectShow, but it doesn't work for me yet. Does anyone know a good piece of example code that captures video and sound and can encode it to a file (DivX or something like that)? Or some suggestions on where to look so I can build it myself? (If another programming language is better for this, I'm happy to know it early.)

    Read the article

  • Is it possible to capture audio output and apply effects to it?

    - by Ciaran
    Using .NET and DirectSound, I want to be able to take all output sound that is coming from my audio device and apply effects to it. I've had a quick look at the docs on MSDN and there doesn't seem to be any explanation of how to do something like this. I've read elsewhere that you'd be better off writing a driver to sit in front of your real audio driver and have that do whatever you want with the sound. Can anyone push me in the right direction?

    Read the article

  • Device drivers and Windows

    - by b-gen-jack-o-neill
    Hi, I am trying to complete my picture of how the PC and the OS interact, and I am at a point where I am a little lost when it comes to device drivers. Please don't write things like "it's too complicated" or "you don't need to know this when using a high-level language and WinAPI functions". I want to know; it's for study purposes.

    As I see it, the very basic structure of how the OS and the PC (by PC I mean, of course, the hardware) interact is that everything other than direct CPU instructions, which the CPU can execute on its own (arithmetic operations, access to its registers, and memory access), must pass through the OS. This is mainly because, from ring level 3, you cannot use the in and out instructions that are used to access other hardware. I know there is MMIO, but it must first be set up via port communication.

    It was not always like this. Even though I am a bit young to remember MS-DOS, I know you could access hardware directly, because there was no limitation, no ring mode. So to write a string to the display you could either use a DOS function or access the video card memory directly and write it yourself. But as operating systems developed, this possibility went away. That is fine, since the OS now handles all the hardware communication, and frankly it is more convenient and much safer (I would say the only option) in a multitasking environment. So nowadays, instead of using int instructions to call BIOS-mapped functions or DOS functions, you call a DLL which internally handles everything you don't need to know about. I understand this.

    I also understand that a device driver is a piece of code that runs at ring level 0, so it can do all the hardware interaction. But what I don't understand is the connection between the OS and the device driver. Let's take an example: I want to make the sound card produce a sound. So I call a Windows API function to access the sound card, but what happens then? Does Windows call the device driver to do so? And if it does call the device driver, does that mean that all device drivers which can be called by WinAPI functions must have routines named in some specific way? I mean, when I get a new sound card, must its driver have functions named the same as the old one's, so that Windows can actually call the same function from its perspective? But if Windows has a predefined set of functions required from device drivers, then it could not use new drivers that did not exist before the last version of the OS came out. Please help me understand this mess. I am really getting mad. Thanks.

    Read the article
