Search Results

Search found 4684 results on 188 pages for 'indicator sound'.


  • HTML5 audio plays only once in my JavaScript code.

    - by Poul
    I have a dashboard web app that I want to play an alert sound when it is having trouble connecting. The site's Ajax code polls for data and throttles down its refresh rate if it can't connect. Once the server comes back up, the site continues working. In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is the code, and it works the first time:

        var error_audio = new Audio("audio/" + settings.refresh.error_audio);
        error_audio.load();

        // this gets called when there is a connection error
        function onConnectionError() {
            error_audio.play();
        }

    However, the second time through the function the audio doesn't play. Digging around in Chrome's debugger, the 'played' attribute on the audio element gets set to true; setting it back to false has no effect. Any ideas?

    Read the article

  • Python: How to make a cross-module function?

    - by Evan
    I want to be able to call a global function from an imported class. For example, in file PetStore.py:

        class AnimalSound(object):
            def __init__(self):
                if 'makenoise' in globals():
                    self.makenoise = globals()['makenoise']
                else:
                    self.makenoise = lambda: 'meow'

            def __str__(self):
                return self.makenoise()

    Then when I test in the Python interpreter:

        >>> def makenoise():
        ...     return 'bark'
        ...
        >>> from PetStore import AnimalSound
        >>> sound = AnimalSound()
        >>> sound.makenoise()
        'meow'

    I get a 'meow' instead of 'bark'. I have tried using the solutions provided in python-how-to-make-a-cross-module-variable with no luck.

    Read the article

  • HTML5 - Callback when media is ready on iPad won't work

    - by Kap
    I'm trying to add a callback to an HTML5 audio element on an iPad. I added an event listener to the element; myOtherThing() starts, but there is no sound. If I pause and then play the sound again, the audio starts. This works in Chrome. Does anyone have an idea how I can do this?

        myAudioElement.src = "path_to_file";
        addEventListener("canplay", function(){
            myAudioElement.play();
            myOtherThing.start();
        });

    SOLVED: Just wanted to share my solution here, in case someone else needs it. As far as I understand, the iPad does not trigger any media events without user interaction. So to be able to use "canplay", "playing" and all the other events, you need to use the built-in media controller. Once you press play in that controller, the events get triggered. After that you can use your custom interface.

    Read the article

  • GUI Control For Audio Presentation

    - by Boris
    I need a GUI control for audio file presentation. The language is not very important, but it should run on the Windows platform. I should be able to:
    - load the file
    - play the sound
    - put and move markers across the audio bar
    It would be nice if it could load itself from RTP Wireshark captures (and not WAV files). An example may be seen in Audacity (maybe someone has even had experience extracting it from there). Writing Nyquist scripts in Audacity is not a good option because I have to operate on RTP captures and not on raw sound samples. Another example of such a control is the Wireshark RTP analyzer. Any advice?

    Read the article

  • Strange beep when using cout

    - by Unknown
    Hello everyone, today while working on some code of mine I came across a beeping sound when printing a buffer to the screen. Here's the mysterious character that produces the beep: '' I don't know if you can see it, but my computer beeps when I try to print it like this:

        cout << (char)7 << endl;

    Another point of interest is that the beep doesn't originate from my onboard beeper, but from my headphones/speakers. Is this just my computer, or is there something wrong with the cout function? EDIT: But then why does printing this character produce the beep sound? Does that mean I could send other such characters through cout to produce different effects?
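
    For context, character 7 is the ASCII BEL control character; it is the terminal (or the OS alert sound it maps to), not cout, that turns it into a beep, which is why it comes from the speakers rather than the on-board beeper. A minimal cross-language sketch (Java here, purely to show the behavior is not specific to cout; assumes a console that honors BEL):

        // Prints the ASCII BEL (0x07) control character; most terminal
        // emulators respond with an audible alert, whatever language printed it.
        public class BelDemo {
            public static void main(String[] args) {
                System.out.print('\u0007');   // same byte as cout << (char)7
                System.out.flush();           // make sure it reaches the terminal
            }
        }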

    Read the article

  • Notifying when screen is off

    - by Al
    I'm trying to generate a notification which vibrates the phone and plays a sound when the screen is off (CPU turned off). According to the log messages, the notification is being sent, but the phone doesn't vibrate or play the sound until I turn the screen on again. I tried holding a 2-second temporary wake lock (PowerManager.PARTIAL_WAKE_LOCK), which I thought would be ample time for the notification to be played, but alas, it still doesn't work. Any pointers to get the notification to run reliably? I'm testing this on a G1 running Android 1.6. Code I'm using:

        notif.vibrate = new long[] {100, 1000};
        notif.defaults |= Notification.DEFAULT_SOUND;
        notif.ledARGB = Color.RED;
        notif.ledOnMS = 1;
        notif.ledOffMS = 0;
        notif.flags = Notification.FLAG_SHOW_LIGHTS;
        notif.flags |= NOTIF_FLAGS; // static var
        if (!screenOn) { // var which updates when the screen turns off/on
            mWakeLock.acquire(2000);
        }
        manager.notify(NOTIF_ID, notif);
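
    One pattern worth trying (a sketch, not a verified fix for the G1): acquire the partial wake lock before posting the notification rather than alongside it, and release it from a delayed handler once the sound and vibration have had time to play. NOTIF_ID and the 3-second hold below are assumptions; mWakeLock is the same PARTIAL_WAKE_LOCK as in the question.

        import android.app.Notification;
        import android.app.NotificationManager;
        import android.os.Handler;
        import android.os.PowerManager;

        // Sketch: keep the CPU awake across notify() and the audible part of
        // the notification, then release the lock. Call from the UI thread.
        public class WakefulNotifier {
            private static final int NOTIF_ID = 1;            // assumed id
            private final PowerManager.WakeLock mWakeLock;

            public WakefulNotifier(PowerManager pm) {
                mWakeLock = pm.newWakeLock(PowerManager.PARTIAL_WAKE_LOCK, "notif");
            }

            public void notifyWithWakeLock(NotificationManager manager, Notification notif) {
                mWakeLock.acquire();                          // acquire *before* posting
                manager.notify(NOTIF_ID, notif);
                new Handler().postDelayed(new Runnable() {
                    public void run() {
                        if (mWakeLock.isHeld()) {
                            mWakeLock.release();              // sound/vibration should be done by now
                        }
                    }
                }, 3000);
            }
        }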

    Read the article

  • SpeechBackground

    - by abinila
    Hi everyone, I have used the SpeechBackground application in Asterisk, version 1.6.0.6. I have an entry like:

        ;;SpeechCreate
        exten => s,1,SpeechCreate()
        exten => s,2,SpeechActivateGrammar(yesno)
        exten => s,3,SpeechStart()
        exten => s,4,SpeechBackground(demo-instruct)
        exten => s,5,SpeechDeactivateGrammar(yesno)

    I don't know which file I need to give to the SpeechBackground application. Please give me any idea. I have given the sound file from the /sounds directory. If I call 's', the call is immediately released and I don't get any audio. Can anyone help me?

    Read the article

  • Windows Mobile 6.5 SndPlayAsync - C# wrapper?

    - by dominolog
    Hello, I'm implementing MP3 playback on Windows Mobile 6.5. I need to use the SndPlayAsync API function, since I don't want to block the calling thread until the file has played (SndPlaySync blocks until the audio file has finished playing). Unfortunately, SndPlayAsync takes a sound handle instead of a sound file path as its parameter, so the handle has to be opened before and released after playback. The problem is that this API gives me no information about playback completion. Has anybody used a C# wrapper for this API? Where can I get one? I've looked at OpenNETCF but they don't seem to support this API. Regards

    Read the article

  • The right way to delete a file to the Trash in Snow Leopard using Cocoa?

    - by Irwan
    By "right way" I mean it must be able to "Put Back" in Finder and must not play a sound. Here are the methods I have tried so far:

        NSString *name = @"test.zip";
        NSArray *files = [NSArray arrayWithObject:name];
        NSWorkspace *ws = [NSWorkspace sharedWorkspace];
        [ws performFileOperation:NSWorkspaceRecycleOperation
                          source:@"/Users/"
                     destination:@""
                           files:files
                             tag:0];

    Downside: can't "Put Back" in Finder.

        OSStatus status = FSPathMoveObjectToTrashSync("/Users/test.zip", NULL, kFSFileOperationDefaultOptions);

    Downside: can't "Put Back" in Finder.

        tell application "Finder"
            set deletedfile to alias "Snow Leopard:Users:test.zip"
            delete deletedfile
        end tell

    Downside: plays a sound, which is annoying if I execute it repeatedly.

    Read the article

  • Problems with MATLAB wavrecord and wavread

    - by user504363
    Hi all, I have a problem in MATLAB. I want to record speech for 2 seconds, then read the recorded sound and plot it. I use this code:

        FS = 8000;
        new_wav = wavrecord(2*FS, FS, 'int16');
        x = wavread(new_wav);
        plot(x);

    but this error appears:

        ??? Error using ==> fileparts at 20
        Input must be a row vector of characters.
        Error in ==> wavread>open_wav at 193
        [pat,nam,ext] = fileparts(file);
        Error in ==> wavread at 65
        [fid,msg] = open_wav(file);
        Error in ==> test at 2
        x = wavread(new_wav);

    I have plotted recorded sound files correctly before, but when I want to record a new one through MATLAB I get these errors. I tried many ways, changing FS and 'int16', but nothing helps. Thanks

    Read the article

  • Is there any advantage to having more than 16GB of RAM on a Windows dev machine?

    - by Robert Kozak
    Assuming a machine (dual quad-core Xeon (2.26GHz) with 24GB RAM) running Windows Server 2008 and Hyper-V, how many VMs can I expect to run at the same time with good performance? Is this overkill? Can you really have too much RAM? Assuming 2GB per VM, that's around 16GB for the VMs with 8GB left over for the main OS and Hyper-V. Sound about right? Edit: Tried to make the question sound less like bragging. That was never my intention. It's a hard question to write.

    Read the article

  • Change Preference Item Summary Text Color in Android 4

    - by AntounG
    I have the below sample of preference items:

        <CheckBoxPreference
            android:key="chkSound"
            android:summary="Sound is Off"
            android:title="Sound" />

    I use a theme in res/values to change the summary text color:

        <style name="ThemeDarkText">
            <item name="android:textColor">#000000</item>
        </style>

    And in the code I write this line:

        setTheme(R.style.ThemeDarkText);

    It works fine on Android 2.1, but when I tried to run it on a different OS version (e.g. Android 4.0) it didn't change the summary text color, just the title color! Any help?
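
    One workaround worth trying (a sketch, not a confirmed fix for the 4.0 theming difference): Preference.setSummary() accepts any CharSequence, so the summary color can be forced per preference with a ForegroundColorSpan instead of relying on android:textColor from the theme. The key "chkSound" and the summary text are taken from the question.

        // Sketch (inside a PreferenceActivity, after the hierarchy is inflated).
        // Needs android.graphics.Color, android.preference.CheckBoxPreference,
        // android.text.SpannableString, android.text.style.ForegroundColorSpan.
        private void darkenSummary() {
            CheckBoxPreference sound = (CheckBoxPreference) findPreference("chkSound");
            SpannableString summary = new SpannableString("Sound is Off");
            summary.setSpan(new ForegroundColorSpan(Color.BLACK), 0, summary.length(), 0);
            sound.setSummary(summary);
        }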

    Read the article

  • form_for with json return

    - by Lowgain
    I currently have a form like this:

        <% form_for @stem, :html => {:multipart => true} do |f| %>
          <%= f.file_field :sound %>
        <% end %>

    This outputs (essentially):

        <form method="post" id="new_stem" class="new_stem" action="/stems">
          <input type="file" size="30" name="stem[sound]" id="stem_sound">
        </form>

    However, I'm planning to use jQuery's ajaxForm plugin here and would like the new stem to be returned in JSON format. I know that if the form's action were "/stems.json" this would work, but is there a parameter I can put into the form_for call to ask it to return JSON? I tried doing

        <% form_for @stem, :html => {:multipart => true, :action => '/stems.json'} do |f| %>

    but this didn't appear to work.

    Read the article

  • iPhone audio Filter Question

    - by Joe
    Okay, I am going to try to make this totally not a "plz send teh codez kthxbai" question. I am considering an app which takes a sound (eventually an audio track) and applies an audio filter to it. I can play sounds with AudioServicesPlaySystemSound via the AudioToolbox framework just fine. What I need is a very simple example of how I might take a sound and apply, for instance, a midrange boost. Actually, the kind of alteration is irrelevant -- if I can get my head around how the alteration is done, I can figure out the rest. I am just finding both docs and examples of altering audio in code to be very scarce. Thanks for any help!
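
    Since the question is about the mechanism rather than a particular API, here is a minimal, platform-neutral sketch (in Java, not Core Audio) of what a midrange boost does to raw samples: a biquad peaking-EQ filter run over a float buffer, with coefficients per the commonly cited Audio EQ Cookbook formulas. The sample rate, center frequency, Q and gain below are arbitrary assumptions.

        // Sketch: biquad peaking EQ (midrange boost) over PCM samples in [-1, 1].
        public class MidBoost {
            public static void boost(float[] samples, float sampleRate,
                                     float centerHz, float q, float gainDb) {
                double A = Math.pow(10.0, gainDb / 40.0);
                double w0 = 2.0 * Math.PI * centerHz / sampleRate;
                double alpha = Math.sin(w0) / (2.0 * q);
                double cosw0 = Math.cos(w0);
                // Peaking-EQ coefficients (Audio EQ Cookbook).
                double b0 = 1 + alpha * A, b1 = -2 * cosw0, b2 = 1 - alpha * A;
                double a0 = 1 + alpha / A, a1 = -2 * cosw0, a2 = 1 - alpha / A;

                double x1 = 0, x2 = 0, y1 = 0, y2 = 0;    // filter memory
                for (int i = 0; i < samples.length; i++) {
                    double x = samples[i];
                    double y = (b0 / a0) * x + (b1 / a0) * x1 + (b2 / a0) * x2
                             - (a1 / a0) * y1 - (a2 / a0) * y2;
                    x2 = x1; x1 = x;
                    y2 = y1; y1 = y;
                    samples[i] = (float) y;
                }
            }

            public static void main(String[] args) {
                float[] buf = {0.0f, 0.5f, -0.25f, 0.1f}; // stand-in for decoded audio
                boost(buf, 44100f, 1000f, 1.0f, 6.0f);     // +6 dB around 1 kHz
            }
        }

    The same structure (compute coefficients once, then run a two-pole/two-zero difference equation per sample) is essentially what higher-level filter APIs do under the hood.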

    Read the article

  • Android. Playing multiple sounds using SoundManager

    - by Jerry
    Shown are a few lines of code. If I play a single sound, it runs fine. Adding a second sound causes it to crash. Any advice is appreciated.

        private SoundManager mSoundManager;

        /** Called when the activity is first created. */
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.sos);
            mSoundManager = new SoundManager();
            mSoundManager.initSounds(getBaseContext());
            mSoundManager.addSound(1,R.raw.dit);
            mSoundManager.addSound(1,R.raw.dah);
            Button SoundButton = (Button)findViewById(R.id.SoundButton);
            SoundButton.setOnClickListener(new OnClickListener() {
                public void onClick(View v) {
                    mSoundManager.playSound(1);
                    mSoundManager.playSound(2);
                }
            });
        }
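
    For what it's worth, in the snippet above both raw resources are registered under index 1, while playSound(2) asks for an index that was never added -- the kind of mismatch that typically breaks SoundPool-based helpers. A minimal sketch of the intended registration/playback pairing (SoundManager and its addSound/playSound signatures are taken from the question's own code):

        // Sketch: give each sound its own index and play back by that index.
        mSoundManager = new SoundManager();
        mSoundManager.initSounds(getBaseContext());
        mSoundManager.addSound(1, R.raw.dit);   // index 1 -> "dit"
        mSoundManager.addSound(2, R.raw.dah);   // index 2 -> "dah", not a second entry under 1

        // later, e.g. in the click listener:
        mSoundManager.playSound(1);
        mSoundManager.playSound(2);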

    Read the article

  • Using UIImageViews for 'pages' in an iPhone/iPad storybook app?

    - by outtoplayinc
    I'm new to iPhone programming, and what seems obvious to me may seem silly to a seasoned coder. I did a few 'switching views' tutorials on YouTube, and basically they seem to work nicely for adding pages to a storybook-type app: you add a UIViewController and associated view for each page. My question is whether this would become insanely slow, or a memory hog, if I continued this method for say 35+ pages. Each page would also have a sound file associated with it that would play narration when the page loads and stop when we leave. Basically, think of a PowerPoint-type app with sound, possibly animated image elements, and next & back buttons. I'm probably thinking of this very simplistically, but that's where my experience is at for the moment. Any insight or tips as to better and/or more efficient ways to proceed would be greatly appreciated.

    Read the article

  • What algorithm would you use to code a parrot?

    - by Phil H
    A parrot learns the most commonly uttered words and phrases in its vicinity so it can repeat them at inappropriate moments. So how would you create a software version? Assuming it has access to a microphone and can record sound at will, how would you code it without requiring infinite resources? The best I can imagine is to divide the stream using silences in the sound, and then use some pattern recognition to encode each one as a list of tokens, storing new ones as you meet them. Hashing the token sequences and counting occurrences in a database, you could build up a picture of the most frequently uttered phrases. But given the huge variety in phrases, how do you prevent this just becoming a huge list? And the sheer number of pairs to match would surely generate a lot of false positives from the combinatorial nature of matching. Would you use a neural net, since that's how a real parrot manages it? Or is there another, cleverer way of matching large-scale patterns in analogue data?
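
    As a concrete version of the counting idea sketched in the question: a bounded frequency table over token sequences, pruning rare entries whenever the table grows past a cap, keeps the "huge list" in check. A minimal sketch follows (the token representation, cap and pruning threshold are assumptions; real tokens would come from whatever acoustic front end does the segmentation):

        import java.util.HashMap;
        import java.util.Iterator;
        import java.util.List;
        import java.util.Map;

        // Sketch: count how often token sequences ("phrases") occur, pruning
        // rarely seen entries so memory stays bounded.
        public class PhraseCounter {
            private final Map<String, Integer> counts = new HashMap<>();
            private final int maxEntries;

            public PhraseCounter(int maxEntries) {
                this.maxEntries = maxEntries;
            }

            public void observe(List<String> tokens) {
                String key = String.join("\u0001", tokens);   // sequence -> single map key
                counts.merge(key, 1, Integer::sum);
                if (counts.size() > maxEntries) {
                    prune(2);                                 // drop anything seen only once
                }
            }

            private void prune(int minCount) {
                Iterator<Map.Entry<String, Integer>> it = counts.entrySet().iterator();
                while (it.hasNext()) {
                    if (it.next().getValue() < minCount) {
                        it.remove();
                    }
                }
            }

            public Map<String, Integer> mostUttered() {
                return new HashMap<>(counts);                 // survivors are the frequent phrases
            }
        }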

    Read the article

  • how to use a wav file in eclipse

    - by AlphaAndOmega
    I've been trying to add audio to a project I've been working on. I found some code on here, originally for HTML, that is also supposed to work with a file, but it keeps saying:

        Exception in thread "main" javax.sound.sampled.UnsupportedAudioFileException:
        could not get audio input stream from input file
            at javax.sound.sampled.AudioSystem.getAudioInputStream(Unknown Source)
            at LoopSound.main(LoopSound.java:15)

    The code:

        public class LoopSound {
            public static void main(String[] args) throws Throwable {
                File file = new File("c:\\Users\\rabidbun\\Pictures\\10177-m-001.wav");
                Clip clip = AudioSystem.getClip();
                // getAudioInputStream() also accepts a File or InputStream
                AudioInputStream wav = AudioSystem.getAudioInputStream(file);
                clip.open(wav);
                // loop continuously
                clip.loop(-1000);
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        // A GUI element to prevent the Clip's daemon Thread
                        // from terminating at the end of the main()
                        JOptionPane.showMessageDialog(null, "Close to exit!");
                    }
                });
            }
        }

    What is wrong with the code?
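
    Since the UnsupportedAudioFileException is thrown before any playback starts, the usual suspect is the file itself (a compressed or mislabeled "wav") rather than the Clip code. A small probe like the following sketch (same path as in the question) shows whether javax.sound can parse the header at all and what encoding it reports:

        import java.io.File;
        import javax.sound.sampled.AudioFileFormat;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.UnsupportedAudioFileException;

        // Sketch: ask javax.sound what it thinks the file is before opening a Clip.
        public class WavProbe {
            public static void main(String[] args) throws Exception {
                File file = new File("c:\\Users\\rabidbun\\Pictures\\10177-m-001.wav");
                try {
                    AudioFileFormat fmt = AudioSystem.getAudioFileFormat(file);
                    System.out.println("Container: " + fmt.getType());
                    System.out.println("Encoding:  " + fmt.getFormat().getEncoding());
                    System.out.println("Rate/bits: " + fmt.getFormat().getSampleRate()
                            + " Hz, " + fmt.getFormat().getSampleSizeInBits() + " bit");
                } catch (UnsupportedAudioFileException e) {
                    // The header isn't one javax.sound understands; re-exporting the
                    // sound as plain 16-bit PCM WAV is the usual way around this.
                    System.out.println("javax.sound cannot read this file: " + e.getMessage());
                }
            }
        }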

    Read the article

  • absolute audio synchronization

    - by user1780526
    I would like to synchronize my computer with an external camcorder recording so that I can know exactly (to the millisecond) when certain recorded events happen with respect to other sensors logged by the computer. One idea is to play back short sound pulses or chirps every second from the computer that get picked up by the microphone on the camcorder. But the accuracy of a simple cron job playing a sound clip is not precise enough. I was thinking of using something like GStreamer, but how does one get it to play back a clip at precisely a certain time according to the system clock?
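
    One way to get well under cron granularity, sketched here in plain Java rather than GStreamer (the chirp.wav file and the once-per-second schedule are assumptions): pre-open the clip, sleep until just before the next wall-clock second, spin for the last few milliseconds, then start playback and log the timestamp at which start() returned so any residual offset can be corrected when aligning the recording.

        import java.io.File;
        import javax.sound.sampled.AudioSystem;
        import javax.sound.sampled.Clip;

        // Sketch: fire a short chirp as close as possible to each wall-clock
        // second and log when playback was actually started.
        public class ChirpSync {
            public static void main(String[] args) throws Exception {
                Clip clip = AudioSystem.getClip();
                clip.open(AudioSystem.getAudioInputStream(new File("chirp.wav"))); // hypothetical file

                while (true) {
                    long next = (System.currentTimeMillis() / 1000 + 1) * 1000;  // next second boundary
                    long coarse = next - System.currentTimeMillis() - 5;
                    if (coarse > 0) Thread.sleep(coarse);                         // coarse wait
                    while (System.currentTimeMillis() < next) { /* spin the last few ms */ }

                    clip.setFramePosition(0);
                    clip.start();
                    System.out.println("chirp started at " + System.currentTimeMillis() + " ms");
                }
            }
        }

    Note that the sound card and OS still add a few milliseconds of output latency, so logging the start timestamp (and measuring that latency once) matters as much as the scheduling itself.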

    Read the article

  • Background audio not working in Windows 8 Store / Metro app

    - by roryok
    I've tried setting background audio both through a MediaElement in XAML:

        <MediaElement x:Name="MyAudio"
                      Source="Assets/Sound.mp3"
                      AudioCategory="BackgroundCapableMedia"
                      AutoPlay="False" />

    and programmatically:

        async void setUpAudio()
        {
            var package = Windows.ApplicationModel.Package.Current;
            var installedLocation = package.InstalledLocation;
            var storageFile = await installedLocation.GetFileAsync("Assets\\Sound.mp3");
            if (storageFile != null)
            {
                var stream = await storageFile.OpenAsync(Windows.Storage.FileAccessMode.Read);
                _soundEffect = new MediaElement();
                _soundEffect.AudioCategory = AudioCategory.BackgroundCapableMedia;
                _soundEffect.AutoPlay = false;
                _soundEffect.SetSource(stream, storageFile.ContentType);
            }
        }

        // and later...
        _soundEffect.Play();

    But neither works for me. As soon as I minimise the app, the music fades out.

    Read the article
