Search Results

Search found 8494 results on 340 pages for 'movie sound creation'.

Page 126/340

  • Listening to the iPhone mic with SCListener and playing music at the same time: how?

    - by Eamon Ford
    Hello, I am using Stephen Celis' SCListener class (for iPhone) to "listen" to the microphone, but I also need to be playing music at the same time using the MediaPlayer framework. However, when I start listening with SCListener, the music fades out and stops. I have set the kAudioSessionCategory_PlayAndRecord property on the audio session in SCListener, which should allow me to play and record audio at the same time, but as far as I can tell it has no effect. I'm confused, because according to other developers' results this works just fine, but not for me. I'm thinking maybe the kAudioSessionCategory_PlayAndRecord property allows you to play sound while recording if you're using the AVAudioPlayer framework or similar to play the sound, but not the MediaPlayer framework. That would be a problem for me because I need to play music from the user's iPod library, which, as far as I know, is only possible using the MediaPlayer framework. Does anyone know how I can get around this problem? Thanks in advance!
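
    A hedged sketch of the session setup usually checked first, assuming SCListener's audio-session code can be edited (it uses the same C-based Audio Session API the property name above comes from, and assumes AudioSessionInitialize has already run). Note that even with these settings, iPod-library playback through the MediaPlayer framework may still duck or stop, which would confirm the suspicion above:

        #include <AudioToolbox/AudioToolbox.h>

        // Ask for simultaneous playback and recording...
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                                sizeof(category), &category);

        // ...and explicitly allow mixing with other audio, which
        // PlayAndRecord alone does not grant.
        UInt32 mixWithOthers = 1;
        AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryMixWithOthers,
                                sizeof(mixWithOthers), &mixWithOthers);

        AudioSessionSetActive(true);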

    Read the article

  • How to play multiple audio sources simultaneously in Silverlight

    - by Shurup
    I want to play multiple audio sources simultaneously in Silverlight. I've created a prototype in Silverlight 4 that plays two mp3 files containing the same tick sound at a one-second interval, so the two files should sound as one if they are started with any whole-second offset (0 and 1, 0 and 2, 1 and 1 seconds, etc.). In my prototype I use two MediaElement objects (me and me2):

        DateTime startTime;

        private void Play_Clicked(object sender, RoutedEventArgs e)
        {
            me.SetSource(new FileStream(file1, FileMode.Open));
            me2.SetSource(new FileStream(file2, FileMode.Open));
            var timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(1) };
            timer.Tick += RefreshData;
            timer.Start();
        }

    The first file should start playing at 00:00 and the second at 00:02:

        void RefreshData(object sender, EventArgs e)
        {
            if (me.CurrentState != MediaElementState.Playing)
            {
                startTime = DateTime.Now;
                me.Play();
                return;
            }
            var elapsed = DateTime.Now - startTime;
            if (me2.CurrentState != MediaElementState.Playing && elapsed >= TimeSpan.FromSeconds(2))
            {
                me2.Play();
                ((DispatcherTimer)sender).Stop();
            }
        }

    The tracks play back differently every time and not simultaneously (as one sound) as they should.

    Addition: I've tested the code from Bobby's answer.

        private void Play_Clicked(object sender, RoutedEventArgs e)
        {
            me.SetSource(new FileStream(file1, FileMode.Open));
            me2.SetSource(new FileStream(file2, FileMode.Open));

            // This plays well enough:
            // me.Play();
            // me2.Play();

            // But adding the 2-second offset using the timer,
            // they do not play simultaneously.
            var timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(2) };
            timer.Tick += (source, arg) =>
            {
                me2.Play();
                ((DispatcherTimer)source).Stop();
            };
            timer.Start();
        }

    Is it possible to play them together using only one MediaElement, or with an implementation of MediaStreamSource that can play multiple sources?

    Read the article

  • Record AVAudioPlayer output using AVAudioRecorder

    - by Kieran
    In my app the user plays a sound by pressing a button. There are several buttons which can be played simultaneously. The sounds are played using AVAudioPlayer instances, and I want to record the output of these instances using AVAudioRecorder. I have set it all up: a file is created and records, but when I play it back there is no sound at all, just a silent file the length of the recording. Does anyone know if there is a setting I am missing with AVAudioPlayer or AVAudioRecorder? Thanks
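
    One commonly checked setting, offered as a sketch rather than a guaranteed fix: the session category must permit recording while playing. Note that AVAudioRecorder captures the input route (the microphone), not the app's own output mix, which on its own can explain a silent file when the device's mic hears nothing:

        NSError *error = nil;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        // Allow playback and recording in the same session; set this
        // before creating the AVAudioRecorder.
        [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
        [session setActive:YES error:&error];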

    Read the article

  • "RFC 2833 RTP Event" Consecutive Events and the E "End" Bit

    - by brian_d
    Hello, I can send out an RFC 2833 DTMF event as outlined in http://www.ietf.org/rfc/rfc2833.txt. When I include the E "End" bit but leave it as 0, I get the following behaviour: if, for example, the keys 7874556332111111145855885#3 were pressed, ALL events are sent and show up in a program like Wireshark, but only 87456321458585#3 would sound. So the first key (which I figure could be a separate issue) and any repeats of an event (i.e. 11111) fail to sound. In section 3.9, figure 2 of the linked document, they give a 911 example in which all but the last event have the E bit set. When I set the bit for all numbers, I never get an event to sound. I have thought of a couple of possible causes but do not know if either is the reason: 1) Figure 2 shows payload types 96 and 97 being sent; I have not done this, nor do I know exactly how. In section 3.8, these are described as "the dynamic payload types 96 and 97 have been assigned for the redundancy mechanism and the telephone event payload respectively". 2) In section 3.5, under "E:", "A sender MAY delay setting the end bit until retransmitting the last packet for a tone, rather than on its first transmission". Does anyone have an idea of how to actually do this? I have also fiddled with timestamp intervals and the RTP marker. Any help is greatly appreciated. Here is a sample Wireshark capture of the relevant areas:

        6590 31.159045000 xx.x.x.xxx --.--.---.-- RTP EVENT Payload type=RTP Event, DTMF Pound # (end)
        Real-Time Transport Protocol
            Stream setup by SDP (frame 6225)
                Setup frame: 6225
                Setup Method: SDP
            10.. .... = Version: RFC 1889 Version (2)
            ..0. .... = Padding: False
            ...0 .... = Extension: False
            .... 0000 = Contributing source identifiers count: 0
            0... .... = Marker: False
            Payload type: telephone-event (101)
            Sequence number: 0
            Extended sequence number: 65536
            Timestamp: 0
            Synchronization Source identifier: 0x15f27104 (368210180)
        RFC 2833 RTP Event
            Event ID: DTMF Pound # (11)
            1... .... = End of Event: True
            .0.. .... = Reserved: False
            ..00 0000 = Volume: 0
            Event Duration: 2048
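
    For reference, RFC 2833 requires each new event, including a repeat of the same digit, to start with a fresh RTP timestamp and the RTP marker bit set on its first packet; reusing the previous timestamp (the capture above shows Timestamp: 0 and Marker: False) is the usual culprit when runs like 11111 collapse into one tone. A sketch of the four-byte telephone-event payload from section 3.5, in network byte order; the helper below is illustrative, not from the question:

        #include <stdint.h>

        /* RFC 2833 telephone-event payload: interior packets of a tone keep
         * the same RTP timestamp while 'duration' grows; the retransmitted
         * final packet sets the E bit. */
        struct rfc2833_event {
            uint8_t  event;       /* 0-9 = digits, 10 = '*', 11 = '#'          */
            uint8_t  e_r_volume;  /* E bit (0x80) | reserved | volume (0x3F)   */
            uint16_t duration;    /* in RTP timestamp units, big-endian        */
        };

        /* Hypothetical helper: mark the last packet of a tone. */
        static void set_end_bit(struct rfc2833_event *ev) {
            ev->e_r_volume |= 0x80;
        }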

    Read the article

  • Convert byte array to wav file

    - by Eyla
    I'm trying to play a wav sound stored in a byte array called bytes. I know that I should convert the byte array to a wav file, save it to my local drive, and then play the saved file, but I have not been able to do the conversion. Please help me with sample code to convert a byte array of wav sound into a wav file. Here is my code:

        protected void Button1_Click(object sender, EventArgs e)
        {
            byte[] bytes = GetbyteArray();
            // missing code to convert the byte array to a wav file
            // ("myfile" below is the intended saved file)
            System.Media.SoundPlayer myPlayer = new System.Media.SoundPlayer(myfile);
            myPlayer.Stream = new MemoryStream();
            myPlayer.Play();
        }
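
    For context, a minimal sketch of one common approach, assuming the array already contains a complete WAV file (RIFF header included): SoundPlayer can stream it from memory, so no disk file is needed at all. GetbyteArray is the asker's method; everything else is standard System.Media/System.IO:

        protected void Button1_Click(object sender, EventArgs e)
        {
            byte[] bytes = GetbyteArray();
            // Works only if 'bytes' is a full WAV file, RIFF header included;
            // raw PCM samples would first need a 44-byte header written in front.
            using (var stream = new MemoryStream(bytes))
            using (var myPlayer = new System.Media.SoundPlayer(stream))
            {
                myPlayer.PlaySync(); // synchronous, so the stream outlives playback
            }
        }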

    Read the article

  • Capture Flash Audio in 4.7 Edge?

    - by emcmanus
    Is there a way to capture plugin (Flash) audio before it gets to the sound card? I'd like to record plugin audio, ideally without actually playing the sound. Capturing audio at the device level is an absolute last resort, as the application would pick up all system audio rather than just the WebKit plugin. I'm aware of the recent switch back from QtMultimedia; is this possible with Phonon? I spent the night looking for some way to access the Phonon graph via QWebFrame (or any of the QtWebKit widgets) and didn't turn up much. I also started digging through QtWebKit, particularly NPAPI, without success. For reference, I'm using the edge version of 4.7 (6aa50af000f85cc4497749fcf0860c8ed244a60e). This seems to be a fairly challenging problem. Any hints would be greatly appreciated.

    Read the article

  • Android MediaPlayer crashing app

    - by user1555863
    I have an Android app with a button that plays a sound. The code for playing the sound:

        if (mp != null) {
            mp.release();
        }
        mp = MediaPlayer.create(this, R.raw.match);
        mp.start();

    mp is a field in the activity:

        public class Game extends Activity implements OnClickListener {
            /** Called when the activity is first created. */
            // variables:
            MediaPlayer mp;
            // ...

    The app runs OK, but after clicking the button about 200 times on the emulator the app crashed and gave me this error: https://dl.dropbox.com/u/5488790/error.txt (couldn't figure out how to post it here so it would appear decently). I am assuming this is because the MediaPlayer objects are consuming too much memory, but isn't mp.release() supposed to take care of this? What am I doing wrong here?
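
    For short sounds triggered repeatedly, SoundPool is the usual alternative to recreating a MediaPlayer per click: the clip is decoded once and replayed without new allocations. A hedged sketch using the pre-Lollipop constructor (field placement is illustrative):

        // Load once, e.g. in onCreate():
        SoundPool pool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
        int matchId = pool.load(this, R.raw.match, 1);

        // In the button's click handler (no per-click allocation):
        pool.play(matchId, 1.0f, 1.0f, 1, 0, 1.0f);

        // When the activity is destroyed:
        pool.release();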

    Read the article

  • Making a DVD video with a still image and PCM 16bit audio with ffmpeg

    - by João
    I'm trying to make a small video with a still image and a sound file playing in the background, to pass to dvdauthor to create a DVD. The command I'm using is this:

        ffmpeg -loop_input -i image.jpg -qscale 2 -i song.flac -aspect 4:3 -target pal-dvd -acodec pcm_s16le -shortest output.mpg

    However, the resulting video file doesn't have sound at all (testing it in VLC Player). I don't know if "-acodec pcm_s16le" cannot be combined with "-target pal-dvd" to override the latter, or if there is something else wrong with the command. If I try without the "-acodec pcm_s16le" parameter, the video and audio work; I can even create a DVD ISO with it. However, the audio stays as AC3, and I wanted the video to carry the lossless audio, not a compressed one. I believe the DVD standard allows PCM audio, am I right?

    Read the article

  • How to Access a Private Variable?

    - by SoulBeaver
    This question isn't meant to sound as blatantly insulting as it probably is right now. This is a homework assignment, and the spec sheet is scarce and poorly designed, to say the least. We have a function:

        double refuel( int liter, GasStation *gs )
        {
            // TODO: Access private variable MaxFuel of gs and decrement.
        }

    Sounds simple enough? It should be, but the class GasStation comes with no function that accesses the private variable MaxFuel. So how can I access it anyway from the function refuel? I'm not considering creating a function setFuel( int liter ) because the teacher always complains rather energetically if I change his specification. So I guess I have to do some sort of hack around it, but I'm not sure how to go about this without explicitly changing the only function in GasStation and giving it a parameter so that I can call it here. Any hints perhaps?
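
    One textbook route, assuming the spec tolerates adding a single declaration to GasStation: a friend declaration grants refuel access to private members without introducing the forbidden setter. A sketch, with the member's type assumed:

        class GasStation {
            // Grants this one free function access to private members.
            friend double refuel(int liter, GasStation *gs);
        private:
            double MaxFuel;
        };

        double refuel(int liter, GasStation *gs) {
            gs->MaxFuel -= liter; // legal here: refuel is a friend
            return gs->MaxFuel;
        }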

    Read the article

  • Android PCM Bytes

    - by Pintac
    Hi, I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM bytes into decibels. I have a formula that takes sound pressure in Pa into dB: db = 20 * log10(Pa / ref Pa). So the question is: the bytes I am getting from AudioRecord's buffer, what are they, amplitude, pascals, sound pressure, or what? I tried putting the values into the formula but it comes back with very high dB values, so I do not think that's right. Thanks
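
    For context: AudioRecord delivers raw amplitude samples (for 16-bit PCM, integers in -32768..32767), not calibrated pressure in pascals, so without the microphone's sensitivity you can only compute level relative to full scale (dBFS), not absolute SPL. A sketch of that calculation; the helper name is illustrative:

        // Illustrative helper: RMS level of a buffer of 16-bit PCM samples
        // from AudioRecord, expressed in dB relative to full scale (dBFS).
        static double rmsDbfs(short[] samples, int count) {
            double sumOfSquares = 0;
            for (int i = 0; i < count; i++) {
                double s = samples[i] / 32768.0;      // normalise to [-1, 1)
                sumOfSquares += s * s;
            }
            double rms = Math.sqrt(sumOfSquares / count);
            return 20.0 * Math.log10(rms);            // 0 dBFS = full scale
        }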

    Read the article

  • Is a MacBook powerful enough for iPad development? Or do I need a MacBook Pro?

    - by ronaldwidha
    The title probably says it all. Considering that an iPad's processor is nothing compared to a MacBook's, I would think a MacBook should be more than capable of running the simulator. However, not knowing much about iPhone/iPad development, I'd like to get some opinions on this: e.g., how many apps typically need to be run for iPad development (editor, debugger, perf monitor, trace log, etc.), and are these apps resource (memory, CPU) intensive? Please do not take into consideration the actual image, 3D, video, and sound production; I understand one would need quite a beefy machine to produce those kinds of creative assets. What I'm looking at is a machine for code development, physics, and putting together the produced assets (images, vector graphics, 3D, video, sound, etc.).

    Read the article

  • How can I break from a method prematurely that's being called by NSTimer

    - by jammur
    Basically I'm writing a metronome app, but I'm using a sound file that, depending on the BPM, might not have finished playing when the "play" method is called again. For example, if the sound file is 0.5 seconds long but the BPM is 200, the "play" method needs to be called every 0.3 seconds. I'm not overly familiar with NSTimer, but it appears that if it is supposed to fire before the previous invocation has completed, it doesn't, and just waits for the next time around. I could be completely wrong about that, though. What I need is for the previous invocation to end prematurely and the "play" method to be called again when the timer fires. Any help would be appreciated!
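
    If the click is played with AVAudioPlayer (an assumption; the question doesn't say which API is in use), there is no need to break out of the previous invocation: stopping and rewinding the player at the top of the timer callback cuts the previous tick short. A sketch, with property names assumed:

        // Fired by the NSTimer every beat. Stopping first ends the previous
        // playback prematurely; rewinding makes the next play start from 0.
        - (void)tick:(NSTimer *)timer {
            [self.player stop];          // harmless if nothing is playing
            self.player.currentTime = 0; // rewind the 0.5 s click file
            [self.player play];
        }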

    Read the article

  • Preload *.wav with SystemSoundID?

    - by fuzzygoat
    I am playing a wav file to give a little audio feedback when a button in my UI is pressed. My question: the first time the button is pressed there is a delay (about 1.5 secs) while the sound file "sound.wav" is loaded and cached. Is there a way to pre-cache this file (maybe in my viewDidLoad)? I guess I could do it by just playing it in viewDidLoad, but I would really need to disable the audio so it does not "beep" each time the app starts. Many thanks for any help. gary

    EDIT: Looks like my question is a duplicate of this post, unless anyone has any new info? Maybe a way to turn the play volume down temporarily, unless the audio is cleared each time through the run loop.
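
    For what it's worth, a sketch of the usual pre-caching route with System Sound Services: AudioServicesCreateSystemSoundID only registers the file and plays nothing, so calling it in viewDidLoad removes the first-press delay with no audible beep (the _soundID instance variable of type SystemSoundID is assumed):

        #import <AudioToolbox/AudioToolbox.h>

        // In viewDidLoad: registering the sound is silent; nothing plays
        // until AudioServicesPlaySystemSound(_soundID) runs in the handler.
        NSURL *url = [[NSBundle mainBundle] URLForResource:@"sound"
                                             withExtension:@"wav"];
        AudioServicesCreateSystemSoundID((CFURLRef)url, &_soundID);

        // When the view goes away:
        AudioServicesDisposeSystemSoundID(_soundID);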

    Read the article

  • Voice Communication over TCP/IP

    - by Micha
    Hello, I'm currently developing an application using DirectSound for communication on an intranet. I had a working solution using UDP, but then my boss told me he wants to use TCP/IP for some reason. I've tried to implement it in pretty much the same way as UDP, but with very little success. What I get is basically just noise: 20% of it is the recorded sound and the rest is just weird noise. My guess is that with TCP I need to keep reading until I have accumulated a complete chunk of audio before I can play it. Now two questions: am I on the right track? And is it even a good idea to use TCP/IP for this kind of application (voice conferencing of sorts)? I'm doing it in C#, but I don't think this is language specific.
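
    One thing worth noting, offered as a sketch rather than a diagnosis: TCP is a byte stream with no message boundaries, so a single Read can return part of one audio packet or pieces of several, and treating each Read result as a whole frame produces exactly this kind of noise. A common fix is to length-prefix each frame and loop until it has fully arrived (helper names are illustrative):

        using System.IO;
        using System.Net.Sockets;

        static byte[] ReadFrame(NetworkStream stream)
        {
            // Each frame is sent as a 4-byte little-endian length, then the payload.
            byte[] header = ReadExact(stream, 4);
            int length = BitConverter.ToInt32(header, 0);
            return ReadExact(stream, length);
        }

        static byte[] ReadExact(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            int offset = 0;
            while (offset < count)   // one Read may return fewer bytes than asked
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new EndOfStreamException();
                offset += read;
            }
            return buffer;
        }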

    Read the article

  • Html5 Audio plays only once in my Javascript code.

    - by Poul
    I have a dashboard web app that I want to play an alert sound when it's having problems connecting. The site's ajax code polls for data and throttles down its refresh rate if it can't connect; once the server comes back up, the site continues working. In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is that code, which works once:

        var error_audio = new Audio("audio/" + settings.refresh.error_audio);
        error_audio.load();

        // this gets called when there is a connection error
        function onConnectionError() {
            error_audio.play();
        }

    However, the second time through the function the audio doesn't play. Digging around in Chrome's debugger, the 'played' attribute on the audio element is set to true, and setting it to false has no effect. Any ideas?
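
    A sketch of the usual fix: an Audio element that has finished playing stays parked at the end of the clip, so rewinding it before each play lets the same element fire again (same variable names as above):

        function onConnectionError() {
            error_audio.currentTime = 0; // rewind; a finished element sits at its end
            error_audio.play();
        }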

    Read the article

  • Geocoding non-addresses: Geopy

    - by Phil Donovan
    Using geopy to geocode alcohol outlets in NZ. The problem I have is that some places do not have street addresses but are places in Google Maps. For example, plugging

        Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlborough 7250

    into Google Maps via the browser GUI finds the place. However, using it in geopy I get a GQueryError saying this geographic location does not exist. Here is the code for geocoding:

        def GeoCode(address):
            g = geocoders.Google(domain="maps.google.co.nz")
            geoloc = g.geocode(address, exactly_one=False)
            place, (lat, lng) = geoloc[0]
            GeoOut = []
            GeoOut.extend([place, lat, lng])
            return GeoOut

        GeoCode("Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlborough 7250")

    Meanwhile, I notice that "Eiffel Tower" works fine. Is there a way to solve this, and can someone explain the difference between the Eiffel Tower and Furneaux Lodge within Google 'locations'?

    Read the article

  • Python How to make a cross-module function?

    - by Evan
    I want to be able to call a global function from an imported class. For example, in file PetStore.py:

        class AnimalSound(object):
            def __init__(self):
                if 'makenoise' in globals():
                    self.makenoise = globals()['makenoise']
                else:
                    self.makenoise = lambda: 'meow'

            def __str__(self):
                return self.makenoise()

    Then when I test in the Python interpreter:

        >>> def makenoise():
        ...     return 'bark'
        ...
        >>> from PetStore import AnimalSound
        >>> sound = AnimalSound()
        >>> sound.makenoise()
        'meow'

    I get a 'meow' instead of 'bark'. I have tried the solutions provided in python-how-to-make-a-cross-module-variable with no luck.
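
    For context, globals() is per-module in Python: inside PetStore.py it is always PetStore's own namespace, never the interpreter session's, which is why the 'meow' fallback runs. A sketch of one fix, passing the callable in explicitly instead of probing globals():

        # PetStore.py -- inject the noise-maker rather than look it up
        class AnimalSound(object):
            def __init__(self, makenoise=None):
                # fall back to the default noise when nothing is supplied
                self.makenoise = makenoise if makenoise is not None else (lambda: 'meow')

            def __str__(self):
                return self.makenoise()

        sound = AnimalSound(makenoise=lambda: 'bark')
        print(sound)  # prints 'bark'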

    Read the article

  • HTML5 - Callback when media is ready on iPad won't work

    - by Kap
    I'm trying to add a callback to an HTML5 audio element on an iPad. I added an event listener to the element; myOtherThing() starts, but there is no sound. If I pause and then play the sound again, the audio starts. This works in Chrome. Does anyone have an idea how I can do this?

        myAudioElement.src = "path_to_file";
        myAudioElement.addEventListener("canplay", function() {
            myAudioElement.play();
            myOtherThing.start();
        });

    SOLVED: Just wanted to share my solution here, in case someone else needs it. As far as I understand, the iPad does not trigger any events without user interaction. So to be able to use "canplay", "playing" and all the other events, you need to use the built-in media controller. Once you press play in that controller, the events get triggered; after that you can use your custom interface.

    Read the article

  • GUI Control For Audio Presentation

    - by Boris
    I need a GUI control for audio file presentation. The language is not very important, but it should run on the Windows platform. I should be able to:

      - load the file
      - play the sound
      - put and move markers across the audio bar

    It would be nice if it could load itself from RTP Wireshark captures (and not wav files). An example may be seen in Audacity (maybe someone has even had experience extracting it from there). Writing Nyquist scripts in Audacity is not a good option because I have to operate on RTP captures and not on raw sound samples. Another example of such a control is Wireshark's RTP analyzer. Any advice?

    Read the article

  • Notifying when screen is off

    - by Al
    I'm trying to generate a notification which vibrates the phone and plays a sound when the screen is off (CPU turned off). According to the log messages, the notification is being sent, but the phone doesn't vibrate or play the sound until I turn the screen on again. I tried holding a 2-second temporary wakelock (PowerManager.PARTIAL_WAKE_LOCK), which I thought would be ample time for the notification to play, but alas, it still doesn't. Any pointers to get the notification to run reliably? I'm testing this on a G1 running Android 1.6. Code I'm using:

        notif.vibrate = new long[] {100, 1000};
        notif.defaults |= Notification.DEFAULT_SOUND;
        notif.ledARGB = Color.RED;
        notif.ledOnMS = 1;
        notif.ledOffMS = 0;
        notif.flags = Notification.FLAG_SHOW_LIGHTS;
        notif.flags |= NOTIF_FLAGS; // static var

        if (!screenOn) { // var which updates when screen turns off/on
            mWakeLock.acquire(2000);
        }
        manager.notify(NOTIF_ID, notif);

    Read the article

  • Strange beep when using cout

    - by Unknown
    Hello everyone, today while working on some code of mine I came across a beeping sound when printing a buffer to the screen. Here's the mysterious character that produces the beep: ''. I don't know if you can see it, but my computer beeps when I try to print it like this:

        cout << (char)7 << endl;

    Another point of interest is that the beep doesn't originate from my onboard beeper, but from my headphones/speakers. Is this just my computer, or is there something wrong with the cout function?

    EDIT: But then why does printing this character produce the beep sound? Does that mean I could send other such characters through cout to produce different effects?
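
    For context (editorial note, not part of the original question): character 7 is the ASCII BEL control character. cout just writes the byte; the terminal decides how to render it, and modern terminals typically route the alert through the default audio device rather than the PC speaker, which explains the headphones. C++ spells it '\a', and yes, other control characters have other terminal effects:

        #include <iostream>

        int main() {
            std::cout << '\a';  // '\a' == (char)7, ASCII BEL: the terminal beeps
            std::cout << '\t';  // horizontal tab
            std::cout << '\b';  // backspace; '\r' (carriage return) is another
            return 0;
        }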

    Read the article

  • SpeechBackground

    - by abinila
    Hi everyone, I have used the SpeechBackground application in Asterisk, version 1.6.0.6. I have an entry like:

        ;;SpeechCreate
        exten => s,1,SpeechCreate()
        exten => s,2,SpeechActivateGrammar(yesno)
        exten => s,3,SpeechStart()
        exten => s,4,SpeechBackground(demo-instruct)
        exten => s,5,SpeechDeactivateGrammar(yesno)

    I don't know which file I need to give to the SpeechBackground application. Please give me any idea. I have given the sound file from the /sounds directory. If I call 's', the call is immediately released and I don't get any audio. Please, anyone, help me...

    Read the article

  • Windows Mobile 6.5 SndPlayAsync - C# wrapper?

    - by dominolog
    Hello, I'm implementing mp3 playback on Windows Mobile 6.5. I need to use the SndPlayAsync API function, since I don't want to block the calling thread until the file has played (SndPlaySync blocks while the audio file is playing). Unfortunately, SndPlayAsync takes a sound handle instead of a sound file path as its parameter, so the handle has to be opened before and released after playback. The problem is that this API gives me no notification of playback completion. Has anybody used a C# wrapper for this API? Where can I get one? I've looked at OpenNETCF but they don't seem to support this API. Regards
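
    A hedged sketch of a minimal P/Invoke layer, assuming the Windows Mobile 6.x Sound API exported from aygshell.dll; the signatures are reconstructed from the native declarations and should be verified against the SDK headers. As the question notes, there is no completion callback, so the handle is typically kept open and closed on the next play or on shutdown:

        using System;
        using System.Runtime.InteropServices;

        static class SoundApi
        {
            [DllImport("aygshell.dll")]
            public static extern int SndOpen(string pszSoundFile, ref IntPtr phSound);

            [DllImport("aygshell.dll")]
            public static extern int SndPlayAsync(IntPtr hSound, uint dwFlags); // dwFlags must be 0

            [DllImport("aygshell.dll")]
            public static extern int SndClose(IntPtr hSound);
        }

        // Usage sketch:
        //   IntPtr h = IntPtr.Zero;
        //   SoundApi.SndOpen(@"\My Documents\track.mp3", ref h);
        //   SoundApi.SndPlayAsync(h, 0);  // returns immediately
        //   ... later, when playback is no longer needed:
        //   SoundApi.SndClose(h);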

    Read the article
