Search Results

Search found 4054 results on 163 pages for 'surround sound'.

Page 65 of 163

  • Making a DVD video with a still image and PCM 16bit audio with ffmpeg

    - by João
    I'm trying to make a small video with a still image and a sound file playing in the background, to pass to dvdauthor and create a DVD. The command I'm using is this:

        ffmpeg -loop_input -i image.jpg -qscale 2 -i song.flac -aspect 4:3 -target pal-dvd -acodec pcm_s16le -shortest output.mpg

    However, the resulting video file has no sound at all (testing it in VLC Player). I don't know whether "-acodec pcm_s16le" can't be combined with "-target pal-dvd" to override the latter, or whether something else is wrong with the command. If I try without the "-acodec pcm_s16le" parameter, the video and audio work, and I can even create a DVD ISO with it; however, the audio stays AC3. I wanted the video to carry the lossless audio, not a compressed track. I assume the DVD standard allows PCM audio, am I right?
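
    A variant worth trying, sketched here as an assumption rather than a confirmed fix: DVD-Video expects 48 kHz audio, so if song.flac is 44.1 kHz the LPCM stream may come out unusable. Resampling with -ar 48000 and keeping the audio options after -target (so they override its AC3 default) might help:

        ffmpeg -loop_input -i image.jpg -i song.flac \
            -target pal-dvd -acodec pcm_s16le -ar 48000 \
            -aspect 4:3 -qscale 2 -shortest output.mpg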

    Read the article

  • How to Access a Private Variable?

    - by SoulBeaver
    This question isn't meant to sound as blatantly insulting as it probably is right now. This is a homework assignment, and the spec sheet is scarce and poorly designed, to say the least. We have a function:

        double refuel( int liter, GasStation *gs )
        {
            // TODO: Access private variable MaxFuel of gs and decrement.
        }

    Sounds simple enough? It should be, but the class GasStation comes with no function that accesses the private variable MaxFuel. So how can I access it anyway from the function refuel? I'm not considering creating a function setFuel( int liter ), because the teacher complains rather energetically whenever I change his specification. So I guess I have to hack around it somehow, but I'm not sure how to do that without explicitly changing the only function in GasStation and giving it a parameter so that I can call it here. Any hints?
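
    One textbook route, shown only as a sketch under the assumption that adding a friend declaration counts as leaving the specification intact (no existing member function changes); the class layout below is hypothetical:

        class GasStation {
        public:
            GasStation(double fuel) : MaxFuel(fuel) {}
            // grants the free function refuel() access to private members
            friend double refuel(int liter, GasStation *gs);
        private:
            double MaxFuel;
        };

        double refuel(int liter, GasStation *gs) {
            gs->MaxFuel -= liter;   // decrement the private member directly
            return gs->MaxFuel;
        }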

    Read the article

  • Is a MacBook powerful enough to do iPad development, or do I need a MacBook Pro?

    - by ronaldwidha
    The title probably says it all. Considering that an iPad's processor is nothing compared to a MacBook's, I would think a MacBook should be more than capable of running the simulator. However, not knowing much about iPhone/iPad development, I'd like to get some opinions on this. For example, how many apps typically need to be running for iPad development (editor, debugger, perf monitor, trace log, etc.)? Are these apps resource (memory, CPU) intensive? Please do not take into consideration the actual image, 3D, video, and sound production; I understand one would need quite a beefy machine to produce those kinds of creative assets. What I'm looking at is a machine for code development, physics, and putting together the produced assets (images, vector graphics, 3D, video, sound, etc.).

    Read the article

  • Android PCM Bytes

    - by Pintac
    Hi, I am using the AudioRecord class to analyze raw PCM bytes as they come in from the mic, and that's working nicely. Now I need to convert the PCM bytes into decibels. I have a formula that converts sound pressure in Pa into dB:

        dB = 20 * log10(Pa / ref Pa)

    So the question is: the bytes I am getting from AudioRecord's buffer, what are they? Amplitude, pascals, sound pressure, or what? I tried putting the values into the formula, but it comes back with a very high dB, so I do not think that's right. Thanks
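
    For what it's worth, AudioRecord delivers unitless amplitude samples; without microphone calibration there is no absolute Pa reference, so the usual approach is dB relative to full scale (dBFS) rather than sound pressure level. A minimal Java sketch of that idea (the helper name is made up):

        // Hypothetical helper: convert one buffer of 16-bit PCM samples to dBFS.
        static double bufferToDbfs(short[] buffer, int samplesRead) {
            double sumSquares = 0;
            for (int i = 0; i < samplesRead; i++) {
                sumSquares += (double) buffer[i] * buffer[i];
            }
            double rms = Math.sqrt(sumSquares / Math.max(samplesRead, 1)); // RMS amplitude
            return 20 * Math.log10(rms / 32768.0);  // 0 dBFS = full-scale amplitude
        }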

    Read the article

  • Preload *.wav with SystemSoundID?

    - by fuzzygoat
    I am playing a WAV file to give a little audio feedback when a button in my UI is pressed. My question is: when you first press the button there is a delay (about 1.5 s) whilst the sound file "sound.wav" is loaded and cached. Is there a way to pre-cache this file (maybe in my viewDidLoad)? I guess I could do it by just playing it in viewDidLoad, but I would really need to mute the audio so it doesn't "beep" each time the app starts. Many thanks for any help. Gary

    EDIT: Looks like my question is a duplicate of this post, unless anyone has any new info? Maybe a way to turn the play volume down temporarily, unless the audio is cleared each time through the run loop.
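
    A sketch of one common approach, assuming System Sound Services is in use and that creating the SystemSoundID up front is what does the caching, so nothing needs to be played silently at launch (under ARC the URL cast would need __bridge):

        #import <AudioToolbox/AudioToolbox.h>

        // Create the sound ID once, e.g. in viewDidLoad, so the first real play is instant.
        - (void)viewDidLoad {
            [super viewDidLoad];
            NSString *path = [[NSBundle mainBundle] pathForResource:@"sound" ofType:@"wav"];
            NSURL *url = [NSURL fileURLWithPath:path];
            AudioServicesCreateSystemSoundID((CFURLRef)url, &soundID); // soundID: a SystemSoundID ivar
        }

        - (IBAction)buttonPressed:(id)sender {
            AudioServicesPlaySystemSound(soundID);  // plays the already-loaded sound
        }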

    Read the article

  • How can I prematurely break out of a method that's being called by NSTimer?

    - by jammur
    Basically I'm writing a metronome app, but I'm using a sound file that, depending on the BPM, might not have finished playing when the "play" method is called again. For example, if the sound file is 0.5 seconds long but the BPM is 200, the "play" method needs to be called every 0.3 seconds. I'm not overly familiar with NSTimer, but it appears that if it is supposed to fire before the previous invocation has completed, it doesn't fire and just waits for the next time around. I could be completely wrong about that, though. What I need is to have the previous invocation end prematurely and have the "play" method called again when the timer is supposed to fire. Any help would be appreciated!
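
    One way to sidestep the problem, sketched under the assumption that an AVAudioPlayer plays the click (the clickPlayer property below is hypothetical): rather than cutting the previous invocation short, each tick simply rewinds and restarts the player, so an unfinished click is interrupted by the next one.

        #import <AVFoundation/AVFoundation.h>

        - (void)tick:(NSTimer *)timer {
            if (self.clickPlayer.isPlaying) {
                [self.clickPlayer stop];            // cut off the previous click
            }
            self.clickPlayer.currentTime = 0;       // rewind to the start of the sound
            [self.clickPlayer play];
        }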

    Read the article

  • Voice Communication over TCP/IP

    - by Micha
    Hello, I'm currently developing an application that uses DirectSound for communication on an intranet. I had a working solution using UDP, but then my boss told me he wants TCP/IP for some reason. I've tried to implement it in pretty much the same way as UDP, but with very little success. What I get is basically just noise: 20% of it is the recorded sound and the rest is just weird noise. My guess is that, because TCP delivers a stream, I need to keep reading until I have one complete chunk of sound before I can play it. Now two questions: am I on the right track? And is it even a good idea to use TCP/IP for this kind of application (voice conferencing of sorts)? I'm doing it in C#, but I don't think this is language specific.
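
    A sketch of length-prefixed framing in C#, assuming the symptom really does come from treating the TCP stream as discrete packets: each audio chunk is sent with a 4-byte length header and reassembled in full before being handed to playback.

        using System;
        using System.IO;
        using System.Net.Sockets;

        static void SendChunk(NetworkStream stream, byte[] audio)
        {
            byte[] header = BitConverter.GetBytes(audio.Length);  // 4-byte length prefix
            stream.Write(header, 0, header.Length);
            stream.Write(audio, 0, audio.Length);
        }

        static byte[] ReceiveChunk(NetworkStream stream)
        {
            int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
            return ReadExactly(stream, length);                   // one complete audio chunk
        }

        static byte[] ReadExactly(NetworkStream stream, int count)
        {
            byte[] buffer = new byte[count];
            for (int offset = 0; offset < count; )
            {
                int read = stream.Read(buffer, offset, count - offset);
                if (read == 0) throw new IOException("connection closed");
                offset += read;
            }
            return buffer;
        }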

    Read the article

  • Geocoding non-addresses: Geopy

    - by Phil Donovan
    I'm using geopy to geocode alcohol outlets in NZ. The problem I have is that some places do not have street addresses but are places in Google Maps. For example, plugging

        Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlborough 7250

    into Google Maps via the browser GUI gives me a match. However, using that in geopy I get a GQueryError saying this geographic location does not exist. Here is the code for geocoding:

        def GeoCode(address):
            g = geocoders.Google(domain="maps.google.co.nz")
            geoloc = g.geocode(address, exactly_one=False)
            place, (lat, lng) = geoloc[0]
            GeoOut = []
            GeoOut.extend([place, lat, lng])
            return GeoOut

        GeoCode("Furneaux Lodge, Endeavour Inlet, Queen Charlotte Sound, Marlboroguh 7250")

    Meanwhile, I notice that "Eiffel Tower" works fine. Is there a way to solve this, and can someone explain the difference between the Eiffel Tower and Furneaux Lodge within Google 'locations'?
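
    A hypothetical workaround sketch, assuming the long postal-style string is what the geocoder chokes on: retry with progressively shorter queries until something resolves. The function name and the fallback strategy are mine, not from the question.

        from geopy import geocoders

        def geocode_with_fallback(address):
            g = geocoders.Google(domain="maps.google.co.nz")
            parts = [p.strip() for p in address.split(",")]
            while parts:
                try:
                    results = g.geocode(", ".join(parts), exactly_one=False)
                    place, (lat, lng) = list(results)[0]
                    return [place, lat, lng]
                except Exception:          # e.g. GQueryError when Google has no match
                    parts = parts[1:]      # drop the most specific part and retry
            return None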

    Read the article

  • HTML5 Audio plays only once in my JavaScript code

    - by Poul
    I have a dashboard web app that I want to play an alert sound when it's having problems connecting. The site's Ajax code polls for data and throttles down its refresh rate if it can't connect; once the server comes back up, the site continues working. In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is that code, which works:

        var error_audio = new Audio("audio/" + settings.refresh.error_audio);
        error_audio.load();

        //this gets called when there is a connection error.
        function onConnectionError() {
            error_audio.play();
        }

    However, the second time through the function the audio doesn't play. Digging around in Chrome's debugger, the 'played' attribute on the audio element gets set to true, and setting it back to false has no effect. Any ideas?
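
    A sketch of the usual workaround, on the assumption that the element just needs rewinding before it will play again (creating a fresh Audio object on each error is another option if rewinding isn't enough):

        function onConnectionError() {
            error_audio.pause();
            error_audio.currentTime = 0;   // rewind to the start before replaying
            error_audio.play();
        }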

    Read the article

  • Python: How to make a cross-module function?

    - by Evan
    I want to be able to call a global function from an imported class. For example, in the file PetStore.py:

        class AnimalSound(object):
            def __init__(self):
                if 'makenoise' in globals():
                    self.makenoise = globals()['makenoise']
                else:
                    self.makenoise = lambda: 'meow'
            def __str__(self):
                return self.makenoise()

    Then, when I test in the Python interpreter:

        >>> def makenoise():
        ...     return 'bark'
        ...
        >>> from PetStore import AnimalSound
        >>> sound = AnimalSound()
        >>> sound.makenoise()
        'meow'

    I get 'meow' instead of 'bark'. I have tried the solutions provided in python-how-to-make-a-cross-module-variable with no luck.
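
    For context: globals() is per-module, so the AnimalSound defined in PetStore.py never sees the interpreter's makenoise. A sketch of one explicit alternative, passing the callable in; this is my suggestion, not anything from the question:

        class AnimalSound(object):
            def __init__(self, makenoise=None):
                # fall back to the default noise when no callable is supplied
                self.makenoise = makenoise if makenoise is not None else (lambda: 'meow')

            def __str__(self):
                return self.makenoise()

        def makenoise():
            return 'bark'

        print(AnimalSound(makenoise))   # prints 'bark'
        print(AnimalSound())            # prints 'meow'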

    Read the article

  • HTML5 - Callback when media is ready on iPad won't work

    - by Kap
    I'm trying to add a callback to an HTML5 audio element on an iPad. I added an event listener to the element; myOtherThing() starts, but there is no sound. If I pause and then play the sound again, the audio starts. This works in Chrome. Does anyone have an idea how I can do this?

        myAudioElement.src = "path_to_file";
        addEventListener("canplay", function(){
            myAudioElement.play();
            myOtherThing.start();
        });

    SOLVED: Just wanted to share my solution here, in case someone else needs it. As far as I understand, the iPad does not trigger any media events without user interaction. So to be able to use "canplay", "playing" and all the other events, you need to use the built-in media controller: once you press play in that controller, the events get triggered, and after that you can use your custom interface.
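
    In the same spirit as the SOLVED note, a sketch of my own (the startButton element is hypothetical) that ties the first play() to a user tap and attaches "canplay" to the element itself:

        myAudioElement.src = "path_to_file";
        myAudioElement.addEventListener("canplay", function () {
            myOtherThing.start();        // runs once the audio is actually ready
        });

        startButton.addEventListener("click", function () {
            myAudioElement.play();       // the user gesture unlocks playback on iPad
        });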

    Read the article

  • GUI Control For Audio Presentation

    - by Boris
    I need a GUI control for audio file presentation. The language is not very important, but it should run on the Windows platform. I should be able to: load the file, play the sound, and put and move markers across the audio bar. It would be nice if it could load itself from RTP Wireshark captures (and not WAV files). An example may be seen in Audacity (maybe someone has even had experience extracting it from there). Writing Nyquist scripts in Audacity is not a good option, because I have to operate on RTP captures and not on raw sound samples. Another example of such a control is Wireshark's RTP analyzer. Any advice?

    Read the article

  • Strange beep when using cout

    - by Unknown
    Hello everyone, today while working on some code of mine I came across a beeping sound when printing a buffer to the screen. Here's the mysterious character that produces the beep: ''. I don't know if you can see it, but my computer beeps when I try to print it like this:

        cout << (char)7 << endl;

    Another point of interest is that the beep doesn't originate from my onboard beeper, but from my headphones/speakers. Is this just my computer, or is there something wrong with the cout function?

    EDIT: But then why does printing this character produce the beep sound? Does that mean I could send other such characters through cout to produce different effects?
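
    A small illustration (not from the question): character 7 is the ASCII BEL control code, which the terminal, not cout, interprets as "ring the bell"; a few other control characters get similar treatment.

        #include <iostream>
        using std::cout;

        int main() {
            cout << '\a';             // same as (char)7: the terminal bell
            cout << "abc\b!\n";       // '\b' backspace: displays "ab!"
            cout << "12345\rX\n";     // '\r' carriage return: displays "X2345"
            return 0;
        }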

    Read the article

  • Notifying when screen is off

    - by Al
    I'm trying to generate a notification which vibrates the phone and plays a sound when the screen is off (CPU turned off). According to the log messages, the notification is being sent, but the phone doesn't vibrate or play the sound until I turn the screen on again. I tried holding a temporary two-second wake lock (PowerManager.PARTIAL_WAKE_LOCK), which I thought would be ample time for the notification to play, but alas, it still doesn't. Any pointers on getting the notification to run reliably? I'm testing this on a G1 running Android 1.6. Code I'm using:

        notif.vibrate = new long[] {100, 1000};
        notif.defaults |= Notification.DEFAULT_SOUND;
        notif.ledARGB = Color.RED;
        notif.ledOnMS = 1;
        notif.ledOffMS = 0;
        notif.flags = Notification.FLAG_SHOW_LIGHTS;
        notif.flags |= NOTIF_FLAGS; // static var
        if (!screenOn) { // var which updates when screen turns off/on
            mWakeLock.acquire(2000);
        }
        manager.notify(NOTIF_ID, notif);

    Read the article

  • SpeechBackground

    - by abinila
    Hi everyone, I have used the SpeechBackground application in Asterisk, version 1.6.0.6. I have an entry like:

        ;;SpeechCreate
        exten => s,1,SpeechCreate()
        exten => s,2,SpeechActivateGrammar(yesno)
        exten => s,3,SpeechStart()
        exten => s,4,SpeechBackground(demo-instruct)
        exten => s,5,SpeechDeactivateGrammar(yesno)

    I don't know which file I need to give to the SpeechBackground application; please give me any idea. I have given a sound file from the /sounds directory, but if I call 's' the call is released immediately and I don't get any audio. Please, can anyone help me?

    Read the article

  • Windows Mobile 6.5 SndPlayAsync - C# wrapper?

    - by dominolog
    Hello, I'm implementing MP3 playback on Windows Mobile 6.5. I need to use the SndPlayAsync API function, since I don't want to block the calling thread until the file has finished playing (SndPlaySync blocks until the audio file is done). Unfortunately, SndPlayAsync takes a sound handle instead of a sound file path as its parameter, so the handle has to be opened before playback and released after it. The problem is that this API gives me no information about playback completion. Has anybody used a C# wrapper for this API? Where can I get one? I've looked at OpenNETCF, but they don't seem to support this API. Regards

    Read the article

  • The right way to delete a file to the Trash in Snow Leopard using Cocoa?

    - by Irwan
    By "the right way" I mean one that supports "Put Back" in the Finder and doesn't play a sound. Here are the methods I have tried so far:

        NSString *name = @"test.zip";
        NSArray *files = [NSArray arrayWithObject:name];
        NSWorkspace *ws = [NSWorkspace sharedWorkspace];
        [ws performFileOperation:NSWorkspaceRecycleOperation
                          source:@"/Users/"
                     destination:@""
                           files:files
                             tag:0];

    Downside: can't "Put Back" in the Finder.

        OSStatus status = FSPathMoveObjectToTrashSync("/Users/test.zip", NULL, kFSFileOperationDefaultOptions);

    Downside: can't "Put Back" in the Finder.

        tell application "Finder"
            set deletedfile to alias "Snow Leopard:Users:test.zip"
            delete deletedfile
        end tell

    Downside: it plays a sound, which is annoying if I execute it repeatedly.

    Read the article

  • Problems with MATLAB wavrecord and wavread

    - by user504363
    Hi all, I have a problem in MATLAB. I want to record speech for 2 seconds, then read the recorded sound and plot it. I use this code:

        FS = 8000;
        new_wav = wavrecord(2*FS, FS, 'int16');
        x = wavread(new_wav);
        plot(x);

    but this error appears:

        ??? Error using ==> fileparts at 20
        Input must be a row vector of characters.
        Error in ==> wavread>open_wav at 193
        [pat,nam,ext] = fileparts(file);
        Error in ==> wavread at 65
        [fid,msg] = open_wav(file);
        Error in ==> test at 2
        x = wavread(new_wav);

    I have plotted previously recorded sound files correctly, but when I want to record a new one through MATLAB I get these errors. I tried many things, changing FS and 'int16', but nothing helps. Thanks
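
    For reference, a sketch of what may be intended; my reading of the error is that wavrecord already returns the samples, while wavread expects a file name, hence the "row vector of characters" complaint. The file name below is made up.

        FS = 8000;
        new_wav = wavrecord(2*FS, FS, 'int16');   % 2 seconds of audio as int16 samples
        plot(new_wav);                            % plot the samples directly, no file involved

        % If a file on disk is wanted, write it out and read it back by name:
        wavwrite(double(new_wav)/32768, FS, 'new_recording.wav');
        [x, fs] = wavread('new_recording.wav');
        plot(x);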

    Read the article

  • Is there any advantage to having more than 16 GB of RAM on a Windows dev machine?

    - by Robert Kozak
    Assume a machine (dual quad-core Xeon, 2.26 GHz, with 24 GB RAM) running Windows Server 2008 and Hyper-V. How many VMs can I expect to run at the same time with good performance? Is this overkill? Can you really have too much RAM? Assuming 2 GB per VM, that's around 16 GB for the VMs with 8 GB left over for the host OS and Hyper-V. Sound about right?

    Edit: Tried to make the question sound less like bragging; that was never my intention. It's a hard question to write.

    Read the article

  • Change Preference Item Summary Text Color in Android 4

    - by AntounG
    I have the below sample of preference items:

        <CheckBoxPreference
            android:key="chkSound"
            android:summary="Sound is Off"
            android:title="Sound" />

    I use a theme in res/values to change the summary text color:

        <style name="ThemeDarkText">
            <item name="android:textColor">#000000</item>
        </style>

    and in the code I write this line:

        setTheme(R.style.ThemeDarkText);

    It works fine on Android 2.1, but when I tried to run it on a different OS version (e.g. Android 4.0) it didn't change the summary text color, only the title color. Any help?
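
    A sketch of one thing worth checking, offered as an assumption rather than a confirmed fix: on Holo-based versions the summary line typically takes its color from textColorSecondary, so setting that alongside textColor in the theme may restore the dark summary.

        <style name="ThemeDarkText" parent="@android:style/Theme.Holo.Light">
            <item name="android:textColor">#000000</item>
            <!-- preference summaries usually draw from textColorSecondary on Holo -->
            <item name="android:textColorSecondary">#000000</item>
        </style>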

    Read the article

  • form_for with JSON return

    - by Lowgain
    I currently have a form like this:

        <% form_for @stem, :html => {:multipart => true} do |f| %>
          <%= f.file_field :sound %>
        <% end %>

    This outputs (essentially):

        <form method="post" id="new_stem" class="new_stem" action="/stems">
          <input type="file" size="30" name="stem[sound]" id="stem_sound">
        </form>

    However, I'm planning to use jQuery's ajaxForm plugin here and would like the new stem to be returned in JSON format. I know that if the form's action were "/stems.json" this would work, but is there a parameter I can put into the form_for call to ask it to return JSON? I tried

        <% form_for @stem, :html => {:multipart => true, :action => '/stems.json'} do |f| %>

    but this didn't appear to work.
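
    A sketch of one way to phrase it, assuming RESTful routes so that stems_path accepts a :format option and produces /stems.json:

        <%# post to /stems.json by building the URL with an explicit format %>
        <% form_for @stem, :url => stems_path(:format => :json),
                           :html => {:multipart => true} do |f| %>
          <%= f.file_field :sound %>
        <% end %>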

    Read the article
