Search Results

Search found 10608 results on 425 pages for 'video recording'.


  • What output and recording ports does the Java Sound API find on your computer?

    - by Dave Carpeneto
    Hi all - I'm working with the Java Sound API, and it turns out that if I want to adjust recording volumes I need to model the hardware that the OS exposes to Java. There's a lot of variety in what's presented, so I'm humbly asking anyone who is able to help to run the following on their computer and post back the results, so that I can get an idea of what's out there. Thanks in advance to anyone who can assist :-)

        import javax.sound.sampled.*;

        public class SoundAudit {
            public static void main(String[] args) {
                try {
                    System.out.println("OS: " + System.getProperty("os.name") + " " +
                            System.getProperty("os.version") + "/" +
                            System.getProperty("os.arch") + "\nJava: " +
                            System.getProperty("java.version") + " (" +
                            System.getProperty("java.vendor") + ")\n");
                    for (Mixer.Info thisMixerInfo : AudioSystem.getMixerInfo()) {
                        System.out.println("Mixer: " + thisMixerInfo.getDescription() +
                                " [" + thisMixerInfo.getName() + "]");
                        Mixer thisMixer = AudioSystem.getMixer(thisMixerInfo);
                        for (Line.Info thisLineInfo : thisMixer.getSourceLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Source Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                        for (Line.Info thisLineInfo : thisMixer.getTargetLineInfo()) {
                            if (thisLineInfo.getLineClass().getName().equals("javax.sound.sampled.Port")) {
                                Line thisLine = thisMixer.getLine(thisLineInfo);
                                thisLine.open();
                                System.out.println("  Target Port: " + thisLineInfo.toString());
                                for (Control thisControl : thisLine.getControls()) {
                                    System.out.println(AnalyzeControl(thisControl));
                                }
                                thisLine.close();
                            }
                        }
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            public static String AnalyzeControl(Control thisControl) {
                String type = thisControl.getType().toString();
                if (thisControl instanceof BooleanControl) {
                    return "    Control: " + type + " (boolean)";
                }
                if (thisControl instanceof CompoundControl) {
                    System.out.println("    Control: " + type + " (compound - values below)");
                    String toReturn = "";
                    for (Control children : ((CompoundControl) thisControl).getMemberControls()) {
                        toReturn += "      " + AnalyzeControl(children) + "\n";
                    }
                    return toReturn.substring(0, toReturn.length() - 1);
                }
                if (thisControl instanceof EnumControl) {
                    return "    Control:" + type + " (enum: " + thisControl.toString() + ")";
                }
                if (thisControl instanceof FloatControl) {
                    return "    Control: " + type + " (float: from " +
                            ((FloatControl) thisControl).getMinimum() + " to " +
                            ((FloatControl) thisControl).getMaximum() + ")";
                }
                return "    Control: unknown type";
            }
        }

    All the application does is print a line about the OS, a line about the JVM, and a few lines about any hardware found that may pertain to recording. For example, on my PC at work I get the following:

        OS: Windows XP 5.1/x86
        Java: 1.6.0_07 (Sun Microsystems Inc.)

        Mixer: Direct Audio Device: DirectSound Playback [Primary Sound Driver]
        Mixer: Direct Audio Device: DirectSound Playback [SoundMAX HD Audio]
        Mixer: Direct Audio Device: DirectSound Capture [Primary Sound Capture Driver]
        Mixer: Direct Audio Device: DirectSound Capture [SoundMAX HD Audio]
        Mixer: Software mixer and synthesizer [Java Sound Audio Engine]
        Mixer: Port Mixer [Port SoundMAX HD Audio]
          Source Port: MICROPHONE source port
            Control: Microphone (compound - values below)
              Control: Select (boolean)
              Control: Microphone Boost (boolean)
              Control: Front panel microphone (boolean)
              Control: Volume (float: from 0.0 to 1.0)
          Source Port: LINE_IN source port
            Control: Line In (compound - values below)
              Control: Select (boolean)
              Control: Volume (float: from 0.0 to 1.0)
              Control: Balance (float: from -1.0 to 1.0)
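    For context, once a port like the MICROPHONE one above has been found, adjusting its recording volume usually comes down to grabbing a FloatControl from the opened line. Below is a minimal sketch; it assumes the driver actually exposes a VOLUME FloatControl (possibly nested inside a compound control), which, as this survey is meant to show, varies from machine to machine:

        import javax.sound.sampled.*;

        public class MicVolume {
            public static void main(String[] args) throws Exception {
                Port.Info micInfo = Port.Info.MICROPHONE;
                if (!AudioSystem.isLineSupported(micInfo)) {
                    System.out.println("No microphone port exposed by this OS/driver");
                    return;
                }
                Port mic = (Port) AudioSystem.getLine(micInfo);
                mic.open();
                try {
                    // Assumes a VOLUME control exists somewhere on this port
                    FloatControl volume = findVolume(mic.getControls());
                    if (volume != null) {
                        float range = volume.getMaximum() - volume.getMinimum();
                        volume.setValue(volume.getMinimum() + 0.8f * range); // e.g. 80% of range
                        System.out.println("Volume set to " + volume.getValue());
                    }
                } finally {
                    mic.close();
                }
            }

            // Walk top-level and compound controls looking for a float VOLUME control
            static FloatControl findVolume(Control[] controls) {
                for (Control c : controls) {
                    if (c instanceof FloatControl && c.getType() == FloatControl.Type.VOLUME) {
                        return (FloatControl) c;
                    }
                    if (c instanceof CompoundControl) {
                        FloatControl nested = findVolume(((CompoundControl) c).getMemberControls());
                        if (nested != null) return nested;
                    }
                }
                return null;
            }
        }

    Whether the VOLUME control shows up at the top level or inside a compound control is exactly the kind of per-driver variety the audit program above is trying to capture.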

  • How to show QuickTime videos in succession

    - by Eric Frank
    How do I get two or more QuickTime videos to play one after the other, with no action taken by the user? I've seen an example of the technique here: http://untitled.wiredrive.com//l/p/?presentation=7c79bedbb8b02d2b1da45b033cc20345 but I can't seem to boil their code down to the good stuff. Thanks!

  • How to install FFMpeg in WampServer 2.0 (Windows XP)

    - by Richard Knop
    I need to install the ffmpeg PHP extension on my localhost so I can test a few of my scripts, but I am having trouble figuring out how to do that. I have WampServer 2.0 with PHP 5.2.9-2, and my OS is Windows XP. Could somebody please give me step-by-step instructions? I have found some Windows builds here: http://sourceforge.net/projects/ffmpeg-php/files/ but I don't know which one to download or what to do with the files.

    EDITED: What I have done so far:

    1. Downloaded ffmpeg_new
    2. Copied php_ffmpeg.dll from the php5 folder to C:\wamp\bin\php\php5.2.9-2\ext
    3. Copied the files from common to the windows/system32 folder
    4. Added extension=php_ffmpeg.dll to the php.ini file
    5. Restarted all services (Apache, PHP...)

    I am getting an error after using this code:

        $extension = 'ffmpeg';
        $extension_soname = 'php_ffmpeg.dll';
        $extension_fullname = PHP_EXTENSION_DIR . "/" . $extension_soname;

        // load extension
        if (false === extension_loaded($extension)) {
            if (false === dl($extension_soname))
                throw new Exception("Can't load extension $extension_fullname\n");
        }

    The error:

        Warning: dl() [function.dl]: Not supported in multithreaded Web servers - use extension=ffmpeg.dll in your php.ini in C:\wamp\www\hunnyhive\application\modules\default\controllers\MyAccountController.php on line 314

    Plus I also get the exception from above.

  • Installing Jffmpeg to JMF

    - by Krt_Malta
    Hi! I'm using JMF in my application, and I'm trying to install jffmpeg since I'm running into the "format not supported" exception. I've tried following http://jffmpeg.sourceforge.net/download.html but I got lost. I can open the JMF Registry fine, but what should I do to include the new codecs? (Also, when I press Add, "Could not add item" comes up... I'm running on Windows 7.) Thanks and regards, Krt_Malta
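    For what it's worth, besides the JMFRegistry GUI, codecs can also be registered programmatically through javax.media.PlugInManager. Below is a rough sketch only: the jffmpeg codec class name and the in/out formats are assumptions (check the classes actually shipped in the jffmpeg jar), so treat it as a starting point rather than a recipe.

        import javax.media.Format;
        import javax.media.PlugInManager;
        import javax.media.format.RGBFormat;
        import javax.media.format.VideoFormat;

        public class RegisterJffmpeg {
            public static void main(String[] args) throws Exception {
                // Class name is an assumption -- look inside the jffmpeg jar for the real codec class
                String codecClass = "net.sourceforge.jffmpeg.VideoDecoder";

                // Formats are illustrative; the real supported formats depend on the codec
                Format[] in  = new Format[] { new VideoFormat(VideoFormat.MPEG) };
                Format[] out = new Format[] { new RGBFormat() };

                boolean added = PlugInManager.addPlugIn(codecClass, in, out, PlugInManager.CODEC);
                System.out.println("addPlugIn returned " + added);

                // Persist the change to the JMF registry (same effect as saving in JMFRegistry)
                PlugInManager.commit();
            }
        }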

  • Keystone correction in VLC

    - by Kurru
    Hi, how can I set up keystone correction in VLC? If this is not a supported feature, has anyone had any experience writing an add-on filter for VLC? If so, links/examples would be much appreciated! Thank you

  • MediaElement fails after several plays.

    - by basilkot
    Hi! I have a problem with the MediaElement control. I've put six MediaElements on my form, started them, and switched the files they play on a timer. After several cycles, the elements stop playing. Here is the sample XAML:

        <Grid>
            <Grid.RowDefinitions>
                <RowDefinition Height="*" />
                <RowDefinition Height="*" />
            </Grid.RowDefinitions>
            <Grid.ColumnDefinitions>
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="*" />
                <ColumnDefinition Width="*" />
            </Grid.ColumnDefinitions>
            <MediaElement x:Name="element1" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="0" Grid.Row="0" />
            <MediaElement x:Name="element2" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="1" Grid.Row="0" />
            <MediaElement x:Name="element3" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="2" Grid.Row="0" />
            <MediaElement x:Name="element4" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="0" Grid.Row="1" />
            <MediaElement x:Name="element5" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="1" Grid.Row="1" />
            <MediaElement x:Name="element6" UnloadedBehavior="Close" LoadedBehavior="Manual" Grid.Column="2" Grid.Row="1" />
        </Grid>

    And here is the sample code:

        // The code below repeats for each MediaElement
        List<string> playlist1 = new List<string>() { @"file1.wmv", @"file2.wmv", @"file3.wmv", @"file4.wmv" };
        DispatcherTimer timer1 = null;
        int index1 = 0;
        ...

        void Window1_Loaded(object sender, RoutedEventArgs e)
        {
            timer1 = new DispatcherTimer();
            timer1.Tick += new EventHandler(timer1_Elapsed);
            timer1.Interval = TimeSpan.FromSeconds(10);
            element1.Source = new Uri(playlist1[index1]);
            timer1.Start();
            element1.Play();
            ...
        }

        void timer1_Elapsed(object sender, EventArgs e)
        {
            Dispatcher.BeginInvoke(DispatcherPriority.Normal, (System.Threading.ThreadStart)delegate()
            {
                element1.Stop();
                element1.Close();
                timer1.Stop();
                index1++;
                if (index1 >= playlist1.Count)
                {
                    index1 = 0;
                }
                element1.Source = new Uri(playlist1[index1]);
                timer1.Start();
                element1.Play();
            });
        }
        ...

    Does anybody else have similar problems?

  • Streaming desktop to Flash player

    - by Erwing
    Hi, I would like to stream my desktop screen (or one application that I select) to a Flash player; I'd like to publish my desktop on the web. I have Wowza Media Server to multicast it, but I have to create the stream and hand it to Wowza. Do you have any idea how to start, or what would work best for this? Can you recommend something?

  • ffdshow codec does not work with Media Player 12 in Windows 7?

    - by James shen
    This issue is caused by Media Player 12 (in Windows 7 Professional) using the Microsoft default codecs, which don't support MP4 decoding. The ffdshow codec cannot replace the Microsoft default codecs in Windows 7 (Windows 7 does not allow that by default). I have found some third-party tools that can switch the media player's default codec to ffdshow, but this is really inconvenient for our customers. Is there any way I can write a program to switch the Microsoft default codec to ffdshow, for example by changing some entries in the Windows registry?

  • Converting stream of jpg files to FLV stream

    - by Mark
    I work with a Panasonic HCM280A camera that can be controlled by my software. It generates a stream of JPEG files that are huge, and I want to convert this stream to an FLV stream, preferably with a good compression ratio. Does FFmpeg do that? I am basically looking for off-the-shelf open source software (or commercial software) that can generate that streaming media for me. Again, my input is a stream of JPG files that come from the camera server. Any insight or comment would be greatly appreciated. Thanks
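    FFmpeg can do this: its image2pipe demuxer accepts a stream of JPEG images on stdin and can encode them to FLV on the fly. Below is a rough Java sketch of driving it; the frame rate, bitrate, file names and exact options are assumptions that depend on the FFmpeg build, and the part that fetches frames from the camera server is stubbed out with local files.

        import java.io.OutputStream;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class JpegsToFlv {
            public static void main(String[] args) throws Exception {
                // Assumed command line -- adjust options to your ffmpeg build
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg", "-f", "image2pipe", "-vcodec", "mjpeg", "-r", "10", "-i", "-",
                        "-vcodec", "flv", "-b:v", "800k", "-f", "flv", "out.flv");
                pb.redirectErrorStream(true);
                pb.redirectOutput(ProcessBuilder.Redirect.INHERIT); // show ffmpeg's log on the console
                Process ffmpeg = pb.start();

                try (OutputStream toFfmpeg = ffmpeg.getOutputStream()) {
                    // In the real application these bytes would come from the camera server
                    for (int i = 1; i <= 100; i++) {
                        byte[] jpeg = Files.readAllBytes(Paths.get(String.format("frame%03d.jpg", i)));
                        toFfmpeg.write(jpeg);
                    }
                } // closing stdin tells ffmpeg the stream has ended so it finalizes the FLV

                System.out.println("ffmpeg exited with " + ffmpeg.waitFor());
            }
        }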

  • Hosting environment for delivering FLVs [closed]

    - by Gotys
    What would be the ideal hardware setup for pushing lots of bandwidth on a tube site? We have ever-expanding cloud storage where users upload the movies, and then we have web-delivery machines which cache the FLV files on their local hard drives and deliver them to users. Each cache machine can deliver 1200 Mbit/s if it has 8 SAS hard drives. Such a cache machine costs us $550/month for 8x160 GB, so each machine can cache only 160 GB at any given time. If we want to cache more than 160 GB, we need to add another machine... another $550/month... etc. This is very uneconomical, so I am wondering if we have any experts here who can figure out a better setup. I've been looking into GlusterFS, but I am not sure if it can push a lot of bandwidth. Any ideas highly appreciated. Thank you!

  • What is a .motn file?

    - by Wind Chimez
    In a Flash-based project, I got a few files with the extension ".motn". I am not sure what this file is or, more importantly, with what editor/tool I can work on it. My guess is that this might be a way to create FLV movies out of pictures, vectors and other data, but it's just a guess. So basically I have two questions: 1. What is a .motn file? 2. How, and with what tool, can I work on a .motn file efficiently? Can anybody help?

  • (Android SDK 2.1) Getting an error when I use setAudioSource and setVideoSource

    - by Rainfer
    I get the following error when I call setAudioSource and setVideoSource:

        03-16 10:26:25.302: ERROR/audio_input(52): unsupported parameter: x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value
        03-16 10:26:25.302: ERROR/audio_input(52): VerifyAndSetParameter failed
        03-16 10:26:25.302: ERROR/CameraInput(52): Unsupported parameter(x-pvmf/media-input-node/cap-config-interface;valtype=key_specific_value)
        03-16 10:26:25.302: ERROR/CameraInput(52): VerifiyAndSetParameter failed on parameter #0

    This error happens on both the emulator and the device (I am using a Google Nexus One). I have already set the CAMERA and RECORD_AUDIO permissions. I have spent many days on this but still cannot figure out the cause of this runtime error.
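    For reference, MediaRecorder is very strict about call order: the sources must be set first, then the output format, then the encoders, then the output file and preview surface, and only then prepare() and start(). A minimal sketch of that sequence is below (the output path and the preview SurfaceHolder are placeholders); on some 2.1 devices the x-pvmf/OpenCORE messages above show up in the log even when recording actually succeeds, so it is worth checking whether prepare() or start() really throws.

        import android.media.MediaRecorder;
        import android.view.SurfaceHolder;

        public class RecorderSetup {
            public static MediaRecorder startRecording(SurfaceHolder previewHolder) throws Exception {
                MediaRecorder recorder = new MediaRecorder();

                // 1. Sources first -- calling anything else before these is an IllegalStateException
                recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
                recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);

                // 2. Output format before the encoders
                recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
                recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
                recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H263);

                // 3. Destination and preview surface (the path is just an example)
                recorder.setOutputFile("/sdcard/test.3gp");
                recorder.setPreviewDisplay(previewHolder.getSurface());

                // 4. prepare() then start()
                recorder.prepare();
                recorder.start();
                return recorder;
            }
        }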

  • AVAudioPlayer crash after playing from an AVAudioRecorder

    - by munchine
    I've got a button that the user taps to start recording and taps again to stop. When it stops, I want the recorded voice 'echoed' back so the user can hear what was recorded. This works fine the first time. If I hit the button a third time, it starts a new recording, and when I hit stop it crashes with EXC_BAD_ACCESS.

        - (IBAction) readToMeTapped {
            if (recording) {
                recording = NO;
                [readToMeButton setTitle:@"Stop Recording" forState:UIControlStateNormal];
                NSMutableDictionary *recordSetting = [[NSDictionary alloc] initWithObjectsAndKeys:
                    [NSNumber numberWithFloat: 44100.0], AVSampleRateKey,
                    [NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
                    [NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
                    [NSNumber numberWithInt: AVAudioQualityMax], AVEncoderAudioQualityKey,
                    nil];

                // Create a new dated file
                NSDate *now = [NSDate dateWithTimeIntervalSinceNow:0];
                NSString *caldate = [now description];
                recordedTmpFile = [NSURL fileURLWithPath:[[NSString stringWithFormat:@"%@/%@.caf", DOCUMENTS_FOLDER, caldate] retain]];

                error = nil;
                recorder = [[AVAudioRecorder alloc] initWithURL:recordedTmpFile settings:recordSetting error:&error];
                [recordSetting release];
                if (!recorder) {
                    NSLog(@"recorder: %@ %d %@", [error domain], [error code], [[error userInfo] description]);
                    UIAlertView *alert = [[UIAlertView alloc] initWithTitle: @"Warning"
                        message: [error localizedDescription]
                        delegate: nil
                        cancelButtonTitle:@"OK" otherButtonTitles:nil];
                    [alert show];
                    [alert release];
                    return;
                }

                NSLog(@"Using File called: %@", recordedTmpFile);
                // Setup the recorder to use this file and record to it.
                [recorder setDelegate:self];
                [recorder prepareToRecord];
                [recorder recordForDuration:(NSTimeInterval) 5]; // recording for a limited time
            } else {
                // it crashes the second time it gets here!
                recording = YES;
                NSLog(@"Recording YES Using File called: %@", recordedTmpFile);
                [readToMeButton setTitle:@"Start Recording" forState:UIControlStateNormal];
                [recorder stop]; // Stop the recorder.

                // playback recording
                AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordedTmpFile error:&error];
                [recordedTmpFile release];
                self.aPlayer = newPlayer;
                [newPlayer release];
                [aPlayer setDelegate:self];
                [aPlayer prepareToPlay];
                [aPlayer play];
            }
        }

        - (void)audioRecorderDidFinishRecording:(AVAudioRecorder *)sender successfully:(BOOL)flag {
            NSLog(@"audioRecorderDidFinishRecording:successfully:");
            [recorder release];
            recorder = nil;
        }

    Checking the debugger, it flags the error here:

        @synthesize aPlayer, recorder;

    This is the part I don't understand. I thought it might have something to do with releasing memory, but I've been careful. Have I missed something?

  • "The name 'capture' does not exist in the current context" error

    - by Haxed
    Hi, I am developing a camera capture application. I am currently using EmguCV 2.0. I get an error with the following line of code:

        Image<Bgr, Byte> image = capture.QueryFrame();

    I have added all the required EmguCV references, like Emgu.CV, Emgu.CV.UI, Emgu.CV.ML and Emgu.Util, but it still gives an error saying:

        Error 1 The name 'capture' does not exist in the current context C:\Documents and Settings\TLNA\my documents\visual studio 2010\Projects\webcamcapture\webcamcapture\Form1.cs 27 38 webcamcapture

    I got this code from here. The full program code is given below:

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Data;
        using System.Drawing;
        using System.Linq;
        using System.Text;
        using System.Windows.Forms;
        using Emgu.CV;
        using Emgu.CV.UI;
        using Emgu.CV.Structure;
        using Emgu.CV.ML;

        namespace webcamcapture
        {
            public partial class Form1 : Form
            {
                public Form1()
                {
                    InitializeComponent();
                }

                private void timer1_Tick(object sender, EventArgs e)
                {
                    Image<Bgr, Byte> image = capture.QueryFrame();
                    pictureBox1.Image = image.ToBitmap(pictureBox1.Width, pictureBox1.Height);
                }
            }
        }

  • How to view TV Tuner component input with OpenCV?

    - by monky822
    Hi everyone, I'm trying to use my TV tuner instead of a webcam with OpenCV. The problem is that by default cvCaptureFromCAM(0) gives me the TV channel of the tuner, but what I actually want is the input from the tuner's RCA input. I have tried using cvCaptureFromCAM(-1) to check whether there are additional camera devices exposed by the tuner, but it only gives me the general TV tuner as an option. Is there a way to change the channel of the input?

  • Where specifically do UIImageWriteToSavedPhotosAlbum and UISaveVideoAtPathToSavedPhotosAlbum save their files?

    - by Chris Markle
    A simple question, I think, and I'll be trying it myself to see... When people talk about using UIImageWriteToSavedPhotosAlbum or UISaveVideoAtPathToSavedPhotosAlbum, is the "saved photos album" the "Camera Roll" or the "Photo Library" (or something else other than these two) that I see in the Photo Albums application? I don't think the documentation makes it super-explicit as to which it is...

  • Rendering videos online using Flash AS3 and AIR: how does it work?

    - by David
    I came across this link that talks about the technology used by Animoto.com, and it seems like they use AIR to export their Flash animations to bitmaps that FFmpeg then compiles into a movie: http://labs.animoto.com/2009/06/07/presenting-filmstrip/

    "It also takes time to render, so what you’re seeing isn’t realtime. It’s a series of Flash-generated frames that have been saved out using AIR and processed into an MP4 after the fact using a utility called FFmpeg."

    Does that mean AIR is installed on the server? How would that work to export a dynamically created animation into a series of pictures that FFmpeg can then easily convert into a movie? David
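    Roughly, yes: the quoted post implies AIR runs headless on the server, saving each rendered frame out as an image, after which FFmpeg assembles the image sequence into an MP4. A hedged Java sketch of that second step is below; the file pattern, frame rate and codec flags are assumptions and depend on the FFmpeg build.

        public class FramesToMp4 {
            public static void main(String[] args) throws Exception {
                // Assumes AIR has already written frame00001.png, frame00002.png, ... into ./frames
                ProcessBuilder pb = new ProcessBuilder(
                        "ffmpeg", "-r", "30", "-i", "frames/frame%05d.png",
                        "-vcodec", "libx264", "-pix_fmt", "yuv420p", "out.mp4");
                pb.inheritIO(); // show ffmpeg's progress output in this console
                int exit = pb.start().waitFor();
                System.out.println("ffmpeg exited with " + exit);
            }
        }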
