Search Results

Search found 12365 results on 495 pages for 'core audio'.

  • Playing audio stream not showing in pavucontrol

    - by user168505
    My PulseAudio Volume Control (pavucontrol) is not showing the level bar that normally appears below the volume control sliders for the right and left channels in the Playback tab. However, I can hear audio, and the pavucontrol Playback tab shows the name of whichever application is running (any media player: VLC, MPlayer, etc.) along with the volume controls for the right and left channels (the FROM LEFT & FROM RIGHT volume sliders). I guess there may have been a change in the system configuration or settings? How do I reset it? I have reinstalled PulseAudio, but the problem remains. I am using Ubuntu 12.04 with the default PulseAudio.
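
    For the reset, a hedged sketch of what is usually meant by clearing the per-user PulseAudio state on 12.04. Reinstalling the package does not touch these per-user files, which would explain why reinstalling changed nothing:

        pulseaudio -k                 # stop the per-user daemon (it respawns on demand)
        mv ~/.pulse ~/.pulse.bak      # move the old per-user state aside (12.04 keeps it here)
        pulseaudio --start            # start again with default settings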

  • Ubuntu Audio Pitch Shifting filter

    - by user777305
    I'm currently developing a video player application intended to be an embedded player. I'm using Java with the VLCJ library for the video player. What I'm looking for is a way to transform the audio output so it sounds like an old man or a kid (pitch shifting, I guess, is the name). VLC has this when time stretching is enabled, but the playback speed is affected (slower to get the old-man sound, but fast-forward-like to get the kid-voice effect). Is there any solution for this? I can't find this feature in VLC(J), so I think what I need is for the audio output of Ubuntu itself (Ubuntu 12.04) to do the job, i.e. something that filters the audio output system-wide. Is there any software or setting to do this? It also needs to be controllable via the command line to allow real-time effect changes. Thanks in advance.
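
    One system-wide route worth sketching, hedged because it assumes the tap-plugins package (its TAP Pitch Shifter is a LADSPA plugin with label tap_pitch; verify the library name and control ports with analyseplugin from ladspa-sdk): PulseAudio can wrap any LADSPA filter in a virtual sink, and pactl gives the command-line control:

        # Create a pitch-shifting sink on top of the real hardware sink
        # (the master sink name and control values below are placeholders).
        pactl load-module module-ladspa-sink sink_name=pitched \
            master=alsa_output.pci-0000_00_1b.0.analog-stereo \
            plugin=tap_pitch label=tap_pitch control=-5,0,-90,0

        # Route the player's stream to it (index from `pactl list sink-inputs`).
        pactl move-sink-input 7 pitched

    As far as I know, changing the control values afterwards means unloading and reloading the module with new arguments, so this is close to, but not quite, real-time.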

  • Finding out if a FLAC or WAVPACK audio file is NOT originally encoded from a lossy source

    - by cornel
    Is there a way of checking whether a so-called FLAC or WavPack audio file was originally encoded from a lossless source (WAV, CDA, APE, etc.) rather than a lossy source (MP3, AAC, ATRAC, etc.)? Say I have a lossy MP3 audio file (5.17 MB, 87% compressed from its original, source unknown). I then encode it to a lossless format, say FLAC or WavPack. The size increases (23.14 MB, 39% compressed from its original, source MP3)! The ID tags, etc., remain the same, and there's no way of checking the integrity of its origin. How do I go about doing that?
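
    There is no flag in the file itself, so the usual heuristic is spectral: lossy codecs discard content above roughly 16-20 kHz, and that cutoff survives transcoding to FLAC or WavPack. A hedged sketch with SoX:

        # Decode the file and render a spectrogram; a hard frequency ceiling
        # around 16-20 kHz suggests a lossy ancestor, while content reaching
        # the full 22 kHz band does not.
        sox suspect.flac -n spectrogram -o suspect.png

    This is evidence, not proof: a genuine lossless rip of quiet or band-limited material can also lack high-frequency content.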

  • How do I make my Geforce GTS 250's power save mode stop causing audio stuttering?

    - by Matt
    Whenever my GTS 250 enters its power save mode, downscaling its frequencies, my audio stutters. This affects both my onboard audio and my Sound Blaster Audigy 2 ZS. Changing Windows power-saving options such as PCI-E Link State Power Management, or the Power Management Mode in the nVidia control panel, has no effect on this issue. Replacing the power supply had no effect either. The BIOS is the latest version, and I have the latest motherboard chipset and graphics drivers installed. I do not overclock. I started to see this issue after I upgraded my rig from its Socket 939 board to a Socket 1156 board with a Core i5-750, while simultaneously upgrading from Vista to 7.

  • How do I swap audio output of the left and right speakers?

    - by Manga Lee
    I have two stereo speakers, but when I use the sound control panel applet to test my audio configuration, the sound comes from the speaker on my left when the user interface indicates the right speaker, and vice versa. Is there a way to swap the audio output from left to right and right to left? UPDATE: The reason for this question is that I've recently rearranged my workspace, and because of physical constraints the left speaker has to go on the right side and vice versa. I could of course solve this problem with a hardware solution, but I'd rather use a software solution if one is available.
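
    If the box runs PulseAudio (an assumption; the question doesn't name the OS), a software swap can be sketched as a remap sink that crosses the channels. The master sink name below is a placeholder for your actual device:

        pactl load-module module-remap-sink sink_name=reverse-stereo \
            master=alsa_output.pci-0000_00_1b.0.analog-stereo channels=2 \
            master_channel_map=front-right,front-left \
            channel_map=front-left,front-right
        pactl set-default-sink reverse-stereo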

  • How can you convert audio (3.5mm) to S/PDIF?

    - by SSumner
    I have a monitor that I want to use as my 'TV' for my gaming system. I connect it via HDMI, so sound and video go through the monitor, but I want sound in my headphones, which is carried over an optical (S/PDIF) cable. The monitor (Dell U2713HM) has a 3.5mm line-out jack, but I couldn't find anything that simply plugs in and converts the analog audio signal to a digital one so I can connect a S/PDIF cable. What sort of device do I need to do this? (I am not asking for shopping recommendations, merely what options allow this conversion. I would prefer the smallest option, as space is limited.)

  • Anyone experiencing audio issues with VirtualBox on Linux and has a solution?

    - by DoxaLogos
    I've been using VirtualBox (now at 3.0.2) on Kubuntu (now at 9.04) for a while now, and I seem to have a problem when running Windows. Sometimes, after a while, the audio will cut out in Kubuntu. The only way I can get it to recover is to make sure VirtualBox is completely shut down, and then either go into Multimedia under System Settings and test the audio, or restart. I'm wondering if anyone else here has experienced similar issues and has come up with a more elegant solution. I can't seem to find a reasonable one at virtualbox.org.
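
    If that Kubuntu setup runs PulseAudio (hedged; 9.04-era KDE may route sound through Phonon instead), a lighter recovery than a full restart is sometimes just bouncing the daemon once VirtualBox has exited:

        pulseaudio -k       # kill the wedged per-user daemon
        pulseaudio --start  # bring it back up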

  • Drawing custom graphics on the iPhone: CALayer vs. CGContext

    - by Henry Cooke
    Hi all, I have an application in which I'm doing some custom drawing: a bunch of lines on a gradient background, like so (ignore the text; they're just UILabels). At the moment, that's all done by starting a new CGContext, drawing stuff into it with CGContextDrawLinearGradient and CGContextStrokePath, then finally saving the resulting image with UIGraphicsGetImageFromCurrentImageContext. The positioning info is calculated while I'm laying out those labels, so it'd be a PITA (and a duplication of effort) to calculate it all over again when the containing UIView is drawn with drawRect, so I'm drawing it ahead of time into a UIImage. All works fine; so far so good. However, I have a sneaking suspicion that it may be more efficient to use CALayers to do this drawing. My (cursory) understanding of the difference between the two approaches is that a CALayer is more like a bunch of instructions to draw stuff, and so takes up less memory until it's actually drawn onscreen, whereas drawing everything into a UIImage ahead of time means that you've got a sodding great bitmap kicking around in memory all the time, whether it's drawn or not. Is that a correct understanding? What is generally considered to be the best way of drawing custom images on the iPhone?
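
    For comparison, a minimal sketch of the layer route, assuming a gradient plus stroked lines like those described (containerView, the coordinates, and the colors are placeholders): CAGradientLayer and CAShapeLayer keep the drawing as lightweight render instructions rather than a pre-rendered bitmap:

        CAGradientLayer *gradient = [CAGradientLayer layer];
        gradient.frame = containerView.bounds;
        gradient.colors = [NSArray arrayWithObjects:
                           (id)[UIColor darkGrayColor].CGColor,
                           (id)[UIColor blackColor].CGColor, nil];
        [containerView.layer addSublayer:gradient];

        // One shape layer strokes all the lines; updating its path re-renders
        // just this layer, with no full-screen bitmap kept around.
        CAShapeLayer *lines = [CAShapeLayer layer];
        lines.frame = containerView.bounds;
        lines.strokeColor = [UIColor whiteColor].CGColor;
        lines.fillColor = NULL;
        CGMutablePathRef path = CGPathCreateMutable();
        CGPathMoveToPoint(path, NULL, 20.0f, 40.0f);     // placeholder coordinates
        CGPathAddLineToPoint(path, NULL, 300.0f, 40.0f);
        lines.path = path;
        CGPathRelease(path);
        [containerView.layer addSublayer:lines];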

  • CATiledLayer: Determining levelsOfDetail when in drawLayer

    - by Russell Quinn
    I have a CATiledLayer inside a UIScrollView, and all is working fine. Now I want to add support for showing different tiles at three levels of zoom. I have set levelsOfDetail to 3, and my tile size is 300 x 300. This means I need to provide three sets of tiles (I'm supplying PNGs) to cover 300 x 300, 600 x 600 and 1200 x 1200. My problem is that inside "(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx" I cannot work out which level of detail is currently being drawn. I can retrieve the bounds currently required using CGContextGetClipBoundingBox, and usually this requests a rect of one of the above sizes, but at layer edges the tiles are usually smaller, so this isn't a reliable method. Basically, if I have set levelsOfDetail to 3, how do I find out whether drawLayer is requesting level 1, 2 or 3 when it's called? Thanks, Russell.
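
    A hedged sketch of the usual technique: recover the level from the context's transformation matrix rather than from the clip bounds, since CATiledLayer sets the CTM scale to match the level of detail it is rendering:

        - (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
        {
            // CATiledLayer scales the CTM per level of detail: 1.0 at full
            // size, then 0.5, 0.25, ... for the zoomed-out levels covered by
            // levelsOfDetail (zoomed-in levels come from levelsOfDetailBias).
            CGAffineTransform ctm = CGContextGetCTM(ctx);
            CGFloat scale = ctm.a;                   // x scale of the CTM
            int level = (int)roundf(log2f(scale));   // 0, -1, -2 for three levels
            // choose the 300/600/1200 tile set from `level` here
        }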

  • How to get UIScrollView contentOffset during an animation

    - by user249488
    I am trying to get the contentOffset property of a UIScrollView in the middle of a setContentOffset animation. Note I am not using the animated parameter of that method, but instead enclosing the setContentOffset call within a UIView animation block so I can have finer control. When I try to read the value of contentOffset in the middle of the animation, it returns the final value rather than the value at that moment. I know that you can use the presentationLayer to get the current layer, but is there any way to get the current offset in the middle of an animation?
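
    You're close with presentationLayer; the missing piece is that a scroll view's contentOffset is just its layer's bounds.origin, so the in-flight value can be read off the presentation copy. A sketch, assuming a scroll view named scrollView:

        // The model layer reports the final (post-animation) value; the
        // presentation layer reflects what is on screen right now, and
        // contentOffset corresponds to bounds.origin.
        CALayer *presentation = (CALayer *)[scrollView.layer presentationLayer];
        CGPoint liveOffset = [presentation bounds].origin;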

  • Drawing performance with CGImageCreateWithJPEGDataProvider?

    - by Rnegi
    I'm actually curious about this for the iPhone. I am getting an MJPEG stream from a server and trying to render it natively on the iPhone (without using the Safari class). The reason is that while the Safari class CAN render MJPEG natively, it does not do so at the framerate I would like. So I tried drawing it natively, but I've run into performance issues, namely a syncing issue between what I'm getting from the server and what I am able to draw onto the screen of the phone. (There should be a little lag, but the drift gets really bad, which is what I want to avoid.) So I have a connection set up to my server and I do get the JPEGs. It's just data I insert into an NSMutableArray buffer:

        CFMutableDataRef _t_data_ref = (CFMutableDataRef)[_buffer_array objectAtIndex:0];
        //CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData (_cf_buffer_data);
        CGDataProviderRef imgDataProvider = CGDataProviderCreateWithCFData(_t_data_ref);
        CGImageRef image = CGImageCreateWithJPEGDataProvider(imgDataProvider, NULL, true, kCGRenderingIntentDefault);
        CGImageRef imgRef = image;
        CGContextDrawImage(context, CGRectMake(0, 17, 380, 285), imgRef);
        CGImageRelease(image);
        CGDataProviderRelease(imgDataProvider);

    Please note this is the gist of my code, but it should summarize what I am trying to accomplish with regards to drawing. Also, in order to keep the framerate in sync, I had to detach a separate thread that sleeps X seconds and calls [self setNeedsDisplay]:

        NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; // Top-level pool
        while (1) {
            //[NSThread sleepForTimeInterval:TIMER_REFRESH_VALUE];
            //sleep(unsigned int );
            usleep(MICRO_REFRESH_VALUE);
            if ([_buffer_array count] > 10) {
                //NSLog(@"stuff %d", [_buffer_array count]);
                //[self setNeedsDisplay];
                [self performSelectorOnMainThread:@selector(setNeedsDisplay) withObject:nil waitUntilDone:NO];
            }
        }
        [pool release]; // Release the objects in the pool.

    My buffer of JPEG data actually fills up quite quickly, but I can't seem to consume it at the same rate; drawing is much slower. Are there any documents that describe what kind of performance tuning I can do to make rendering the JPEGs to the screen faster? Or am I kind of stuck here? Thanks!
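
    One hedged idea, sketched under the assumption that main-thread decoding is the bottleneck: CGImageCreateWithJPEGDataProvider defers the actual JPEG decompression until the image is first drawn, so every -drawRect: pays that cost. Forcing the decode on the network thread and handing the finished bitmap straight to the layer keeps the main thread to a cheap contents swap (showFrame: is a hypothetical helper):

        // Network/decode thread: create the image, then force the JPEG decode
        // now by drawing once into an offscreen bitmap context.
        CGDataProviderRef provider = CGDataProviderCreateWithCFData(_t_data_ref);
        CGImageRef frame = CGImageCreateWithJPEGDataProvider(provider, NULL, true,
                                                             kCGRenderingIntentDefault);
        CGDataProviderRelease(provider);

        size_t w = CGImageGetWidth(frame);
        size_t h = CGImageGetHeight(frame);
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB();
        CGContextRef bitmap = CGBitmapContextCreate(NULL, w, h, 8, w * 4, rgb,
                                                    kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(rgb);
        CGContextDrawImage(bitmap, CGRectMake(0, 0, w, h), frame);
        CGImageRef decoded = CGBitmapContextCreateImage(bitmap);
        CGContextRelease(bitmap);
        CGImageRelease(frame);

        // Hand off to the main thread; setting layer.contents skips drawRect.
        [self performSelectorOnMainThread:@selector(showFrame:)
                               withObject:(id)decoded
                            waitUntilDone:NO];
        CGImageRelease(decoded);   // performSelector retains it until delivery

        - (void)showFrame:(id)img
        {
            self.layer.contents = img;   // the layer retains the decoded image
        }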

  • I just don't get AudioFileReadPackets

    - by Eric Christensen
    I've tried to write the smallest chunk of code possible to narrow down a problem. It's now just a few lines and it doesn't work, which makes it pretty clear that I have a fundamental misunderstanding of how to use AudioFileReadPackets. I've read the docs and other examples online, and apparently I'm just not getting it. Could you explain it to me? Here's what this block should do: I've previously opened a file. I want to read just one packet, the first one of the file, and then print it. But it crashes on the AudioFileReadPackets line:

        AudioFileID mAudioFile2;
        AudioFileOpenURL(audioFileURL, 0x01, 0, &mAudioFile2);
        UInt32 *audioData2 = (UInt32 *)malloc(sizeof(UInt32) * 1);
        AudioFileReadPackets(mAudioFile2, false, NULL, NULL, 0, (UInt32*)1, audioData2);
        NSLog(@"first packet:%i", audioData2[0]);

    (For clarity, I've stripped out all error handling.) It's the AFRP line that crashes. (I understand that the third and fourth arguments are useful, and in my "real" code I use them, but they're not required, right? So NULL in this case should work, right?) So then what's going on? Any guidance would be much appreciated. Thanks.
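
    For reference, a hedged sketch of what the call probably needs to look like. The crash comes from (UInt32*)1: that casts the literal 1 to a pointer, so AudioFileReadPackets writes the packet count through address 0x1. The in/out packet count must live in a real variable whose address you pass, and the buffer should be sized from the file's maximum packet size rather than sizeof(UInt32):

        UInt32 maxPacketSize = 0;
        UInt32 propSize = sizeof(maxPacketSize);
        AudioFileGetProperty(mAudioFile2, kAudioFilePropertyMaximumPacketSize,
                             &propSize, &maxPacketSize);

        void *packetBuffer = malloc(maxPacketSize);
        UInt32 numBytes = 0;
        UInt32 numPackets = 1;    // in: packets wanted; out: packets actually read
        AudioFileReadPackets(mAudioFile2, false, &numBytes, NULL,
                             0, &numPackets, packetBuffer);
        NSLog(@"read %u packets, %u bytes",
              (unsigned int)numPackets, (unsigned int)numBytes);

    The NULL for the packet-descriptions argument is fine for constant-bitrate formats; for VBR data you would pass an AudioStreamPacketDescription array there.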

  • record output sound in python

    - by aaronstacy
    I want to programmatically record the sound coming out of my laptop in Python. I found PyAudio and came up with the following program that accomplishes the task:

        import pyaudio, wave, sys

        chunk = 1024
        FORMAT = pyaudio.paInt16
        CHANNELS = 1
        RATE = 44100
        RECORD_SECONDS = 5
        WAVE_OUTPUT_FILENAME = sys.argv[1]

        p = pyaudio.PyAudio()
        channel_map = (0, 1)
        stream_info = pyaudio.PaMacCoreStreamInfo(
            flags = pyaudio.PaMacCoreStreamInfo.paMacCorePlayNice,
            channel_map = channel_map)
        stream = p.open(format = FORMAT,
                        rate = RATE,
                        input = True,
                        input_host_api_specific_stream_info = stream_info,
                        channels = CHANNELS)

        all = []
        for i in range(0, RATE / chunk * RECORD_SECONDS):
            data = stream.read(chunk)
            all.append(data)
        stream.close()
        p.terminate()

        data = ''.join(all)
        wf = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
        wf.setnchannels(CHANNELS)
        wf.setsampwidth(p.get_sample_size(FORMAT))
        wf.setframerate(RATE)
        wf.writeframes(data)
        wf.close()

    The problem is that I have to connect the headphone jack to the microphone jack. I tried replacing these lines:

        input = True,
        input_host_api_specific_stream_info = stream_info,

    with these:

        output = True,
        output_host_api_specific_stream_info = stream_info,

    but then I get this error:

        Traceback (most recent call last):
          File "./test.py", line 25, in
            data = stream.read(chunk)
          File "/Library/Python/2.5/site-packages/pyaudio.py", line 562, in read
            paCanNotReadFromAnOutputOnlyStream)
        IOError: [Errno Not input stream] -9975

    Is there a way to instantiate the PyAudio stream so that it records the computer's output and I don't have to connect the headphone jack to the microphone? Is there a better way to go about this? I'd prefer to stick with a Python app and avoid Cocoa.
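
    A hedged sketch of the usual route on a Mac (PaMacCoreStreamInfo suggests OS X): PortAudio can only read from devices that expose inputs, so capturing system output needs a loopback driver such as Soundflower installed. Its output then appears as an input device that PyAudio can open by index; the device-name check below is an assumption, so list your devices and adjust:

        import pyaudio

        p = pyaudio.PyAudio()

        # Find the loopback device; Soundflower registers names like
        # "Soundflower (2ch)".
        loopback = None
        for i in range(p.get_device_count()):
            info = p.get_device_info_by_index(i)
            print i, info['name'], info['maxInputChannels']
            if 'Soundflower' in info['name'] and info['maxInputChannels'] > 0:
                loopback = i

        # Open the loopback as a normal input; system audio routed to
        # Soundflower (via the Sound preference pane) arrives here.
        stream = p.open(format = pyaudio.paInt16, channels = 1, rate = 44100,
                        input = True, input_device_index = loopback)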

  • getAudioInputStream can not convert [stereo, 4 bytes/frame] stream to [mono, 2 bytes/frame]

    - by brian_d
    Hello. I am using Java Sound and have an AudioInputStream with the format PCM_SIGNED 8000.0 Hz, 16 bit, stereo, 4 bytes/frame, little-endian. Calling AudioSystem.getAudioInputStream(target_format, original_stream) produces an 'IllegalArgumentException: Unsupported Conversion' when the target_format is PCM_SIGNED 8000.0 Hz, 16 bit, mono, 2 bytes/frame, little-endian. Is it possible to convert this stream manually after every read() call? And if yes, how? In general, how can you compare two formats and tell whether a conversion is possible?
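
    A hedged sketch of the manual conversion: with 16-bit little-endian stereo, each frame is four bytes (left low, left high, right low, right high), so averaging the two signed samples of every frame yields the mono stream. For checking convertibility up front, there is also AudioSystem.isConversionSupported(targetFormat, sourceFormat):

        // Convert a buffer of 16-bit little-endian stereo to mono by averaging.
        // `len` must be a whole number of 4-byte frames.
        static byte[] stereoToMono(byte[] in, int len) {
            byte[] out = new byte[len / 2];
            for (int i = 0, o = 0; i < len; i += 4, o += 2) {
                short left  = (short) ((in[i + 1] << 8) | (in[i]     & 0xff));
                short right = (short) ((in[i + 3] << 8) | (in[i + 2] & 0xff));
                short mono  = (short) ((left + right) / 2);
                out[o]     = (byte) (mono & 0xff);        // little-endian: low byte first
                out[o + 1] = (byte) ((mono >> 8) & 0xff);
            }
            return out;
        }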

  • Optimizing drawing on UITableViewCell

    - by Brian
    I am drawing content in a UITableViewCell and it is working well, but I'm trying to understand if there is a better way of doing this. Each cell has the following components: a thumbnail on the left side, which could come from a server and so is loaded asynchronously; a title string of variable length, so each cell can be a different height; a timestamp string; and a gradient background, where the gradient runs from the top of the cell to the bottom and is semi-transparent so that background colors shine through with a gloss. It currently works well. The drawing occurs as follows: the UITableViewController inits/reuses a cell, sets the needed data, and calls [cell setNeedsDisplay]. The cell has a CALayer for the thumbnail (thumbnailLayer). In the cell's drawRect it draws the gradient background and the two strings. The cell's drawRect then calls setIcon, which gets the thumbnail and sets the image as the contents of the thumbnailLayer. If the image is not found locally, it sets a loading image as the contents of the thumbnailLayer and asynchronously fetches the thumbnail. Once the thumbnail is received, setIcon is called again and resets thumbnailLayer.contents. This all currently works, but using Instruments I see that the thumbnail is compositing with the gradient. I have tried the following to fix this: setting the cell's backgroundView to a view whose drawRect would draw the gradient, so that the cell's drawRect could draw the thumbnail and setNeedsDisplayInRect would let me redraw only the thumbnail after it loaded, but this resulted in the backgroundView's drawing (the gradient) covering the cell's drawing (the text). I would just draw the thumbnail in the cell's drawRect, but when setNeedsDisplay is called, drawRect will just draw over the previous image and the loading image may show through. I would clear the rect, but then I would have to redraw the gradient. I would try drawing the gradient in a CAGradientLayer and storing a reference to it, so I can quickly redraw it, but I figured I'd have to redraw the gradient if the cell's height changes. Any ideas? I'm sure I'm missing something, so any help would be great.
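
    On the last point, a hedged sketch: a CAGradientLayer does not need manual redrawing when the cell's height changes; resizing its frame is enough, because Core Animation re-renders the gradient itself. That would let the gradient sit behind the text and thumbnail without ever re-entering drawRect (gradientLayer here is a hypothetical ivar, and the colors are placeholders):

        // At cell setup: create the gradient once and put it at the back.
        gradientLayer = [CAGradientLayer layer];
        gradientLayer.colors = [NSArray arrayWithObjects:
                                (id)[UIColor colorWithWhite:1.0f alpha:0.4f].CGColor,
                                (id)[UIColor colorWithWhite:0.0f alpha:0.2f].CGColor, nil];
        [self.layer insertSublayer:gradientLayer atIndex:0];

        // When the variable-height cell is laid out, just track the bounds.
        - (void)layoutSubviews
        {
            [super layoutSubviews];
            gradientLayer.frame = self.bounds;
        }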

  • Is NSManagedObjectContext autosaved or am I looking at NSFetchedResultsController's cache?

    - by Andreas
    I'm developing an iPhone app where I use an NSFetchedResultsController in the main table view controller. I create it like this in the viewDidLoad of the main table view controller:

        NSSortDescriptor *sortDescriptorDate = [[NSSortDescriptor alloc] initWithKey:@"date" ascending:YES];
        NSSortDescriptor *sortDescriptorTime = [[NSSortDescriptor alloc] initWithKey:@"start" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptorDate, sortDescriptorTime, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];
        [sortDescriptorDate release];
        [sortDescriptorTime release];
        [sortDescriptors release];
        controller = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                         managedObjectContext:context
                                                           sectionNameKeyPath:@"date"
                                                                    cacheName:nil];
        [fetchRequest release];
        NSError *error;
        BOOL success = [controller performFetch:&error];

    Then, in a subsequent view, I create a new object in the context:

        TestObject *testObject = [NSEntityDescription insertNewObjectForEntityForName:@"TestObject"
                                                               inManagedObjectContext:context];

    The TestObject has several related objects, which I create in the same way and add to the testObject using the provided add...Objects methods. Then, if, before saving the context, I press cancel and go back to the main table view, nothing is shown, as expected. However, if I restart the app, the object I created in the context shows up in the main table view. How come? At first I thought the NSFetchedResultsController was reading from the cache, but as you can see I set that to nil just to test. Also, [context hasChanges] returns true after I restart. What am I missing here?
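
    Two hedged observations. First, if this context belongs to a UIManagedDocument, note that UIDocument autosaves periodically in the background, which would persist the insert without any explicit save call and explain the object surviving a restart. Second, whatever is saving it, Cancel should actively discard the pending objects rather than just popping the view; a sketch:

        // In the Cancel handler: throw away everything inserted or changed
        // on this context since the last save.
        - (IBAction)cancel:(id)sender
        {
            [context rollback];
            [self.navigationController popViewControllerAnimated:YES];
        }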

  • Simple sound effect loop using AudioToolKit

    - by Typeoneerror
    I've created a few sounds for use in my game. I can play them at certain events without issue:

        // create sounds
        CFBundleRef mainBundle;
        mainBundle = CFBundleGetMainBundle();
        _soundFileShake = CFBundleCopyResourceURL(mainBundle, CFSTR("shake"), CFSTR("wav"), NULL);
        AudioServicesCreateSystemSoundID(_soundFileShake, &_soundIdShake);
        // later...
        AudioServicesPlaySystemSound(_soundIdShake);

    The game has a mechanism that allows you to shake the device to activate some functionality. I've got the shaking code done, so I get "shaking started" and "shaking ended" messages in my game. What I need is to start playing "shake.wav" when shaking starts and loop it until shaking stops. Is there a way to do this with AudioToolbox/AudioServices? If not, how could I do it?
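
    As far as I know, System Sound Services (AudioServicesPlaySystemSound) offers no loop control; the usual workaround is AVAudioPlayer from the AVFoundation framework, which loops indefinitely with numberOfLoops = -1. A hedged sketch, with _shakePlayer as a hypothetical ivar:

        #import <AVFoundation/AVFoundation.h>

        // Shaking started: build the looping player.
        NSString *path = [[NSBundle mainBundle] pathForResource:@"shake"
                                                         ofType:@"wav"];
        _shakePlayer = [[AVAudioPlayer alloc]
                        initWithContentsOfURL:[NSURL fileURLWithPath:path]
                                        error:NULL];
        _shakePlayer.numberOfLoops = -1;   // -1 means repeat until stopped
        [_shakePlayer play];

        // Shaking ended:
        [_shakePlayer stop];
        [_shakePlayer release];
        _shakePlayer = nil;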

  • Rotate rectangle around center

    - by ESoft
    I am playing with Brad Larsen's adaptation of the trackball app. I have two views at a 60-degree angle to each other and was wondering how I get the rotation to happen at the center of this (non-closed) rectangle. In the images below, I would have liked the rotation to take place entirely within the blue lines. Code (modified to rotate around the x axis only):

        #import "MyView.h"

        //=====================================================
        // Defines
        //=====================================================
        #define DEGREES_TO_RADIANS(degrees) \
            (degrees * (M_PI / 180.0f))

        //=====================================================
        // Public Interface
        //=====================================================
        @implementation MyView

        - (void)awakeFromNib
        {
            transformed = [CALayer layer];
            transformed.anchorPoint = CGPointMake(0.5f, 0.5f);
            transformed.frame = self.bounds;
            [self.layer addSublayer:transformed];

            CALayer *imageLayer = [CALayer layer];
            imageLayer.frame = CGRectMake(10.0f, 4.0f, self.bounds.size.width / 2.0f, self.bounds.size.height / 2.0f);
            imageLayer.transform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(60.0f), 1.0f, 0.0f, 0.0f);
            imageLayer.contents = (id)[[UIImage imageNamed:@"IMG_0051.png"] CGImage];
            imageLayer.borderColor = [UIColor yellowColor].CGColor;
            imageLayer.borderWidth = 2.0f;
            [transformed addSublayer:imageLayer];

            imageLayer = [CALayer layer];
            imageLayer.frame = CGRectMake(10.0f, 120.0f, self.bounds.size.width / 2.0f, self.bounds.size.height / 2.0f);
            imageLayer.transform = CATransform3DMakeRotation(DEGREES_TO_RADIANS(-60.0f), 1.0f, 0.0f, 0.0f);
            imageLayer.contents = (id)[[UIImage imageNamed:@"IMG_0089.png"] CGImage];
            imageLayer.borderColor = [UIColor greenColor].CGColor;
            imageLayer.borderWidth = 2.0f;
            transformed.borderColor = [UIColor whiteColor].CGColor;
            transformed.borderWidth = 2.0f;
            [transformed addSublayer:imageLayer];

            UIView *line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height / 2.0f, self.bounds.size.width, 2)];
            [line setBackgroundColor:[UIColor redColor]];
            [self addSubview:line];

            line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height * (1.0f / 4.0f), self.bounds.size.width, 2)];
            [line setBackgroundColor:[UIColor blueColor]];
            [self addSubview:line];

            line = [[UIView alloc] initWithFrame:CGRectMake(0, self.bounds.size.height * (3.0f / 4.0f), self.bounds.size.width, 2)];
            [line setBackgroundColor:[UIColor blueColor]];
            [self addSubview:line];
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
        {
            previousLocation = [[touches anyObject] locationInView:self];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            CGPoint location = [[touches anyObject] locationInView:self];
            //location = CGPointMake(previousLocation.x, location.y);
            CATransform3D currentTransform = transformed.sublayerTransform;
            //CGFloat displacementInX = location.x - previousLocation.x;
            CGFloat displacementInX = previousLocation.x - location.x;
            CGFloat displacementInY = previousLocation.y - location.y;
            CGFloat totalRotation = sqrt((displacementInX * displacementInX) + (displacementInY * displacementInY));
            CGFloat angle = DEGREES_TO_RADIANS(totalRotation);
            CGFloat x = ((displacementInX / totalRotation) * currentTransform.m12 + (displacementInY / totalRotation) * currentTransform.m11);
            CATransform3D rotationalTransform = CATransform3DRotate(currentTransform, angle, x, 0, 0);
            previousLocation = location;
            transformed.sublayerTransform = rotationalTransform;
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
        {
        }

        - (void)dealloc
        {
            [super dealloc];
        }

        @end
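
    A hedged starting point rather than a verified fix: sublayerTransform is applied about the anchor point of the layer that owns it, so the pivot follows `transformed`. Constraining that layer to the band between the blue lines (keeping the default centered anchor point) moves the pivot to the center of that band:

        // Pivot around the center of the band between the blue lines
        // (from h/4 to 3h/4) instead of the whole view.
        transformed.anchorPoint = CGPointMake(0.5f, 0.5f);
        transformed.frame = CGRectMake(0.0f,
                                       self.bounds.size.height / 4.0f,
                                       self.bounds.size.width,
                                       self.bounds.size.height / 2.0f);

    Sublayer frames are then relative to that smaller layer, so the two image layers' y-origins would need adjusting to match.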

  • Big trouble after app update. CoreData migration error

    - by MrBr
    This morning we had big trouble with our iPhone app; we even had to take it off the store. The thing is, we made really small changes to our xcdatamodel. We thought the update process would automatically take care of exchanging it correctly, until we found out that something called Core Data migration exists. We are using UIManagedDocument to connect to the persistent store. How is it possible to exchange this file with the new one? While we were developing, we just uninstalled the whole app from the device and then installed it again, and everything worked. How can we simulate this process for App Store updates? UPDATE: I tried to set the migration option like this:

        _database = [[UIManagedDocument alloc] init];
        NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
        [options setObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption];
        _database.persistentStoreOptions = options;

    but the app is still crashing with:

        ** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'This NSPersistentStoreCoordinator has no persistent stores. It cannot perform a save operation.'
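
    A hedged sketch of the usual lightweight-migration setup: automatic migration generally needs both options, migration and inferred mapping, set on the document before the store is opened:

        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
            [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption,
            nil];
        _database.persistentStoreOptions = options;

        // Only open (or create) the document after the options are in place;
        // the store is migrated, if needed, by the time the handler runs.
        [_database openWithCompletionHandler:^(BOOL success) {
            // safe to fetch and save from here on
        }];

    If the 'no persistent stores' exception persists, it usually means something saved or fetched through the document before opening finished, or the model change is beyond what inferred mapping can handle (which then needs an explicit mapping model).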
