Search Results

Search found 12365 results on 495 pages for 'core audio'.


  • I have a Segmentation fault (core dumped) when using strcpy, malloc, and struct

    - by malsh002
    Okay, when I run this code, I have a segmentation fault:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX 64

        struct example {
            char *name;
        };

        int main() {
            struct example *s = malloc(MAX);
            strcpy(s->name, "Hello World!!");
            return !printf("%s\n", s->name);
        }

    The terminal output:

        alshamlan@alshamlan-VGN-CR520E:/tmp/interview$ make q1
        cc -Wall -g q1.c -o q1
        alshamlan@alshamlan-VGN-CR520E:/tmp/interview$ ./q1
        Segmentation fault (core dumped)
        alshamlan@alshamlan-VGN-CR520E:/tmp/interview$ gedit q1.c

    Can someone explain what's going on? Thanks.
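
    The crash itself is well understood: malloc(MAX) allocates room for the struct (whose only member is a pointer), so s->name never points at valid storage, and strcpy then writes through an uninitialized pointer. A minimal corrected sketch, one fix among several (a char name[MAX] array member would work just as well):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define MAX 64

        struct example {
            char *name;
        };

        int main(void)
        {
            struct example *s = malloc(sizeof *s);  /* room for the struct itself */
            if (s == NULL)
                return 1;
            s->name = malloc(MAX);                  /* room for the string it points to */
            if (s->name == NULL)
                return 1;
            strcpy(s->name, "Hello World!!");
            printf("%s\n", s->name);
            free(s->name);
            free(s);
            return 0;
        }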


  • Audio CDs not burning to MP3 format, burning to WAV format in K3b and Brasero using Ubuntu 12.04.2

    - by robert
    It started in Ubuntu 13.04. I was doing what I usually do: I opened Brasero to make an audio CD from a few MP3 audio files. When it burned, I noticed the files on the CD were in WAV format. I then tried K3b, with the same result. At that point, and because of several issues with 13.04, I formatted my HDD and dropped back to Ubuntu 12.04. On 12.04 I tried Brasero and K3b once again, with the same results. I know that when I used to burn CDs using Brasero, they were burned to the CD in MP3 format, not WAV. Can anyone tell me a fix for this? I have restricted codecs installed.


  • Using Audio Queue Services to play PCM data over a socket connection

    - by Rohan
    I'm writing a remote desktop client for the iPhone and I'm trying to implement audio redirection. The client is connected to the server over a socket connection, and the server sends 32K chunks of PCM data at a time. I'm trying to use AQS to play the data and it plays the first two seconds (1 buffer worth). However, since the next chunk of data hasn't come in over the socket yet, the next AudioQueueBuffer is empty. When the data comes in, I fill the next available buffer with the data and enqueue it with AudioQueueEnqueueBuffer. However, it never plays these buffers. Does the queue stop playing if there are no buffers in the queue, even if you later add a buffer? Here's the relevant part of the code:

        void wave_out_write(STREAM s, uint16 tick, uint8 index)
        {
            if (items_in_queue == NUM_BUFFERS) {
                return;
            }
            if (!playState.busy) {
                OSStatus status;
                status = AudioQueueNewOutput(&playState.dataFormat, AudioOutputCallback,
                                             &playState, CFRunLoopGetCurrent(), NULL, 0,
                                             &playState.queue);
                if (status == 0) {
                    for (int i = 0; i < NUM_BUFFERS; i++) {
                        AudioQueueAllocateBuffer(playState.queue, 40000, &playState.buffers[i]);
                    }
                    AudioQueueAddPropertyListener(playState.queue, kAudioQueueProperty_IsRunning,
                                                  MyAudioQueuePropertyListenerProc, &playState);
                    status = AudioQueueStart(playState.queue, NULL);
                    if (status == 0) {
                        playState.busy = True;
                    } else {
                        return;
                    }
                } else {
                    return;
                }
            }
            playState.buffers[queue_hi]->mAudioDataByteSize = s->size;
            memcpy(playState.buffers[queue_hi]->mAudioData, s->data, s->size);
            AudioQueueEnqueueBuffer(playState.queue, playState.buffers[queue_hi], 0, 0);
            queue_hi++;
            queue_hi = queue_hi % NUM_BUFFERS;
            items_in_queue++;
        }

        void AudioOutputCallback(void *inUserData, AudioQueueRef outAQ,
                                 AudioQueueBufferRef outBuffer)
        {
            PlayState *playState = (PlayState *)inUserData;
            items_in_queue--;
        }

    Thanks!
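
    For what it's worth, the direct question matches known Audio Queue behavior: a playback queue that runs out of enqueued buffers stops, and buffers enqueued afterwards do not resume playback on their own. Two common remedies are enqueueing silence while the socket is starved, or restarting the queue once a late buffer has been enqueued. A minimal sketch of the latter, using the playState fields shown above:

        // After AudioQueueEnqueueBuffer(...) in wave_out_write():
        UInt32 isRunning = 0;
        UInt32 propSize = sizeof(isRunning);
        OSStatus err = AudioQueueGetProperty(playState.queue,
                                             kAudioQueueProperty_IsRunning,
                                             &isRunning, &propSize);
        if (err == noErr && !isRunning) {
            // The queue stopped while starved; kick it off again.
            AudioQueueStart(playState.queue, NULL);
        }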


  • Play and record streaming audio

    - by Igor
    I'm working on an iPhone app that should be able to play and record audio streaming data simultaneously. Is it actually possible? I'm trying to mix the SpeakHere and AudioRecorder samples and getting an empty file with no audio data... Here is my .m code:

        #import "AzRadioViewController.h"

        @implementation azRadioViewController

        static const CFOptionFlags kNetworkEvents = kCFStreamEventOpenCompleted |
            kCFStreamEventHasBytesAvailable | kCFStreamEventEndEncountered |
            kCFStreamEventErrorOccurred;

        void MyAudioQueueOutputCallback(void *inClientData, AudioQueueRef inAQ,
                                        AudioQueueBufferRef inBuffer,
                                        const AudioTimeStamp *inStartTime,
                                        UInt32 inNumberPacketDescriptions,
                                        const AudioStreamPacketDescription *inPacketDesc)
        {
            NSLog(@"start MyAudioQueueOutputCallback");
            MyData *myData = (MyData *)inClientData;
            NSLog(@"--- %i", inNumberPacketDescriptions);
            if (inNumberPacketDescriptions == 0 && myData->dataFormat.mBytesPerPacket != 0) {
                inNumberPacketDescriptions =
                    inBuffer->mAudioDataByteSize / myData->dataFormat.mBytesPerPacket;
            }
            OSStatus status = AudioFileWritePackets(myData->audioFile, FALSE,
                                                    inBuffer->mAudioDataByteSize, inPacketDesc,
                                                    myData->currentPacket,
                                                    &inNumberPacketDescriptions,
                                                    inBuffer->mAudioData);
            if (status == 0) {
                myData->currentPacket += inNumberPacketDescriptions;
            }
            NSLog(@"status:%i curpac:%i pcdesct: %i", status, myData->currentPacket,
                  inNumberPacketDescriptions);
            unsigned int bufIndex = MyFindQueueBuffer(myData, inBuffer);
            pthread_mutex_lock(&myData->mutex);
            myData->inuse[bufIndex] = false;
            pthread_cond_signal(&myData->cond);
            pthread_mutex_unlock(&myData->mutex);
        }

        OSStatus StartQueueIfNeeded(MyData *myData)
        {
            NSLog(@"start StartQueueIfNeeded");
            OSStatus err = noErr;
            if (!myData->started) {
                err = AudioQueueStart(myData->queue, NULL);
                if (err) { PRINTERROR("AudioQueueStart"); myData->failed = true; return err; }
                myData->started = true;
                printf("started\n");
            }
            return err;
        }

        OSStatus MyEnqueueBuffer(MyData *myData)
        {
            NSLog(@"start MyEnqueueBuffer");
            OSStatus err = noErr;
            myData->inuse[myData->fillBufferIndex] = true;
            AudioQueueBufferRef fillBuf = myData->audioQueueBuffer[myData->fillBufferIndex];
            fillBuf->mAudioDataByteSize = myData->bytesFilled;
            err = AudioQueueEnqueueBuffer(myData->queue, fillBuf, myData->packetsFilled,
                                          myData->packetDescs);
            if (err) { PRINTERROR("AudioQueueEnqueueBuffer"); myData->failed = true; return err; }
            StartQueueIfNeeded(myData);
            return err;
        }

        void WaitForFreeBuffer(MyData *myData)
        {
            NSLog(@"start WaitForFreeBuffer");
            if (++myData->fillBufferIndex >= kNumAQBufs) myData->fillBufferIndex = 0;
            myData->bytesFilled = 0;
            myData->packetsFilled = 0;
            printf("->lock\n");
            pthread_mutex_lock(&myData->mutex);
            while (myData->inuse[myData->fillBufferIndex]) {
                printf("... WAITING ...\n");
                pthread_cond_wait(&myData->cond, &myData->mutex);
            }
            pthread_mutex_unlock(&myData->mutex);
            printf("<-unlock\n");
        }

        int MyFindQueueBuffer(MyData *myData, AudioQueueBufferRef inBuffer)
        {
            NSLog(@"start MyFindQueueBuffer");
            for (unsigned int i = 0; i < kNumAQBufs; ++i) {
                if (inBuffer == myData->audioQueueBuffer[i]) return i;
            }
            return -1;
        }

        void MyAudioQueueIsRunningCallback(void *inClientData, AudioQueueRef inAQ,
                                           AudioQueuePropertyID inID)
        {
            NSLog(@"start MyAudioQueueIsRunningCallback");
            MyData *myData = (MyData *)inClientData;
            UInt32 running;
            UInt32 size;
            OSStatus err = AudioQueueGetProperty(inAQ, kAudioQueueProperty_IsRunning,
                                                 &running, &size);
            if (err) { PRINTERROR("get kAudioQueueProperty_IsRunning"); return; }
            if (!running) {
                pthread_mutex_lock(&myData->mutex);
                pthread_cond_signal(&myData->done);
                pthread_mutex_unlock(&myData->mutex);
            }
        }

        void MyPropertyListenerProc(void *inClientData, AudioFileStreamID inAudioFileStream,
                                    AudioFileStreamPropertyID inPropertyID, UInt32 *ioFlags)
        {
            NSLog(@"start MyPropertyListenerProc");
            MyData *myData = (MyData *)inClientData;
            OSStatus err = noErr;
            printf("found property '%c%c%c%c'\n", (inPropertyID>>24)&255,
                   (inPropertyID>>16)&255, (inPropertyID>>8)&255, inPropertyID&255);
            switch (inPropertyID) {
                case kAudioFileStreamProperty_ReadyToProducePackets:
                {
                    AudioStreamBasicDescription asbd;
                    UInt32 asbdSize = sizeof(asbd);
                    err = AudioFileStreamGetProperty(inAudioFileStream,
                                                     kAudioFileStreamProperty_DataFormat,
                                                     &asbdSize, &asbd);
                    if (err) { PRINTERROR("get kAudioFileStreamProperty_DataFormat"); myData->failed = true; break; }
                    err = AudioQueueNewOutput(&asbd, MyAudioQueueOutputCallback, myData,
                                              NULL, NULL, 0, &myData->queue);
                    if (err) { PRINTERROR("AudioQueueNewOutput"); myData->failed = true; break; }
                    for (unsigned int i = 0; i < kNumAQBufs; ++i) {
                        err = AudioQueueAllocateBuffer(myData->queue, kAQBufSize,
                                                       &myData->audioQueueBuffer[i]);
                        if (err) { PRINTERROR("AudioQueueAllocateBuffer"); myData->failed = true; break; }
                    }
                    UInt32 cookieSize;
                    Boolean writable;
                    err = AudioFileStreamGetPropertyInfo(inAudioFileStream,
                                                         kAudioFileStreamProperty_MagicCookieData,
                                                         &cookieSize, &writable);
                    if (err) { PRINTERROR("info kAudioFileStreamProperty_MagicCookieData"); break; }
                    printf("cookieSize %d\n", cookieSize);
                    void *cookieData = calloc(1, cookieSize);
                    err = AudioFileStreamGetProperty(inAudioFileStream,
                                                     kAudioFileStreamProperty_MagicCookieData,
                                                     &cookieSize, cookieData);
                    if (err) { PRINTERROR("get kAudioFileStreamProperty_MagicCookieData"); free(cookieData); break; }
                    err = AudioQueueSetProperty(myData->queue, kAudioQueueProperty_MagicCookie,
                                                cookieData, cookieSize);
                    free(cookieData);
                    if (err) { PRINTERROR("set kAudioQueueProperty_MagicCookie"); break; }
                    err = AudioQueueAddPropertyListener(myData->queue,
                                                        kAudioQueueProperty_IsRunning,
                                                        MyAudioQueueIsRunningCallback, myData);
                    if (err) { PRINTERROR("AudioQueueAddPropertyListener"); myData->failed = true; break; }
                    break;
                }
            }
        }

        static void ReadStreamClientCallBack(CFReadStreamRef stream, CFStreamEventType type,
                                             void *clientCallBackInfo)
        {
            NSLog(@"start ReadStreamClientCallBack");
            if (type == kCFStreamEventHasBytesAvailable) {
                UInt8 buffer[2048];
                CFIndex bytesRead = CFReadStreamRead(stream, buffer, sizeof(buffer));
                if (bytesRead < 0) {
                } else if (bytesRead) {
                    OSStatus err = AudioFileStreamParseBytes(globalMyData->audioFileStream,
                                                             bytesRead, buffer, 0);
                    if (err) { PRINTERROR("AudioFileStreamParseBytes"); }
                }
            }
        }

        void MyPacketsProc(void *inClientData, UInt32 inNumberBytes, UInt32 inNumberPackets,
                           const void *inInputData,
                           AudioStreamPacketDescription *inPacketDescriptions)
        {
            NSLog(@"start MyPacketsProc");
            MyData *myData = (MyData *)inClientData;
            printf("got data. bytes: %d packets: %d\n", inNumberBytes, inNumberPackets);
            for (int i = 0; i < inNumberPackets; ++i) {
                SInt64 packetOffset = inPacketDescriptions[i].mStartOffset;
                SInt64 packetSize = inPacketDescriptions[i].mDataByteSize;
                size_t bufSpaceRemaining = kAQBufSize - myData->bytesFilled;
                if (bufSpaceRemaining < packetSize) {
                    MyEnqueueBuffer(myData);
                    WaitForFreeBuffer(myData);
                }
                AudioQueueBufferRef fillBuf = myData->audioQueueBuffer[myData->fillBufferIndex];
                memcpy((char *)fillBuf->mAudioData + myData->bytesFilled,
                       (const char *)inInputData + packetOffset, packetSize);
                myData->packetDescs[myData->packetsFilled] = inPacketDescriptions[i];
                myData->packetDescs[myData->packetsFilled].mStartOffset = myData->bytesFilled;
                myData->bytesFilled += packetSize;
                myData->packetsFilled += 1;
                size_t packetsDescsRemaining = kAQMaxPacketDescs - myData->packetsFilled;
                if (packetsDescsRemaining == 0) {
                    MyEnqueueBuffer(myData);
                    WaitForFreeBuffer(myData);
                }
            }
        }

        - (IBAction)buttonPlayPressed:(id)sender
        {
            label.text = @"Buffering";
            [self connectionStart];
        }

        - (IBAction)buttonSavePressed:(id)sender
        {
            NSLog(@"save");
            AudioFileClose(myData.audioFile);
            AudioQueueDispose(myData.queue, TRUE);
        }

        bool getFilename(char *buffer, int maxBufferLength)
        {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                                 NSUserDomainMask, YES);
            NSString *docDir = [paths objectAtIndex:0];
            NSString *file = [docDir stringByAppendingString:@"/rec.caf"];
            return [file getCString:buffer maxLength:maxBufferLength
                           encoding:NSUTF8StringEncoding];
        }

        - (void)connectionStart
        {
            @try {
                MyData *myData = (MyData *)calloc(1, sizeof(MyData));
                globalMyData = myData;
                pthread_mutex_init(&myData->mutex, NULL);
                pthread_cond_init(&myData->cond, NULL);
                pthread_cond_init(&myData->done, NULL);
                NSLog(@"Start");
                myData->dataFormat.mSampleRate = 16000.0f;
                myData->dataFormat.mFormatID = kAudioFormatLinearPCM;
                myData->dataFormat.mFramesPerPacket = 1;
                myData->dataFormat.mChannelsPerFrame = 1;
                myData->dataFormat.mBytesPerFrame = 2;
                myData->dataFormat.mBytesPerPacket = 2;
                myData->dataFormat.mBitsPerChannel = 16;
                myData->dataFormat.mReserved = 0;
                myData->dataFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger |
                                                  kLinearPCMFormatFlagIsPacked;
                int i, bufferByteSize;
                UInt32 size;
                AudioQueueNewInput(&myData->dataFormat, MyAudioQueueOutputCallback, &myData,
                                   NULL /* run loop */, kCFRunLoopCommonModes /* run loop mode */,
                                   0 /* flags */, &myData->queue);
                size = sizeof(&myData->dataFormat);
                AudioQueueGetProperty(&myData->queue, kAudioQueueProperty_StreamDescription,
                                      &myData->dataFormat, &size);
                CFURLRef fileURL;
                char path[256];
                memset(path, 0, sizeof(path));
                getFilename(path, 256);
                fileURL = CFURLCreateFromFileSystemRepresentation(NULL, (UInt8 *)path,
                                                                  strlen(path), FALSE);
                AudioFileCreateWithURL(fileURL, kAudioFileCAFType, &myData->dataFormat,
                                       kAudioFileFlags_EraseFile, &myData->audioFile);
                OSStatus err = AudioFileStreamOpen(myData, MyPropertyListenerProc, MyPacketsProc,
                                                   kAudioFileMP3Type, &myData->audioFileStream);
                if (err) { PRINTERROR("AudioFileStreamOpen"); return 1; }
                CFStreamClientContext ctxt = {0, self, NULL, NULL, NULL};
                CFStringRef bodyData = CFSTR(""); // Usually used for POST data
                CFStringRef headerFieldName = CFSTR("X-My-Favorite-Field");
                CFStringRef headerFieldValue = CFSTR("Dreams");
                CFStringRef url = CFSTR(RADIO_LOCATION);
                CFURLRef myURL = CFURLCreateWithString(kCFAllocatorDefault, url, NULL);
                CFStringRef requestMethod = CFSTR("GET");
                CFHTTPMessageRef myRequest = CFHTTPMessageCreateRequest(kCFAllocatorDefault,
                                                                        requestMethod, myURL,
                                                                        kCFHTTPVersion1_1);
                CFHTTPMessageSetBody(myRequest, bodyData);
                CFHTTPMessageSetHeaderFieldValue(myRequest, headerFieldName, headerFieldValue);
                CFReadStreamRef stream = CFReadStreamCreateForHTTPRequest(kCFAllocatorDefault,
                                                                          myRequest);
                if (!stream) { NSLog(@"Creating the stream failed"); return; }
                if (!CFReadStreamSetClient(stream, kNetworkEvents, ReadStreamClientCallBack, &ctxt)) {
                    CFRelease(stream);
                    NSLog(@"Setting the stream's client failed.");
                    return;
                }
                CFReadStreamScheduleWithRunLoop(stream, CFRunLoopGetCurrent(),
                                                kCFRunLoopCommonModes);
                if (!CFReadStreamOpen(stream)) {
                    CFReadStreamSetClient(stream, 0, NULL, NULL);
                    CFReadStreamUnscheduleFromRunLoop(stream, CFRunLoopGetCurrent(),
                                                      kCFRunLoopCommonModes);
                    CFRelease(stream);
                    NSLog(@"Opening the stream failed.");
                    return;
                }
            }
            @catch (NSException *exception) {
                NSLog(@"main: Caught %@: %@", [exception name], [exception reason]);
            }
        }

        - (void)viewDidLoad
        {
            [[UIApplication sharedApplication] setIdleTimerDisabled:YES];
            [super viewDidLoad];
        }

        - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; }
        - (void)viewDidUnload { }
        - (void)dealloc { [super dealloc]; }

        @end
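
    One concrete problem is visible in the code above: AudioQueueNewInput is handed MyAudioQueueOutputCallback (an output callback) and &myData rather than myData as user data, the recording queue is never primed with buffers, and it is never started, any of which would explain an empty file. A sketch of what the input side usually looks like (MyInputCallback and recQueue are illustrative names, assuming MyData keeps a separate field for the recording queue):

        // Input callback with the signature AudioQueueNewInput requires.
        static void MyInputCallback(void *inUserData,
                                    AudioQueueRef inAQ,
                                    AudioQueueBufferRef inBuffer,
                                    const AudioTimeStamp *inStartTime,
                                    UInt32 inNumPackets,
                                    const AudioStreamPacketDescription *inPacketDesc)
        {
            MyData *myData = (MyData *)inUserData;
            if (inNumPackets > 0) {
                // Append captured packets to the file from AudioFileCreateWithURL.
                AudioFileWritePackets(myData->audioFile, false,
                                      inBuffer->mAudioDataByteSize, inPacketDesc,
                                      myData->currentPacket, &inNumPackets,
                                      inBuffer->mAudioData);
                myData->currentPacket += inNumPackets;
            }
            // Hand the buffer back to the queue so capture keeps going.
            AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
        }

        // Setup: create the input queue, prime it with buffers, and start it.
        AudioQueueNewInput(&myData->dataFormat, MyInputCallback, myData,
                           NULL, kCFRunLoopCommonModes, 0, &myData->recQueue);
        for (int i = 0; i < kNumAQBufs; ++i) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(myData->recQueue, kAQBufSize, &buf);
            AudioQueueEnqueueBuffer(myData->recQueue, buf, 0, NULL);
        }
        AudioQueueStart(myData->recQueue, NULL);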


  • Stopping and Play button for Audio (Android)

    - by James Rattray
    I have this problem: I have some audio I wish to play, and I have two buttons for it, 'Play' and 'Stop'. The problem is, after I press the Stop button and then press the Play button, nothing happens. The Stop button stops the song, but I want the Play button to play the song again (from the start). Here is my code:

        final MediaPlayer mp = MediaPlayer.create(this, R.raw.megadeth);

    And then the two public onClicks. For playing:

        button.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                // Perform action on click
                button.setText("Playing!");
                try {
                    mp.prepare();
                } catch (IllegalStateException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
                mp.start();
            }
        });

    And for stopping the track:

        final Button button2 = (Button) findViewById(R.id.cancel);
        button2.setOnClickListener(new View.OnClickListener() {
            public void onClick(View v) {
                mp.stop();
                mp.reset();
            }
        });

    Can anyone see the problem with this? If so, could you please fix it, or suggest a fix... Thanks a lot... James


  • How Do You Calculate Processor Speed on Multi-core Processors?

    - by Jason Fitzpatrick
    The advent of economical consumer-grade multi-core processors raises the question for many users: how do you effectively calculate the real speed of a multi-core system? Is a 4-core 3 GHz system really 12 GHz? Read on as we investigate. Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.
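
    The gist of the usual answer: four 3 GHz cores give you up to four times the throughput of a single core, not a 12 GHz clock, so a perfectly parallel workload finishes about 4x sooner while a single-threaded one gains nothing. A toy illustration in C (a sketch only; compile with cc -O0 -pthread so the busy loops aren't optimized away, and expect the exact numbers to vary by machine):

        #include <pthread.h>
        #include <stdio.h>
        #include <time.h>

        #define N 400000000L
        #define THREADS 4

        static void *spin(void *arg)
        {
            volatile long acc = 0;                 /* volatile keeps the loop from being elided */
            for (long i = 0; i < N / THREADS; i++)
                acc += i;
            (void)arg;
            return NULL;
        }

        static double elapsed(struct timespec a, struct timespec b)
        {
            return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
        }

        int main(void)
        {
            struct timespec t0, t1;
            pthread_t tid[THREADS];

            /* All the work on one thread: bound by a single core's clock. */
            clock_gettime(CLOCK_MONOTONIC, &t0);
            volatile long acc = 0;
            for (long i = 0; i < N; i++)
                acc += i;
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double serial = elapsed(t0, t1);

            /* The same work split four ways: more throughput, same clock. */
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < THREADS; i++)
                pthread_create(&tid[i], NULL, spin, NULL);
            for (int i = 0; i < THREADS; i++)
                pthread_join(tid[i], NULL);
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double parallel = elapsed(t0, t1);

            printf("serial: %.2fs  4 threads: %.2fs  speedup: %.1fx\n",
                   serial, parallel, serial / parallel);
            return 0;
        }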


  • Windows Phone 8, Windows 8: one and the same Core, Microsoft's mobile OS opens up to C/C++ and DirectX

    Windows Phone 8, Windows 8: one and the same Core. Microsoft's mobile OS opens up to C/C++ and DirectX. Microsoft officially spoke for the first time last night about the Platform Preview of the upcoming Windows Phone 8. With a surprise that was hardly a surprise anymore: the OS and Windows 8 will be built on the same "Shared Windows Core". By "Core", Joe Belfiore, head of the Windows Phone platform, clarified on stage at the Windows Phone Summit in San Francisco that this convergence covers the kernel, the drivers, security, networking, the file system, and media handling. Not to mention, of course, the development tools...


  • The first dual-core smartphones unveiled at CES yesterday: what will such power be used for?

    The first dual-core smartphones were unveiled at CES yesterday; what will such power be used for? Update of 06.01.2011 by Katleen. In early December 2010, Nvidia asserted that "dual-core processors will be the standard in 2011" for smartphones and for tablets. That prediction seems to be on its way to coming true, as several players in the sector demonstrated yesterday at CES. Indeed, the first multi-core mobile phone models were presented there. Motorola unveiled its Atrix 4G, which it touts as "the most powerful smartphone in the world". Inside it is a dual-core Tegra 2 processor clocked at 2 GHz and 1 GB of RAM. Enough to achieve...


  • WPF Storyboard delay in playing wma files

    - by Rita
    I'm a complete beginner in WPF and have an app that uses a Storyboard to play a sound:

        public void PlaySound()
        {
            MediaElement m = (MediaElement)audio.FindName("MySound.wma");
            m.IsMuted = false;
            FrameworkElement audioKey = (FrameworkElement)keys.FindName("MySound");
            Storyboard s = (Storyboard)audioKey.FindResource("MySound.wma");
            s.Begin(audioKey);
        }

        <Storyboard x:Key="MySound.wma">
            <MediaTimeline d:DesignTimeNaturalDuration="1.615" BeginTime="00:00:00"
                           Storyboard.TargetName="MySound.wma" Source="Audio\MySound.wma"/>
        </Storyboard>

    I have a horrible lag, and sometimes it takes a good 10 seconds for the sound to be played. I suspect this has something to do with the fact that no matter how long I wait, the sound doesn't get played until after I leave the function. I don't understand it. I call Begin, and nothing happens. Is there a way to replace this method, or the Storyboard object, with something that plays instantly and without a lag?


  • Why does this gstreamer pipeline stall?

    - by timday
    I've been playing around with gstreamer pipelines using gst-launch. I don't have any problems if I just want to process audio or video separately (to separate files, or to alsasink/ximagesink), but I'm confused by what I need to do to mux the streams back together using, say, avimux. This:

        gst-launch-0.10 filesrc location=MVI_2034.AVI ! decodebin name=dec \
            dec. ! queue ! audioconvert ! 'audio/x-raw-int,rate=44100,channels=1' ! queue ! mux. \
            dec. ! queue ! videoflip 1 ! ffmpegcolorspace ! jpegenc ! queue ! mux. \
            avimux name=mux ! filesink location=out.avi

    just outputs

        Setting pipeline to PAUSED ...
        Pipeline is PREROLLING ...

    and then stalls indefinitely. What's the trick?


  • What do you use to play sound in iPhone games?

    - by zoul
    Hello! I have a performance-intensive iPhone game I would like to add sounds to. There seem to be about three main choices: (1) AVAudioPlayer, (2) Audio Queues, and (3) OpenAL. I'd hate to write pages of low-level code just to play a sample, so I would like to use AVAudioPlayer. The problem is that it seems to kill the performance – I've done a simple measurement using CFAbsoluteTimeGetCurrent, and the play message seems to take somewhere from 9 to 30 ms to finish. That's quite miserable, considering that 25 ms == 40 fps. Of course there is the prepareToPlay method that should speed things up. That's why I wrote a simple class that keeps several AVAudioPlayers at its disposal, prepares them beforehand, and then plays the sample using the prepared player. No cigar: it still takes the ~20 ms I mentioned above. Such performance is unusable for games, so what do you use to play sounds with decent performance on the iPhone? Am I doing something wrong with AVAudioPlayer? Do you play sounds with Audio Queues? (I've written something akin to AVAudioPlayer before 2.2 came out and I would love to be spared that experience.) Do you use OpenAL? If yes, is there a simple way to play sounds with OpenAL, or do you have to write pages of code? Update: Yes, playing sounds with OpenAL is fairly simple.


  • JMF microphone volume controller

    - by TacB0sS
    How do I obtain the microphone volume controller in JMF? This is what I have: I tried this implementation concept of yours, but I keep getting a null from the first volume processor when I try to get the stream. Here is how I do it:

        // the device is the media device, specifically audio
        Processor processorForVolume = Manager.createProcessor(device.getLocator());
        // wait until configured
        ProcessorStates newState = new ProcessorStateListener(Processor.Configured)
                .waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);
        // setting the content descriptor to null - read in another thread;
        // this allows us to get the gain control
        processorForVolume.setContentDescriptor(null);
        // set the track control format to one supported by the device and the track control.
        // I didn't match it to an RTP allowed format, but I don't think this has
        // anything to do with it...
        TrackControl[] trackControls = processorForVolume.getTrackControls();
        if (trackControls.length == 0)
            throw new MC_Exception("No track controls where found for this device:",
                                   new Object[]{device});
        for (TrackControl control : trackControls)
            trackManipulator.manipulateTrackControls(control);
        // wait until the processor is realized
        newState = new ProcessorStateListener(Controller.Realized)
                .waitForProcessorState(processorForVolume);
        System.out.println("volumeProcessorState: " + newState);
        // receives the gain control
        micVolumeController = processorForVolume.getGainControl();
        // cannot get the output stream to process further... any suggestions?
        processor = Manager.createProcessor(processorForVolume.getDataOutput());
        new ProcessorStateListener(Processor.Configured).waitForProcessorState(processor);
        processor.setContentDescriptor(DeviceCapturingManager.RAW_RTP);
        new ProcessorStateListener(Controller.Realized).waitForProcessorState(processor);

    This is the output it generates:

        volumeProcessorState: Configured
        format set to track control - com.sun.media.ProcessEngine$ProcTControl@1627c16:
            LINEAR, 48000.0 Hz, 16-bit, Stereo, LittleEndian, Signed
        volumeProcessorState: Realized

    and the data output from the processor is null. I should make clear that when the content descriptor != null I do get an output stream but not the volume controller, and when it is null I get the controller, but no stream. I am trying to connect to an audio microphone device. Adam.


  • Graphing the pitch (frequency) of a sound

    - by Coronatus
    I want to plot the pitch of a sound into a graph. Currently I can plot the amplitude. The graph below is created by the data returned by getUnscaledAmplitude():

        AudioInputStream audioInputStream = AudioSystem.getAudioInputStream(
                new BufferedInputStream(new FileInputStream(file)));
        byte[] bytes = new byte[(int) (audioInputStream.getFrameLength())
                * (audioInputStream.getFormat().getFrameSize())];
        audioInputStream.read(bytes);
        // Get amplitude values for each audio channel in an array.
        graphData = type.getUnscaledAmplitude(bytes, this);

        public int[][] getUnscaledAmplitude(byte[] eightBitByteArray, AudioInfo audioInfo)
        {
            int[][] toReturn = new int[audioInfo.getNumberOfChannels()]
                    [eightBitByteArray.length / (2 * audioInfo.getNumberOfChannels())];
            int index = 0;
            for (int audioByte = 0; audioByte < eightBitByteArray.length;) {
                for (int channel = 0; channel < audioInfo.getNumberOfChannels(); channel++) {
                    // Do the byte to sample conversion.
                    int low = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int high = (int) eightBitByteArray[audioByte];
                    audioByte++;
                    int sample = (high << 8) + (low & 0x00ff);
                    if (sample < audioInfo.sampleMin) {
                        audioInfo.sampleMin = sample;
                    } else if (sample > audioInfo.sampleMax) {
                        audioInfo.sampleMax = sample;
                    }
                    toReturn[channel][index] = sample;
                }
                index++;
            }
            return toReturn;
        }

    But I need to show the audio's pitch, not amplitude. Fast Fourier transform appears to get the pitch, but it needs to know more variables than the raw bytes I have, and is very complex and mathematical. Is there a way I can do this?
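
    For what it's worth, a rough pitch track doesn't strictly require an FFT; time-domain autocorrelation works directly on the int16 samples the code above extracts, and the only extra variable it needs is the sample rate. A language-agnostic sketch in C (estimate_pitch and the 50-1000 Hz search bounds are illustrative assumptions; run it per block of samples):

        /* Estimate the dominant pitch of a block of mono 16-bit samples by
           normalized autocorrelation: pick the lag where the signal best
           matches a shifted copy of itself. Returns Hz, or 0 if nothing found. */
        double estimate_pitch(const short *x, int n, double sample_rate)
        {
            int min_lag = (int)(sample_rate / 1000.0); /* highest pitch considered: 1000 Hz */
            int max_lag = (int)(sample_rate / 50.0);   /* lowest pitch considered: 50 Hz */
            int best_lag = 0;
            double best_score = 0.0;

            for (int lag = min_lag; lag <= max_lag && lag < n; lag++) {
                double corr = 0.0, energy = 0.0;
                for (int i = 0; i + lag < n; i++) {
                    corr   += (double)x[i] * (double)x[i + lag];
                    energy += (double)x[i] * (double)x[i];
                }
                double score = (energy > 0.0) ? corr / energy : 0.0;
                if (score > best_score) {
                    best_score = score;
                    best_lag = lag;
                }
            }
            return best_lag ? sample_rate / best_lag : 0.0;
        }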


  • Python on Mac: Fink? MacPorts? Builtin? Homebrew? Binary installer?

    - by BastiBechtold
    For the last few days, I have been trying to use Python for some audio development. The thing is, Mac OSX does not handle uninstalling stuff well. Actually, there is no way to uninstall anything. Once it is on your system, you better pray that it didn't do any funny stuff. Hence, I don't really want to rely on installer packages for Python. So I turn to Homebrew and install Python using Homebrew. Works fabulously. Using pip, Numpy, SciPy, Matplotlib were no (big) problem, either. Now I want to play audio. There is a host of different packages out there, but pip does not seem willing to install any. But, there is a binary distribution for PyGame, which I guess should work with the built-in Python. Hence my question: What would you do? Would you just install the binary distributions and hope that they interoperate well and never need uninstalling? Would you hack your way through whichever package control management system you prefer and deal with its problems? Something else?


  • Syncing two AS3 NetStreams

    - by Lowgain
    I'm writing an app that requires an audio stream to be recorded while a backing track is played. I have this working, but there is an inconsistent gap between playback and recording starting. I don't know if I can do anything to make the sync perfect every time, so I've been trying to track what time each stream starts, so I can calculate the delay and trim it server-side. This has also proved to be a challenge, as no events seem to be sent when a connection starts (as far as I know). I've tried using various properties like the streams' buffer sizes, etc. I'm thinking now that, as my recorded audio is only mono, I may be able to put some kind of 'control signal' on the second stereo track, which I could use to determine exactly when a sound starts recording (or stick the whole backing track in that channel so I can sync them that way). This leaves me with the new problem of properly injecting this sound into the NetStream. If anyone has any idea whether any of these ideas will work, how to execute them, or some alternatives, that would be extremely helpful! I've been working on this issue for a while.


  • Playing a sequence of sounds without gaps (iPhone)

    - by Fiire
    I thought maybe the fastest way was to go with Sound Services. It is quite efficient, but I need to play sounds in a sequence, not overlapped. Therefore I used a callback method to check when the sound has finished. This cycle produces around 0.3 seconds of lag. I know this sounds very strict, but it is basically the main axis of the program. EDIT: I have now tried using AVAudioPlayer, but I can't play sounds in a sequence without using audioPlayerDidFinishPlaying, since that would put me in the same situation as with the callback method of Sound Services. EDIT 2: I think that if I could somehow join the parts of the sounds I want to play into one large file, I could get the whole audio file to sound continuously. EDIT 3: I thought this would work, but the audio overlaps:

        waitTime = player.deviceCurrentTime;
        for (int k = 0; k < [colores count]; k++) {
            player.currentTime = 0;
            [player playAtTime:waitTime];
            waitTime += player.duration;
        }

    Thanks


  • Could not load file or assembly FSharp.Core, Version=4.0.0.0

    - by Ken
    I'm trying to deploy a web application which uses F# 4.0 on Windows Server 2008. It works on my computer, where VS2010 is installed, but it doesn't work on the server. Every time you open the page you'll get this error message: Could not load file or assembly 'FSharp.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified. I've installed .NET 4 using the Web Platform Installer. The F# PowerPack is installed too. I found this page: http://connect.microsoft.com/VisualStudio/feedback/details/507202/error-in-working-with-f It suggests reinstalling F#, but the link to download F# seems to be broken, and it might not be the same problem I have. I've also tried to install Microsoft F# 2.0.0.0, since it's the only F# redistributable I could find, but it doesn't help at all. Has anyone gotten something like this to work? Any help would be appreciated. Thanks.


  • StackOverflow in compojure web project

    - by Anders Rune Jensen
    Hi, I've been playing around with Clojure and have been using it to build a simple little audio player. The strange thing is that sometimes, maybe one out of twenty times, when I contact the server I will get the following error:

        2010-04-20 15:33:20.963::WARN: Error for /control
        java.lang.StackOverflowError
                at clojure.lang.RT.seq(RT.java:440)
                at clojure.core$seq__4245.invoke(core.clj:105)
                at clojure.core$filter__5084$fn__5086.invoke(core.clj:1794)
                at clojure.lang.LazySeq.sval(LazySeq.java:42)
                at clojure.lang.LazySeq.seq(LazySeq.java:56)
                at clojure.lang.RT.seq(RT.java:440)
                at clojure.core$seq__4245.invoke(core.clj:105)
                at clojure.core$filter__5084$fn__5086.invoke(core.clj:1794)
                at clojure.lang.LazySeq.sval(LazySeq.java:42)
                at clojure.lang.LazySeq.seq(LazySeq.java:56)
                at clojure.lang.RT.seq(RT.java:440)
                ...

    If I do it right after again, it always works. So it appears to be related to timing or something. The code in question is:

        (defn add-track [t]
          (common/ref-add tracks t))

        (defn add-collection [coll]
          (doseq [track coll]
            (add-track track)))

    and

        (defn ref-add [ref value]
          (dosync
            (ref-set ref (conj @ref value))))

    where coll is extracted from this function:

        (defn tracks-by-album [album]
          (sort sort-tracks
                (filter #(= (:album %) album) @tracks)))

    so it does appear to be the tracks-by-album function from the stack trace. I just don't see why it sometimes works and sometimes doesn't.

  • Core Graphics Rotating a Path

    - by Scott Langendyk
    This should be a simple one. Basically, I have a few paths drawn with Core Graphics, and I want to be able to rotate them (for convenience). I've tried using CGContextRotateCTM(context); but it's not rotating anything. Am I missing something? Here's the source for drawRect:

        - (void)drawRect:(CGRect)rect {
            CGContextRef context = UIGraphicsGetCurrentContext();
            CGContextSetLineWidth(context, 1.5);
            CGContextSetStrokeColorWithColor(context, [UIColor whiteColor].CGColor);
            CGContextSetShadow(context, CGSizeMake(0, 1), 0);
            CGContextBeginPath(context);
            CGContextMoveToPoint(context, 13.5, 13.5);
            CGContextAddLineToPoint(context, 30.5, 13.5);
            CGContextAddLineToPoint(context, 30.5, 30.5);
            CGContextAddLineToPoint(context, 13.5, 30.5);
            CGContextAddLineToPoint(context, 13.5, 13.5);
            CGContextClosePath(context);
            CGContextMoveToPoint(context, 26.2, 13.5);
            CGContextAddLineToPoint(context, 26.2, 17.8);
            CGContextAddLineToPoint(context, 30.5, 17.8);
            CGContextMoveToPoint(context, 17.8, 13.5);
            CGContextAddLineToPoint(context, 17.8, 17.8);
            CGContextAddLineToPoint(context, 13.5, 17.8);
            CGContextMoveToPoint(context, 13.5, 26.2);
            CGContextAddLineToPoint(context, 17.8, 26.2);
            CGContextAddLineToPoint(context, 17.8, 30.5);
            CGContextStrokePath(context);
            CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
            CGContextSetShadowWithColor(context, CGSizeMake(0, 0), 0, [UIColor clearColor].CGColor);
            CGContextFillRect(context, CGRectMake(26.2, 13.5, 4.3, 4.3));
            CGContextFillRect(context, CGRectMake(13.5, 13.5, 4.3, 4.3));
            CGContextFillRect(context, CGRectMake(13.5, 26.2, 4.3, 4.3));
            CGContextRotateCTM(context, M_PI / 4);
        }
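
    One thing is certain from the listing: CGContextRotateCTM is the very last call, so it transforms nothing; the CTM only affects drawing issued after it is changed. A sketch of rotating the figure about its own center, assuming (22, 22) as the center implied by the coordinates above:

        CGContextSaveGState(context);
        CGContextTranslateCTM(context, 22.0, 22.0);    // move the origin to the figure's center
        CGContextRotateCTM(context, M_PI / 4);         // rotate 45 degrees about that point
        CGContextTranslateCTM(context, -22.0, -22.0);  // shift back so the old coordinates still apply
        // ... all of the path construction, stroke, and fill calls from
        //     drawRect: go here, unchanged ...
        CGContextRestoreGState(context);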


  • Qt 5.3 OpenGL - vertex buffer object drawing using the core profile

    - by user3700881
    I'm using Qt 5.3 to create a QWindow to do some basic rendering. The QWindow is declared like this:

        class OpenGLWindow : public QWindow, protected QOpenGLFunctions_3_3_Core
        {
            Q_OBJECT
            ...
        }

    It is initialized in the constructor:

        OpenGLWindow::OpenGLWindow(QWindow *parent)
            : QWindow(parent)
        {
            QSurfaceFormat format;
            format.setVersion(3, 3);
            format.setProfile(QSurfaceFormat::CoreProfile);
            this->setSurfaceType(OpenGLSurface);
            this->setFormat(format);
            this->create();
            _context = new QOpenGLContext;
            _context->setFormat(format);
            _context->create();
            _context->makeCurrent(this);
            this->initializeOpenGLFunctions();
            ...
        }

    And that's the rendering code:

        void OpenGLWindow::render()
        {
            if (!isExposed())
                return;
            _context->makeCurrent(this);
            glClear(GL_COLOR_BUFFER_BIT);
            glUseProgram(_shaderProgram);
            glBindBuffer(GL_ARRAY_BUFFER, _positionBufferObject);
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
            glDrawArrays(GL_TRIANGLES, 0, 3);
            glDisableVertexAttribArray(0);
            glUseProgram(0);
            _context->swapBuffers(this);
        }

    I am trying to draw a simple triangle using a vertex and fragment shader. The problem is that the triangle does not show up when the core profile is set. Only when I set the OpenGL version to 2.0, or when I use the compatibility profile, does it show up. From my point of view that doesn't make any sense, because I am not using fixed functionality at all. What am I missing?
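
    A likely culprit, for what it's worth: the 3.3 core profile requires a vertex array object to be bound before vertex attribute setup and glDrawArrays; with no VAO bound, those calls fail, which would match "draws in 2.0 and compatibility, invisible in core". A sketch of the missing setup using the same QOpenGLFunctions_3_3_Core entry points (_vao is an assumed GLuint member):

        // Once, in the constructor after initializeOpenGLFunctions():
        glGenVertexArrays(1, &_vao);
        glBindVertexArray(_vao);

        // In render(), bind the VAO before the attribute and draw calls:
        glBindVertexArray(_vao);
        glBindBuffer(GL_ARRAY_BUFFER, _positionBufferObject);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glBindVertexArray(0);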


  • Beyond core java

    - by Paul
    Coming to the end of the first year of my CS degree, we've done some Java, but just the core stuff: manipulating strings and arrays, inheritance, implementing logic, etc. I visit this website daily and I see so much stuff that is beyond me: using frameworks, managing databases, etc. It makes me feel like I've just learned the syntax of Java, and there is so much more to do with it. My question is, though: how do I get there? I don't think I'm advanced enough to join an open source project, which seems to be suggested often (though I'd love to), and I've looked at other similar questions on here (like this one), but even then I don't think that'd work for me. Firstly, could somebody give me some commonly used frameworks and explain how and what they are used for? Where would be a good place to start? How did you get started in using the things you do? Or perhaps you think I'm going down the wrong route. Should I learn another language and just wait until the moment occurs where it's clear what I should be doing? I'm only in the first year of my degree; so far we've lightly covered Haskell and Java, and I've done a little HTML and CSS in my free time. I know that next year we cover Python, so perhaps I should just wait till then and see if I prefer that? I feel like I also risk learning something in depth and then never using it... I suppose I'm also asking for personal experiences: was there a point where you felt you'd exceeded the basic grasp of a language (it does not necessarily have to be Java related) and reached a more "advanced" level? I guess I'll put a subjective tag on this, but really I just want to know how to get beyond a basic understanding of a language.


  • Core Data confusion: fetch without tableview.

    - by Mr. McPepperNuts
    I have completed and reproduced Core Data tutorials using a table view to display contents. However, I want to access an entity through a fetch on a view without a table view. I used the following fetch code, but the count returned is always 0. The data exists when the database is opened using SQLite tools.

        NSManagedObject *entryObj;
        XYZDelegate *appDelegate = [[UIApplication sharedApplication] delegate];
        NSManagedObjectContext *managedObjectContext = appDelegate.managedObjectContext;
        NSFetchRequest *request = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Quote"
                                                  inManagedObjectContext:managedObjectContext];
        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"id"
                                                                       ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [request setSortDescriptors:sortDescriptors];
        [request setEntity:entity];
        NSArray *results = [managedObjectContext executeFetchRequest:request error:nil];
        if (results == nil) {
            NSLog(@"No results found");
            entryObj = nil;
        } else {
            NSLog(@"results %d", [results count]);
        }
        [request release];
        [sortDescriptors release];

    The count returned is always 0; it should be 5. Can anyone point me to a reference or tutorial about creating a controller that is not meant to be used with a table view?


  • Magento config XML for adding a controller action to a core admin controller

    - by N. B.
    I'm trying to add a custom action to a core controller by extending it in a local module. Below is the class definition, which resides in magento1_3_2_2/app/code/local/MyCompany/MyModule/controllers/Catalog/ProductController.php:

        class MyCompany_MyModule_Catalog_ProductController extends Mage_Adminhtml_Catalog_ProductController
        {
            public function massAttributeSetAction(){ ... }
        }

    Here is my config file at magento1_3_2_2/app/code/local/MyCompany/MyModule/etc/config.xml:

        ...
        <global>
            <rewrite>
                <mycompany_mymodule_catalog_product>
                    <from><![CDATA[#^/catalog_product/massAttributeSet/#]]></from>
                    <to>/mymodule/catalog_product/massAttributeSet/</to>
                </mycompany_mymodule_catalog_product>
            </rewrite>
            <admin>
                <routers>
                    <MyCompany_MyModule>
                        <use>admin</use>
                        <args>
                            <module>MyCompany_MyModule</module>
                            <frontName>MyModule</frontName>
                        </args>
                    </MyCompany_MyModule>
                </routers>
            </admin>
        </global>
        ...

    However, https://example.com/index.php/admin/catalog_product/massAttributeSet/ simply yields an admin 404 page. I know that the module is active; other code is executing fine. I feel it's simply a problem with my XML syntax. Am I going about this the right way? I'm hesitant because I'm not actually rewriting a controller method... I'm adding one entirely. However, it does make sense in that the original admin URL won't respond to that action name and will need to be redirected. I'm using Magento 1.3.2.2. Thanks for any guidance.

