Search Results

Search found 4461 results on 179 pages for 'pic audio'.


  • Is it possible to play multiple audio streams from one "jukebox" to multiple Airport Express devices?

    - by Alex Reynolds
    I have set up a Mac mini as a jukebox that streams audio to an Airport Express in another room of the house, using the AirPlay/AirTunes feature in iTunes. I control this with the iOS Remote app, and it works great. At present, the Mac mini's copy of iTunes is taken over by the Remote app while streaming. If I set up a second Airport Express in room B, is there a way to configure it (and the jukebox) so that it receives and plays its own unique music stream ("stream B"), separate from what is playing on the Mac mini or in room A, which is playing stream A? To accomplish this I would be happy to buy a copy of Rogue Amoeba's Airfoil if it will allow sending multiple, separate audio streams from one computer to multiple wireless bridges while still using the Remote app (or a Rogue Amoeba equivalent for iOS), but it is unclear to me from their site documentation whether that is possible. I'd prefer to give the points to an answer that solves this problem; if you don't know whether it can be done, please allow others to answer. Thanks for your advice.

    Read the article

  • Setup for a live (low-latency) audio video broadcast over Wi-Fi?

    - by Majal Mirasol
    The Upgrade: We capture audio (from a mixer) and video (from a camera) in the main auditorium and pass them to separate rooms within the building. We used to do this over dedicated audio/video cables and wires, and wanted to "upgrade" the system to broadcast the stream wirelessly over Wi-Fi.
    The Problem: In our current setup (Wirecast running on an A10 on a Wireless-N network), we have a problem with delay. Our streams arrive delayed by one to five minutes on the clients (laptop/iPad/Android). This was never a problem with the previous wired connections, and since the wireless network is local, we expected a delay of less than a second to be achievable.
    Our Question: Has anyone here experience with a setup that is both low-latency and user-friendly for the clients streaming the program? Any recommendations would be highly appreciated. (Our current setup is on Windows 7, but a setup on a dedicated Linux box is preferred, if achievable.)
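    One place minute-scale delay often comes from in a setup like this is not the Wi-Fi link itself but segment-based HTTP streaming plus client-side buffering; a back-of-envelope sketch in C under assumed numbers (the segment length, buffer depth, and encoder delay below are illustrative, not measured from Wirecast):

      /* Rough latency budget for an HTTP-segmented (HLS-style) stream. */
      #include <stdio.h>

      int main(void)
      {
          double segment_s     = 10.0; /* segment length (assumed encoder default) */
          int    buffered_segs = 3;    /* segments a player buffers before starting */
          double encode_s      = 2.0;  /* encoder/muxer delay (assumed) */

          double latency_s = encode_s + segment_s * buffered_segs;
          printf("expected delay: ~%.0f s\n", latency_s); /* tens of seconds before any network effects */

          /* Sub-second latency on a local network generally means avoiding
           * segmentation (RTP/RTSP or a raw UDP/MPEG-TS push) and shrinking
           * encoder and player buffers. */
          return 0;
      }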

    Read the article

  • iPhone audio streaming

    - by mobapps99
    Hi, I'm developing an application that uses audio streaming. For streaming audio from the internet I'm using the AudioStreamer class. The streamer has four states: isPlaying, isPaused, isWaiting, and isIdle. My problem is that if the streamer is in the isWaiting state and a phone call comes in at that moment, the audio queue fails with the error "Audio queue start failed." Does anyone have a solution for this? Help...
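    One angle worth checking is whether the interruption from the phone call is being handled at all. A minimal sketch using the C Audio Session / Audio Queue APIs of that iPhone OS generation (gQueue here is a placeholder for AudioStreamer's internal queue; wiring this into the class is an assumption, not part of the original code):

      #include <AudioToolbox/AudioToolbox.h>

      static AudioQueueRef gQueue;  /* assumed: the streamer's audio queue */

      static void interruptionListener(void *inClientData, UInt32 inInterruptionState)
      {
          if (inInterruptionState == kAudioSessionBeginInterruption) {
              /* The system deactivates the audio session; pause rather than
               * leaving the queue stuck in a waiting state. */
              AudioQueuePause(gQueue);
          } else if (inInterruptionState == kAudioSessionEndInterruption) {
              /* Reactivate the session before restarting, otherwise
               * AudioQueueStart can fail after the call ends. */
              AudioSessionSetActive(true);
              AudioQueueStart(gQueue, NULL);
          }
      }

      void setUpAudioSession(void)
      {
          AudioSessionInitialize(NULL, NULL, interruptionListener, NULL);
          AudioSessionSetActive(true);
      }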

    Read the article

  • email/pic format problem

    - by user26610
    A client sent me some vertical pics, but they show up on my server as horizontal images. I viewed them via the browser -- I did not use an email client to view them. I forwarded his email back to him and he says that when he opens it the images are vertical. I don't have a clue.

    Read the article

  • Figuring out the performance limitation of an ADC on a PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a microcontroller like a PIC for an analog-to-digital application. This would be preferable to using external A/D chips. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! Here's the simplest example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz and has an instruction cycle of 0.5 us (4 clock steps per instruction). So:
    - Taking Tacq = 6.06 us (acquisition time for the ADC, assuming chip temp. = 50*C) [datasheet p34]
    - Taking Fosc = 8 MHz (clock speed?) and divisor = 4 (4 clock steps per CPU instruction)
    - This gives TAD = 0.5 us (TAD = 1/(Fosc/divisor))
    - Conversion time is 13*TAD [datasheet p31], which gives a conversion time of 6.5 us
    - ADC duration is then 12.56 us (Tacq + 13*TAD?)
    - Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction), which would give a max sampling rate of 73.7 ksps (1/13.56)
    - Supposing 8 more instructions for real-time processing: another 4 us
    - Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us), so the expected upper sampling rate is 56.9 ksps and the Nyquist frequency for this sampling rate is therefore 28 kHz.
    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the datasheet? Any pointers would be much appreciated! AKE

    Read the article

  • Figuring out the Nyquist performance limitation of an ADC on an example PIC microcontroller

    - by AKE
    I'm spec-ing the suitability of a dsPIC microcontroller for an analog-to-digital application. This would be preferable to using dedicated A/D chips and a separate dedicated DSP chip. To do that, I've had to run through some computations, pulling the relevant parameters from the datasheets. I'm not sure I've got it right -- would appreciate a check! (EDITED NOTE: The PIC10F220 in the example below was selected ONLY to walk through a simple example and check that I'm interpreting Tacq, Fosc, TAD, and the divisor correctly in working through this sort of Nyquist analysis. The actual chips I'm considering for the design are the dsPIC33FJ128MC804 (with 16b A/D) or the dsPIC30F3014 (with 12b A/D).) A simple example: the PIC10F220 is the simplest possible PIC with an ADC. It runs at a clock speed of 8 MHz and has an instruction cycle of 0.5 us (4 clock steps per instruction). So:
    - Taking Tacq = 6.06 us (acquisition time for the ADC, assuming chip temp. = 50*C) [datasheet p34]
    - Taking Fosc = 8 MHz (clock speed?) and divisor = 4 (4 clock steps per CPU instruction)
    - This gives TAD = 0.5 us (TAD = 1/(Fosc/divisor))
    - Conversion time is 13*TAD [datasheet p31], which gives a conversion time of 6.5 us
    - ADC duration is then 12.56 us (Tacq + 13*TAD?)
    - Assuming at least 2 instructions for load/store: another 1 us (0.5 us per instruction), which would give a max sampling rate of 73.7 ksps (1/13.56)
    - Supposing 8 more instructions for real-time processing: another 4 us
    - Thus, total ADC/handling time = 17.56 us (12.56 us + 1 us + 4 us), so the expected upper sampling rate is 56.9 ksps and the Nyquist frequency for this sampling rate is therefore 28 kHz.
    If this is right, it suggests the (theoretical) performance suitability of this chip's A/D is for signals that are bandlimited to 28 kHz. Is this a correct interpretation of the information given in the datasheet for obtaining the Nyquist performance limit? Any opinions on the noise susceptibility of the ADCs in PIC / dsPIC chips would also be much appreciated! AKE
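    A minimal C sketch of the same arithmetic (numbers copied from the question, not re-checked against the PIC10F220 datasheet), which reproduces the 73.7 ksps, 56.9 ksps, and ~28 kHz figures above:

      /* Worked version of the ADC timing budget described in the question. */
      #include <stdio.h>

      int main(void)
      {
          double fosc_hz      = 8e6;                 /* oscillator */
          double tcy_us       = 4.0 / fosc_hz * 1e6; /* instruction cycle: 0.5 us */
          double tad_us       = 0.5;                 /* ADC clock period (TAD) */
          double tacq_us      = 6.06;                /* acquisition time at 50 C */
          double tconv_us     = 13.0 * tad_us;       /* 13 TAD conversion: 6.5 us */
          double adc_us       = tacq_us + tconv_us;  /* 12.56 us */
          double loadstore_us = 2.0 * tcy_us;        /* 2 instructions: 1 us */
          double process_us   = 8.0 * tcy_us;        /* 8 instructions: 4 us */
          double total_us     = adc_us + loadstore_us + process_us; /* 17.56 us */

          printf("rate, ADC + load/store: %.1f ksps\n", 1e3 / (adc_us + loadstore_us)); /* ~73.7 */
          printf("rate, with processing:  %.1f ksps\n", 1e3 / total_us);                /* ~56.9 */
          printf("Nyquist limit:          %.1f kHz\n",  0.5e3 / total_us);              /* ~28.5 */
          return 0;
      }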

    Read the article

  • Loading Facebook fb:profile-pic via AJAX in Facebook Connect site

    - by mbrevoort
    After a page loads, I'm making an AJAX request to pull down an HTML chunk that contains tags representing a Facebook user profile picture. I append the result to a point in the DOM, but the profile pictures don't load; instead all I see is the default silhouette. Here's simply how I'm loading the HTML chunk with jQuery:
      $.ajax({
          url: "/facebookprofiles",
          success: function(result) {
              $('#profiles').append(result);
          }
      });
    The HTML that I'm appending is a list of divs like this:
      <div class="status Accepted">
          <fb:profile-pic class="image" facebook-logo="true" linked="true" size="square" uid="1796055648"></fb:profile-pic>
          <p>
              <strong>Corona Kingsly</strong>My Status Update<br/>
              <span style="font-size: 0.8em">52 minutes ago</span>
          </p>
      </div>
    Any ideas? I assume the fb tags are not being processed once the DOM is loaded. Is there any way to make that happen? I'm not seeing any exceptions or errors in my Firebug console. Thanks

    Read the article

  • what's the "best" approach to creating the UI of an audio plugin that will be both audio unit and VST for OS X and Windows?

    - by SaldaVonSchwartz
    I'm working on a couple of audio plugins. Right now they are Audio Units, and while the "DSP" code won't change for the most part between implementations / ports, I'm not sure how to go about the GUI. For instance, I was looking at the Apple-supplied AUs in Lion. Does anyone know how they went about the UI? Are the knobs and controls just subclasses of Cocoa controls? Are they using some separate framework, or coding these knobs and such from scratch? Also, the plug-ins I'm working on are going to be available as VSTs for Windows too. I already have them up and running with generic interfaces, but I'm wondering whether I should just get over it and recreate all my interfaces with the vstgui code provided by Steinberg, or whether there's a more practical approach to making the interfaces cross-platform.

    Read the article

  • audio frameworks in iPhone

    - by suse
    Hello, I would like to know the following about the iPhone audio system: the hierarchy of the audio frameworks in iPhone OS. I know that there are three main audio frameworks in iPhone OS, i.e. the AVFoundation framework, the Core Audio framework, and the OpenAL framework. What audio formats are supported by each of these frameworks? I mean, do all of the frameworks support all audio formats, or does each framework support its own set of formats? Thank you

    Read the article

  • AAC Sample Rate and Bit Rate for High Quality Audio?

    - by marco.ragogna
    What AAC sample rate and bit rate settings should I use to encode an audio track with quality comparable to 320 kbps MP3? I need to back up a DVD movie. The defaults for AAC are a bit rate of 128 kbit/s and a sample rate of 44100 Hz; should I set the bit rate to 320 kbit/s and the sample rate to 48000 Hz, or are the defaults already good?

    Read the article

  • Linux program to convert audio file of fax transmission to image?

    - by bdk
    I have a number of uncompressed audio files recorded off an analog (POTS) telephone line of fax transmissions. Is there a Linux utility or library I could use to convert these files into images of the faxes they contain? I'm not looking to send/receive a fax via a modem, but just to "replay" the communication tones and parse out the fax message. I'm guessing this may not be possible due to duplex issues and not knowing which end of the conversation is sending what, but I thought I'd ask in case anyone knew of something.

    Read the article

  • How can I split the 5.1 audio channels from an AC3 file into individual streams (preferably on a Mac)?

    - by Drarok
    I have a file that I've pulled from a DVD that is apparently in AC3 5.1 format. The extension is .AC3 and it opens and plays in QuickTime, VLC, etc. What I want is each individual channel in a separate file, but I can't seem to find any tools that will allow me to do that. Is there a way to split the file I have, or alternatively, is there a way to pull the individual audio streams from a 5.1 DVD?

    Read the article

  • Why do my 3GP videos not play audio using VLC?

    - by GiH
    I have a Nexus One phone, and when I record video the container seems to be 3GP. When I try to play it back using VLC I get no audio, and an error saying that there is nothing I can do because VLC does not support the "samr" codec. Is there really nothing I can do to watch my videos in VLC? If not, what's the alternative? I really like VLC specifically because I never have to download codecs...

    Read the article

  • Why is preserving the pitch in audio playback (allegedly) less performant?

    - by Markus Unterwaditzer
    In VLC for Android, I discovered an option to preserve the pitch during faster-than-normal playback. Its description, "requires a fast device", obviously implies that faster playback is cheaper when the pitch is allowed to change as well. Why is that so? What I've tried: before posting this question I did some shallow research through Google. According to Wikipedia, there are several methods for faster playback of audio, and the "simplest" one (resampling) changes the pitch.
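    A minimal C sketch of why the cheap method shifts pitch (illustrative only, not VLC's actual code): playing at 2x by simply dropping samples halves every waveform period, which doubles every frequency (one octave up). Keeping the pitch requires time-stretching (overlap-add/WSOLA or a phase vocoder), and that extra windowing and correlation work is the CPU cost the "requires a fast device" warning refers to.

      #include <stddef.h>

      /* Naive 2x speed-up: keep every second sample. Cheap, but the whole
       * spectrum is scaled up by a factor of two, so pitch rises an octave. */
      size_t speed_up_2x_naive(const float *in, size_t n, float *out)
      {
          size_t m = 0;
          for (size_t i = 0; i < n; i += 2)
              out[m++] = in[i];
          return m; /* half as many samples at the same sample rate => 2x speed */
      }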

    Read the article

  • How much audio latency is noticeable by the human brain?

    - by Borek
    I am choosing a wireless headset for my PC (I hate cables) and am looking at the Sennheiser RS 170 / 180. They supposedly sound great; however, there is a 25 ms audio latency. I've heard that this is OK when watching TV or listening to music but is bad for games. The question is: has there been any research / hard data showing how much of a delay is noticeable by the human brain? 25 ms doesn't sound like a lot, but I may be wrong.
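    For a rough sense of scale (simple arithmetic only, not perceptual research): 25 ms is about the acoustic delay of sitting 8-9 m from a loudspeaker, or 1.5 frames of 60 fps video.

      #include <stdio.h>

      int main(void)
      {
          double latency_s = 0.025;
          printf("equivalent listening distance: %.1f m\n", latency_s * 343.0); /* ~8.6 m at 343 m/s */
          printf("frames of 60 fps video:        %.1f\n",   latency_s * 60.0);  /* 1.5 */
          return 0;
      }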

    Read the article

  • ATI HDMI Audio disappears in Windows 7 when a TV is connected

    - by jsalonen
    So far I have unsuccessfully googled for HOURS without fixing this very annoying problem. The setup is the following:
    - a PC running Windows 7 RC (64-bit)
    - an ATI Radeon 4850 series video card (a Sapphire HD 4850 512MB, to be exact)
    - the video card has HDMI out with a built-in audio chip
    - an HDMI cable connecting the PC to a TV (Sony Bravia series)
    The problem is that when I connect the HDMI cable to the TV, the ATI HDMI sound output device disappears completely from the list of playback devices in Windows. As a workaround I can restore the audio by re-installing the HDMI audio driver. However, when I disconnect the TV the device disappears again. So basically, every time I want to watch something on my TV I have to reinstall the audio driver, which of course is VERY annoying. EDIT: I have figured out that I do not need to re-install the HDMI audio driver to restore sound; I only need to reboot my computer with the HDMI cable plugged in. This suggests that the problem has something to do with information passed from the TV to the computer, which makes my HDMI audio device disappear. Are there any other, more elegant workarounds for this problem? All help is much appreciated!

    Read the article

  • Can I boot up a virtual machine natively?

    - by Anshul
    My question is: is it possible to run a virtual machine natively on your hardware if you have installed the proper drivers etc.? In other words, can I use a VHD as a regular hard drive to boot from? The reason I want to do this is that I do both graphics-intensive and audio-intensive work, but my computer is not powerful enough to handle both at the same time, and I often install a bunch of audio programs that I don't want affecting the stability of my graphics programs. Basically I want sandboxing between the two sets of applications. So I tried running the graphics-intensive programs in a VirtualBox VM and the audio-intensive work natively (simply because it's a pain to route ASIO audio devices in and out of VirtualBox). This kind of works -- the graphics-intensive stuff is tolerable, but still relatively slow, because it's running inside a VM. So my next idea was to just dual-boot and install the graphics and audio programs in separate partitions, but I frequently use them in tandem, so it wouldn't be practical to reboot my machine every time I need the other set of programs. But I could live with this scenario: if I need to do more audio-intensive stuff, I'll boot into the audio partition and run the graphics programs in a VM, and when I'm working heavily on the graphics side, I'll boot the graphics partition as a regular OS directly on the hardware. Is this possible? For example, by booting up a VHD as a regular hard drive? Or by setting up dual-boot and, every time the audio partition is shut down, synchronizing the graphics VM's VHD with the native graphics partition? Is it practical, given the above scenario? And if it's not possible, barring buying another computer, can anyone suggest a best-of-all-worlds setup (the worlds being performance, sandboxing, and running in parallel) for the above scenario? Thanks in advance.

    Read the article

  • Objective-C - How to serialize an audio file into small packets that can be played?

    - by vfn
    Hi there. So, I would like to take a sound file, convert it into packets, and send them to another computer. I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 so that peer2 can play it. The scenario is: peer1 has an audio file, splits it into many small packets, puts them into NSData objects, and sends them to peer2; peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this, or even whether it is possible? EDIT: Here is some code to illustrate what I would like to achieve.
      // This code runs on peer1, the one who sends the data
      - (void)sendData {
          int packetId = 0;
          NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"];
          NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath];
          NSMutableArray *arraySoundData = [[NSMutableArray alloc] init];
          // Splitting the audio into 2 pieces.
          // This is only an illustration; the idea is to split the data into
          // multiple pieces depending on the size of the file to be sent.
          NSRange soundRange;
          soundRange.length = [soundData length] / 2;
          soundRange.location = 0;
          [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
          soundRange.length = [soundData length] / 2;
          soundRange.location = [soundData length] / 2;
          [arraySoundData addObject:[soundData subdataWithRange:soundRange]];
          for (int i = 0; i < [arraySoundData count]; i++) {
              // The body of this loop -- archiving each chunk under the
              // PACKET_ID / PACKET_SOUND_DATA keys and sending it to peer2 --
              // was cut off in the original post.
          }
      }

      // This is the code on peer2 that receives and plays the piece of audio in each packet
      - (void)receiveData:(NSData *)data {
          NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data];
          if ([unarchiver containsValueForKey:PACKET_ID])
              NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]);
          if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) {
              NSLog(@"DECODED sound");
              NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA];
              if (sound == nil) {
                  NSLog(@"sound is nil!");
              } else {
                  NSLog(@"sound is not nil!");
                  AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc];
                  if ([audioPlayer initWithData:sound error:nil]) {
                      [audioPlayer prepareToPlay];
                      [audioPlayer play];
                  } else {
                      [audioPlayer release];
                      NSLog(@"Player couldn't load data");
                  }
              }
          }
          [unarchiver release];
      }
    So, here is what I am trying to achieve. What I really need to know is how to create the packets so that peer2 can play the audio -- it would be a kind of streaming. For now I am not worried about the order in which the packets are received or played; I only need the sound sliced up and to be able to play each piece, each slice, without waiting for the whole file to be received by peer2. Thanks!

    Read the article

  • PIC 18 controller as serial to ethernet bridge

    - by Surjya Narayana Padhi
    Hi geeks, I am planning to use a PIC18F6*** microcontroller for my serial-to-Ethernet converter project. Once I have put my hex code on the PIC microcontroller, I will use Windows HyperTerminal to send and receive the serial-port data; for checking the Ethernet data, is there an application in Windows I can use? If my question is not clear I am ready to explain it better... please let me know.
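    Assuming the bridge ends up exposing its serial data on a plain TCP socket (an assumption -- it depends on the Ethernet stack used on the PIC side), a raw TCP client is enough on the Windows end: PuTTY in "Raw" mode or netcat both work. A minimal C (Winsock) sketch of the same idea, with placeholder IP address and port:

      #include <stdio.h>
      #include <winsock2.h>
      #pragma comment(lib, "ws2_32.lib")

      int main(void)
      {
          WSADATA wsa;
          if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
              return 1;

          SOCKET s = socket(AF_INET, SOCK_STREAM, 0);
          struct sockaddr_in addr = {0};
          addr.sin_family = AF_INET;
          addr.sin_port = htons(10001);                     /* bridge TCP port (placeholder) */
          addr.sin_addr.s_addr = inet_addr("192.168.1.50"); /* bridge IP (placeholder) */

          if (connect(s, (struct sockaddr *)&addr, sizeof addr) != 0) {
              fprintf(stderr, "connect failed\n");
              return 1;
          }

          /* Print whatever the bridge forwards from its UART. */
          char buf[256];
          int n;
          while ((n = recv(s, buf, sizeof buf, 0)) > 0)
              fwrite(buf, 1, n, stdout);

          closesocket(s);
          WSACleanup();
          return 0;
      }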

    Read the article
