Search Results

Search found 5304 results on 213 pages for 'audio streaming'.


  • Listening to the iPhone mic with SCListener and playing music at the same time: how?

    - by Eamon Ford
    Hello, I am using Stephen Celis' SCListener class (for iPhone) to "listen" to the microphone, but I also need to be playing music at the same time using the MediaPlayer framework. However, when I start listening with SCListener, the music fades out and stops. I have set the kAudioSessionCategory_PlayAndRecord property on the audio session in SCListener, which should allow me to play and record audio at the same time, but as far as I can tell it has no effect. I'm confused, because according to other developers' results this works just fine, but not for me. I'm thinking maybe the kAudioSessionCategory_PlayAndRecord category allows you to play sound while recording if you're using the AVAudioPlayer framework or something similar to play the sound, but maybe not the MediaPlayer framework? This would be a problem for me, because I need to play music from the user's iPod library, which, as far as I know, is only possible using the MediaPlayer framework. Does anyone know how I can get around this problem? Thanks in advance!
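
    For reference, a minimal, untested sketch of the mixing setup in today's AVAudioSession terms (the question uses the older C Audio Session API); the .mixWithOthers option is what keeps other playback alive while recording, and the function name and error handling here are illustrative, not taken from SCListener:

        import AVFoundation

        // Hypothetical helper: configure the shared audio session so that
        // recording from the microphone does not silence other audio playback.
        func configureListenWhilePlaying() {
            let session = AVAudioSession.sharedInstance()
            do {
                try session.setCategory(.playAndRecord,
                                        mode: .default,
                                        options: [.mixWithOthers, .defaultToSpeaker])
                try session.setActive(true)
            } catch {
                print("Audio session setup failed: \(error)")
            }
        }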

    Read the article

  • using AudioQueues with AudioFileReadBytes

    - by Santosh
    Hey, I'm trying to work with audio queues to play a very big MP3 file (around 23 hours long). When the audio queue asks for buffers through the callback, I'm using the AudioFileReadBytes() API to read the bytes from the audio file and feed the queue. Starting the queue fails with the error "prime failed". Any inputs? I also succeeded in playing the file using the AudioFileReadPackets API instead of AudioFileReadBytes(). But the problem with that API is that when I seek (fast forward) by a long interval, say 9 hours (for example, fast forwarding from 0:32 of playtime to 9:32), AudioFileReadPackets() takes a long time (almost 2 minutes) to read from the new location. Any comments would be greatly appreciated.

    Read the article

  • Adobe Flash Player security pop-up question

    - by kapildalwani
    I am building an audio recording tool using Flash and Wowza. I don't want to start the recording until the user clicks the Allow button in the security pop-up described here: http://www.macromedia.com/support/documentation/en/flashplayer/help/help05.html. For audio, I don't get this prompt until I attach the stream; for video, I can get the prompt when I attach the camera to the Video object. I want to avoid making a connection until the user clicks Accept, but the prompt doesn't appear until I make the connection request for audio. I am able to display the settings pop-up at http://www.macromedia.com/support/documentation/en/flashplayer/help/help09.html using SecurityManager. Is there a way I can invoke the privacy pop-up (http://www.macromedia.com/support/documentation/en/flashplayer/help/help05.html) from my code?

    Read the article

  • Sending most correct mimetype

    - by Roland Franssen
    Hi all, I have a list of extension-to-mimetype mappings in an INI file. However, some extensions have multiple mimetypes, for example:

        midi[] = "application/x-midi"
        midi[] = "audio/midi"
        midi[] = "audio/x-mid"
        midi[] = "audio/x-midi"
        midi[] = "music/crescendo"
        midi[] = "x-music/x-midi"

    That is 6 (possible) mimetypes for 1 extension. What's the common practice for determining the correct mimetype? (e.g. I need to set an HTTP Content-Type header.) I know it's not ideal to determine mimetypes based on extension, but I need consistent (cross-server) results (e.g. the fileinfo extension in PHP makes terrible guesses*).

    * Some fileinfo results, for example: js - text/plain, css - text/c-h
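
    For reference, the usual practice is to pick one canonical type per extension (preferring the registered type, e.g. audio/midi) and always serve that. A small illustrative sketch; the map and the function name are hypothetical, not taken from the question:

        // Hypothetical canonical map: one preferred Content-Type per extension.
        let canonicalMimeTypes: [String: String] = [
            "midi": "audio/midi",              // commonly treated as the canonical MIDI type
            "js":   "application/javascript",
            "css":  "text/css"
        ]

        // Fall back to a generic binary type when the extension is unknown.
        func contentType(forExtension ext: String) -> String {
            return canonicalMimeTypes[ext.lowercased()] ?? "application/octet-stream"
        }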

    Read the article

  • Expression Encoder SDK - WMA Pro Codec Issues with Windows Server 2003

    - by PortageMonkey
    I am using the Expression Encoder SDK to encode .avi and Flash files to a .wmv format suitable for Silverlight. By default, Expression Encoder encodes files that have audio using the WMA Pro codec. If you are running Windows Server 2003, this is a problem, as it doesn't support the WMA Pro codec and produces an error message similar to the following:

        Error Message: The Audio Profile settings do not match a valid system profile.
        Error Source: Microsoft.Expression.Encoder
        Error Target Site: System.String GetProfileString()

    I am looking for a way to change the default audio codec to something suitable for Windows Server 2003. I am aware that, although it is not supported natively, there is a highly invasive way to install Windows Media Player 11 and its codecs on Windows Server 2003, but this involves registry tinkering and other hacks not suitable for our production environments, so that solution is out.

    Read the article

  • AVAudioPlayer Memory Leak - Media Player Framework

    - by Krishnan
    Hi friends, I am using an AVAudioPlayer object to play audio. I create the audioPlayer object initially. Whenever an animation starts, I play the audio, and I pause it when the animation is finished. I initially found three memory leaks using Instruments (the responsible caller mentioned was RegisterEmbedCodecs). After a suggestion from "ahmet emrah" on this forum to add the MediaPlayer framework, the number of leaks was reduced to one. Is there any way to get rid of it completely? Thanks and regards, krishnan.

    Read the article

  • unable to download a file from rtmp server

    - by user309815
    Hi team, I want to download an audio file from a Red5 server over RTMP.

        string strUri = "rtmp://XXX/oflaDemo/" + Session["streamName"].ToString();
        string strUploadto = Server.MapPath("") + @"\Audio\" + "myaudio.flv";
        WebClient webClient = new WebClient();
        //webClient.DownloadFile("rtmp://begoniaprojects.com/oflaDemo/" + Session["streamName"].ToString(), Page.MapPath("") + @"\Audio\" + "myaudio.flv");
        webClient.DownloadFile(strUri, strUploadto);

    But I am getting a "URI prefix is not recognized" message while downloading. Please suggest a fix.

    Read the article

  • Is it possible to avoid downloading the song up to the supplied position using SoundManager2?

    - by dinjas
    I'm using the SoundManager2 JavaScript SDK on a site that streams synchronized audio from SoundCloud to multiple clients simultaneously. When a new user loads the page, the audio is loaded, and a position parameter is set to specify where playback should begin. The problem arises when the track is really long (say 60 minutes), and the current track position is substantially far into the track (e.g. 30 minutes). When this is the case, it takes a really long time before playback begins because the track has to download/buffer up to the current position. Is there a way to avoid downloading the 30 minutes of audio that I don't need?

    Read the article

  • Preload *.wav with SystemSoundID?

    - by fuzzygoat
    I am playing a WAV file to give a little audio feedback when a button in my UI is pressed. My question: when you first press the button there is a delay (about 1.5 secs) whilst the sound file "sound.wav" is loaded and cached. Is there a way to pre-cache this file (maybe in my viewDidLoad)? I guess I could do it by just playing it in viewDidLoad, but I would really need to disable the audio so it does not "beep" each time the app starts. Many thanks for any help. Gary. EDIT: Looks like my question is a duplicate of this post, unless anyone has any new info? Maybe there is a way to turn the play volume down temporarily, unless the audio is cleared each time through the run loop.
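
    For reference, a minimal sketch of pre-caching with the System Sound Services API; the resource name matches the question, but the function names are hypothetical:

        import AudioToolbox

        // Create the SystemSoundID once up front (e.g. from viewDidLoad) so the
        // first real button press only has to play an already-loaded sound.
        var buttonSoundID: SystemSoundID = 0

        func preloadButtonSound() {
            guard let url = Bundle.main.url(forResource: "sound", withExtension: "wav") else { return }
            AudioServicesCreateSystemSoundID(url as CFURL, &buttonSoundID)
        }

        func playButtonSound() {
            AudioServicesPlaySystemSound(buttonSoundID)
        }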

    Read the article

  • SOS: AudioFormat when writing to file in FreeTTS

    - by user330793
    Very annoying problem. I have developed a FreeTTS application, built on the FreeTTS classes, that writes the captured audio to a file; however, I am having some very annoying problems. When setting the audio player to SingleFileAudioPlayer, I also try to set the AudioFormat with my own default values for sampleRate, sampleSizeInBits, channels, signed and bigEndian. I access the AudioPlayer get methods at runtime to show these values, just to ensure they are set to what I specified, and they do match. However, when file writing completes and I check the properties of the resulting WAV file, they are set to the AudioPlayer's default settings. Normally this would be fine, except that I have to read the files into another application which has fixed audio property settings, so I always get output that sounds like I am fast-forwarding the sound and listening to it at the same time. Obviously this is because of the different sampling rates. I need help, please. Thanks, Henry

    Read the article

  • Is it bad practice to have a long initialization method?

    - by Paperflyer
    Many people have argued about function size. They say that functions in general should be pretty short. Opinions vary from something like 15 lines to "about one screen", which today is probably about 40-80 lines. Also, functions should always fulfill one task only. However, there is one kind of function that frequently fails both criteria in my code: initialization functions. For example, in an audio application, the audio hardware/API has to be set up, audio data has to be converted to a suitable format, and the object state has to be properly initialized. These are clearly three different tasks, and depending on the API this can easily span more than 50 lines. The thing with init functions is that they are generally only called once, so there is no need to reuse any of the components. Would you still break them up into several smaller functions, or would you consider big initialization functions to be OK?
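
    For reference, one way to keep the entry point short while the one-off setup work stays readable is to delegate to named private helpers; a sketch with hypothetical names:

        // The initializer reads as a table of contents; each helper owns one task.
        final class AudioEngineController {
            init() throws {
                try configureHardware()     // set up the audio hardware / API
                try prepareBufferFormats()  // convert audio data to a suitable format
                resetPlaybackState()        // initialize the object state
            }

            private func configureHardware() throws { /* ... */ }
            private func prepareBufferFormats() throws { /* ... */ }
            private func resetPlaybackState() { /* ... */ }
        }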

    Read the article

  • Does background audio on iOS require specific provisioning?

    - by Moshe
    I've added the appropriate XML to my plist (the "audio" key), and my app did stream audio in the background. I installed the app onto an iPhone 3GS running iOS 4 via Ad Hoc distribution. I played audio in my app and pressed the home button; it was still playing. Then I switched computers and reset my provisioning profile. I recompiled and exported with a generic provisioning profile. I sent the app to someone else to test on their 3GS via Ad Hoc, and the app does not work in the background.
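
    For reference, the "audio" background mode in Info.plist normally goes together with an AVAudioSession configured for playback; a minimal sketch (illustrative only, not the poster's code):

        import AVFoundation

        // Activate a playback session so audio keeps running after the home button,
        // alongside the UIBackgroundModes/"audio" entry in Info.plist.
        func enableBackgroundPlayback() {
            let session = AVAudioSession.sharedInstance()
            try? session.setCategory(.playback, mode: .default, options: [])
            try? session.setActive(true)
        }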

    Read the article

  • BlackBerry Technical Specification

    - by Sam
    I'm having trouble locating BlackBerry technical specifications, and their website is a mess. They also don't have a number that I can use to easily contact them. This isn't exactly a coding question, but what does the BlackBerry audio API look like, and where can I get technical specifications on audio? Specifically, I'm trying to find out more about audio-in through the mic-in on the 3.5 mm jack. Unfortunately, before I can proceed, I need to know things like the sampling rate, data width, etc. Direction to the right resource, or an answer if you know it off the top of your head, is appreciated.

    Read the article

  • Installing Skype on Amazon EC2 instance

    - by Adrian
    For my application, I need to have Skype working on my Amazon EC2 Windows instance. I got the application installed and am able to log in; however, I can't make a phone call, since I am getting a 'Can't detect your sound card' error. Since I'm trying to inject audio from an audio file into the phone call, I don't need the sound card on the server, so I need a way to bypass this error message. I have tried installing Virtual Audio Cable, which unfortunately didn't work (even though it worked on my desktop machine).

    Read the article

  • Easy way to combine php code (lame question)

    - by alekseygr
    Hi, I have a very easy and lame question. I have this code:

        <?php if (function_exists("insert_audio_player")) { insert_audio_player("[audio:|titles=]"); } ?>

    This code outputs an audio player on my page in WordPress. I need to call a custom field inside this code. My custom field code is:

        <?php print_custom_field('tc_filename'); ?>

    Something like:

        <?php if (function_exists("insert_audio_player")) { insert_audio_player("[audio:<?php print_custom_field('tc_filename'); ?>|titles=<?php print_custom_field('tc_title'); ?>]"); } ?>

    How can I call the second snippet inside the first? Thanks.

    Read the article

  • Paperclip generating wrong URLs in Heroku

    - by Tony
    Paperclip is generating wrong URLs in Heroku. I have an Audio model with an mp3 attachment, as follows:

        class Audio < ActiveRecord::Base
          has_attached_file :mp3,
            :storage => :s3,
            :s3_credentials => S3_CREDENTIALS,
            :bucket => S3_CREDENTIALS[:bucket],
            :path => ":rails_root/public/system/:attachment/:id/:style/:filename",
            :url => "/system/:attachment/:id/:style/:filename"

    I am calling audio.mp3.url from a controller, and it returns

        http://s3.amazonaws.com/MyApp/audios/mp3s//original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050

    instead of

        http://s3.amazonaws.com/MyApp/audios/mp3s/000/000/004/original/96a9ae89302fdf8462ee05eb829f2e17578b144e20120908-2-11f61zr.mp3?1347135050

    (which works). Why is it missing the '000/000/004' part of the route? The same model generates the right URL when used in a view. I am using Paperclip 3.2.0 and Rails 3.1.8. Any help?

    Read the article

  • Need to sync two lists with attribute time, but times aren't equal

    - by virgula24
    I'm going to try to describe my problem the best I can. I have two lists, one with audio frames and the other with color frames (what they contain is not relevant). Both of them have timestamps; they were captured at the same moment but at slightly different instants. So I have something like this:

        index   COLOR   AUDIO
        0       841     846
        1       873     897
        2       905     948
        3       940     1000

    The frames start at high numbers because they were captured and then trimmed to specific parts, but in short, frame 0 is synced, with only 5 ms difference (timestamps are in ms). In every case I have, the audio frame count is less than the color frame count. I need to make them have the same count. The starting frames may be coloraudio, color
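
    For reference, one simple way to equalize the two lists is to pair every audio timestamp with the closest color timestamp; a sketch (hypothetical names) using the timestamps from the question:

        // Pair each audio timestamp with the color timestamp closest to it,
        // so the result has exactly one entry per audio frame.
        func pairByNearestTimestamp(color: [Int], audio: [Int]) -> [(color: Int, audio: Int)] {
            guard !color.isEmpty else { return [] }
            return audio.map { a -> (color: Int, audio: Int) in
                let nearest = color.min(by: { abs($0 - a) < abs($1 - a) })!
                return (color: nearest, audio: a)
            }
        }

        // pairByNearestTimestamp(color: [841, 873, 905, 940], audio: [846, 897, 948, 1000])
        // -> [(841, 846), (905, 897), (940, 948), (940, 1000)]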

    Read the article

  • Using VLC as RTSP server

    - by StackedCrooked
    I'm trying to figure out how to use the server capabilities of VLC, more specifically how to export an SDP file when streaming over RTP. In chapter 4, in the section related to RTP streaming, examples for server and client are given:

        vlc -vvv input_stream --sout '#rtp{dst=192.168.0.12,port=1234,sdp=rtsp://server.example.org:8080/test.sdp}'
        vlc rtsp://server.example.org:8080/test.sdp

    It's not very clear to me how to make it actually work. I have tried these two commands for server and client using two cmd instances:

        vlc -I rc screen:// --sout=#rtp{dst=127.0.0.1,port=4444,sdp=rtsp://localhost:8080/test.sdp}
        vlc -I rc rtsp://localhost:8080/test.sdp

    Invoking the second command causes the first one to crash. The second command shows the error message "could not connect to localhost:8080".

    Read the article

  • Should I use 802.11n with a 15 Mbps ISP (Comcast Cable)?

    - by stackoverflowuser2010
    I currently own a Linksys WRT54GL 802.11a/b/g wireless router, and my ISP is Comcast Cable, providing me with a 15 Mbps (that's megabits per second, I believe) download speed. I am wondering if there is any benefit to using an 802.11n wireless router to access the Internet. The maximum theoretical speed of the WRT54GL router is 54 Mbps (802.11g), which is faster than the 15 Mbps provided by my ISP. I know that 802.11n has a maximum bandwidth of 300 Mbps, and it would help for intra-house transfers, such as streaming video from one computer to another. But is there any benefit to 802.11n for Internet activity, such as web browsing, gaming, and streaming video from Netflix?

    Read the article

  • Disable Windows Media Player "media server" network locations

    - by Moses
    I'm running Windows 8, and in the Computer menu I see a huge list of "media server" network locations for many of the PCs in my network (most running Windows 7). Is there a way to either locally disable this so I don't see this list every time, or disable this sharing feature on the other computers? I've tried disabling "Media streaming options" in the Network and Sharing Center (on my PC), but that had no effect. Another thing I tried was enabling media streaming but then selecting all the found clients and clicking Blocked; that had no effect in removing the list either. I've also attempted disabling the Windows Media Player Network Sharing Service, but alas, the list remains. I'm starting to believe there's a magic registry key to unbury and flip to a "1", but all the searching I've done has come up empty.

    Read the article

  • Windows 7 Media Center suddenly jerky

    - by Kris Erickson
    Media Center in Windows 7 has been running great for me for the past few months, since I switched to Windows 7. I can watch HD content on my Xbox, and I have had very few issues with it (occasionally it has locked up on the Xbox, but restarting the Xbox usually fixes that). All of a sudden it has gotten jerky: SD content playing on my computer (not even streaming to the Xbox) is jerky. VLC and Windows Media Player play the same content perfectly; it is only when it plays in Windows Media Center (whether streaming or not). Any ideas what could be the cause of this? And yes, I have rebooted several times...

    Read the article

  • VLC Media Server

    - by Josh
    We are using VLC on Ubuntu and trying to set up a streaming media server. We have the HTTP interface working fine from remote computers, and we can also see the video playing as text if we don't run VLC in screen. Our problem is the output streaming. When we use the main VLC web page that you get when you go to the server's IP, it does not save the output MRL (refresh the page and it is gone, even after clicking save). We tried the VLM page, and it appears to work fine from the HTTP page (it buffers, plays, and the timers go up when not paused, etc.). However, we still cannot connect remotely with a VLC client. The output parameters do save properly on the VLM page. We are noobs when it comes to this. Does anyone have a to-the-point procedure for getting a file X to play and stream on Ubuntu using VLC, assuming VLC is installed?

    Read the article

  • Ubuntu stopped recognizing my iPod

    - by flashnode
    Rythmbox on Ubuntu 10.10 used to recognize my 3rd gen Nano and transfer mp3s. Now I plug it in and Ubuntu doesn't pop-up that box that asks what you want to do anymore. It is only recognized if I reboot and the thing is plugged in. Here is the output to 'lsusb -v -s bus:device' Bus 001 Device 008: ID 05ac:1262 Apple, Inc. iPod Nano 3.Gen Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x05ac Apple, Inc. idProduct 0x1262 iPod Nano 3.Gen bcdDevice 0.01 iManufacturer 1 Apple Inc. iProduct 2 iPod iSerial 3 000A27001A670128 bNumConfigurations 2 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x02 EP 2 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 149 bNumInterfaces 3 bConfigurationValue 2 iConfiguration 4 iPod USB Interface bmAttributes 0xc0 Self Powered MaxPower 500mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 1 Control Device bInterfaceProtocol 0 iInterface 0 AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 1 (HEADER) bcdADC 1.00 wTotalLength 30 bInCollection 1 baInterfaceNr( 0) 1 AudioControl Interface Descriptor: bLength 12 bDescriptorType 36 bDescriptorSubtype 2 (INPUT_TERMINAL) bTerminalID 1 wTerminalType 0x0201 Microphone bAssocTerminal 2 bNrChannels 2 wChannelConfig 0x0003 Left Front (L) Right Front (R) iChannelNames 0 iTerminal 0 AudioControl Interface Descriptor: bLength 9 bDescriptorType 36 bDescriptorSubtype 3 (OUTPUT_TERMINAL) bTerminalID 2 wTerminalType 0x0101 USB Streaming bAssocTerminal 1 bSourceID 1 iTerminal 0 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 0 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 1 bNumEndpoints 1 bInterfaceClass 1 Audio bInterfaceSubClass 2 Streaming bInterfaceProtocol 0 iInterface 0 AudioStreaming Interface Descriptor: bLength 7 bDescriptorType 36 bDescriptorSubtype 1 (AS_GENERAL) bTerminalLink 2 bDelay 1 frames wFormatTag 1 PCM AudioStreaming Interface Descriptor: bLength 35 bDescriptorType 36 bDescriptorSubtype 2 (FORMAT_TYPE) bFormatType 1 (FORMAT_TYPE_I) bNrChannels 2 bSubframeSize 2 bBitResolution 16 bSamFreqType 9 Discrete tSamFreq[ 0] 8000 tSamFreq[ 1] 11025 tSamFreq[ 2] 12000 tSamFreq[ 3] 16000 tSamFreq[ 4] 22050 tSamFreq[ 5] 24000 tSamFreq[ 6] 32000 tSamFreq[ 7] 44100 tSamFreq[ 8] 48000 Endpoint Descriptor: bLength 9 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 1 Transfer Type Isochronous Synch Type None Usage Type Data wMaxPacketSize 0x00c0 1x 192 bytes bInterval 4 
bRefresh 0 bSynchAddress 0 AudioControl Endpoint Descriptor: bLength 7 bDescriptorType 37 bDescriptorSubtype 1 (EP_GENERAL) bmAttributes 0x01 Sampling Frequency bLockDelayUnits 0 Undefined wLockDelay 0 Undefined Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 2 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 0 No Subclass bInterfaceProtocol 0 None iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.01 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 208 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x83 EP 3 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 1 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 2 Device Status: 0x0000 (Bus Powered) This ubuntu forum told me to check the automount settings under /apps/nautilus/preferences/media_automount_open in gconf-editor. And I did that. Any clues?

    Read the article

  • Able to ping but does not get the data

    - by Dany
    I am facing a problem in my client-server program: when using wireless, I can ping but do not receive any data. There is a source which receives a streaming request from the client via the server. This works fine when all the machines are connected by LAN cable, but when I put all the computers on a Wi-Fi network, all the machines are able to ping each other; however, when the client sends the stream request to the server, pings between the server and the client say "destination unreachable". Everything works well as long as the client has not sent the streaming request. What might be the issue?

    Read the article

  • Flash Media Server slow over SSL

    - by Antilogic
    We are using FMS to host a VoD site. We host FMS internally (we do not use a CDN). We recently installed an SSL certificate to alleviate connection issues for clients (their networks either block or don't support RTMP); however, we're noticing that when streaming over RTMPS, connections are drastically slower (on the order of Mbps). I know SSL causes some amount of overhead, but both client and server show almost no signs of exertion. Speedtest.net and a locally hosted speed test confirm that bandwidth is not an issue. I'm really not a network guru, so I'm at a loss as to where to check next. Do any of you have an idea why streaming media would run so slowly over SSL?

    Read the article
