Search Results

Search found 5304 results on 213 pages for 'audio streaming'.


  • How can a Silverlight app download and play an mp3 file from a URL?

    - by Edward Tanguay
    I have a small Silverlight app which downloads all of the images and text it needs from a URL, like this:

        if (dataItem.Kind == DataItemKind.BitmapImage)
        {
            WebClient webClientBitmapImageLoader = new WebClient();
            webClientBitmapImageLoader.OpenReadCompleted += new OpenReadCompletedEventHandler(webClientBitmapImageLoader_OpenReadCompleted);
            webClientBitmapImageLoader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
        }
        else if (dataItem.Kind == DataItemKind.TextFile)
        {
            WebClient webClientTextFileLoader = new WebClient();
            webClientTextFileLoader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClientTextFileLoader_DownloadStringCompleted);
            webClientTextFileLoader.DownloadStringAsync(new Uri(dataItem.SourceUri, UriKind.Absolute), dataItem);
        }

    and:

        void webClientBitmapImageLoader_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            BitmapImage bitmapImage = new BitmapImage();
            bitmapImage.SetSource(e.Result);
            DataItem dataItem = e.UserState as DataItem;
            CompleteItemLoadedProcess(dataItem, bitmapImage);
        }

        void webClientTextFileLoader_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
        {
            DataItem dataItem = e.UserState as DataItem;
            string textFileContent = e.Result.ForceWindowLineBreaks();
            CompleteItemLoadedProcess(dataItem, textFileContent);
        }

    Each of the images and text files is then put in a dictionary so that the application has access to them at any time. This works well. Now I want to do the same with mp3 files, but all the information I can find on the web about playing mp3 files in Silverlight shows how to embed them in the .xap file, which I don't want to do, since then I wouldn't be able to download them dynamically as I do above. How can I download and play mp3 files in Silverlight the same way I download and show images and text?
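
    One approach that might work, sketched below: Silverlight's MediaElement accepts a stream via SetSource(), so the same OpenReadAsync pattern used for the images should carry over to mp3s. This assumes a MediaElement named mediaPlayer has been added to the XAML (the name is illustrative):

        // Hedged sketch: mediaPlayer is an assumed MediaElement in the XAML
        WebClient webClientMp3Loader = new WebClient();
        webClientMp3Loader.OpenReadCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                mediaPlayer.SetSource(e.Result); // hand the downloaded stream to the player
                mediaPlayer.Play();              // unnecessary if AutoPlay is true
            }
        };
        webClientMp3Loader.OpenReadAsync(new Uri(dataItem.SourceUri, UriKind.Absolute));

    The downloaded stream could also be kept in the dictionary like the other items and handed to SetSource() on demand.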

    Read the article

  • Displaying Video using a Window Handle

    - by fergs
    I'm working on a C# wrapper for Dallmeier cameras and currently have a working wrapper. I can connect to a camera by passing a window handle (in my application it's a picture box handle), which is used to send video and messages. Once connected I can send the StartLiveView command, and a live video stream is then shown in the picture box. Can someone explain how this works when all I pass in is the window handle? And how can I grab an image from this stream, given that Picturebox1.Image is null?
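
    On the first question, a hedged explanation: the wrapper almost certainly hands the HWND to a video renderer inside the vendor DLL, which attaches itself as a child window of the picture box and draws frames into it directly, below the level of WinForms; that is also why PictureBox1.Image stays null. Unless the SDK exposes a snapshot call, one workaround sketch is to copy the rendered pixels off the screen (this assumes the video is actually visible; overlay-based renderers may capture black):

        // Sketch, inside the form that hosts pictureBox1
        Bitmap snapshot = new Bitmap(pictureBox1.Width, pictureBox1.Height);
        using (Graphics g = Graphics.FromImage(snapshot))
        {
            Point origin = pictureBox1.PointToScreen(Point.Empty);
            g.CopyFromScreen(origin, Point.Empty, pictureBox1.Size);
        }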

    Read the article

  • DirectShow: Video-Preview and Image (with working code)

    - by xsl
    Questions / Issues

    If someone can recommend a good free hosting site, I can provide the whole project file. As mentioned in the text below, the TakePicture() method is not working properly on the HTC HD 2 device. It would be nice if someone could look at the code below and tell me whether what I'm doing is right or wrong.

    Introduction

    I recently asked a question about displaying a video preview, taking a camera image and rotating a video stream with DirectShow. The tricky thing about the topic is that it's very hard to find good examples, and the documentation and the framework itself are very hard to understand for someone who is new to Windows programming and C++ in general. Nevertheless I managed to create a class that implements most of these features and probably works with most mobile devices. Probably, because the DirectShow implementation depends a lot on the device itself. I could only test it with the HTC HD and HTC HD2, which are known as quite incompatible.

    HTC HD
    Working: Video preview, writing photo to file
    Not working: Set video resolution (CRASH), set photo resolution (LOW quality)

    HTC HD 2
    Working: Set video resolution, set photo resolution
    Problematic: Video preview rotated
    Not working: Writing photo to file

    To make it easier for others by providing a working example, I decided to share everything I have got so far below. I removed all of the error handling for the sake of simplicity. As far as documentation goes, I can recommend reading the MSDN documentation; after that the code below is pretty straightforward.

        void Camera::Init()
        {
            CreateComObjects();
            _captureGraphBuilder->SetFiltergraph(_filterGraph);
            InitializeVideoFilter();
            InitializeStillImageFilter();
        }

    Display a video preview (working with any tested handheld):

        void Camera::DisplayVideoPreview(HWND windowHandle)
        {
            IVideoWindow *_vidWin;
            _filterGraph->QueryInterface(IID_IMediaControl, (void **) &_mediaControl);
            _filterGraph->QueryInterface(IID_IVideoWindow, (void **) &_vidWin);
            _videoCaptureFilter->QueryInterface(IID_IAMVideoControl, (void**) &_videoControl);
            _captureGraphBuilder->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, NULL, NULL);
            CRect rect;
            long width, height;
            GetClientRect(windowHandle, &rect);
            _vidWin->put_Owner((OAHWND)windowHandle);
            _vidWin->put_WindowStyle(WS_CHILD | WS_CLIPSIBLINGS);
            _vidWin->get_Width(&width);
            _vidWin->get_Height(&height);
            height = rect.Height();
            _vidWin->put_Height(height);
            _vidWin->put_Width(rect.Width());
            _vidWin->SetWindowPosition(0, 0, rect.Width(), height);
            _mediaControl->Run();
        }

    HTC HD2: if SetPhotoResolution() has been called, FindPin will return E_FAIL; if not, it will create a file full of null bytes. HTC HD: works.

        void Camera::TakePicture(WCHAR *fileName)
        {
            CComPtr<IFileSinkFilter> fileSink;
            CComPtr<IPin> stillPin;
            CComPtr<IUnknown> unknownCaptureFilter;
            CComPtr<IAMVideoControl> videoControl;
            _imageSinkFilter.QueryInterface(&fileSink);
            fileSink->SetFileName(fileName, NULL);
            _videoCaptureFilter.QueryInterface(&unknownCaptureFilter);
            _captureGraphBuilder->FindPin(unknownCaptureFilter, PINDIR_OUTPUT, &PIN_CATEGORY_STILL, &MEDIATYPE_Video, FALSE, 0, &stillPin);
            _videoCaptureFilter.QueryInterface(&videoControl);
            videoControl->SetMode(stillPin, VideoControlFlag_Trigger);
        }

    Set resolution: works great on the HTC HD2. The HTC HD won't allow SetVideoResolution() and only offers one low-resolution photo resolution.

        void Camera::SetVideoResolution(int width, int height)
        {
            SetResolution(true, width, height);
        }

        void Camera::SetPhotoResolution(int width, int height)
        {
            SetResolution(false, width, height);
        }

        void Camera::SetResolution(bool video, int width, int height)
        {
            IAMStreamConfig *config;
            config = NULL;
            if (video)
            {
                _captureGraphBuilder->FindInterface(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config);
            }
            else
            {
                _captureGraphBuilder->FindInterface(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, IID_IAMStreamConfig, (void**) &config);
            }
            int resolutions, size;
            VIDEO_STREAM_CONFIG_CAPS caps;
            config->GetNumberOfCapabilities(&resolutions, &size);
            for (int i = 0; i < resolutions; i++)
            {
                AM_MEDIA_TYPE *mediaType;
                if (config->GetStreamCaps(i, &mediaType, reinterpret_cast<BYTE*>(&caps)) == S_OK)
                {
                    int maxWidth = caps.MaxOutputSize.cx;
                    int maxHeigth = caps.MaxOutputSize.cy;
                    if (maxWidth == width && maxHeigth == height)
                    {
                        VIDEOINFOHEADER *info = reinterpret_cast<VIDEOINFOHEADER*>(mediaType->pbFormat);
                        info->bmiHeader.biWidth = maxWidth;
                        info->bmiHeader.biHeight = maxHeigth;
                        info->bmiHeader.biSizeImage = DIBSIZE(info->bmiHeader);
                        config->SetFormat(mediaType);
                        DeleteMediaType(mediaType);
                        break;
                    }
                    DeleteMediaType(mediaType);
                }
            }
        }

    Other methods used to build the filter graph and create the COM objects:

        void Camera::CreateComObjects()
        {
            CoInitialize(NULL);
            CoCreateInstance(CLSID_CaptureGraphBuilder, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (void **) &_captureGraphBuilder);
            CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void **) &_filterGraph);
            CoCreateInstance(CLSID_VideoCapture, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_videoCaptureFilter);
            CoCreateInstance(CLSID_IMGSinkFilter, NULL, CLSCTX_INPROC, IID_IBaseFilter, (void**) &_imageSinkFilter);
        }

        void Camera::InitializeVideoFilter()
        {
            _videoCaptureFilter->QueryInterface(&_propertyBag);
            wchar_t deviceName[MAX_PATH] = L"\0";
            GetDeviceName(deviceName);
            CComVariant comName = deviceName;
            CPropertyBag propertyBag;
            propertyBag.Write(L"VCapName", &comName);
            _propertyBag->Load(&propertyBag, NULL);
            _filterGraph->AddFilter(_videoCaptureFilter, L"Video Capture Filter Source");
        }

        void Camera::InitializeStillImageFilter()
        {
            _filterGraph->AddFilter(_imageSinkFilter, L"Still image filter");
            _captureGraphBuilder->RenderStream(&PIN_CATEGORY_STILL, &MEDIATYPE_Video, _videoCaptureFilter, NULL, _imageSinkFilter);
        }

        void Camera::GetDeviceName(WCHAR *deviceName)
        {
            HRESULT hr = S_OK;
            HANDLE handle = NULL;
            DEVMGR_DEVICE_INFORMATION di;
            GUID guidCamera = { 0xCB998A05, 0x122C, 0x4166, 0x84, 0x6A, 0x93, 0x3E, 0x4D, 0x7E, 0x3C, 0x86 };
            di.dwSize = sizeof(di);
            handle = FindFirstDevice(DeviceSearchByGuid, &guidCamera, &di);
            StringCchCopy(deviceName, MAX_PATH, di.szLegacyName);
        }

    Full header file:

        #ifndef __CAMERA_H__
        #define __CAMERA_H__

        class Camera
        {
        public:
            void Init();
            void DisplayVideoPreview(HWND windowHandle);
            void TakePicture(WCHAR *fileName);
            void SetVideoResolution(int width, int height);
            void SetPhotoResolution(int width, int height);

        private:
            CComPtr<ICaptureGraphBuilder2> _captureGraphBuilder;
            CComPtr<IGraphBuilder> _filterGraph;
            CComPtr<IBaseFilter> _videoCaptureFilter;
            CComPtr<IPersistPropertyBag> _propertyBag;
            CComPtr<IMediaControl> _mediaControl;
            CComPtr<IAMVideoControl> _videoControl;
            CComPtr<IBaseFilter> _imageSinkFilter;

            void GetDeviceName(WCHAR *deviceName);
            void InitializeVideoFilter();
            void InitializeStillImageFilter();
            void CreateComObjects();
            void SetResolution(bool video, int width, int height);
        };

        #endif

    Read the article

  • NAudio playback won't stop successfully

    - by Kurru
    Hi. When using NAudio to play back an mp3 in the console, I can't figure out how to stop the playback. When I call waveout.Stop(), the code just stops running and waveout.Dispose() never gets called. Is it something to do with the function callback? I don't know how to fix that if it is.

        static string MP3 = @"song.mp3";
        static WaveOut waveout;
        static WaveStream playback;

        static void Main(string[] args)
        {
            waveout = new WaveOut(WaveCallbackInfo.FunctionCallback());
            playback = OpenMp3Stream(MP3);
            waveout.Init(playback);
            waveout.Play();
            Console.WriteLine("Started");
            Thread.Sleep(2 * 1000);
            Console.WriteLine("Ending");
            if (waveout.PlaybackState != PlaybackState.Stopped)
                waveout.Stop();
            Console.WriteLine("Stopped");
            waveout.Dispose();
            Console.WriteLine("1st dispose");
            playback.Dispose();
            Console.WriteLine("2nd dispose");
        }

        private static WaveChannel32 OpenMp3Stream(string fileName)
        {
            WaveChannel32 inputStream;
            WaveStream mp3Reader = new Mp3FileReader(fileName);
            WaveStream pcmStream = WaveFormatConversionStream.CreatePcmStream(mp3Reader);
            WaveStream blockAlignedStream = new BlockAlignReductionStream(pcmStream);
            inputStream = new WaveChannel32(blockAlignedStream);
            return inputStream;
        }
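
    A hedged guess at the cause, with a sketch: WaveCallbackInfo.FunctionCallback() delivers driver callbacks on a thread that Stop() then waits on, a known way to hang a console app that has no message pump. Assuming a NAudio version that includes the event-driven WaveOutEvent output, the same program can avoid function callbacks entirely:

        // Sketch: event-driven playback, nothing for Stop() to deadlock on
        using (var waveOut = new WaveOutEvent())
        using (var reader = new Mp3FileReader("song.mp3"))
        {
            waveOut.Init(reader);
            waveOut.Play();
            Thread.Sleep(2 * 1000);
            waveOut.Stop(); // returns normally here
        }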

    Read the article

  • How to use RTPSocket to send RTP packets

    - by Afro Genius
    Hi there, I am relatively new to JMF but have gone through the documents and have a sufficient understanding of how it works. That being said, I am having some trouble implementing the server side for RTPSockets. After looking at the illustrations and examples I am still a bit confused. Am I to develop DataSource and also DataSink classes to handle the transfer? What I am trying to do is stream data from my application to the underlying network and receive it back through another application. I have receiving working and understand it, but I just can't get my head around the steps involved in sending. Any help would be most appreciated.

    Read the article

  • Dividing a Video into Frames and Sending Frames to Streams

    - by Amit Kumar
    I have to implement a "demux" that divides up a video stream and sends each frame to one of multiple output streams in a round-robin fashion. I am trying to implement the demux as follows. The video stream contains one frame after another and is implemented via a java InputStream. Each frame has a frame header followed by the image data. The demux needs to read the frame header to know the size of the image data. The image data can then be redirected from the input video stream to one of the output streams (java OutputStream). My problem is about how to implement this redirection. That is, connect the InputStream to the OutputStream to send N bytes (here N is the size of the image data), and then disconnect and connect to another OutputStream. I have seen the interface of PipedInputStream etc but they do not seem to implement the disconnection.

    Read the article

  • Looping music with intro in XNA using SoundEffect

    - by Jordan Roher
    I have two sound files: Sound A is an 18 second intro designed to be played once Sound B is a 1 minute looping track I'd like to play Sound A once, then once Sound A is done, immediately play Sound B and keep looping Sound B until I tell it to stop. This is supposed to be looping town music in an RPG. I've tried doing this in code using just SoundEffect, but there's a tiny yet noticeable gap between the end of Sound A and the beginning of Sound B. Even if I put monitoring code watching Sound A's SoundEffectInstance.State in the Update() function, I haven't been able to start Sound B exactly when Sound A finishes so that it's seamless. I'd prefer to use SoundEffect because I can load WMA files rather than being stuck with WAVs in XACT.
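
    One possible angle, sketched under the assumption of XNA 4.0: DynamicSoundEffectInstance queues raw PCM buffers back to back on a single voice, so the loop can be queued before the intro finishes and there is no scheduling gap to hit. Here introBytes and loopBytes are assumed to hold the two tracks decoded to 16-bit PCM at 44.1 kHz stereo:

        // Hedged sketch (XNA 4.0): gapless intro-then-loop on one voice
        var music = new DynamicSoundEffectInstance(44100, AudioChannels.Stereo);
        music.BufferNeeded += (s, e) =>
        {
            if (music.PendingBufferCount < 2)
                music.SubmitBuffer(loopBytes); // keep the loop queued forever
        };
        music.SubmitBuffer(introBytes); // the intro plays exactly once
        music.SubmitBuffer(loopBytes);  // first loop pass, queued seamlessly behind it
        music.Play();

    The gap heard with plain SoundEffect is hard to avoid, since starting Sound B from Update() quantizes its start time to the frame rate.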

    Read the article

  • Determining what frequencies correspond to the x axis in aurioTouch sample application

    - by eagle
    I'm looking at the aurioTouch sample application for the iPhone SDK. It has a basic spectrum analyzer implemented when you choose the "FFT" option. One of the things the app is lacking is x-axis labels (i.e. the frequency labels). In the aurioTouchAppDelegate.mm file, the function - (void)drawOscilloscope at line 652 has the following code:

        if (displayMode == aurioTouchDisplayModeOscilloscopeFFT)
        {
            if (fftBufferManager->HasNewAudioData())
            {
                if (fftBufferManager->ComputeFFT(l_fftData))
                    [self setFFTData:l_fftData length:fftBufferManager->GetNumberFrames() / 2];
                else
                    hasNewFFTData = NO;
            }

            if (hasNewFFTData)
            {
                int y, maxY;
                maxY = drawBufferLen;
                for (y = 0; y < maxY; y++)
                {
                    CGFloat yFract = (CGFloat)y / (CGFloat)(maxY - 1);
                    CGFloat fftIdx = yFract * ((CGFloat)fftLength);

                    double fftIdx_i, fftIdx_f;
                    fftIdx_f = modf(fftIdx, &fftIdx_i);

                    SInt8 fft_l, fft_r;
                    CGFloat fft_l_fl, fft_r_fl;
                    CGFloat interpVal;

                    fft_l = (fftData[(int)fftIdx_i] & 0xFF000000) >> 24;
                    fft_r = (fftData[(int)fftIdx_i + 1] & 0xFF000000) >> 24;
                    fft_l_fl = (CGFloat)(fft_l + 80) / 64.;
                    fft_r_fl = (CGFloat)(fft_r + 80) / 64.;
                    interpVal = fft_l_fl * (1. - fftIdx_f) + fft_r_fl * fftIdx_f;
                    interpVal = CLAMP(0., interpVal, 1.);
                    drawBuffers[0][y] = (interpVal * 120);
                }
                cycleOscilloscopeLines();
            }
        }

    From my understanding, this part of the code is what decides which magnitude to draw for each frequency in the UI. My question is: how can I determine what frequency each iteration (or y value) represents inside the for loop? For example, if I want to know the magnitude for 6 kHz, I'm thinking of adding a line similar to the following:

        if (yValueRepresentskHz(y, 6))
            NSLog(@"The magnitude for 6kHz is %f", (interpVal * 120));

    Please note that although they chose to use the variable name y, from what I understand it actually represents the x-axis in the visual graph of the spectrum analyzer, and the value of drawBuffers[0][y] represents the y-axis.
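
    A worked mapping, under the assumption that the audio session runs at the usual hardware rate of 44100 Hz (worth verifying in the app): the fftLength bins span 0 Hz up to the Nyquist frequency, so

        frequency(y) = yFract * (sampleRate / 2) = (y / (maxY - 1)) * 22050

    For 6 kHz that gives yFract = 6000 / 22050 ≈ 0.272, i.e. y ≈ 0.272 * (maxY - 1). A check such as fabs(yFract * 22050.0 - 6000.0) < binWidth / 2, with binWidth = 22050.0 / fftLength, would pick out the matching column.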

    Read the article

  • Text to speech on iPhone

    - by lostInTransit
    Hi. Is there any way we can convert text to speech in an iPhone app? Is it possible using the SDK? Are there any third-party TTS engines available for the iPhone? (AFAIK Acapela is not yet released.) Thanks

    Read the article

  • How to open wav file with Lua

    - by Pete Webbo
    Hello, I am trying to do some wav processing using Lua, but have fallen at the first hurdle! I cannot find a function or library that will allow me to load a wav file and access the raw data. There is one library, but it only allows playing of wavs, not access to the raw data. Are there any out there? Cheers, Pete.

    Read the article

  • High-Performance In-Browser Networking

    - by Jon Purdy
    (Similar in spirit to but different in practice from this question.) Is there any cross-browser-compatible, in-browser technology that allows a high-performance persistent network connection between a server application and a client written in, say, Javascript? Think XmlHttpRequest on caffeine. I am working on a visualisation system that's restricted to at most a few users at once, and the server is pretty robust, so it can handle as much as it needs to. I would like to allow the client to have access to video streamed from the server at a minimum of about 20 frames per second, regardless of what their graphics hardware capabilities are. Simply put: is this doable without resorting to Flash or Java?
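
    One technique that fits these constraints, sketched from the server side (C# is an assumption here; the question doesn't name a server language): motion JPEG pushed over a single long-lived HTTP response with multipart/x-mixed-replace, which Firefox, Safari and Chrome render natively inside an img tag, with no Flash or Java, and which comfortably reaches 20 fps for a handful of users. IE does not support it, which may or may not matter here.

        // Hedged sketch: push JPEG frames over one long-lived HTTP response.
        // RenderNextFrameAsJpeg() is a hypothetical stand-in for the real frame source.
        using System;
        using System.Net;
        using System.Text;

        class MjpegServer
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://+:8080/video/");
                listener.Start();
                var context = listener.GetContext(); // one viewer at a time, per the question
                context.Response.ContentType = "multipart/x-mixed-replace; boundary=frame";
                var output = context.Response.OutputStream;
                while (true)
                {
                    byte[] jpeg = RenderNextFrameAsJpeg();
                    byte[] head = Encoding.ASCII.GetBytes(
                        "--frame\r\nContent-Type: image/jpeg\r\nContent-Length: " + jpeg.Length + "\r\n\r\n");
                    output.Write(head, 0, head.Length);
                    output.Write(jpeg, 0, jpeg.Length);
                    output.Flush(); // sleep ~50 ms between frames for 20 fps
                }
            }

            static byte[] RenderNextFrameAsJpeg() { /* hypothetical frame source */ return new byte[0]; }
        }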

    Read the article

  • Problems with Activity lifecycle and VideoView playback

    - by Alex Volovoy
    Hi all, I've run into another problem with VideoView. When video is playing and I put the device to sleep using the hard button, onPause is called, but it is followed by:

        03-17 11:26:33.779: WARN/ActivityManager(884): Activity pause timeout for HistoryRecord{4359f620 com.package/com.package.VideoViewActivity}

    And then I get onStart/onResume again and the video starts playing. I've tried to move code around onStart/onStop, but it doesn't seem to make a difference. Sample code:

        public class VideoViewActivity extends Activity {
            private String path = "";
            private VideoView mVideoView;
            private static final String MEDIA_URL = "media_url";

            @Override
            public void onCreate(Bundle icicle) {
                super.onCreate(icicle);
                setContentView(R.layout.videoview);
                mVideoView = (VideoView) findViewById(R.id.surface_view);
                path = getIntent().getStringExtra(MEDIA_URL);
            }

            @Override
            public void onResume() {
                super.onResume();
                mVideoView.setVideoPath(path);
                mVideoView.setMediaController(new MediaController(this));
                mVideoView.requestFocus();
                mVideoView.start();
            }

            @Override
            public void onPause() {
                super.onPause();
                mVideoView.stopPlayback();
                mVideoView.setMediaController(null);
            }
        }

    Read the article

  • ShoutCast over SSL

    - by Honus Wagner
    So I've gone ahead and set up my ShoutCast DNAS server and set up my DSP in Winamp on my host computer. The server listens on port 8000, so per some instructions I installed an output plugin for Winamp (the Shoutcast DSP) and used port 8000 and the password to connect. The server accepts the connection. Now, what the heck do I do? My host computer is SSL-secured and the DNAS server is installed within the secure web directory (if that matters). My desired end result is to listen to my ShoutCast setup at home (the host computer) from any computer. I tried browsing to my IP address and port 8000 (without using HTTPS) and it comes back with nothing. If I browse to https://my.server.com:8000, I get (Error code: ssl_error_rx_record_too_long). Have I completely missed something, or am I just a total moron? Thanks.
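
    A hedged note on the likely cause: ShoutCast DNAS speaks plain HTTP on its listen port, so pointing an https:// URL at port 8000 makes the browser attempt a TLS handshake against a non-TLS service, which is exactly what ssl_error_rx_record_too_long indicates; the SSL certificate on the web directory does not apply to the DNAS port. Plain-HTTP URLs of this shape are the usual test (the trailing /; works around DNAS serving its status page to browser user agents):

        http://my.server.com:8000/;           (stream, in a browser)
        http://my.server.com:8000/listen.pls  (playlist, in a media player)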

    Read the article

  • AudioQueue in-memory playback example

    - by Jonesy
    Does anybody know of any examples using AudioQueue that play from an in-memory source? All the examples I can find play from files (using AudioFileReadPackets), but in my particular case I am generating the data myself in real time, so ideally I want to enqueue the data myself rather than sucking it out of a file in the callback. Any help much appreciated.

    Read the article

  • How to play .3gp videos in mobile using RTMP (FMS) and HTTP?

    - by Sunil Kumar
    Hi, I am not able to play video on a mobile device; the video is in a .3gp container and H.263 / AMR_NB encoded. I just want to play my website's videos on mobile devices as well, just like youtube.com, and I want to use both RTMP and HTTP. My requirements are as follows:

    - Which codec and container will be best? Should I use FLV to play video on a mobile device?
    - Is RTSP required, or can RTMP be used?
    - Are the NetStream and NetConnection methods in Flash Lite Player different from those in Flash Player?
    - How do I play 3gp video over an RTMP stream? Is ns.play("mp4:mobilevideo.3gp", 0, -1, true) OK, or is anything else required?
    - For the mobile browser and the computer browser, can I use a single player, or do I have to make a different player for each? It would be better if I could use a single player for both.
    - Sample code for testing would be appreciated.

    I found the articles below, which mention that video in a 3gp container can be played on mobile as well:

    http://www.hsharma.com/tech/articles/flash-lite-30-video-formats-and-video-volume/
    http://www.adobe.com/devnet/logged_in/dmotamedi_fms3.html

    Thanks, Sunil Kumar

    Read the article

  • loading mp3 from file using random access to flash.media.Sound

    - by Irfan Mulic
    We are migrating an application from Delphi to Flex (AIR) that plays mp3 files out of a big random-access file. It stores positions and sizes used to extract the mp3 data into a FileStream/MemoryStream, and then we use bass.dll to play it from the memory stream. Now I have to play those same mp3s in Flex, but I am not sure how... I was reading something similar about reading/writing data using a ByteArray here, but how do I apply it to flash.media.Sound? http://livedocs.adobe.com/flex/3/html/help.html?content=ByteArrays_2.html Any help?

    Read the article

  • Mobile Video Detection

    - by aaroninfidel
    Hi, I'm using DeviceAtlas to detect mobile phones. I was wondering if anyone had some good resources in terms of standard codecs and video dimensions that are used, and how you go about serving video to mobile devices. Thanks! -Aaron

    Read the article

  • video calling (center)

    - by rrejc
    We are starting to develop a new application and I'm searching for information/tips/guides on application architecture. The application should:

    1. read the data from an external (USB) device
    2. send the data to the remote server (through the internet)
    3. receive the data from the remote server
    4. perform a video call to the calling (support) center
    5. receive a video call from the calling (support) center
    6. support touch screens
    7. In addition: some of the data should also be visible through a web page.

    So I was thinking about the following. On the server side:

    - use a database (probably MS SQL)
    - use an ORM (NHibernate) to map the data from the DB to the domain objects
    - create a layer with business logic in C#
    - create web (WCF) services (for the client application)
    - create an ASP.NET MVC application (for item 7) to enable data view through the browser

    On the client side I would use a WPF 4 application which will communicate with the external device and the WCF services on the server. So far so good. Now the problem begins: I have no idea how to create the video call (outgoing or incoming) part of the application. I believe there is no problem communicating with the microphone, speaker and camera from WPF/C#. But how do I communicate with the call center? What protocol and encoding should be used? I think I will need to create some kind of server which will:

    - have a list of operators in the calling center and track which operator is occupied and which is free
    - have a list of connected end users
    - receive incoming calls from end users and delegate each call to a free operator
    - delegate calls from the calling center to the end user

    Any info, link, anything on where to start would be much appreciated. Many thanks!
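
    For the signaling half of the problem, a hypothetical sketch of how it could be expressed as a WCF duplex contract, since WCF is already in the stack. Every name below is illustrative, not an existing API, and the actual audio/video transport (RTP, or a third-party SIP stack) is a separate concern that this contract only coordinates:

        // Hypothetical signaling sketch; names are illustrative only.
        using System.ServiceModel;

        [ServiceContract(CallbackContract = typeof(ICallEvents))]
        public interface ICallBroker
        {
            [OperationContract]
            void RegisterOperator(string operatorId);   // call-center side signs in

            [OperationContract]
            void RegisterEndUser(string userId);        // client side signs in

            [OperationContract]
            string RequestCall(string userId);          // returns a free operator id, or null
        }

        public interface ICallEvents
        {
            [OperationContract(IsOneWay = true)]
            void IncomingCall(string fromId, string mediaEndpoint); // where to point the media stream
        }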

    Read the article

  • Understanding PTS and DTS in video frames

    - by theateist
    I had fps issues when transcoding from avi to mp4 (x264). Eventually the problem turned out to be in the PTS and DTS values, so lines 12-15 were added before the av_interleaved_write_frame function:

        1.  AVFormatContext* outContainer = NULL;
        2.  avformat_alloc_output_context2(&outContainer, NULL, "mp4", "c:\\test.mp4");
        3.  AVCodec *encoder = avcodec_find_encoder(AV_CODEC_ID_H264);
        4.  AVStream *outStream = avformat_new_stream(outContainer, encoder);
        5.  // outStream->codec initiation
        6.  // ...
        7.  avformat_write_header(outContainer, NULL);
        8.  // reading and decoding packet
        9.  // ...
        10. avcodec_encode_video2(outStream->codec, &encodedPacket, decodedFrame, &got_frame);
        11.
        12. if (encodedPacket.pts != AV_NOPTS_VALUE)
        13.     encodedPacket.pts = av_rescale_q(encodedPacket.pts, outStream->codec->time_base, outStream->time_base);
        14. if (encodedPacket.dts != AV_NOPTS_VALUE)
        15.     encodedPacket.dts = av_rescale_q(encodedPacket.dts, outStream->codec->time_base, outStream->time_base);
        16.
        17. av_interleaved_write_frame(outContainer, &encodedPacket);

    After reading many posts I still do not understand:

    - outStream->codec->time_base = 1/25 and outStream->time_base = 1/12800. The first one was set by me, but I cannot figure out why, or by what, 12800 was set. I noticed that before line (7) outStream->time_base = 1/90000, and right after it changes to 1/12800. Why?
    - When I transcode from avi to avi, i.e. changing line (2) to avformat_alloc_output_context2(&outContainer, NULL, "avi", "c:\\test.avi"), outStream->time_base remains 1/25 both before and after line (7), unlike in the mp4 case. Why?
    - What is the difference between the time_base of outStream->codec and that of outStream?
    - To calculate the pts, av_rescale_q takes two time_bases, multiplies their fractions crosswise and then computes the pts. Why does it do it this way? As I debugged it, encodedPacket.pts has a value incremented by 1 each frame, so why change it if it already has a value?
    - At the beginning the dts value is -2, and after each rescaling it still has a negative value, but despite this the video plays correctly! Shouldn't it be positive?
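
    On the av_rescale_q question, the arithmetic is just a change of units (a worked example using the values quoted above): av_rescale_q(a, bq, cq) returns a * bq / cq computed on the rational time bases. With bq = 1/25 (the codec time_base) and cq = 1/12800 (the mp4 muxer's stream time_base):

        pts_out = pts_in * (1/25) / (1/12800) = pts_in * 12800 / 25 = pts_in * 512

    So a pts that increments by 1 per frame in codec ticks becomes 0, 512, 1024, ... in stream ticks: the same instants in time, expressed in a finer unit. That is why the value has to be rescaled even though it "already has a value"; without lines 12-15 the muxer would read codec ticks as stream ticks and the fps would be wrong.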

    Read the article

  • If I want to play the same sound 10 times per second, must I have 10 copies of that sound in memory?

    - by mystify
    I have a sound that needs to get played 10 times per second. The sound is 1 second long. So it does overlap like 10 times. However, as far as I understand the Finch sound library, I would need 10 different instances of a sound in place so that I can play it 10 times at almost the same time. When I have just one instance, the sound would stop and play from the beginning on every iteration, but not overlap with itself. How to do that?

    Read the article

  • Dimdim Change name

    - by islam
    I built DimDim v4.5 on my PC and it works fine for me. Each time I want to start a meeting I type my PC's IP address, like this:

        http://<my-ip-address>/dimdim

    I want to change the word dimdim to something else, like:

        http://<my-ip-address>/meeting

    Regards

    Read the article
