Search Results

Search found 5304 results on 213 pages for 'audio streaming'.

Page 82/213 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • NAudio demos not working anymore

    - by Kurru
    I just tried to run the NAudio demos and I'm getting a weird error: System.BadImageFormatException: Could not load file or assembly 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. An attempt was made to load a program with an incorrect format. File name: 'NAudio, Version=1.3.8.0, Culture=neutral, PublicKeyToken=null' at NAudioWpfDemo.AudioGraph..ctor() at NAudioWpfDemo.ControlPanelViewModel..ctor(IWaveFormRenderer waveFormRenderer, SpectrumAnalyser analyzer) in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\ControlPanelViewModel.cs:line 23 at NAudioWpfDemo.MainWindow..ctor() in C:\Users\Admin\Downloads\NAudio-1.3\NAudio-1-3\Source Code\NAudioWpfDemo\MainWindow.xaml.cs:line 15 WRN: Assembly binding logging is turned OFF. To enable assembly bind failure logging, set the registry value [HKLM\Software\Microsoft\Fusion!EnableLog] (DWORD) to 1. Note: There is some performance penalty associated with assembly bind failure logging. To turn this feature off, remove the registry value [HKLM\Software\Microsoft\Fusion!EnableLog]. Since the last time I used the NAudio demos I have changed from 32-bit Windows XP to 64-bit Windows 7. Would this cause this issue? It's very annoying, as I was about to try my hand at audio in C# again.
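    A likely culprit is that on 64-bit Windows an AnyCPU build of the demo runs as a 64-bit process, while the ACM-based parts of NAudio 1.3 expect a 32-bit process; setting the project's Platform Target to x86 is the usual workaround. A minimal check of process bitness, assuming .NET 4 or later is available, might look like this:

```csharp
// Minimal sketch: report whether the demo is running as a 64-bit process.
// If Is64BitProcess prints True on 64-bit Windows 7, rebuilding the demo with
// Platform Target = x86 is the usual fix for a BadImageFormatException caused
// by a 32-bit-only dependency.
using System;

class BitnessCheck
{
    static void Main()
    {
        Console.WriteLine("64-bit OS:      " + Environment.Is64BitOperatingSystem);
        Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
    }
}
```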

    Read the article

  • Silverlight MediaElement Position Property Weirdness

    - by BarrettJ
    I have a MediaElement that is reporting its position incorrectly and weirdly, but consistently. It seems like when it gets to the last second of the audio (and it's always the last second, regardless of whether the sound is two seconds long or ten), it doesn't update its position until it finishes. Example output: Playback Progress: 0/3.99 - 0 Playback Progress: 0.01/3.99 - 0 Playback Progress: 0.03/3.99 - 0 Playback Progress: 0.06/3.99 - 1 Playback Progress: 0.07/3.99 - 1 Playback Progress: 0.08/3.99 - 2 Playback Progress: 0.11/3.99 - 2 Playback Progress: 0.14/3.99 - 3 Playback Progress: 0.19/3.99 - 4 Playback Progress: 0.23/3.99 - 5 Playback Progress: 0.25/3.99 - 6 Playback Progress: 0.28/3.99 - 7 Playback Progress: 0.3/3.99 - 7 Playback [SNIP] Playback Progress: 2.8/3.99 - 70 Playback Progress: 2.83/3.99 - 70 Playback Progress: 2.88/3.99 - 72 Playback Progress: 2.9/3.99 - 72 Playback Progress: 2.91/3.99 - 72 Playback Progress: 2.92/3.99 - 73 Playback Progress: 2.99/3.99 - 74 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3/3.99 - 75 Playback Progress: 3.99/3.99 - 100 That is the result of: WriteLine("Playback Progress: " + Position + "/" + LengthInSeconds + " - " + (int)((Position / LengthInSeconds) * 100)); public double Position { get { return my_media_element != null ? my_media_element.Position.TotalSeconds : 0; } } public double LengthInSeconds { get { return my_media_element != null ? my_media_element.NaturalDuration.TimeSpan.TotalSeconds : 0; } } Anyone have any ideas why this is occurring?
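    If the coarse updates near the end can't be avoided, one workaround is to interpolate between the values the control does report. A rough sketch, where the MediaElement instance and the OnProgress consumer are placeholders for whatever the page actually owns (pause handling is omitted):

```csharp
// Hedged workaround sketch: near the end of short clips the Silverlight
// MediaElement only refreshes Position at coarse intervals, so estimate the
// position from wall-clock time between the values it does report.
using System;
using System.Windows.Controls;
using System.Windows.Threading;

public class SmoothedPositionTracker
{
    private readonly MediaElement media;
    private readonly DispatcherTimer timer = new DispatcherTimer();
    private double lastReported;
    private DateTime lastChange = DateTime.Now;

    public event Action<double> OnProgress;   // 0..100, placeholder consumer

    public SmoothedPositionTracker(MediaElement media)
    {
        this.media = media;
        timer.Interval = TimeSpan.FromMilliseconds(50);
        timer.Tick += (s, e) => Update();
        timer.Start();
    }

    private void Update()
    {
        double reported = media.Position.TotalSeconds;
        if (reported != lastReported)
        {
            lastReported = reported;
            lastChange = DateTime.Now;
        }
        // Interpolate forward from the last value the control actually reported.
        double estimated = lastReported + (DateTime.Now - lastChange).TotalSeconds;
        double total = media.NaturalDuration.TimeSpan.TotalSeconds;
        if (total > 0 && OnProgress != null)
            OnProgress(Math.Min(estimated, total) / total * 100);
    }
}
```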

    Read the article

  • How to get a volume measurement of iPhone recording in dB, with a limit of at least 120dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone. I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS range for 16-bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function: dataFormat.mSampleRate = 44100.0f; dataFormat.mFormatID = kAudioFormatLinearPCM; dataFormat.mFramesPerPacket = 1; dataFormat.mChannelsPerFrame = 1; dataFormat.mBytesPerFrame = 2; dataFormat.mBytesPerPacket = 2; dataFormat.mBitsPerChannel = 16; dataFormat.mReserved = 0; I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording: dataFormat.mBitsPerChannel = 32; With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial I posted above. Thanks in advance, Thomas
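    The ceiling here follows from the bit depth rather than from the metering code: the dynamic range of linear PCM is roughly 6 dB per bit, so a 16-bit recording can never cover a 120 dB span. Note also that writing 32 into mBitsPerChannel alone leaves the rest of the AudioStreamBasicDescription (mBytesPerFrame, mBytesPerPacket, the format flags) inconsistent, which would plausibly explain the meter reading 0. As a quick check of the numbers:

```latex
\text{dynamic range} \approx 20\log_{10}\!\left(2^{N}\right) \approx 6.02\,N\ \text{dB},
\qquad N = 16 \Rightarrow \approx 96.3\ \text{dB},
\qquad N = 24 \Rightarrow \approx 144.5\ \text{dB}
```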

    Read the article

  • Can't play wav file from Javascript in Firefox for Mac

    - by Mike Royle
    I have the following html file that plays a wav file when the user hovers over the 'Play' anchor tag. It works perfectly on IE, Chrome, Firefox, Opera, Safari on both Windows and Mac - except for Firefox on the Mac which does not play the file. <html> <head> <title></title> <script> function PlayAudio() { var s = document.getElementById("soundFile"); s.Play(); } </script> </head> <body> <embed src="MySound.wav" enablejavascript="true" type="audio/wav" autostart="false" width="0" height="0" id="soundFile" /> <a href="#" onmouseover="PlayAudio()">Play</a> </body> </html> If the autostart attribute of the embed tag is set to true then the wav file plays as expected in Firefox for Mac, but not on the mouseover of the anchor tag. Any ideas?

    Read the article

  • Debug NAudio MP3 reading difference?

    - by Conrad Albrecht
    My code using NAudio to read one particular MP3 gets different results than several other commercial apps. Specifically: my NAudio-based code finds ~1.4 sec of silence at the beginning of this MP3 before "audible audio" (a drum pickup) starts, whereas other apps (Windows Media Player, RealPlayer, WavePad) show ~2.5 sec of silence before that same drum pickup. The particular MP3 is "Like A Rolling Stone" downloaded from Amazon.com. I tested several other MP3s and none show any similar difference between my code and other apps. Most MP3s don't start with such a long silence, so I suspect that's the source of the difference. Debugging problems: I can't actually find a way to even prove that the other apps are right and NAudio/me is wrong, i.e. to compare my code's results block-by-block against a "known good reference implementation"; therefore I can't even precisely define the "error" I need to debug. Since my code reads thousands of samples during those 1.4 sec with no obvious errors, I can't think how to narrow down where/when in the input stream to look for a bug. The heart of the NAudio code is a P/Invoke call to acmStreamConvert(), which is a Windows "black box" call that I can't think how to error-check. Can anyone think of any tricks/techniques to debug this?
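    One way to pin the discrepancy down is to measure the leading silence with NAudio itself and compare that figure with a WAV exported from one of the other tools. A rough sketch, assuming a recent NAudio where Mp3FileReader exposes decoded 16-bit PCM, and using an arbitrary audibility threshold:

```csharp
// Hedged debugging sketch: find the first "non-silent" frame according to
// NAudio's own decode, so the figure can be compared with what other players
// report for the same file. The file path and threshold are placeholders.
using System;
using NAudio.Wave;

class LeadingSilence
{
    static void Main()
    {
        using (var reader = new Mp3FileReader("input.mp3"))
        {
            int bytesPerFrame = reader.WaveFormat.BitsPerSample / 8 * reader.WaveFormat.Channels;
            byte[] buffer = new byte[reader.WaveFormat.AverageBytesPerSecond]; // ~1 second
            long frames = 0;
            int read;
            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i + 1 < read; i += bytesPerFrame)
                {
                    short sample = BitConverter.ToInt16(buffer, i); // first channel only
                    if (Math.Abs((int)sample) > 512)                // crude audibility threshold
                    {
                        Console.WriteLine("First non-silent frame at {0:F3} s",
                                          (double)frames / reader.WaveFormat.SampleRate);
                        return;
                    }
                    frames++;
                }
            }
            Console.WriteLine("No sample above threshold found");
        }
    }
}
```

    If NAudio's own count already differs from a WAV exported by one of the other tools, the decode itself (how many frames acmStreamConvert yields) is suspect; if the counts agree, the other players are more likely handling encoder delay or padding differently.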

    Read the article

  • How can I implement a volume meter for a song currently playing? (iPhone OS 3.1.3)

    - by Adam
    Hi, I'm very new to Core Audio and I would just like some help in coding up a little volume meter for whatever's being outputted through the headphones or built-in speaker. Like a dB meter. I have the following code, and have been trying to go through the Apple sample project "SpeakHere", but it's a nightmare trying to go through all that without knowing how it works first... Could anyone shed some light? Here's the code I have so far... - (void)displayWaveForm { while (musicIsPlaying == YES) { NSLog(@"%f",sizeof(AudioQueueLevelMeterState)); } } - (IBAction)playMusic { if (musicIsPlaying == NO) { NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/track7.wav",[[NSBundle mainBundle] resourcePath]]]; NSError *error; music = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error]; music.numberOfLoops = -1; music.volume = 0.5; [music play]; musicIsPlaying = YES; [self displayWaveForm]; } else { [music pause]; musicIsPlaying = NO; } }

    Read the article

  • How would I down-sample a .wav file then reconstruct it using Nyquist? - in MATLAB

    - by Andrew
    This is all done in MATLAB 2010. My objective is to show the results of: undersampling, Nyquist rate / oversampling. First I need to downsample the .wav file to get an incomplete or partial data stream that I can then reconstruct. Here's the flow of what I'm going to be doing: analog signal - sampling analog filter - ADC - resample down - resample up - DAC - reconstruction analog filter. What needs to be achieved: F = frequency in Hz (1/s), e.g. 100 Hz = 100 cycles/sec. Sampling period T(s) = 1/(2F). Example problem: 1000 Hz = highest frequency, so 1/(2*1000 Hz) = 1/2000 = 5x10^(-4) sec/cycle, or a sampling period of 0.5 ms. This is my first signal processing project using MATLAB. What I have so far: % Fs = frequency sampled (44100 Hz, or the sampling frequency of a CD) [test,fs]=wavread('test.wav'); % loads the .wav file left=test(:,1); % Plot of the .wav signal, time vs. strength time=(1/44100)*length(left); t=linspace(0,time,length(left)); plot(t,left) xlabel('time (sec)'); ylabel('relative signal strength') % This is where I would need to sample it at the different frequencies (above, below, and at the Nyquist frequency), I think. soundsc(left,fs) % plays the resultant audio file, which is the same as the original (only at or above the Nyquist frequency, however) Can anyone tell me how to make it better, and how to do the sampling at various frequencies?
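    Whatever the implementation, it helps to keep the Nyquist bookkeeping explicit; with the corrected numbers from the example above:

```latex
f_s \ge 2 f_{\max}, \qquad
f_{\max} = 1000\ \text{Hz} \ \Rightarrow\ f_s \ge 2000\ \text{Hz}, \qquad
T_s = \frac{1}{f_s} \le \frac{1}{2000}\ \text{s} = 0.5\ \text{ms}
```

    (MATLAB's resample function, from the Signal Processing Toolbox, is the usual tool for the resample-down and resample-up stages.)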

    Read the article

  • OpenAL not playing on Mac OS X 10.6

    - by Grimless
    I've been working on getting a basic audio engine running on my Mac using OpenAL. It seems relatively straightforward after working with OpenGL for a while. However, despite the fact that I believe I have everything in place, my sound will not play. Here is the order of things I am doing: //Creating a new device ALCdevice* device = alcOpenDevice(NULL); //Create a new context with the device ALCcontext* context = alcCreateContext(device, NULL); //Make that context current alcMakeContextCurrent(context); //Do lots of loading stuff to bring in an AIFF... voodooAIFF = myAIFFLoader("name"); //Then use that data ALuint buf; alGenBuffers(1, &buf); //Check for errors, but none happen... //Bind buffer data. alBufferData(buf, voodooAIFF.format, voodooAIFF.data, voodooAIFF.sizeInBytes, voodooAIFF.frequency); //Check for errors, none here either... //Create Source ALuint src; alGenSources(1, &src); //Error check again, no errors. //Bind source to buffer alSourcei(src, AL_BUFFER, buf); //Set reference distance alSourcei(sourceID, AL_REFERENCE_DISTANCE, 1); //Set source attributes including gain and pitch to 1 (direction set to 0,0,0) //Check for errors, nothing... //Set up listener attributes. //Check for errors, no errors. //Begin playing. alSourcePlay(src); Observe silence... Any insight, what steps am I missing here?

    Read the article

  • Difficulty porting raw PCM output code from Java to Android AudioTrack API.

    - by IndigoParadox
    I'm attempting to port an application that plays chiptunes (NSF, SPC, etc) music files from Java SE to Android. The Android API seems to lack the javax multimedia classes that this application uses to output raw PCM audio. The closest analog I've found in the API is AudioTrack and so I've been wrestling with that. However, when I try to run one of my sample music files through my port-in-progress, all I get back is static. My suspicion is that it's the AudioTrack I've setup which is at fault. I've tried various different constructors but it all just outputs static in the end. The DataLine setup in the original code is something like: AudioFormat audioFormat = new AudioFormat( AudioFormat.Encoding.PCM_SIGNED, 44100, 16, 2, 4, 44100, true ); DataLine.Info lineInfo = new DataLine.Info( SourceDataLine.class, audioFormat ); DataLine line = (SourceDataLine)AudioSystem.getLine( lineInfo ); The constructor I'm using right now is: AudioTrack = new AudioTrack( AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT, AudioTrack.getMinBufferSize( 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO, AudioFormat.ENCODING_PCM_16BIT ), AudioTrack.MODE_STREAM ); I've replaced constants and variables in those so they make sense as concisely as possible, but my basic question is if there are any obvious problems in the assumptions I made when going from one format to the other.

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello overflow community! Does anyone know if there is a "best practice" to concatenate mp3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line. I will eventually call the command line from a PHP script. I have been doing research and I have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go: Create a temporary folder. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know. I didn't know "raw" files existed until not too long ago. I could use some suggestions on this). Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory. Delete the contents of the temporary folder with the temporary raw files inside. Convert the final file from "raw" to mp3 and enjoy the finished result. I'm thinking that this course of action might be best because I can't necessarily control the quality of the original "source" mp3s. The only other option I could think of would be to create a script that would perform a similar process upon files being added to the system, leaving only the files with the "proper" format and removing the original "erroneous" file. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path that I could take? By concatenate, I mean to join together in sequence to create a new audio file from the "concatenated files."

    Read the article

  • To display album art from the media store in Android

    - by user1834724
    I'm not able to display album art from media store while listing albums,I'm getting following error Bad request for field slot 0,-1. numRows = 32, numColumns = 7 01-02 02:48:16.789: D/AndroidRuntime(4963): Shutting down VM 01-02 02:48:16.789: W/dalvikvm(4963): threadid=1: thread exiting with uncaught exception (group=0x4001e578) 01-02 02:48:16.804: E/AndroidRuntime(4963): FATAL EXCEPTION: main 01-02 02:48:16.804: E/AndroidRuntime(4963): java.lang.IllegalStateException: get field slot from row 0 col -1 failed Can anyone kindly help with this issue,Thanks in advance public class AlbumbsListActivity extends Activity { private ListAdapter albumListAdapter; private HashMap<Integer, Integer> albumInfo; private HashMap<Integer, Integer> albumListInfo; private HashMap<Integer, String> albumListTitleInfo; private String audioMediaId; private static final String TAG = "AlbumsListActivity"; Boolean showAlbumList = false; Boolean AlbumListTitle = false; ImageView album_art ; public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.albums_list_layout); Cursor cursor; ContentResolver cr = getApplicationContext().getContentResolver(); if (getIntent().hasExtra(Util.ALBUM_ID)) { int albumId = getIntent().getIntExtra(Util.ALBUM_ID, Util.MINUS_ONE); String[] projection = new String[] { Albums._ID, Albums.ALBUM, Albums.ARTIST, Albums.ALBUM_ART, Albums.NUMBER_OF_SONGS }; String selection = null; String[] selectionArgs = null; String sortOrder = Media.ALBUM + " ASC"; cursor = cr.query(Albums.EXTERNAL_CONTENT_URI, projection, selection, selectionArgs, sortOrder); /* final String[] ccols = new String[] { //MediaStore.Audio.Albums., MediaStore.Audio.Albums._ID, MediaStore.Audio.Albums.ALBUM, MediaStore.Audio.Albums.ARTIST, MediaStore.Audio.Albums.ALBUM_ART, MediaStore.Audio.Albums.NUMBER_OF_SONGS }; cursor = cr.query(MediaStore.Audio.Albums.getContentUri( "external"), ccols, null, null, MediaStore.Audio.Albums.DEFAULT_SORT_ORDER);*/ showAlbumList = true; } else { String order = MediaStore.Audio.Albums.ALBUM + " ASC"; String where = MediaStore.Audio.Albums.ALBUM; cursor = managedQuery(Media.EXTERNAL_CONTENT_URI, DbUtil.projection, null, null, order); showAlbumList = false; } albumInfo = new HashMap<Integer, Integer>(); albumListInfo = new HashMap<Integer, Integer>(); ListView listView = (ListView) findViewById(R.id.mylist_album); listView.setFastScrollEnabled(true); listView.setOnItemLongClickListener(new ItemLongClickListener()); listView.setAdapter(new AlbumCursorAdapter(this, cursor, DbUtil.displayFields, DbUtil.displayViews,showAlbumList)); final Uri uri = MediaStore.Audio.Albums.EXTERNAL_CONTENT_URI; final Cursor albumListCursor = cr.query(uri, DbUtil.Albumprojection, null, null, null); } private class AlbumCursorAdapter extends SimpleCursorAdapter implements SectionIndexer{ private final Context context; private final Cursor cursorValues; private Time musicTime; private Boolean isAlbumList; private MusicAlphabetIndexer mIndexer; private int mTitleIdx; public AlbumCursorAdapter(Context context, Cursor cursor, String[] from, int[] to,Boolean isAlbumList) { super(context, 0, cursor, from, to); this.context = context; this.cursorValues = cursor; //musicTime = new Time(); this.isAlbumList = isAlbumList; } String albumName=""; String artistName = ""; String numberofsongs = ""; long albumid; @Override public View getView(int position, View convertView, ViewGroup parent) { View rowView = convertView; if (rowView == null) { LayoutInflater inflater = 
(LayoutInflater) context .getSystemService(Context.LAYOUT_INFLATER_SERVICE); rowView = inflater .inflate(R.layout.row_album_layout, parent, false); } this.cursorValues.moveToPosition(position); String title = ""; String artistName = ""; String albumName = ""; int count; long albumid = 0; String songDuration = ""; if (isAlbumList) { albumInfo.put( position, Integer.parseInt(this.cursorValues.getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Albums._ID)))); artistName = this.cursorValues .getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Albums.ARTIST)); albumName = this.cursorValues .getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Albums.ALBUM)); albumid=Integer.parseInt(this.cursorValues.getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Albums.ALBUM_ID))); } else { albumInfo.put(position, Integer.parseInt(this.cursorValues .getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Media._ID)))); artistName = this.cursorValues.getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Media.ARTIST)); albumName = this.cursorValues.getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Media.ALBUM)); albumid=Integer.parseInt(this.cursorValues.getString(this.cursorValues .getColumnIndex(MediaStore.Audio.Media.ALBUM_ID))); } //code for Alphabetical Indexer mTitleIdx = cursorValues.getColumnIndex(MediaStore.Audio.Media.ALBUM); mIndexer = new MusicAlphabetIndexer(cursorValues, mTitleIdx, getResources().getString(R.string.fast_scroll_alphabet)); //end TextView metaone = (TextView) rowView.findViewById(R.id.album_name); TextView metatwo = (TextView) rowView.findViewById(R.id.artist_name); ImageView metafour = (ImageView) rowView.findViewById(R.id.album_art); TextView metathree = (TextView) rowView .findViewById(R.id.songs_count); metaone.setText(albumName); metatwo.setText(artistName); (metafour)getAlbumArt(albumid); System.out.println("albumid----------"+albumid); metaThree.setText(DbUtil.makeTimeString(context, secs)); getAlbumArt(albumid); } TextView metaone = (TextView) rowView.findViewById(R.id.album_name); TextView metatwo = (TextView) rowView.findViewById(R.id.artist_name); album_art = (ImageView) rowView.findViewById(R.id.album_art); //TextView metathree = (TextView) rowView.findViewById(R.id.songs_count); metaone.setText(albumName); metatwo.setText(artistName); return rowView; } } String albumArtUri = ""; private void getAlbumArt(long albumid) { Uri uri=ContentUris.withAppendedId(MediaStore.Audio.Albums.EXTERNAL_CONTENT_URI, albumid); System.out.println("hhhhhhhhhhh" + uri); Cursor cursor = getContentResolver().query( ContentUris.withAppendedId( MediaStore.Audio.Albums.EXTERNAL_CONTENT_URI, albumid), new String[] { MediaStore.Audio.AlbumColumns.ALBUM_ART }, null, null, null); if (cursor.moveToFirst()) { albumArtUri = cursor.getString(0); } System.out.println("kkkkkkkkkkkkkkkkkkk :" + albumArtUri); cursor.close(); if(albumArtUri != null){ Options opts = new Options(); opts.inJustDecodeBounds = true; Bitmap albumCoverBitmap = BitmapFactory.decodeFile(albumArtUri, opts); opts.inJustDecodeBounds = false; albumCoverBitmap = BitmapFactory.decodeFile(albumArtUri, opts); if(albumCoverBitmap != null) album_art.setImageBitmap(albumCoverBitmap); }else { // TODO: Options opts = new Options(); Bitmap albumCoverBitmap = BitmapFactory.decodeResource(getApplicationContext().getResources(), R.drawable.albumart_mp_unknown_list, opts); if(albumCoverBitmap != null) album_art.setImageBitmap(albumCoverBitmap); } } } }

    Read the article

  • How can I send audio input as chunked HTTP?

    - by Noli
    I am trying to create an interface with an external server, and don't know where to start. I would need to take audio as input to my computer, and send it to the remote server as a chunked HTTP request. The API that I'm trying to connect to is described on pages 1-5 here: http://dragonmobile.nuancemobiledeveloper.com/public/Help/HttpInterface/HTTP_Services_for_NDEV_v1.2_Silver_Version.pdf I have never worked with audio programmatically, so I don't know what would be the most straightforward way to go about this. Are there solutions out there that already do this? I've come across references to Shoutcast, VLC, Icecast, FFmpeg, Darkice, but I don't know whether those are appropriate for what I'm trying to accomplish or not. Would appreciate any guidance. Thanks
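    On the HTTP side alone, the "chunked" part is mostly a matter of asking the client library for chunked transfer encoding and writing the audio bytes as they arrive. A minimal sketch in C#; the URL, content type, and the use of a file in place of a live capture source are all placeholders, and the linked PDF defines the service's actual headers and audio formats:

```csharp
// Hedged sketch: POST audio to an HTTP endpoint using chunked transfer
// encoding (Transfer-Encoding: chunked), writing the body incrementally.
using System;
using System.IO;
using System.Net;

class ChunkedAudioPost
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://dictation.example.com/upload");
        request.Method = "POST";
        request.ContentType = "audio/x-wav;codec=pcm;bit=16;rate=16000"; // assumption
        request.SendChunked = true;                // emit Transfer-Encoding: chunked
        request.AllowWriteStreamBuffering = false; // don't buffer the whole body first

        using (Stream body = request.GetRequestStream())
        using (FileStream audio = File.OpenRead("capture.wav")) // stand-in for live capture
        {
            byte[] buffer = new byte[4096];
            int read;
            while ((read = audio.Read(buffer, 0, buffer.Length)) > 0)
            {
                body.Write(buffer, 0, read);       // each write is sent as a chunk
            }
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}
```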

    Read the article

  • Can Google Translate's audio files be used in a game?

    - by ashes999
    For my game, I need text-to-speech. Since it's Android, I decided to settle for MP3s, since the range of words spoken is small. For my prototype, I'm using Google Translate to generate the audio since it has awesome pronunciation across multiple languages. But can I use it in production? What if I sell my game for $1 on the app store? All I can find on SE is that the API may be LGPL, and that the licensing page mentions the API is only available for academic research -- nothing more. My usage is a bit different; I'm actually capturing the audio bits and using those instead. I'm curious to know the license for this; I can't find anything with my Google-fu.

    Read the article

  • CPU spikes cause audio stuttering in Audacious when browsing? (Lubuntu)

    - by Alucai Vivorvel
    My default audio player is Audacious, browser Google Chrome. I tried Firefox, and while I love it, the CPU load spikes when doing something as simple and small as switching a tab, which causes the audio playing to stutter (as sound is onboard and handled through the CPU). Chrome doesn't do this as much, but there is the occasional stuttering when browsing, which is ridiculous, as not even Windows Vista does this. So I thought maybe it's something to do with how Lubuntu handles sound; I checked and only ALSA was installed. I tried installing PulseAudio, but, while the music "plays", nothing comes through the speakers. Immediately after switching back to ALSA the music pours out of them. So I was wondering if you had any idea what was going on here. I asked on Ubuntu Forums but apparently my problem is too complex, as it's been over a week since the last reply. Specs are: AMD Athlon 64 3200+ @ 2GHz 2GB Corsair 667MHz DDR2 RAM ATi HD Radeon 3650 (AGP) 512MB 500W Cooler Master PSU 80GB SATA II HDD (Vista is installed on 500GB drive) Biostar K8M800 Motherboard

    Read the article

  • Issue getting camera emulation to work with Tom G's HttpCamera

    - by user591524
    I am trying to use the android emulator to preview video from webcam. I have used the tom gibara sample code, minus the webbroadcaster (i am instead using VLC streaming via http). So, I have modified the SDK's "CameraPreview" app to use the HttpCamera, but the stream never appears. Debugging through doesn't give me any clues either. I wonder if anything obvious is clear to others? The preview app launches and remains black. Notes: 1) I have updated the original CameraPreview class as described here: http://www.inter-fuser.com/2009/09/live-camera-preview-in-android-emulator.html, but referencing httpCamera instead of socketcamera. 2) I updated Tom's original example to reference "Camera" type instead of deprecated "CameraDevice" type. 3) Below is my CameraPreview.java. 4) THANK YOU package com.example.android.apis.graphics; import android.app.Activity; import android.content.Context; import android.hardware.Camera; import android.os.Bundle; import android.view.SurfaceHolder; import android.view.SurfaceView; import android.view.Window; import java.io.IOException; import android.graphics.Canvas; // ---------------------------------------------------------------------- public class CameraPreview extends Activity { private Preview mPreview; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Hide the window title. requestWindowFeature(Window.FEATURE_NO_TITLE); // Create our Preview view and set it as the content of our activity. mPreview = new Preview(this); setContentView(mPreview); } } // ---------------------------------------------------------------------- class Preview extends SurfaceView implements SurfaceHolder.Callback { SurfaceHolder mHolder; //Camera mCamera; HttpCamera mCamera;//changed Preview(Context context) { super(context); // Install a SurfaceHolder.Callback so we get notified when the // underlying surface is created and destroyed. mHolder = getHolder(); mHolder.addCallback(this); //mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS); mHolder.setType(SurfaceHolder.SURFACE_TYPE_NORMAL);//changed } public void surfaceCreated(SurfaceHolder holder) { // The Surface has been created, acquire the camera and tell it where // to draw. //mCamera = Camera.open(); this.StartCameraPreview(holder); } public void surfaceDestroyed(SurfaceHolder holder) { // Surface will be destroyed when we return, so stop the preview. // Because the CameraDevice object is not a shared resource, it's very // important to release it when the activity is paused. //mCamera.stopPreview();//changed mCamera = null; } public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) { // Now that the size is known, set up the camera parameters and begin // the preview. //Camera.Parameters parameters = mCamera.getParameters(); //parameters.setPreviewSize(w, h); //mCamera.setParameters(parameters); //mCamera.startPreview(); this.StartCameraPreview(holder); } private void StartCameraPreview(SurfaceHolder sh) { mCamera = new HttpCamera("10.213.74.247:443", 640, 480, true);//changed try { //mCamera.setPreviewDisplay(holder); Canvas c = sh.lockCanvas(null); mCamera.capture(c); sh.unlockCanvasAndPost(c); } catch (Exception exception) { //mCamera.release(); mCamera = null; // TODO: add more exception handling logic here } } }

    Read the article

  • How to overlay an audio file on a .wmv video file using C#?

    - by Vipul jain
    Hello, I want to record video and audio files using C#. After recording the audio and video I want to merge them. There will be only one video file and up to ten audio files, and I want to overlay those ten files on the one video file. I am sure that I want the video file in .wmv format. Can you tell me which format I should record the audio in, so that I can later overlay those audio files on the .wmv video file? Also, please let me know how to overlay an audio file on a .wmv video file. I hope I will get a prompt reply on this.
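    There is no managed muxer for this in the .NET Framework itself; the common options are DirectShow or Media Foundation interop, a commercial SDK, or shelling out to an external tool. A hedged sketch of the last option, assuming ffmpeg is installed and on the PATH (the exact arguments need verifying against the installed build, and the amix filter accepts more inputs for the ten audio tracks):

```csharp
// Hedged sketch: mix one recorded audio track over a .wmv by invoking an
// external tool (ffmpeg, assumed available). The file names are placeholders.
using System.Diagnostics;

class OverlayAudio
{
    static void Main()
    {
        var psi = new ProcessStartInfo
        {
            FileName = "ffmpeg",
            // Keep the video stream, mix the video's own audio with voice1.wav,
            // and re-encode the mixed audio for the WMV container.
            Arguments = "-i input.wmv -i voice1.wav -filter_complex amix=inputs=2 " +
                        "-c:v copy -c:a wmav2 output.wmv",
            UseShellExecute = false
        };
        using (var p = Process.Start(psi))
        {
            p.WaitForExit();
        }
    }
}
```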

    Read the article

  • MySQL Binary Storage using BLOB VS OS File System: large files, large quantities, large problems.

    - by Quantico773
    Hi Guys, Versions I am running (basically latest of everything): PHP: 5.3.1 MySQL: 5.1.41 Apache: 2.2.14 OS: CentOS (latest) Here is the situation. I have thousands of very important documents, ranging from customer contracts to voice signatures (recordings of customer authorisation for contracts), with file types including, but not limited to jpg, gif, png, tiff, doc, docx, xls, wav, mp3, pdf, etc. All of these documents are currently stored on several servers including Windows 32 bit, CentOS and Mac, among others. Some files are also stored on employees desktop computers and laptops, and some are still hard copies stored in hundreds of boxes and filing cabinets. Now because customers or lawyers could demand evidence of contracts at any time, my company has to be able to search and locate the correct document(s) effectively, for this reason ALL of these files have to be digitised (if not already) and correlated into some sort of order for searching and accessing. As the programmer, I have created a full Customer Relations Management tool that the whole company uses. This includes Customer Profiles management, Order and job Tracking tools, Job/sale creation and management modules, etc, and at the moment any file that is needed at a customer profile level (drivers licence, credit authority, etc) or at a job/sale level (contracts, voice signatures, etc) can be uploaded to the server and sits in a parent/child hierarchy structure, just like Windows Explorer or any other typical file managment model. The structure appears as such: drivers_license |- DL_123.jpg voice_signatures |- VS_123.wav |- VS_4567.wav contracts So the files are uplaoded using PHP and Apache, and are stored in the file system of the OS. At the time of uploading, certain information about the file(s) is stored in a MySQL database. Some of the information stored is: TABLE: FileUploads FileID CustomerID (the customer id that the file belongs to, they all have this.) JobID/SaleID (the id of the job/sale associated, if any.) FileSize FileType UploadedDateTime UploadedBy FilePath (the directory path the file is stored in.) FileName (current file name of uploaded file, combination of CustomerID and JobID/SaleID if applicable.) FileDescription OriginalFileName (original name of the source file when uploaded, including extension.) So as you can see, the file is linked to the database by the File Name. When I want to provide a customers' files for download to a user all I have to do is "SELECT * FROM FileUploads WHERE CustomerID = 123 OR JobID = 2345;" and this will output all the file details I require, and with the FilePath and FileName I can provide the link for download. http... server / FilePath / FileName There are a number of problems with this method: Storing files in this "database unconcious" environment means data integrity is not kept. If a record is deleted, the file may not be deleted also, or vice versa. Files are strewn all over the place, different servers, computers, etc. The file name is the ONLY thing matching the binary to the database and customer profile and customer records. etc, etc. There are so many reasons, some of which are described here: http://www.dreamwerx.net/site/article01 . Also there is an interesting article here too: sietch.net/ViewNewsItem.aspx?NewsItemID=124 . SO, after much research I have pretty much decided I am going to store ALL of these files in the database, as a BLOB or LONGBLOB, but there are still many considerations before I do this. 
    I know that storing them in the database is a viable option; however, there are a number of methods of storing them. I also know storing them is one thing; correlating and accessing them in a manageable way is another thing entirely. The article provided at this link: dreamwerx.net/site/article01 describes a way of splitting the uploaded binary files into 64 KB chunks and storing each chunk with the FileID, and then streaming the actual binary file to the client using headers. This is a really cool idea since it relieves pressure on the server's memory; instead of loading an entire 100 MB file into RAM and then sending it to the client, it is doing it 64 KB at a time. I have tried this (and updated his scripts) and this is totally successful, in a very small frame of testing. So if you agree that this method is a viable, stable and robust long-term option to store moderately large files (1 KB to a couple of hundred MB), and large quantities of these files, let me know what other considerations or ideas you have. Also, I am considering getting a current "File Management" PHP script that gives an interface for managing files stored in the file system and converting it to manage files stored in the database. If there is already any software out there that does this, please let me know. I guess there are many questions I could ask, and all the information is up there ^^ so please, discuss all aspects of this and we can pass ideas back and forth and teach each other. Cheers, Quantico773
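    To illustrate the chunking idea discussed above, here is a minimal sketch of the write path in C#. The table and column names (file_chunks, file_id, chunk_no, data) are assumptions, Connector/NET (MySql.Data) is used for access, and the same loop structure applies just as well to a PHP implementation:

```csharp
// Hedged sketch of the 64 KB chunking idea: split an uploaded file into
// fixed-size pieces and store each piece as one BLOB row, so the file can
// later be streamed back a chunk at a time instead of being loaded whole.
using System.IO;
using MySql.Data.MySqlClient;

class ChunkedBlobStore
{
    const int ChunkSize = 64 * 1024;

    public static void Store(string connectionString, long fileId, string path)
    {
        using (var conn = new MySqlConnection(connectionString))
        using (var file = File.OpenRead(path))
        {
            conn.Open();
            byte[] buffer = new byte[ChunkSize];
            int read, chunkNo = 0;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                byte[] chunk = new byte[read];
                System.Array.Copy(buffer, chunk, read);

                using (var cmd = new MySqlCommand(
                    "INSERT INTO file_chunks (file_id, chunk_no, data) VALUES (@id, @no, @data)", conn))
                {
                    cmd.Parameters.AddWithValue("@id", fileId);
                    cmd.Parameters.AddWithValue("@no", chunkNo++);
                    cmd.Parameters.AddWithValue("@data", chunk);
                    cmd.ExecuteNonQuery();
                }
            }
        }
    }
}
```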

    Read the article

  • Beat Detection on iPhone with wav files and openal

    - by Dmacpro
    Using this website i have tried to make a beat detection engine. http://www.gamedev.net/reference/articles/article1952.asp { ALfloat energy = 0; ALfloat aEnergy = 0; ALint beats = 0; bool init = false; ALfloat Ei[42]; ALfloat V = 0; ALfloat C = 0; ALshort *hold; hold = new ALshort[[myDat length]/2]; [myDat getBytes:hold length:[myDat length]]; ALuint uiNumSamples; uiNumSamples = [myDat length]/4; if(alDatal == NULL) alDatal = (ALshort *) malloc(uiNumSamples*2); if(alDatar == NULL) alDatar = (ALshort *) malloc(uiNumSamples*2); for (int i = 0; i < uiNumSamples; i++) { alDatal[i] = hold[i*2]; alDatar[i] = hold[i*2+1]; } energy = 0; for(int start = 0; start<(22050*10); start+=512){ //detect for 10 seconds of data for(int i = start; i<(start+512); i++){ energy+= fabs(alDatal[i]) + fabs(alDatar[i]); } aEnergy = 0; for(int i = 41; i>=0; i--){ if(i ==0){ Ei[0] = energy; } else { Ei[i] = Ei[i-1]; } if(start >= 21504){ aEnergy+=Ei[i]; } } aEnergy = aEnergy/43.f; if (start >= 21504) { for(int i = 0; i<42; i++){ V += (Ei[i]-aEnergy); } V = V/43.f; C = (-0.0025714*V)+1.5142857; init = true; if(energy >(C*aEnergy)) beats++; } } } alDatal and alDatar are (short*) type; myDat is NSdata that holds the actual audio data of a wav file formatted to 22050 khz and 16 bit stereo. This doesn't seem to work correctly. If anyone could help me out that would be amazing. I've been stuck on this for 3 days.
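    For reference, the core of the article's simple sound-energy approach is small enough to state on its own. A hedged C# sketch of that simple variant (window size and threshold taken from the article; its refinement, which derives the threshold from the energy variance, is left out here):

```csharp
// Hedged sketch of the simple sound-energy beat detector from the gamedev.net
// article: a 1024-sample window counts as a beat when its instant energy
// exceeds the local (~1 second) average energy by a fixed factor.
using System.Collections.Generic;

static class BeatDetector
{
    public static int CountBeats(short[] left, short[] right, int sampleRate)
    {
        const int window = 1024;                  // instant-energy window
        const double threshold = 1.3;             // beat if instant > 1.3 x average
        int historyLength = sampleRate / window;  // roughly one second of windows
        var history = new Queue<double>();
        int beats = 0;

        for (int start = 0; start + window <= left.Length && start + window <= right.Length; start += window)
        {
            double instant = 0;
            for (int i = start; i < start + window; i++)
                instant += (double)left[i] * left[i] + (double)right[i] * right[i];

            if (history.Count == historyLength)
            {
                double average = 0;
                foreach (double e in history) average += e;
                average /= history.Count;

                if (instant > threshold * average) beats++;
                history.Dequeue();
            }
            history.Enqueue(instant);
        }
        return beats;
    }
}
```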

    Read the article

  • How can I get streaming radio working with Chromium on "fubar.com"?

    - by Jared Shirk
    I'm running 12.04 and use Chromium, and a site that I go to that plays streaming radio doesn't seem to work for me. Python cannot find the right plugin, and I was informed that the exact extension I need is the Windows Media Player HTML5 extension. But alas, it doesn't work with Ubuntu. Is there a way around this so I can listen to the music? Fubar.com is a site that I've used for a while now, and the lounges that I go into are chat rooms where live DJs stream music. I can't get it to work.

    Read the article

  • U1 Music Streaming: Is it possible to have album cover art for OGG files?

    - by Will Daniels
    Just started trying out the U1 music streaming service and so far very pleased. The one issue I have that could be a deal-breaker when it comes time to pay up is that half of my collection is OGG Vorbis and I cannot find a way to show album art for OGG files. I already tried adding a cover.jpg to the folder and embedding the image via easyTAG (works for MP3 but not OGG). Does anybody have a solution besides transcoding them all to MP3? Will this likely be supported in future?

    Read the article

  • FMOD surround sound openframeworks

    - by user1449425
    Ok, I hope I don't mess this up, I have had a look for some answers but can't find anything. I am trying to make a simple sampler in openframeworks using the FMOD sound player in 3D mode. I can make a single instance work fine (recording a new file using libsndfilerecorder and then playing it back and moving it in surround. However I want to have 8 layers of looping audio that I can record and replace one layer at a time in a live show. I get a lot of problems as soon as I have more than 1 layer. The first part of my question relates to the FMOD 3D modes, it is listener relative, so I have to define the position of my listener for every sound (I would prefer to have head relative mode but I cannot make this work at all. Again this works fine when I am using a single player but with multiple players only the last listener I update actually works. The main problem I have is that when I use multiple players I get distortion, and often a mix of other currently playing sounds (even when the microphone cannot hear them) in my new recordings. Is there an incompatability with libsndfilerecorder and FMOD? Here I initialise the players for (int i=0; i<CHANNEL_COUNT; i++) { lvelocity[i].set(1, 1, 1); lup[i].set(0, 1, 0); lforward[i].set(0, 0, 1); lposition[i].set(0, 0, 0); sposition[i].set(3, 3, 2); svelocity[i].set(1, 1, 1); //player[1].initializeFmod(); //player[i].loadSound( "1.wav" ); player[i].setVolume(0.75); player[i].setMultiPlay(true); player[i].play(); setupHold[i]==false; recording[i]=false; channelHasFile[i]=false; settingOsc[i]=false; } When I am recording I unload the file and make sure the positions of the player that is not loaded are not updating. void fmodApp::recordingStart( int recordingId ){ if (recording[recordingId]==false) { setupHold[recordingId]=true; //this stops the position updating cout<<"Start recording Channel " + ofToString(recordingId+1)+" setup hold is true \n"; pt=getDateName() +".wav"; player[recordingId].stop(); player[recordingId].unloadSound(); audioRecorder.setup(pt); audioRecorder.setFormat(SF_FORMAT_WAV | SF_FORMAT_PCM_16); recording[recordingId]=true; //this starts the libSndFIleRecorder } else { cout<<"Channel" + ofToString(recordingId+1)+" is already recording \n"; } } And I stop the recording like this. void fmodApp::recordingEnd( int recordingId ){ if (recording[recordingId]=true) { recording[recordingId]=false; cout<<"Stop recording" + ofToString(recordingId+1)+" \n"; audioRecorder.finalize(); audioRecorder.close(); player[recordingId].loadSound(pt); setupHold[recordingId]=false; channelHasFile[recordingId]=true; cout<< "File recorded channel " + ofToString(recordingId+1) + " file is called " + pt + "\n"; } else { cout << "Sorry track" + ofToString(recordingId+1) + "is not recording"; } } I am careful not to interrupt the updating process but I cannot see where I am going wrong. Many Thanks

    Read the article

  • ALSA doesn't work in VLC

    - by freebird
    ALSA audio output works fine from the terminal, e.g. aplay /usr/share/sounds/alsa/Noise.wav. But I have to change from the default to the ALSA audio output in VLC; I found the setting under Tools > Preferences > Audio > Output. The issue is that when I change it to ALSA, I lose all sound. When I leave the default, I get an annoying audio delay of about 200 ms or 500 ms. From what I have found, you have to use the ALSA audio output to fix that issue. Updated 6-26-2011 10:28pm. To fix the ALSA audio output: sudo add-apt-repository ppa:ferramroberto/vlc sudo apt-get update sudo apt-get install vlc mozilla-plugin-vlc then I opened Update Manager, there were 2 updates for VLC there, I installed them and rebooted. Now ALSA works fine and audio is in sync with video.

    Read the article

  • Google acquires the DRM specialist Widevine, for "higher-quality and more secure" streaming

    Google acquires the firm Widevine, a DRM specialist, for "higher-quality and more secure" streaming. On Friday, Google gave itself an early Christmas present: the firm has acquired Widevine, a company specializing in video DRM (the digital rights management systems used to protect works and broadcasts on the web). For now, nothing has leaked about the deal, and no financial details have been made public. In any case, the acquisition shows that Google wants to invest further in the field of online video. The firm wants to respond to the fact that it is increasingly easy to watch content wherever you are, because of "the performance of sm...

    Read the article

  • How to make audio and video streaming servers work?

    - by explorex
    I am a PHP/MySQL developer (just a below-average one), and I am interested in the way television and radio are broadcast live over the internet. I want to know how it works and what its requirements are (which package of which programming language offers the best support). I must admit that I am a complete layman, but I expect to manage it within the next half month or year or so. And please clarify this for me: websites are stored on servers. From my desktop, I want to broadcast some video, so I need to connect to the web server (to upstream the video). Is there an application to do that, or do I have to code it or embed it in my web application, and which programming language would be suitable (does Python support that)? And I also need a script to handle the upstreamed video or audio (can I do that with PHP)?

    Read the article

  • What is the correct and most efficient approach to streaming vertex data?

    - by Martijn Courteaux
    Usually, I do this in my current OpenGL ES project (for iOS): Initialization: Create two VBOs and one index buffer (since I will use the same indices), same size. Create two VAOs and configure them, both bound to the same index buffer. Each frame: Choose a VBO/VAO pair (different from the previous frame, so I'm alternating). Bind that VBO. Upload new data using glBufferSubData(GL_ARRAY_BUFFER, ...). Bind the VAO. Render my stuff using glDrawElements(GL_***, ...). Unbind the VAO. However, someone told me to avoid uploading data (step 3) and then immediately rendering the new data (step 5), because the glDrawElements call will stall until the buffer is effectively uploaded to VRAM. So he suggested drawing all the geometry I uploaded the previous frame and uploading in the current frame what will be drawn in the next frame. Thus, everything is rendered with a delay of one frame. Is this true, or am I using a good approach to work with streaming vertex data? (I do know that the pipeline will stall the other way around, i.e. when you draw and immediately try to change the buffer data. But I'm not doing that, since I implemented double buffering.)
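    A hedged sketch of that one-frame-latency pattern, written against OpenTK-style C# bindings (the question's code is OpenGL ES on iOS, and enum names such as PrimitiveType differ between binding versions): each frame uploads into one VBO while the draw call uses the VAO whose data was uploaded the frame before.

```csharp
// Hedged sketch: double-buffered vertex streaming with one frame of latency.
// The VBOs are assumed to have been created and sized with GL.BufferData at
// init time, and both VAOs configured against the shared index buffer.
using System;
using OpenTK.Graphics.OpenGL;

class StreamingRenderer
{
    int[] vbos = new int[2];   // created and pre-sized at initialization
    int[] vaos = new int[2];   // both bound to the same index buffer
    int current;               // which VBO/VAO pair receives this frame's upload
    int indexCount;

    public void RenderFrame(float[] vertices)
    {
        // 1. Upload this frame's vertices into the buffer NOT drawn this frame.
        current = 1 - current;
        GL.BindBuffer(BufferTarget.ArrayBuffer, vbos[current]);
        GL.BufferSubData(BufferTarget.ArrayBuffer, IntPtr.Zero,
                         (IntPtr)(vertices.Length * sizeof(float)), vertices);

        // 2. Draw the buffer filled on the previous frame, so the draw call
        //    never waits on the upload that was just issued.
        GL.BindVertexArray(vaos[1 - current]);
        GL.DrawElements(PrimitiveType.Triangles, indexCount,
                        DrawElementsType.UnsignedShort, 0);
        GL.BindVertexArray(0);
    }
}
```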

    Read the article
