Search Results

Search found 9901 results on 397 pages for 'audio processing'.


  • Why do I get rows of zeros in my 2D fft?

    - by Nicholas Pringle
    I am trying to replicate the results from a paper. "Two-dimensional Fourier Transform (2D-FT) in space and time along sections of constant latitude (east-west) and longitude (north-south) were used to characterize the spectrum of the simulated flux variability south of 40degS." - Lenton et al (2006). The figures published show "the log of the variance of the 2D-FT". I have tried to create an array consisting of the seasonal cycle of similar data as well as the noise. I have defined the noise as the original array minus the signal array. Here is the code that I used to plot the 2D-FT of the signal array averaged in latitude:

        import numpy as np
        from numpy import ma
        from matplotlib import pyplot as plt
        from Scientific.IO.NetCDF import NetCDFFile

        ### input directory
        indir = '/home/nicholas/data/'

        ### get the flux data which is in [time(5day ave for 10 years), latitude, longitude]
        nc = NetCDFFile(indir + 'CFLX_2000_2009.nc', 'r')
        cflux_southern_ocean = nc.variables['Cflx'][:, 10:50, :]
        cflux_southern_ocean = ma.masked_values(cflux_southern_ocean, 1e+20)  # mask land
        nc.close()
        cflux = cflux_southern_ocean * 1e08  # change units of data from mmol/m^2/s

        ### create an array that consists of the seasonal signal for each pixel
        year_stack = np.split(cflux, 10, axis=0)
        year_stack = np.array(year_stack)
        signal_array = np.tile(np.mean(year_stack, axis=0), (10, 1, 1))
        signal_array = ma.masked_where(signal_array > 1e20, signal_array)  # need to mask

        ### average the array over latitude (or longitude)
        signal_time_lon = ma.mean(signal_array, axis=1)

        ### do a 2D Fourier Transform of the time/space image
        ft = np.fft.fft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log(mgft)
        log_mgft = np.log(mgft)

    Every second row of the ft consists completely of zeros. Why is this? Would it be acceptable to add a small random number to the signal to avoid this?

        signal_time_lon = signal_time_lon + np.random.randint(0, 9, size=(730, 182)) * 1e-05

    EDIT: Adding images and clarifying the meaning. The output of rfft2 still appears to be a complex array. Using fftshift shifts the edges of the image to the centre; I still have a power spectrum regardless. I expect that the reason I get rows of zeros is that I have re-created the timeseries for each pixel. The ft[0, 0] pixel contains the mean of the signal, so ft[1, 0] corresponds to a sinusoid with one cycle over the entire signal in the rows of the starting image. Here is the starting image, plotted with the following code:

        plt.pcolormesh(signal_time_lon); plt.colorbar(); plt.axis('tight')

    Here is the result of the following code:

        ft = np.fft.rfft2(signal_time_lon)
        mgft = np.abs(ft)
        ps = mgft**2
        log_ps = np.log1p(mgft)
        plt.pcolormesh(log_ps); plt.colorbar(); plt.axis('tight')

    It may not be clear in the image, but only every second row contains completely zeros. Every tenth pixel (log_ps[10, 0]) is a high value. The other pixels (log_ps[2, 0], log_ps[4, 0] etc.) have very low values.
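    The rows of exact zeros follow directly from how the signal array was built, not from anything in the FFT routine: tiling one mean annual cycle ten times makes the time axis exactly periodic, so along that axis only harmonics of the repetition frequency carry energy and every other frequency row cancels to zero. Adding a small random number would only mask the zeros rather than add information. A minimal sketch with synthetic data (not the flux data) showing the effect:

        # Tiling one cycle 10 times leaves energy only at every 10th frequency row.
        import numpy as np

        rng = np.random.default_rng(0)
        one_cycle = rng.standard_normal((73, 182))   # one "year" of 73 five-day steps, 182 longitudes
        tiled = np.tile(one_cycle, (10, 1))          # 730 x 182, analogous to signal_time_lon

        ft = np.fft.fft2(tiled)
        row_energy = np.abs(ft).sum(axis=1)          # total magnitude per time-frequency row

        print(np.nonzero(row_energy > 1e-6)[0][:8])  # -> [ 0 10 20 30 40 50 60 70]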

    Read the article

  • How to read time from recorded surveillance camera video?

    - by stressed_geek
    I have a problem where I have to read the time of recording from video recorded by a surveillance camera. The time shows up in the top-left area of the video. Below is a link to a screen grab of the area which shows the time. Also, the digit colour (white/black) keeps changing during the duration of the video. http://i55.tinypic.com/2j5gca8.png Please guide me in the direction to approach this problem. I am a Java programmer, so I would prefer an approach through Java. EDIT: Thanks unhillbilly for the comment. I had looked at the Ron Cemer OCR library and its performance is well below our requirements. Since the OCR performance is less than desired, I was planning to build a character set from screen grabs of all the digits and use an image/pixel comparison library to compare each frame's time region against that character set, which would give a probabilistic result for each digit. So I am looking for a good image comparison library (I would be OK with a non-Java library that I can run from the command line). Also, any advice on the above approach would be really helpful.
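    The template-matching approach described in the edit is straightforward to prototype. A hedged sketch in Python/OpenCV rather than Java (file names and the crop rectangle are invented for illustration): capture one clean screen grab per digit, then score each digit-sized cell of the timestamp against all ten templates and keep the best match.

        # Per-digit template matching; matching both the cell and its inverse
        # copes with the white/black colour flips mentioned above.
        import cv2

        templates = {d: cv2.imread(f"digit_{d}.png", cv2.IMREAD_GRAYSCALE) for d in range(10)}

        def read_digit(cell):
            """Return (best_digit, score) for one digit-sized crop of the timestamp."""
            scores = {}
            for d, tpl in templates.items():
                res = cv2.matchTemplate(cell, tpl, cv2.TM_CCOEFF_NORMED)
                res_inv = cv2.matchTemplate(255 - cell, tpl, cv2.TM_CCOEFF_NORMED)
                scores[d] = float(max(res.max(), res_inv.max()))
            best = max(scores, key=scores.get)
            return best, scores[best]

        frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
        timestamp_area = frame[0:20, 0:160]   # assumed top-left region containing the clock
        # split timestamp_area into fixed-width digit cells and call read_digit on each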

    Read the article

  • Using imtophat in Matlab

    - by jaff12
    I'm trying to do top-hat filtering in Matlab. The imtophat function looks promising, but I have no idea how to use it; I haven't worked with Matlab much before. Basically, I am trying to find small spots, several pixels wide, that are local maxima in my 2-dimensional array.
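    For reference, imtophat takes the image and a structuring element, e.g. imtophat(I, strel('disk', 3)), and returns the image minus its morphological opening, which keeps only bright features smaller than the structuring element. A hedged sketch of the same idea in Python/SciPy (illustrative only, not Matlab):

        # White top-hat: keep bright spots smaller than the structuring element,
        # then threshold the response to get candidate spot locations.
        import numpy as np
        from scipy import ndimage

        data = np.random.rand(200, 200)
        data[50, 80] += 3.0                       # two artificial "spots"
        data[120, 30] += 3.0
        data = ndimage.gaussian_filter(data, 1)   # spread each spot over a few pixels

        tophat = ndimage.white_tophat(data, size=7)   # element slightly larger than the spots

        spots = np.argwhere(tophat > tophat.mean() + 4 * tophat.std())
        print(spots[:5])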

    Read the article

  • Extracting DCT coefficients from encoded images and video

    - by misha
    Is there a way to easily extract the DCT coefficients (and quantization parameters) from encoded images and video? Any decoder software must be using them to decode block-DCT encoded images and video, so I'm pretty sure the decoder knows what they are. Is there a way to expose them to whoever is using the decoder? I'm implementing some video quality assessment algorithms that work directly in the DCT domain. Currently, the majority of my code uses OpenCV, so it would be great if anyone knows of a solution using that framework. I don't mind using other libraries (perhaps libjpeg, but that seems to be for still images only), but my primary concern is to do as little format-specific work as possible (I don't want to reinvent the wheel and write my own decoders). I want to be able to open any video/image (H.264, MPEG, JPEG, etc.) that OpenCV can open, and if it's block DCT-encoded, to get the DCT coefficients. In the worst case, I know that I can write my own block DCT code, run the decompressed frames/images through it and then I'd be back in the DCT domain. That's hardly an elegant solution, and I hope I can do better. Presently, I use the fairly common OpenCV boilerplate to open images:

        IplImage *image = cvLoadImage(filename);
        // Run quality assessment metric

    The code I'm using for video is equally trivial:

        CvCapture *capture = cvCaptureFromAVI(filename);
        while (cvGrabFrame(capture)) {
            IplImage *frame = cvRetrieveFrame(capture);
            // Run quality assessment metric on frame
        }
        cvReleaseCapture(&capture);

    In both cases, I get a 3-channel IplImage in BGR format. Is there any way I can get the DCT coefficients as well?
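    For the worst-case fallback mentioned above (re-running a block DCT over decoded frames), here is a hedged sketch using OpenCV's Python interface rather than the C IplImage API from the excerpt; the file name is a placeholder. Note that this recomputes coefficients from decoded pixels and cannot recover the encoder's own coefficients or quantization parameters.

        # 8x8 block DCT over the luma plane of each decoded frame.
        import cv2
        import numpy as np

        def block_dct(gray, block=8):
            h, w = gray.shape
            h -= h % block
            w -= w % block
            img = gray[:h, :w].astype(np.float32)
            coeffs = np.empty_like(img)
            for y in range(0, h, block):
                for x in range(0, w, block):
                    coeffs[y:y+block, x:x+block] = cv2.dct(img[y:y+block, x:x+block])
            return coeffs

        cap = cv2.VideoCapture("input.avi")    # rough equivalent of cvCaptureFromAVI
        ok, frame = cap.read()
        while ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            dct_coeffs = block_dct(gray)
            # run the DCT-domain quality metric on dct_coeffs here
            ok, frame = cap.read()
        cap.release()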

    Read the article

  • GStreamer record iradio-mode artifacts

    - by Kanzeon
    I'm trying to record internet radio while listening to it. I use the pipeline below, but I have noticed that when I set iradio-mode to true, some noise appears in the recorded file (not in the playback). Without iradio-mode everything is OK, but in my app I need this mode to get the title message.

        gst-launch souphttpsrc location="<radio channel>" iradio-mode=true ! tee name=t ! queue ! decodebin2 ! audioconvert ! audioresample ! osxaudiosink t. ! queue ! filesink location=rectest.mp3
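    For context, the same tee layout (one branch for playback, one for the filesink) can be driven from code, with the stream title arriving as tag messages once iradio-mode is on. A hedged sketch using the GStreamer 1.0 Python bindings; the URL and output file are placeholders, and decodebin2/osxaudiosink from the 0.10 pipeline above are swapped for decodebin/autoaudiosink.

        # Build the pipeline, then read the stream title from TAG messages on the bus.
        import gi
        gi.require_version("Gst", "1.0")
        from gi.repository import Gst, GLib

        Gst.init(None)
        pipeline = Gst.parse_launch(
            'souphttpsrc location="http://example.com/stream" iradio-mode=true ! '
            'tee name=t ! queue ! decodebin ! audioconvert ! audioresample ! autoaudiosink '
            't. ! queue ! filesink location=rectest.mp3'
        )

        def on_tag(bus, message):
            ok, title = message.parse_tag().get_string("title")
            if ok:
                print("Now playing:", title)

        bus = pipeline.get_bus()
        bus.add_signal_watch()
        bus.connect("message::tag", on_tag)

        pipeline.set_state(Gst.State.PLAYING)
        GLib.MainLoop().run()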

    Read the article

  • Finding center of fingerprints.

    - by an_ant
    If we suppose that every fingerprint is made of concentric curves (ellipses or circles) - and I'm aware of the fact that not every fingerprint is - how can I find the center of those concentric curves? Let's take this "ideal" fingerprint and try to find its center. My approaches were: first, find the spectrum along the columns/rows of the image and look for the columns/rows that maximize a particular band of the spectrum. I thought that the column going through the center would have the most regular pattern of changing amplitudes - and therefore the most recognizable harmonic. My second approach was to count the black-and-white changes along the columns and rows, and to maximize that count over rows and columns as well. While these methods work to some extent, with some additional filtering, they fail when the fingerprint is not as ideal as this one. Can you think of any different approach? Are there standard ways to do it?
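    A minimal sketch of the second approach (counting black/white transitions), assuming a binarized image where ridges and valleys alternate; for concentric curves, the row and the column with the most transitions cross near the center.

        # Count transitions per row and per column, take the argmax of each.
        import numpy as np

        def estimate_center(binary):
            b = binary.astype(np.int16)                      # avoid uint8 wrap-around in diff
            row_trans = np.abs(np.diff(b, axis=1)).sum(axis=1)
            col_trans = np.abs(np.diff(b, axis=0)).sum(axis=0)
            return int(np.argmax(row_trans)), int(np.argmax(col_trans))  # (row, col)

        # Toy example: concentric rings around (60, 80).
        yy, xx = np.mgrid[0:120, 0:160]
        r = np.hypot(yy - 60, xx - 80)
        rings = ((r // 5) % 2 == 0).astype(np.uint8)          # alternating 5-pixel rings
        print(estimate_center(rings))                         # -> approximately (60, 80)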

    Read the article

  • AudioTrack skipping after pause and resume

    - by Markus Drösser
    Here is the problem: I play a wav file that I recorded earlier without problems, but when I call audiotrack.pause() and audiotrack.start() again after some waiting, it skips some frames of the file. Why is that? Here is my playback listener:

        // Start playback
        audioTrack.setPlaybackPositionUpdateListener(new OnPlaybackPositionUpdateListener() {
            @Override
            public void onPeriodicNotification(AudioTrack track) {
                try {
                    if (ramfile != null && ramfile.read(buffer) == -1) {
                        audioTrack.release();
                        audioTrack = null;
                        ramfile.close();
                        playing = false;
                    } else {
                        audioTrack.write(buffer, 0, buffer.length);
                    }
                } catch (IOException e) {
                    try {
                        ramfile.close();
                        playing = false;
                    } catch (IOException e1) {
                    }
                }
            }

            @Override
            public void onMarkerReached(AudioTrack track) {
                playing = false;
                track.release();
            }
        });

    Read the article

  • A group of developers releases Flac.js, a JavaScript decoder for playing audio content in the browser without relying on codecs

    A group of developers has released Flac.js, a JavaScript audio decoder for playing audio content in the browser without requiring codecs. HTML5, the upcoming Web standard, introduces the audio tag, which makes it possible to build applications that perform audio processing and synthesis in the browser. Recent browsers such as Chrome and Firefox already ship JavaScript libraries that provide methods and properties for manipulating the audio element. However, HTML5 applications handling audio content that work normally in a browser on a given operating system might not work correctly when...

    Read the article

  • Audio doesn't work on Windows XP guest (WS 7.0)

    - by Mads
    I can't get audio to work on a Windows XP guest running on VMware Workstation 7.0 with an Ubuntu 9.10 host. Windows fails to produce any audio output, and the Windows Device Manager says the Multimedia Audio Controller is not working properly. Audio is working fine in the host OS. When I open the Multimedia Audio Controller properties it says: "Device status: The drivers for this device are not installed (Code 28)". If I try to reinstall the driver I get the following error message: "Cannot Install this Hardware. There was a problem installing this hardware: Multimedia Audio Controller. An error occurred during the installation of the device: Driver is not intended for this platform." Has anyone else experienced this problem?

    Read the article

  • Bsplayer - load audio tracks from external files

    - by torran
    I have a movie file with the following streams:

        Video
        ID                         : 1
        Format                     : AVC
        Format/Info                : Advanced Video Codec
        Format profile             : [email protected]
        Format settings, CABAC     : Yes
        Format settings, ReFrames  : 5 frames
        Muxing mode                : Container [email protected]
        Codec ID                   : V_MPEG4/ISO/AVC
        Duration                   : 54mn 13s
        Bit rate                   : 3 380 Kbps
        Nominal bit rate           : 3 459 Kbps
        Width                      : 1 280 pixels
        Height                     : 720 pixels
        Display aspect ratio       : 16:9
        Frame rate                 : 23.976 fps
        Resolution                 : 8 bits
        Colorimetry                : 4:2:0
        Scan type                  : Progressive
        Bits/(Pixel*Frame)         : 0.153
        Stream size                : 1.28 GiB (88%)
        Writing library            : x264 core 88 r1471 1144615

        Audio
        ID                         : 2
        Format                     : AC-3
        Format/Info                : Audio Coding 3
        Codec ID                   : A_AC3
        Duration                   : 54mn 16s
        Bit rate mode              : Constant
        Bit rate                   : 384 Kbps
        Channel(s)                 : 6 channels
        Channel positions          : Front: L C R, Side: L R, LFE
        Sampling rate              : 48.0 KHz
        Stream size                : 149 MiB (10%)

    There are additional audio files (.mp3 and .ac3) in the same folder. How can I load them with BSPlayer? Right click > Audio > Audio streams is empty. If I open the movie with Media Player Classic, I can switch between the audio files.

    Read the article

  • How do you setup the Audio plugin for Flowplayer?

    - by codeninja
    I'm having a bit of trouble getting the audio player to work. Basically I want to initiate an mp3 player doing something like this <a href="path-to-my-audio.mp3" id="player"></a> and then use the $f() call to initiate the player. I've followed the instructions here (http://flowplayer.org/plugins/streaming/audio.html). This doesn't seem to work and I'm not sure what's wrong, because I'm able to play videos this way. Thanks for your help!

    Read the article

  • Installing Oracle Event Processing 11g by Antony Reynolds

    - by JuergenKress
    Earlier this month I was involved in organizing the Monument Family History Day. It was certainly a complex event, with dozens of presenters, guides and hundreds of visitors. So with that experience of a complex event under my belt, I decided to refresh my acquaintance with Oracle Event Processing (CEP). CEP has a developer side based on Eclipse and a runtime environment.

    Server install: The server install is very straightforward (documentation). It is recommended to use the JRockit JDK with CEP, so the steps to set up a working CEP server environment are: download the required software (JRockit - I used Oracle "JRockit 6 - R28.2.5", which includes "JRockit Mission Control 4.1" and "JRockit Real Time 4.1"; Oracle Event Processing - I used "Complex Event Processing Release 11gR1 (11.1.1.6.0)"); install JRockit by running the JRockit installer (the download is an executable binary that just needs to be marked as executable); then install CEP by unzipping the downloaded file and running the CEP installer (the unzipped file is an executable binary that may need to be marked as executable). Choose a custom install and add the examples if needed; it is not recommended to add the examples to a production environment, but they can be helpful in development.

    Developer install: The developer install requires several steps (documentation). A developer install needs access to the software for the server install, although JRockit isn't necessary for development use. Read the full article by Antony Reynolds.

    Read the article

  • multi user web game with scheduled processing?

    - by Rooq
    I have an idea for a game which I am in the process of designing, but I am struggling to establish whether the way I plan to implement it is possible. The game is a text-based sports management simulation. It will require players to take certain actions through a web browser, which will interact with a database - adding, updating and selecting. Most of the code required to be executed at this point will be fairly straightforward. The main processing will take place in applications which are scheduled to run on the server at certain times. These apps will process transactions added by the players and also perform some automatic processing based on the game date. My plan is to use an SQL Server database (at last count I require about 20 tables) and VB.NET for all the coding (coming from a mainframe programming background, this language is the simplest for me to get to grips with). I will also need a scheduling tool on the server. Can anyone tell me if what I am planning is feasible before I dive into the actual coding stage of my project?

    Read the article

  • Core Audio on iPhone - any way to change the microphone gain (either for speakerphone mic or headphone mic)?

    - by Halle
    After much searching the answer seems to be no, but I thought I'd ask here before giving up. For a project I'm working on that includes recording sound, the input levels sound a little quiet, both when the route is external mic + speaker and when it's headphone mic + headphones. Does anyone know definitively whether it is possible to programmatically change mic gain levels on the iPhone in any part of Core Audio? If not, is it possible that I'm not really in "speakerphone" mode (with the external mic at least) but only think I am? Here is my audio session init code:

        OSStatus error = AudioSessionInitialize(NULL, NULL, audioQueueHelperInterruptionListener, r);
        [...some error checking of the OSStatus...]

        // need to play out the speaker at full volume too so it is necessary to change default route below
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
        if (error) printf("couldn't set audio category!");

        UInt32 doChangeDefaultRoute = 1;
        error = AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(doChangeDefaultRoute), &doChangeDefaultRoute);
        if (error) printf("couldn't change default route!");

        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, audioQueueHelperPropListener, r);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

        UInt32 inputAvailable = 0;
        UInt32 size = sizeof(inputAvailable);
        error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
        if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", (int)error);

        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, audioQueueHelperPropListener, r);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", (int)error);

        error = AudioSessionSetActive(true);
        if (error) printf("AudioSessionSetActive (true) failed");

    Thanks very much for any pointers.

    Read the article

  • What is the best service/tool to put short audio clips on a website so users can click and listen immediately?

    - by Edward Tanguay
    I'm making a foreign-language flashcard website on which I want to have hundreds of short 3-10 second audio files available for users to click and listen to. So I am looking for a tool/service such as YouTube or Screenr.com, but for audio, which e.g. allows me to easily upload multiple kinds of audio files (mp3, wav, etc.), makes it easy to manage them online (delete, replace), and has a simple, small player (e.g. Flash) that integrates nicely into any site.

    Read the article

  • HTML5 Audio plays only once in my JavaScript code.

    - by Poul
    I have a dashboard web app in which I want to play an alert sound if it's having problems connecting. The site's ajax code polls for data and throttles down its refresh rate if it can't connect. Once the server comes back up, the site continues working. In the meantime I would like a sound to play each time it can't connect (so I know to check the server). Here is that code. This code works:

        var error_audio = new Audio("audio/"+settings.refresh.error_audio);
        error_audio.load();

        //this gets called when there is a connection error.
        function onConnectionError() {
            error_audio.play();
        }

    However, the 2nd time through the function the audio doesn't play. Digging around in Chrome's debugger, the 'played' attribute in the audio element gets set to true. Setting it to false has no effect. Any ideas?

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem while trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table. Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have dynamic behavior on the SQL side.

    If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd when you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: to run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even if you don't get a syntax error compiling the code, you might obtain this error at execution time: "The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only."

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Bindings nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.
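    The fix described above boils down to a mechanical transformation of the generated XMLA: lift every Bindings node out of its Process command and attach it directly to the enclosing Batch. A hedged sketch of that transformation in Python; the element names and XMLA namespace are assumptions based on the error message, and the article linked above has the authoritative version.

        # Move Bindings nodes from each Process command up to the Batch element.
        import xml.etree.ElementTree as ET

        NS = "http://schemas.microsoft.com/analysisservices/2003/engine"
        ET.register_namespace("", NS)

        def hoist_bindings(xmla_text):
            batch = ET.fromstring(xmla_text)               # expects the <Batch> element as root
            for process in batch.findall(f"{{{NS}}}Process"):
                for bindings in process.findall(f"{{{NS}}}Bindings"):
                    process.remove(bindings)
                    batch.append(bindings)                 # now an out-of-line binding under Batch
            return ET.tostring(batch, encoding="unicode")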

    Read the article
