Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.

Page 51 of 285

  • Video match analysis

    - by Mohammad
    Hi everybody, I am looking for an algorithm to detect a pattern in a given video file. Specifically, I want to index the moments in a tennis match video at which a service (the first kick after a goal) is hit. PS1: sorry for the broken English. PS2: I DO NOT know anything about tennis except that you need a ball to play!

    Read the article

  • How to generate a lower frequency version of a signal in Matlab?

    - by estourodepilha.com
    With a sine input, I tried to modify its frequency by cutting some lower frequencies in the spectrum, shifting the main frequency towards zero. As the signal is not fftshifted, I tried to do that by eliminating some samples at the beginning and at the end of the fft vector:

        interval = 1;
        samplingFrequency = 44100;
        signalFrequency = 440;
        sampleDuration = 1 / samplingFrequency;
        timespan = 1 : sampleDuration : (1 + interval);
        original = sin(2 * pi * signalFrequency * timespan);
        fourierTransform = fft(original);
        frequencyCut = 10; %% Hertz
        frequencyCut = floor(frequencyCut * (length(original) / samplingFrequency) / 4); %% Samples
        maxFrequency = length(fourierTransform) - (2 * frequencyCut);
        signal = ifft(fourierTransform(frequencyCut + 1:maxFrequency), 'symmetric');

    But it didn't work as expected. I also tried to remove the center part of the spectrum, but that yielded a higher-frequency sine wave too. How can I make it right?

    Read the article

  • Can anyone give me a sample DSP script in C/C++?

    - by Andrew
    I'm working on an (audio) DSP project and am wondering if there are any sample (open-source) DSP examples written in C or C++ for my MSP430 chip. I just want something as a guideline so I can write my own program using the ADC and DAC on my board for sampling. http://focus.ti.com/docs/toolsw/folders/print/msp-exp430f5438.html That's my board, the MSP430F5438 Experimenter Board; from what I heard it can run DSP code via the USB connection with the computer. I'm using CCS (Code Composer Studio, from TI) and Octave/Matlab. Any DSP example code or sites that will help me create my own would be appreciated. What I'm trying to do: take a partial audio (sampled) track, apply Nyquist-rate sampling, over- and undersampling, and then reconstruct the audio track.
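
    Not MSP430/C code, but a minimal NumPy sketch of the pipeline described above (sample a tone, undersample it around the Nyquist rate, reconstruct), which may help when prototyping alongside Octave/Matlab before porting to C; all names and numbers are illustrative assumptions.

        import numpy as np

        fs = 8000                         # original sampling rate (Hz)
        f0 = 440                          # test tone frequency (Hz)
        t = np.arange(0, 0.1, 1 / fs)
        x = np.sin(2 * np.pi * f0 * t)    # the "sampled audio track"

        # Undersampling: keep every 16th sample -> 500 Hz, below 2*f0, so aliasing is expected
        decimation = 16
        t_under = t[::decimation]
        x_under = x[::decimation]

        # Reconstruction back onto the dense time grid (simple linear interpolation here)
        x_rec = np.interp(t, t_under, x_under)

        # A large error here illustrates what violating the Nyquist rate does;
        # repeating with decimation = 8 (1000 Hz > 2*f0) gives a much smaller error.
        print("max reconstruction error:", np.max(np.abs(x - x_rec)))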

    Read the article

  • PHP GD library angle problem

    - by creativz
    I'm using PHP 5.2.13 with the GD library and tried to make a picture with imagettftext ($image, $color and $font are defined, of course):

        // image, font size, angle, x value, y value, color, font, text
        imagettftext($image, 12, 90, 10, 20, $black, $font, "This.is_a test 123");

    As you can see, I want the angle to be 90°. The problem is that the text is not being rotated properly, e.g. the dots are at the top (and not at the bottom) of the text. I read that this is a common issue and has been fixed in PHP 5.3's bundled GD, but since I have 5.2.13 running on a webhost (...) is there a solution to rotate the text properly using GD under 5.2.13? Thanks!

    Read the article

  • Solid FFmpeg wrapper for C#/.NET

    - by Lillemanden
    I have been searching the web for some time for a solid FFmpeg wrapper for C#/.NET, but I have yet to come up with something useful. I have found the following three projects, but all of them appear to be dead at an early alpha stage: FFmpeg.NET, ffmpeg-sharp and FFLIB.NET. So my question is: does anyone know of a wrapper project that is more mature? I am not looking for a full transcoding engine with job queues and more, just a simple wrapper so I do not have to make a command-line call and then parse the console output, but can make method calls and use event listeners for progress. And please feel free to mention any active projects, even if they are still in the early stages.

    Read the article

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for an arbitrary signal, perform frequency analysis and give the parameters of each signal component:

        - time of the beginning and end of each component
        - beginning and ending frequency
        - amplitude (in the time domain) at the beginning and end of each component
        - noise level in dB

    Assume that the parameters of each component, such as amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx = 64;
        fs = 1000;
        t = 1/fs*(0:Nx-1);
        %==========================
        A1 = 1; A2 = 4;
        f1 = 500; f2 = 1000;
        x1 = A1*cos(2*pi*f1*t);
        x2 = A2*sin(2*pi*f2*t);
        %==========================
        x = x1 + x2;
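
    A sketch (in Python rather than Matlab, as a neutral illustration) of one common starting point for this kind of task: a short-time Fourier transform, from which the beginning/ending time and frequency of each component can be estimated. The chirp test signal and all window settings are illustrative assumptions, not part of the question.

        import numpy as np
        from scipy import signal

        fs = 1000
        t = np.arange(0, 1.0, 1 / fs)

        # Test signal: one component whose frequency sweeps linearly from 100 Hz to 300 Hz
        x = signal.chirp(t, f0=100, t1=1.0, f1=300, method="linear")

        # Short-time Fourier transform: each column of Sxx is the spectrum of one time slice
        f, tt, Sxx = signal.spectrogram(x, fs=fs, nperseg=128, noverlap=96)

        # Crude starting point: dominant frequency and its power in every time slice,
        # from which start/end times and start/end frequencies of a component can be read off
        peaks = Sxx.argmax(axis=0)
        for time, freq, power in zip(tt, f[peaks], Sxx[peaks, np.arange(len(tt))]):
            print(f"t={time:.3f}s  f={freq:6.1f} Hz  power={power:.3e}")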

    Read the article

  • iPhone OS: Strategies for high density image work

    - by Jasconius
    I have a project coming around the bend this summer that is going to involve, potentially, an extremely high volume of image data for display. We are talking hundreds of 640x480-ish images in a given application session (scaled to a smaller resolution when displayed), and handfuls of very large (1280x1024 or higher) images at a time. I've already done some preliminary work and I've found that the typical 640x480-ish image is just a shade under 1 MB in memory when placed into a UIImageView and displayed... but the very large images can be a whopping 5+ MB in some cases. This project is actually targeted at the iPad, which, in my Instruments tests, seems to cap out at about 80-100 MB of addressable physical memory. Details aside, I need to start thinking about how to move huge volumes of image data between virtual and physical memory while preserving the fluidity and responsiveness of the application, which will be high visibility. I'm probably at the higher end of intermediate in Objective-C... so I am looking for some solid articles and advice on the following:

        1) Responsible management of UIImage and UIImageView in the name of conserving physical RAM
        2) Merits of using CGImage over UIImage, particularly for the huge images, and whether there will be any performance gain
        3) Anything dealing with memory paging, particularly as it pertains to images

    I will epilogue by saying that the numbers I have above may be off by about 10 or 15%. Images may or may not end up being bundled into the actual app itself as opposed to being loaded in from an external server.

    Read the article

  • How do you composite an image onto another image with PIL in Python?

    - by Sebastian
    I need to take an image and place it onto a new, generated white background so it can be converted into a downloadable desktop wallpaper. So the process would go:

        1) Generate a new, all-white image with 1440x900 dimensions
        2) Place the existing image on top, centered
        3) Save as a single image

    In PIL, I see the ImageDraw object, but nothing indicates it can draw existing image data onto another image. Any suggestions or links anyone can recommend?
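
    A minimal Pillow sketch of the three steps above, assuming the input file is "photo.png" (the filename is illustrative); Image.paste, rather than ImageDraw, is the call that copies one image onto another.

        from PIL import Image

        wallpaper_size = (1440, 900)
        foreground = Image.open("photo.png")               # illustrative filename

        # 1) Generate a new, all-white image with 1440x900 dimensions
        background = Image.new("RGB", wallpaper_size, (255, 255, 255))

        # 2) Place the existing image on top, centered
        fg_w, fg_h = foreground.size
        offset = ((wallpaper_size[0] - fg_w) // 2, (wallpaper_size[1] - fg_h) // 2)
        # The third argument uses the image's own alpha as a mask, if it has one.
        background.paste(foreground, offset, foreground if foreground.mode == "RGBA" else None)

        # 3) Save as a single image
        background.save("wallpaper.png")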

    Read the article

  • GD: Converting a PNG image to JPEG and making the alpha default to white instead of black

    - by Shawn
    I tried something like this, but it just makes the background of the image white, not necessarily the alpha of the image. I want to upload everything as JPEGs, so I would like to somehow "flatten" a PNG image with some transparency so that the transparent areas default to white and I can use it as a JPEG instead. I appreciate any help. Thanks.

        $old = imagecreatefrompng($upload);
        $background = imagecolorallocate($old, 255, 255, 255);
        imagefill($old, 0, 0, $background);
        imagealphablending($old, false);
        imagesavealpha($old, true);

    Read the article

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of:

        - Composite the new image with a negative of the old image.
        - Save the composited image in any common format that can compress using RLE (probably PNG).
        - Recreate the second image by compositing the previous image with the delta.

    Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
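
    Not the Java library asked for, but a small Python/NumPy sketch of the scheme above, with XOR standing in for "composite with a negative": the difference image is exact, mostly zero, and compresses well when saved losslessly as PNG. The file names are placeholders.

        import numpy as np
        from PIL import Image

        def save_delta(prev_path, curr_path, delta_path):
            # XOR is its own inverse, so the delta encodes the change exactly (losslessly).
            prev = np.asarray(Image.open(prev_path).convert("RGB"), dtype=np.uint8)
            curr = np.asarray(Image.open(curr_path).convert("RGB"), dtype=np.uint8)
            delta = prev ^ curr                       # mostly zeros when few regions changed
            Image.fromarray(delta).save(delta_path)   # e.g. "delta.png"; PNG squeezes the zeros well

        def apply_delta(prev_path, delta_path):
            prev = np.asarray(Image.open(prev_path).convert("RGB"), dtype=np.uint8)
            delta = np.asarray(Image.open(delta_path).convert("RGB"), dtype=np.uint8)
            return Image.fromarray(prev ^ delta)      # exact reconstruction of the newer image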

    Read the article

  • Generating a scalogram of a signal

    - by Goz
    Hi there, I'm trying to build a scalogram view for my app to see whether there is relevant information we can retrieve from a wavelet transform, as opposed to using a spectrogram to see what can be retrieved via an FFT. So far I can take a waveform and perform the forward wavelet transform on it. However, I am lost at the next step. How do I turn this information into power/energy information? I have a set of waveforms at different scales but, as I say, no frequency information. Can anyone tell me what the next step is for turning this transformed data into a scalogram? Any help would be much appreciated because my Google skills are failing me!
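
    A minimal sketch of one way forward, assuming the PyWavelets package rather than whatever wavelet code is already in use: the scalogram is the squared magnitude of the continuous wavelet coefficients, and pywt.cwt can also map each scale to an approximate centre frequency, which supplies the missing frequency axis.

        import numpy as np
        import pywt

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)   # test waveform

        scales = np.arange(1, 128)
        coeffs, freqs = pywt.cwt(x, scales, "morl", sampling_period=1 / fs)

        # Power (the scalogram values): squared magnitude of the wavelet coefficients.
        power = np.abs(coeffs) ** 2        # shape: (number of scales, number of samples)

        # freqs[i] is the approximate centre frequency (Hz) corresponding to scales[i].
        print(power.shape, freqs[:3])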

    Read the article

  • Matlab - Propagate points orthogonally onto the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is defined by a 1 px wide white edge. The coordinates of these points are stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes and is made up of around 2500 points. I want to cast a ray from each point in an orthogonal direction and detect the point at which it intersects the shape boundary. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking the orthogonal unit vector, multiplying it by a scalar, and testing after each multiplication whether the end point of the vector has reached the shape boundary, and when it has, taking that as the point of intersection? Thank you very much in advance for any help!
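
    A sketch of the stepping idea from the question, in Python/NumPy rather than Matlab: march from each point along its unit normal in sub-pixel steps until a white pixel of the 1 px boundary is hit. `boundary`, `point`, and `normal` are assumed inputs, and (row, col) ordering is an arbitrary choice.

        import numpy as np

        def cast_to_boundary(boundary, point, normal, step=0.5, max_dist=1000.0):
            """March from `point` along `normal` until a boundary pixel (value > 0) is hit.

            boundary: 2-D array (binary image with a 1 px wide white edge)
            point, normal: length-2 sequences in (row, col) order; normal need not be unit length
            Returns the (row, col) of the hit, or None if the ray leaves the image first.
            """
            direction = np.asarray(normal, dtype=float)
            direction = direction / np.linalg.norm(direction)
            pos = np.asarray(point, dtype=float)
            for _ in range(int(max_dist / step)):
                pos = pos + step * direction
                r, c = int(round(pos[0])), int(round(pos[1]))
                if not (0 <= r < boundary.shape[0] and 0 <= c < boundary.shape[1]):
                    return None                      # ray left the image without hitting the edge
                if boundary[r, c] > 0:               # landed on the white boundary
                    return (r, c)
            return None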

    Read the article

  • How to display part of an image at a specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on a web page. We know that the width and height of the images end users upload cannot be controlled easily, which makes them hard to display. At first, I attempted to zoom the images in/out to reach an appropriate presentation, and I managed it, but my boss is still not satisfied with my solution. The following is my approach:

        var autoResizeImage = function(maxWidth, maxHeight, objImg) {
            var img = new Image();
            img.src = objImg.src;
            img.onload = function() {
                var hRatio;
                var wRatio;
                var Ratio = 1;
                var w = img.width;
                var h = img.height;
                wRatio = maxWidth / w;
                hRatio = maxHeight / h;
                if (maxWidth == 0 && maxHeight == 0) {
                    Ratio = 1;
                } else if (maxWidth == 0) {
                    if (hRatio < 1) {
                        Ratio = hRatio;
                    }
                } else if (maxHeight == 0) {
                    if (wRatio < 1) {
                        Ratio = wRatio;
                    }
                } else if (wRatio < 1 || hRatio < 1) {
                    Ratio = (wRatio <= hRatio ? wRatio : hRatio);
                }
                if (Ratio < 1) {
                    w = w * Ratio;
                    h = h * Ratio;
                }
                w = w <= 0 ? 250 : w;
                h = h <= 0 ? 370 : h;
                objImg.height = h;
                objImg.width = w;
            };
        };

    This only limits the maximum width and height of the image, so every image in the album still has a different width and height, which still looks ugly. Right now I know we could create a DIV and use the image as its background image, but that way is complicated and indirect, and I don't want to take it. So I'm wondering whether there is a better way to display images with a fixed width and height without distorting them? Thanks.

    Read the article

  • Image coding library

    - by Dmitry
    Is there any good library for lossless image encoding/decoding that has a compression rate more or less similar to PNG, but decodes to raw RGB bitmap data much faster than PNG? Alpha transparency is also desirable, but not essential, because the alpha channel could be taken from a separate image. The original problem lies in the slowness of reading and decoding PNG files on the iPhone using the standard libraries. The obvious and simplest solution would have been storing raw RGB bitmap data, but then the size of the unpacked ipa is too large - 4 times larger than the PNG files. So I am trying to find a compromise solution.

    Read the article

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:

        - Message-passing processes, possibly distributed across a cluster, e.g. MPI.
        - Explicit shared-memory parallelism, using pthreads or fork(), pipe(), et al.
        - Implicit shared-memory parallelism, using OpenMP.

    Deciding on an approach is an exercise in trade-offs. In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard references to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
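
    A minimal sketch of the process-based route, which is the usual fit for CPU-bound work because separate processes sidestep the GIL; it assumes the raster can be split into independent chunks, and process_chunk is only a placeholder for the real calculation.

        from multiprocessing import Pool
        import os

        def process_chunk(chunk):
            # Placeholder for a CPU-bound calculation on one piece of the raster.
            return sum(value * value for value in chunk)

        if __name__ == "__main__":
            # Pretend each chunk is one row (or tile) of a large raster.
            chunks = [range(i, i + 10_000) for i in range(0, 1_000_000, 10_000)]

            # One worker process per core; each gets its own interpreter, so the GIL
            # does not serialise the computation the way threads would.
            with Pool(processes=os.cpu_count()) as pool:
                results = pool.map(process_chunk, chunks)

            print(len(results), "chunks processed")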

    Read the article

  • How to Use Calculated Color Values with ColorMatrix?

    - by Otaku
    I am changing the color values of each pixel in an image based on a calculation. The problem is that this takes over 5 seconds on my machine with a 1000x1333 image, and I'm looking for a way to make it much faster. I think ColorMatrix may be an option, but I'm having a difficult time figuring out how I would get a pixel's RGB values, use them in a calculation, and then set the new pixel value. I can see how this could be done if I were just modifying (multiplying, subtracting, etc.) the original value with a ColorMatrix, but not how I can use the pixel's returned value to calculate a new value. For example:

        Sub DarkenPicture()
            Dim clrTestFolderPath = "C:\Users\Me\Desktop\ColorTest\"
            Dim originalPicture = "original.jpg"
            Dim Luminance As Single
            Dim bitmapOriginal As Bitmap = Image.FromFile(clrTestFolderPath + originalPicture)
            Dim Clr As Color
            Dim newR As Byte
            Dim newG As Byte
            Dim newB As Byte
            For x = 0 To bitmapOriginal.Width - 1
                For y = 0 To bitmapOriginal.Height - 1
                    Clr = bitmapOriginal.GetPixel(x, y)
                    Luminance = (0.21 * Clr.R + 0.72 * Clr.G + 0.07 * Clr.B) / 255
                    newR = Clr.R * Luminance
                    newG = Clr.G * Luminance
                    newB = Clr.B * Luminance
                    bitmapOriginal.SetPixel(x, y, Color.FromArgb(newR, newG, newB))
                Next
            Next
            bitmapOriginal.Save(clrTestFolderPath + "colorized.jpg", ImageFormat.Jpeg)
        End Sub

    The Luminance value is the calculated one. I know I can set the ColorMatrix's M00, M11, M22 to 0, 0, 0 respectively and then put new values in M40, M41, M42, but each new value is calculated from a multiplication and addition of that pixel's components ((0.21 * Clr.R + 0.72 * Clr.G + 0.07 * Clr.B) / 255, and the result of that - Luminance - is multiplied by each color component). Is this even possible with ColorMatrix?
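
    Not a ColorMatrix answer, but a short NumPy sketch of the same per-pixel math expressed as whole-array operations, which is the usual way to escape a slow GetPixel/SetPixel loop; the file names are borrowed from the question only as placeholders.

        import numpy as np
        from PIL import Image

        img = np.asarray(Image.open("original.jpg"), dtype=np.float32)   # H x W x 3

        # Per-pixel luminance in [0, 1], with the same weights as in the question.
        luminance = (0.21 * img[..., 0] + 0.72 * img[..., 1] + 0.07 * img[..., 2]) / 255.0

        # Darken every channel by its pixel's luminance, all at once.
        darkened = img * luminance[..., np.newaxis]

        Image.fromarray(darkened.clip(0, 255).astype(np.uint8)).save("colorized.jpg")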

    Read the article

  • Good library to load images of different formats

    - by codymanix
    Hi, I am developing an image viewer application, much like IrfanView or ACDSee, which should be capable of viewing lots of different image file formats (not just the standard ones that can be handled with System.Drawing.Image). I am currently using ImageMagick, but it isn't very fast and seems to be unstable with some image files. Can anyone suggest a good imaging library, ideally with a .NET wrapper already available?

    Read the article

  • How do I use CSS to add a non-rectangular border around an image?

    - by KPL
    Hello people, I have three images, and they are not square or rectangular in shape; they are like faces. So, basically, my images are 196x196 or something like that, a complete square or rectangle with the face in the middle and a transparent background in the rest of the area. Now, I want to remove the transparent background too and just keep the faces. I don't know if this is possible, and mind you, this isn't a programming question. EDIT (from comments): How do I put a border around the shape of the image, not a rectangular one around the boundary, using CSS?

    Read the article

  • Best practice for writing ARRAYS

    - by Douglas
    I've got an array with about 250 entries in it, each of which is its own array of values. Each entry is a point on a map, and each array holds info for: name, another array of points this point can connect to, latitude, longitude, a short form of the name, a boolean, and another boolean. The array has been written by another developer on my team, and he has written it like this:

        names[0] = new Array;
        names[0][0] = "Campus Ice Centre";
        names[0][1] = new Array(0,1,2);
        names[0][2] = 43.95081811364498;
        names[0][3] = -78.89848709106445;
        names[0][4] = "CIC";
        names[0][5] = false;
        names[0][6] = false;
        names[1] = new Array;
        names[1][0] = "Shagwell's";
        names[1][1] = new Array(0,1);
        names[1][2] = 43.95090307839151;
        names[1][3] = -78.89815986156464;
        names[1][4] = "shg";
        names[1][5] = false;
        names[1][6] = false;

    Where I would probably have personally written it like this:

        var names = [];
        names[0] = new Array("Campus Ice Centre", new Array(0,1,2), 43.95081811364498, -78.89848709106445, "CIC", false, false);
        names[1] = new Array("Shagwell's", new Array(0,1), 43.95090307839151, -78.89815986156464, "shg", false, false);

    They both work perfectly fine, of course, but what I'm wondering is: 1) does one take longer than the other to actually process? 2) am I incorrect in assuming there is a benefit to the compactness of my version of the same thing? I'm just a little worried about his 3000 lines of code versus my 300-400 to get the same result. Thanks in advance for any guidance.

    Read the article

  • Is there an optimal config/format for a TIFF when using Tesseract or other OCR?

    - by Zando
    I'm having a bizarre problem with Tesseract. I have a name, "Janice", in a 200x40 pixel TIFF that Tesseract interprets as blank. I'm running hundreds of names through Tesseract and they are processed fine. What I'm actually doing, though, is breaking up a larger TIFF into smaller TIFFs of one word each. In the larger TIFF, Tesseract recognizes "Janice". What could cause it to hiccup on a TIFF that solely contains that word (and there's enough space around the word that none of its pixels are truncated)? I'm using ImageMagick to split the big TIFF; are there options I should set when creating the new TIFF files?

    Read the article
