Search Results

Search found 7107 results on 285 pages for 'processing efficiency'.


  • Django-imagekit: how to reduce image quality with a preprocessor_spec?

    - by pierre-guillaume-degans
    I've created this simple model class, with a preprocessor to reduce the quality of my photos (the files are .JPG):

        from django.db import models
        from imagekit.models import ImageModel
        from imagekit.specs import ImageSpec
        from imagekit import processors

        class Preprocessor(ImageSpec):
            quality = 50
            processors = [processors.Format]

        class Picture(ImageModel):
            image = models.ImageField(upload_to='pictures')

            class IKOptions:
                preprocessor_spec = Preprocessor

    The problem: the pictures' quality is not reduced. Any idea how to fix it? Thank you very much.
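    For reference, the effect the preprocessor is meant to have can be reproduced directly with PIL (the library django-imagekit builds on). A minimal sketch of that intended effect only, not of imagekit's own API; the helper name and file paths are illustrative:

        from PIL import Image

        def reduce_quality(src_path, dst_path, quality=50):
            # Re-encode the photo as a JPEG at a lower quality setting.
            img = Image.open(src_path).convert('RGB')  # ensure a JPEG-compatible mode
            img.save(dst_path, format='JPEG', quality=quality)

        reduce_quality('photo.JPG', 'photo_small.jpg', quality=50)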

    Read the article

  • Coordinate geometry operations in images/discrete space

    - by avd
    I have images which contain line segments, rays, etc. I am representing these line segments using Bresenham's algorithm (i.e. whatever coordinates I get from that algorithm between two points). Now I want to do operations such as finding the intersection point between two line segments, finding the projection of one vector onto another, and so on. The problem is that I am not working in continuous space; the line segments are approximated using Bresenham's algorithm. So I would like suggestions on the best and most efficient ways to do this. A link to a C++ library or implementation would also be good enough. Please also suggest some books which deal with such problems.
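    For the intersection part, one option is to work on the original integer endpoints rather than on the rasterized pixels, using exact integer and rational arithmetic. A sketch in Python (the question asks about C++, but the cross-product test translates directly; helper names are illustrative and collinear/touching cases are left out):

        from fractions import Fraction

        def cross(o, a, b):
            # z-component of (a - o) x (b - o); the sign gives the orientation
            return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

        def segment_intersection(p1, p2, p3, p4):
            # Exact intersection of segments p1p2 and p3p4 with integer endpoints.
            # Returns the intersection point as a pair of Fractions, or None.
            d1 = cross(p3, p4, p1)
            d2 = cross(p3, p4, p2)
            d3 = cross(p1, p2, p3)
            d4 = cross(p1, p2, p4)
            if d1 * d2 < 0 and d3 * d4 < 0:          # proper crossing only
                t = Fraction(d1, d1 - d2)            # parameter along p1p2
                x = p1[0] + t * (p2[0] - p1[0])
                y = p1[1] + t * (p2[1] - p1[1])
                return (x, y)
            return None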

    Read the article

  • Are there any well known algorithms to detect the presence of names?

    - by Rhubarb
    For example, given the string: "Bob went fishing with his friend Jim Smith." Bob and Jim Smith are both names, but "bob" and "smith" are also ordinary words; were it not for the capitalization, there would be little indication of this outside of our knowledge of the sentence. Without doing grammar analysis, are there any well-known algorithms for detecting the presence of names, at least Western names?
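    As a very crude baseline (not one of the established named-entity-recognition algorithms), a capitalization heuristic can be sketched like this in Python; the stop list and helper name are purely illustrative:

        import re

        COMMON_WORDS = {'The', 'A', 'An', 'He', 'She', 'His', 'Her', 'They'}  # illustrative stop list

        def candidate_names(sentence):
            # Rough heuristic: runs of capitalized tokens, skipping the
            # sentence-initial word and a small stop list.
            tokens = re.findall(r"[A-Za-z]+", sentence)
            names, run = [], []
            for i, tok in enumerate(tokens):
                if i != 0 and tok[0].isupper() and tok not in COMMON_WORDS:
                    run.append(tok)
                else:
                    if run:
                        names.append(' '.join(run))
                    run = []
            if run:
                names.append(' '.join(run))
            return names

        print(candidate_names("Bob went fishing with his friend Jim Smith."))
        # -> ['Jim Smith']  (misses the sentence-initial 'Bob', as the question anticipates)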

    Read the article

  • Is there an optimal config/format for a TIFF when using Tesseract or other OCR?

    - by Zando
    I'm having a bizarre problem with Tesseract. I have a name, "Janice", in a 200x40 pixel TIFF, which Tesseract interprets as blank. I'm running hundreds of names through Tesseract and they are processed fine. What I'm actually doing, though, is breaking up a larger TIFF into smaller TIFFs of one word each. In the larger TIFF, Tesseract recognizes "Janice". What could cause it to hiccup on a TIFF that contains only that word (and there's enough space around the word that none of its pixels are truncated)? I'm using ImageMagick to split the big TIFF; are there options I should set when writing out the new TIFF files?

    Read the article

  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of:
    1. Composite the new image with a negative of the old image.
    2. Save the composited image in any common format that can compress using RLE (probably PNG).
    3. Recreate the second image by compositing the previous image with the delta.
    Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
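    One way to make the deltas exactly invertible is a per-pixel XOR rather than a subtraction: unchanged pixels become zero, which PNG's filtering and deflate compress well. A hedged sketch in Python with Pillow/NumPy (the question asks about Java, where the same idea applies to a BufferedImage's raster; file names and helpers are illustrative):

        import numpy as np
        from PIL import Image

        def save_delta(prev_path, curr_path, delta_path):
            # Store curr as an XOR delta against prev; identical pixels become 0.
            prev = np.asarray(Image.open(prev_path).convert('RGBA'))
            curr = np.asarray(Image.open(curr_path).convert('RGBA'))
            delta = np.bitwise_xor(prev, curr)
            Image.fromarray(delta).save(delta_path)   # save as PNG, e.g. 'delta.png'

        def restore(prev_path, delta_path):
            # Recreate the second image exactly by XOR-ing the delta back in.
            prev = np.asarray(Image.open(prev_path).convert('RGBA'))
            delta = np.asarray(Image.open(delta_path))
            return Image.fromarray(np.bitwise_xor(prev, delta))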

    Read the article

  • How do I use CSS to add a non-rectangular border around an image?

    - by KPL
    Hello people, I have three images, and they are not square or rectangular in shape; they are faces. So, basically, my images are sized 196x196 or something like that, but they are complete squares or rectangles with the face in the middle and a transparent background in the rest of the area. Now, I want to remove the transparent background too and just keep the faces. I don't know if this is possible and, mind you, this isn't a programming question. EDIT (from comments): How do I put a border around the shape of the image, not a rectangular one around the boundary, using CSS?

    Read the article

  • Matlab: Analysis of signal

    - by Mateusz
    Hi, I have a problem with this task: for an arbitrary signal, perform a frequency analysis and give the parameters of each signal component:
    - time of the beginning and end of each component
    - frequency at the beginning and end of each component
    - amplitude (in the time domain) at the beginning and end of each component
    - level of noise in dB
    Assume that the parameters of each component, such as amplitude and frequency, change linearly in time. The sampling frequency is 1000 Hz. For example, I have a signal like this:

        Nx = 64;
        fs = 1000;
        t = 1/fs*(0:Nx-1);
        %==========================
        A1 = 1; A2 = 4;
        f1 = 500; f2 = 1000;
        x1 = A1*cos(2*pi*f1*t);
        x2 = A2*sin(2*pi*f2*t);
        %==========================
        x = x1 + x2;

    Read the article

  • How to generate a lower frequency version of a signal in Matlab?

    - by estourodepilha.com
    With a sine input, I tried to modify its frequency by cutting some lower frequencies in the spectrum, shifting the main frequency towards zero. As the signal is not fftshifted, I tried to do that by eliminating some samples at the beginning and at the end of the FFT vector:

        interval = 1;
        samplingFrequency = 44100;
        signalFrequency = 440;
        sampleDuration = 1 / samplingFrequency;
        timespan = 1 : sampleDuration : (1 + interval);
        original = sin(2 * pi * signalFrequency * timespan);
        fourierTransform = fft(original);
        frequencyCut = 10; %% Hertz
        frequencyCut = floor(frequencyCut * (length(pattern) / samplingFrequency) / 4); %% Samples
        maxFrequency = length(fourierTransform) - (2 * frequencyCut);
        signal = ifft(fourierTransform(frequencyCut + 1:maxFrequency), 'symmetric');

    But it didn't work as expected. I also tried to remove the center part of the spectrum, but that yielded a higher-frequency sine wave too. How do I make it right?

    Read the article

  • Generating a scalogram of a signal

    - by Goz
    Hi there, I'm trying to build a scalogram view for my app to see whether there is relevant information we can retrieve from a wavelet transform, as opposed to using a spectrogram to see what can be retrieved via an FFT. So far I can take a waveform and perform the forward wavelet transform on it. However, I am lost at the next step. How do I turn this information into power/energy information? I have a set of waveforms at different frequencies, but I have, as I say, no frequency information. Can anyone tell me what the next step is for turning this transformed data into a scalogram? Any help would be much appreciated, because my Google skills are failing me!
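    Assuming the forward transform yields a 2-D array of coefficients, one row per scale, the usual scalogram is just the squared magnitude of those coefficients plotted over time and scale. A minimal NumPy/Matplotlib sketch under that assumption (the array layout and names are illustrative):

        import numpy as np
        import matplotlib.pyplot as plt

        def plot_scalogram(W, scales, t):
            # W: coefficient array of shape (num_scales, num_samples) -- assumed layout
            power = np.abs(W) ** 2            # energy per scale/time cell
            plt.pcolormesh(t, scales, power, shading='auto')
            plt.yscale('log')                 # scales usually span orders of magnitude
            plt.xlabel('time')
            plt.ylabel('scale')
            plt.colorbar(label='power')
            plt.show()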

    Read the article

  • Good library to load images of different formats

    - by codymanix
    Hi, I am developing an image viewer application, just like IrfanView or ACDSee, which should be capable of viewing lots of different image file formats (not just the standard ones that can be handled with System.Drawing.Image). I am currently using ImageMagick, but it isn't very fast and seems to be unstable with some image files. Can anyone suggest a good imaging library, ideally with a .NET wrapper already available?

    Read the article

  • How can I overlay one image onto another?

    - by Edward Tanguay
    I would like to display an image composed of two images. I want image rectangle.png to show with image sticker.png on top of it, with its left-hand corner at pixel 10, 10. Here is as far as I got, but how do I combine the images?

        Image image = new Image();
        image.Source = new BitmapImage(new Uri(@"c:\test\rectangle.png"));
        image.Stretch = Stretch.None;
        image.HorizontalAlignment = HorizontalAlignment.Left;

        Image imageSticker = new Image();
        imageSticker.Source = new BitmapImage(new Uri(@"c:\test\sticker.png"));

        image.OverlayImage(imageSticker, 10, 10); // how to do this?
        TheContent.Content = image;

    Read the article

  • Cropping image with ImageScience

    - by fl00r
    ImageScience is cool and light. I am using it in my Sinatra app. But I can't understand how I can crop an image to a non-square shape, or how I can make a thumbnail with two dimensions. This is what I found on the ImageScience site:

        ImageScience.with_image(file) do |img|
          img.cropped_thumbnail(100) do |thumb|
            thumb.save "#{file}_cropped.png"
          end
          img.thumbnail(100) do |thumb|
            thumb.save "#{file}_thumb.png"
          end
          img.resize(100, 150) do |img2|
            img2.save "#{file}_resize.png"
          end
        end

    I can crop and resize a thumb only with ONE dimension, but I want to use two, as in RMagick. For example, I want to crop a 100x200px box from an image, or make a thumbnail whose width and height are not bigger than 300 (width) or 500 (height) pixels.

    Read the article

  • How do you composite an image onto another image with PIL in Python?

    - by Sebastian
    I need to take an image and place it onto a new, generated white background in order for it to be converted into a downloadable desktop wallpaper. So the process would go:
    1) Generate a new, all-white image with 1440x900 dimensions
    2) Place the existing image on top, centered
    3) Save as a single image
    In PIL, I see the ImageDraw object, but nothing indicates it can draw existing image data onto another image. Suggestions or links anyone can recommend?
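    A minimal sketch of this flow using PIL's Image.new and Image.paste (the helper name and file paths are illustrative):

        from PIL import Image

        def make_wallpaper(src_path, dst_path, size=(1440, 900)):
            background = Image.new('RGB', size, 'white')   # 1) new all-white canvas
            photo = Image.open(src_path)
            w, h = photo.size
            offset = ((size[0] - w) // 2, (size[1] - h) // 2)
            background.paste(photo, offset)                # 2) existing image on top, centered
            # for images with transparency, pass the image itself as the mask:
            # background.paste(photo, offset, photo)
            background.save(dst_path)                      # 3) save as a single image

        make_wallpaper('photo.png', 'wallpaper.png')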

    Read the article

  • Matlab - Propagate points orthogonally on to the edge of shape boundaries

    - by Graham
    Hi, I have a set of points which I want to propagate onto the edge of a shape boundary defined by a binary image. The shape boundary is a 1px-wide white edge. I also have the coordinates of these points stored in a 2-row by n-column matrix. The shape forms a concave boundary with no holes, made of around 2500 points. I want to cast a ray from each point in the set in an orthogonal direction and detect at which point it intersects the shape boundary. What would be the best method to do this? Are there some sort of ray-tracing algorithms that could be used? Or would it be a case of taking the orthogonal unit vector, multiplying it by a scalar, and testing after each multiplication whether the end point of the vector is outside the shape boundary, then finding the point of intersection once it is? Thank you very much in advance for any help!
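    A sketch of the stepping approach the question describes, written in Python/NumPy for illustration (the question is about Matlab, where the same loop translates directly; the function name and the 0.5-pixel step are illustrative choices):

        import numpy as np

        def propagate_to_boundary(point, normal, boundary, step=0.5, max_steps=10000):
            # March from `point` along the unit vector `normal` until we land on a
            # boundary pixel (boundary is a binary image, nonzero on the 1px white edge).
            # Returns the (x, y) hit position, or None if the ray leaves the image.
            n = np.asarray(normal, dtype=float)
            n /= np.linalg.norm(n)
            p = np.asarray(point, dtype=float)
            h, w = boundary.shape
            for _ in range(max_steps):
                p = p + step * n
                x, y = int(round(p[0])), int(round(p[1]))
                if not (0 <= x < w and 0 <= y < h):
                    return None               # ray left the image without hitting the edge
                if boundary[y, x]:            # row = y, column = x
                    return (x, y)
            return None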

    Read the article

  • Scanned image in C#

    - by ahmed fouad
    We want a C# solution to correct a scanned image, because it is rotated: to solve this problem we must first detect the rotation angle and then rotate the image. That was our first idea for the problem, but we now think image warping will be more accurate, as it should make the scanned image match our template; then we can process it, since we know all the coordinates of our template. I am searching for a free SDK or a free C# solution, and help with this would be great, as it is the last task in our project. Thanks to all.

    Read the article

  • Perl Parallel::ForkManager wait_all_children() takes excessively long time

    - by zhang18
    I have a script that uses Parallel::ForkManager. However, the wait_all_children() call takes an incredibly long time even after all child processes have completed. I know this because I print out some timestamps (see below). Does anyone have any idea what might be causing this (I have 16 CPU cores on my machine)?

        my $pm = Parallel::ForkManager->new(16);
        for my $i (1..16) {
            $pm->start($i) and next;
            # ... do something within the child process ...
            print (scalar localtime), " Process $i completed.\n";
            $pm->finish();
        }
        print (scalar localtime), " Waiting for some child process to finish.\n";
        $pm->wait_all_children();
        print (scalar localtime), " All processes finished.\n";

    Clearly, I get the "Waiting for some child process to finish" message first, with a timestamp of, say, 7:08:35. Then I get a list of "Process $i completed" messages, with the last one at 7:10:30. However, I do not receive the "All processes finished" message until 7:16:33(!). Why is there a 6-minute delay between 7:10:30 and 7:16:33? Thanks!

    Read the article

  • "Unsupported Image Type" exception in only few images...

    - by Nitz
    In my Java Swing project I have a frame onto which the user can drop images and save those images in a database. This works fine, but there are some images that are not displayed. These are the images (Image 1, Image 2) that are not supported; when reading them I get an exception like: javax.imageio.IIOException: Unsupported Image Type. Can I check whether the image the user dropped is supported or not? And can I convert a file that is not supported into a supported format in Java?

    Read the article

  • How to convert a 32bpp image to an indexed format?

    - by Ed Swangren
    So here are the details (I am using C#, BTW): I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a Bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format. I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an indexed format by these methods:

        // causes a "Parameter not valid" error
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed);

        // no error, but the resulting image is black, due to information loss I assume
        Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed);

    I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels with a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with, please let me know.

    EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish. We are scanning a tissue slide at high resolution (a pathology application). I write the interface to the actual scanner; we use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image. The image is displayed next to a track bar. When the user moves the track bar (adjusting the line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I currently do this using an ImageAttributes and a ColorMatrix object: when the user adjusts the track bar, I adjust the matrix. This does not give me per-pixel information, but the performance is very good. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like all pixels with a brightness value above 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.

    Read the article

  • How to display part of an image at a specific width and height?

    - by Brady Chu
    Recently I participated in a web project which has a huge number of images to handle and display on web pages. The width and height of the images end users upload cannot be controlled easily, which makes them hard to display. At first I attempted to zoom the images in/out to reach an appropriate presentation, and I managed it, but my boss is still not satisfied with my solution. The following is my approach:

        var autoResizeImage = function(maxWidth, maxHeight, objImg) {
            var img = new Image();
            img.src = objImg.src;
            img.onload = function() {
                var hRatio;
                var wRatio;
                var Ratio = 1;
                var w = img.width;
                var h = img.height;
                wRatio = maxWidth / w;
                hRatio = maxHeight / h;
                if (maxWidth == 0 && maxHeight == 0) {
                    Ratio = 1;
                } else if (maxWidth == 0) {
                    if (hRatio < 1) { Ratio = hRatio; }
                } else if (maxHeight == 0) {
                    if (wRatio < 1) { Ratio = wRatio; }
                } else if (wRatio < 1 || hRatio < 1) {
                    Ratio = (wRatio <= hRatio ? wRatio : hRatio);
                }
                if (Ratio < 1) {
                    w = w * Ratio;
                    h = h * Ratio;
                }
                w = w <= 0 ? 250 : w;
                h = h <= 0 ? 370 : h;
                objImg.height = h;
                objImg.width = w;
            };
        };

    This only limits the maximum width and height of an image, so every image in the album still ends up with a different width and height, which still looks ugly. Right now, I know we could create a DIV and use the image as its background image, but that approach is complicated and indirect, and I don't want to take it. So I'm wondering whether there is a better way to display images with a fixed width and height without distorting them? Thanks.

    Read the article

  • Parallelism in Python

    - by fmark
    What are the options for achieving parallelism in Python? I want to perform a bunch of CPU-bound calculations over some very large rasters, and would like to parallelise them. Coming from a C background, I am familiar with three approaches to parallelism:
    - message-passing processes, possibly distributed across a cluster, e.g. MPI
    - explicit shared-memory parallelism, using pthreads or fork(), pipe(), et al.
    - implicit shared-memory parallelism, using OpenMP
    Deciding on an approach to use is an exercise in trade-offs. In Python, what approaches are available and what are their characteristics? Is there a clusterable MPI clone? What are the preferred ways of achieving shared-memory parallelism? I have heard references to problems with the GIL, as well as references to tasklets. In short, what do I need to know about the different parallelization strategies in Python before choosing between them?
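    For CPU-bound work, one commonly cited option is process-based parallelism via the standard-library multiprocessing module, which sidesteps the GIL. A minimal sketch with a hypothetical per-tile worker (the tile data and function are illustrative):

        from multiprocessing import Pool

        def process_tile(tile):
            # Hypothetical CPU-bound kernel applied to one raster tile.
            return sum(v * v for v in tile)

        if __name__ == '__main__':
            tiles = [range(i, i + 1000) for i in range(0, 16000, 1000)]
            with Pool() as pool:                  # one worker process per core by default
                results = pool.map(process_tile, tiles)
            print(len(results), 'tiles processed')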

    Read the article
