Search Results

Search found 11812 results on 473 pages for 'word processing'.

Page 84/473

  • Image Erosion for face detection in C#

    - by Chris Dobinson
    Hi, I'm trying to implement face detection in C#. I currently have a black and white outline of a photo with a face within it (Here). However, I'm now trying to remove the noise and then dilate the image in order to improve reliability when I implement the detection. The method I have so far is here:

        unsafe public Image Process(Image input)
        {
            Bitmap bmp = (Bitmap)input;
            Bitmap bmpSrc = (Bitmap)input;
            BitmapData bmData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
            int stride = bmData.Stride;
            int stride2 = bmData.Stride * 2;
            IntPtr Scan0 = bmData.Scan0;
            byte* p = (byte*)(void*)Scan0;
            int nOffset = stride - bmp.Width * 3;
            int nWidth = bmp.Width - 2;
            int nHeight = bmp.Height - 2;
            var w = bmp.Width;
            var h = bmp.Height;
            var rp = p;
            var empty = CompareEmptyColor;
            byte c, cm;
            int i = 0;

            // Erode every pixel
            for (int y = 0; y < h; y++)
            {
                for (int x = 0; x < w; x++, i++)
                {
                    // Middle pixel
                    cm = p[y * w + x];
                    if (cm == empty) { continue; }

                    // Row 0
                    // Left pixel
                    if (x - 2 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 2)]; if (c == empty) { continue; } }
                    // Middle left pixel
                    if (x - 1 > 0 && y - 2 > 0) { c = p[(y - 2) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y - 2 > 0) { c = p[(y - 2) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y - 2 > 0) { c = p[(y - 2) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 1
                    // Left pixel
                    if (x - 2 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y - 1 > 0) { c = p[(y - 1) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y - 1 > 0) { c = p[(y - 1) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y - 1 > 0) { c = p[(y - 1) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 2
                    if (x - 2 > 0) { c = p[y * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0) { c = p[y * w + (x - 1)]; if (c == empty) { continue; } }
                    if (x + 1 < w) { c = p[y * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w) { c = p[y * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 3
                    if (x - 2 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y + 1 < h) { c = p[(y + 1) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y + 1 < h) { c = p[(y + 1) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y + 1 < h) { c = p[(y + 1) * w + (x + 2)]; if (c == empty) { continue; } }

                    // Row 4
                    if (x - 2 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 2)]; if (c == empty) { continue; } }
                    if (x - 1 > 0 && y + 2 < h) { c = p[(y + 2) * w + (x - 1)]; if (c == empty) { continue; } }
                    if (y + 2 < h) { c = p[(y + 2) * w + x]; if (c == empty) { continue; } }
                    if (x + 1 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 1)]; if (c == empty) { continue; } }
                    if (x + 2 < w && y + 2 < h) { c = p[(y + 2) * w + (x + 2)]; if (c == empty) { continue; } }

                    // If all neighboring pixels are processed
                    // it's clear that the current pixel is not a boundary pixel.
                    rp[i] = cm;
                }
            }

            bmpSrc.UnlockBits(bmData);
            return bmpSrc;
        }

    As I understand it, in order to erode the image (and remove the noise), we need to check each pixel to see whether its surrounding pixels are black; if so, it is a border pixel and we need not keep it, which I believe my code does, so it is beyond me why it doesn't work. Any help or pointers would be greatly appreciated. Thanks, Chris
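
    Two things stand out in the snippet itself: the bitmap is locked as Format24bppRgb but indexed as p[y * w + x] (one byte per pixel, ignoring the 3 bytes per pixel and the row stride), and rp is the same pointer as p, so the loop erodes the very buffer it is still reading neighbourhoods from. A minimal hedged sketch of binary erosion that sidesteps both issues by working on a separate 0/255 byte mask (the names and the 0/255 convention are assumptions, not taken from the original code):

        // Hedged sketch: binary erosion on a 0/255 mask with a (2r+1) x (2r+1) structuring element.
        // A pixel survives only if every neighbour in the window is foreground (255).
        static byte[,] Erode(byte[,] src, int r)
        {
            int h = src.GetLength(0), w = src.GetLength(1);
            var dst = new byte[h, w];                  // write to a separate buffer:
            for (int y = 0; y < h; y++)                // eroding in place corrupts later neighbourhood tests
            {
                for (int x = 0; x < w; x++)
                {
                    byte value = 255;
                    for (int dy = -r; dy <= r && value == 255; dy++)
                    {
                        for (int dx = -r; dx <= r; dx++)
                        {
                            int ny = y + dy, nx = x + dx;
                            if (ny < 0 || ny >= h || nx < 0 || nx >= w || src[ny, nx] == 0)
                            {
                                value = 0;             // any background neighbour removes the pixel
                                break;
                            }
                        }
                    }
                    dst[y, x] = value;
                }
            }
            return dst;
        }

    Running an erosion followed by a dilation with the same radius (a morphological opening) is the usual way to remove speckle noise before detection; converting the 24bpp bitmap to such a mask once, processing it, and writing it back avoids the stride arithmetic entirely.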

    Read the article

  • PHP/GD - Cropping and Resizing Images

    - by Alix Axel
    I've coded a function that crops an image to a given aspect ratio and finally then resizes it and outputs it as JPG: <?php function Image($image, $crop = null, $size = null) { $image = ImageCreateFromString(file_get_contents($image)); if (is_resource($image) === true) { $x = 0; $y = 0; $width = imagesx($image); $height = imagesy($image); /* CROP (Aspect Ratio) Section */ if (is_null($crop) === true) { $crop = array($width, $height); } else { $crop = array_filter(explode(':', $crop)); if (empty($crop) === true) { $crop = array($width, $height); } else { if ((empty($crop[0]) === true) || (is_numeric($crop[0]) === false)) { $crop[0] = $crop[1]; } else if ((empty($crop[1]) === true) || (is_numeric($crop[1]) === false)) { $crop[1] = $crop[0]; } } $ratio = array ( 0 => $width / $height, 1 => $crop[0] / $crop[1], ); if ($ratio[0] > $ratio[1]) { $width = $height * $ratio[1]; $x = (imagesx($image) - $width) / 2; } else if ($ratio[0] < $ratio[1]) { $height = $width / $ratio[1]; $y = (imagesy($image) - $height) / 2; } /* How can I skip (join) this operation with the one in the Resize Section? */ $result = ImageCreateTrueColor($width, $height); if (is_resource($result) === true) { ImageSaveAlpha($result, true); ImageAlphaBlending($result, false); ImageFill($result, 0, 0, ImageColorAllocateAlpha($result, 255, 255, 255, 127)); ImageCopyResampled($result, $image, 0, 0, $x, $y, $width, $height, $width, $height); $image = $result; } } /* Resize Section */ if (is_null($size) === true) { $size = array(imagesx($image), imagesy($image)); } else { $size = array_filter(explode('x', $size)); if (empty($size) === true) { $size = array(imagesx($image), imagesy($image)); } else { if ((empty($size[0]) === true) || (is_numeric($size[0]) === false)) { $size[0] = round($size[1] * imagesx($image) / imagesy($image)); } else if ((empty($size[1]) === true) || (is_numeric($size[1]) === false)) { $size[1] = round($size[0] * imagesy($image) / imagesx($image)); } } } $result = ImageCreateTrueColor($size[0], $size[1]); if (is_resource($result) === true) { ImageSaveAlpha($result, true); ImageAlphaBlending($result, true); ImageFill($result, 0, 0, ImageColorAllocate($result, 255, 255, 255)); ImageCopyResampled($result, $image, 0, 0, 0, 0, $size[0], $size[1], imagesx($image), imagesy($image)); header('Content-Type: image/jpeg'); ImageInterlace($result, true); ImageJPEG($result, null, 90); } } return false; } ?> The function works as expected but I'm creating a non-required GD image resource, how can I fix it? I've tried joining both calls but I must be doing some miscalculations. <?php /* Usage Examples */ Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '1:1', '600x'); Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '2:1', '600x'); Image('http://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png', '2:', '250x300'); ?> Any help is greatly appreciated, thanks.
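
    For what it's worth, the two resamples can usually be collapsed into one call: GD's imagecopyresampled takes the crop offsets as the source position, the crop size as the source size, and the final size as the destination size, so the intermediate canvas is unnecessary. Here is the same single-pass idea sketched in C# with System.Drawing, kept in that language for consistency with the other sketches on this page (names and rectangles are illustrative only):

        using System.Drawing;
        using System.Drawing.Drawing2D;

        // Hedged sketch: crop and resize in a single resample.
        // srcRect selects the aspect-ratio crop; destSize is the final output size.
        static Bitmap CropAndResize(Image source, Rectangle srcRect, Size destSize)
        {
            var result = new Bitmap(destSize.Width, destSize.Height);
            using (var g = Graphics.FromImage(result))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.DrawImage(source,
                            new Rectangle(Point.Empty, destSize),  // where to draw (the whole output)
                            srcRect,                               // which part of the source to take
                            GraphicsUnit.Pixel);
            }
            return result;
        }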

    Read the article

  • Offer the possibility to preview a personalized product

    - by bellesebastien
    I want to give my clients the possibility to preview their personalized product. In a nutshell, personalization means the user specifies some text and positions it somewhere on an image of the product. A great example of what I'd like to achieve is this: select a frame size and hit Personalize; after you select the occasion and add the text, you can choose to preview the product. The hardest part, as I see it, is generating the image; the positioning of the text will probably be handled in JavaScript. Do you have any recommendations? Just as a note, using Scene7 is not an option. Thanks.
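
    If the preview is generated server-side, the core operation is just drawing the visitor's text onto the product image at the coordinates the JavaScript picker reports. A hedged C# sketch of that step (font, colour and method names are illustrative, not from any particular e-commerce package):

        using System.Drawing;
        using System.Drawing.Text;

        // Hedged sketch: render personalised text onto a copy of the product photo for preview.
        static Bitmap RenderPreview(Image productPhoto, string text, PointF position)
        {
            var preview = new Bitmap(productPhoto);          // work on a copy, keep the original clean
            using (var g = Graphics.FromImage(preview))
            using (var font = new Font("Georgia", 24, FontStyle.Italic))
            using (var brush = new SolidBrush(Color.DarkSlateGray))
            {
                g.TextRenderingHint = TextRenderingHint.AntiAlias;
                g.DrawString(text, font, brush, position);   // position comes from the client-side picker
            }
            return preview;
        }

    The same drawing could be done with PHP/GD, ImageMagick or PIL; the only contract with the front end is the text and its x/y position.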

    Read the article

  • Dynamically create and save image with Django and PIL/Django-Photologue

    - by Travis
    I want to generate a page of HTML with dynamic content and then save the result as an image, as well as display the page to the user. So: a user signs up to attend the conference and gives their name and organisation. That data is combined with HTML/CSS elements to show what their ID badge for the conference will look like (their name and org on top of our conference logo background) as a preview. Upon approval, the page is saved on the server to an image format (PNG, PDF or JPG) to be printed onto a physical badge by an admin later. I am using Django and django-photologue powered by PIL. The view might look like this:

        # app/views.py
        def badgepreview(request, attendee_id):
            u = User.objects.get(id=attendee_id)
            name = u.name
            org = u.org
            return render_to_response('app/badgepreview.html',
                                      {'name': name, 'org': org},
                                      context_instance=RequestContext(request),
                                      )

    The template could look like this:

        {# templates/app/badgepreview.html #}
        {% extends "base.html" %}
        {% block page_body %}
        <div style='background:url(/site_media/img/logo_bg.png) no-repeat;'>
            <h4>{{ name }}</h4>
            <h4>{{ org }}</h4>
        </div>
        {% endblock %}

    Simple, but how do I save the result? Or is there a better way to get this done?

    Read the article

  • C# Image Deskew

    - by Brett
    Has anyone seen any image deskew algorithms in C#? I have found http://www.codeproject.com/KB/graphics/Deskew_an_Image.aspx, but unfortunately it doesn't do very much for images without text. I am specifically trying to auto-deskew an image of a book with solid edges but minimal text. Has anyone seen anything that may be able to do this?
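
    Since the book's edges are the strongest straight features, one option is to estimate the skew from the dominant line angle rather than from text. A hedged sketch using AForge.NET's Hough line transform (class and method names are recalled from that library and worth verifying, and the sign of the correction may need flipping for your rotation convention):

        using AForge.Imaging;
        using AForge.Imaging.Filters;

        // Hedged sketch: estimate skew from the most intensive straight line, then rotate to undo it.
        static Bitmap Deskew(Bitmap input)
        {
            // Hough expects a binary-ish grayscale image, so reduce and edge-detect first.
            Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(input);
            Bitmap edges = new SobelEdgeDetector().Apply(gray);
            new Threshold(60).ApplyInPlace(edges);             // threshold value is a guess to tune

            var hough = new HoughLineTransformation();
            hough.ProcessImage(edges);
            HoughLine[] lines = hough.GetMostIntensiveLines(5);
            if (lines.Length == 0)
                return input;

            // A page edge near horizontal or vertical reports an angle close to 0 or 90 degrees;
            // its deviation from the nearest axis is the skew.
            double theta = lines[0].Theta;
            double skew = theta > 45 ? theta - 90 : theta;

            // Rotate the grayscale copy here; rotate the original instead if its pixel format is supported.
            return new RotateBilinear(-skew, true).Apply(gray);
        }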

    Read the article

  • Any task-control algorithms programming practices?

    - by NumberFour
    Hi, I was just wondering if there's any field that concerns task-control programming (or at least that's what I call it). For a better explanation of task control, consider the following scenario: an application (master thread) waits for a command, which might be a particular action or a set of actions the application should perform. When a command is received, the master thread creates a task (i.e. spawns an independent thread that actually does the action) and adds a record to its task list, thus keeping track of the time of execution, thread handle, task priority, etc. The master thread waits for any other incoming commands while taking care of all the tasks: it kills tasks running too long, prioritizes tasks with higher priorities, kills a task on the request of another task, limits the number of currently running tasks, allows task scheduling, cleans up finished tasks (threads), and so on. The model is pretty similar to the way an OS deals with running processes. Are there any good practices for programming such task models, or is there theoretical work done in this field? Maybe my question is too general, but at least I wanted to know whether there is any experience working with such models or whether there's a better approach. Thanks for any answers.
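
    There is established work on this under names like job scheduling, thread pools, the actor model and supervisor hierarchies (Erlang/OTP supervisors are the classic example), and operating-systems texts cover the same bookkeeping as process scheduling. A minimal hedged sketch of the record-keeping part in C# with the Task Parallel Library (all names are invented for illustration; a .NET 2-era version would use Thread objects instead):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        // Hedged sketch: a master that tracks spawned tasks and cancels the ones running too long.
        class TaskSupervisor
        {
            private sealed record Entry(Task Task, CancellationTokenSource Cts, DateTime Started, int Priority);
            private readonly ConcurrentDictionary<Guid, Entry> _tasks = new();

            public Guid Submit(Action<CancellationToken> work, int priority)
            {
                var cts = new CancellationTokenSource();
                var id = Guid.NewGuid();
                var task = Task.Run(() => work(cts.Token), cts.Token);
                _tasks[id] = new Entry(task, cts, DateTime.UtcNow, priority);
                task.ContinueWith(_ => _tasks.TryRemove(id, out _));   // clean up finished tasks
                return id;
            }

            public void Kill(Guid id)
            {
                if (_tasks.TryRemove(id, out var entry)) entry.Cts.Cancel();
            }

            // Called periodically by the master thread: cancel anything over its time budget.
            public void ReapLongRunners(TimeSpan budget)
            {
                foreach (var pair in _tasks)
                    if (DateTime.UtcNow - pair.Value.Started > budget)
                        Kill(pair.Key);
            }
        }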

    Read the article

  • Laplacian of Gaussian: how does it work? (opencv)

    - by maximus
    Does anybody know how it works and how to do it using OpenCV? The Laplacian can be calculated with OpenCV, but the result is not what I expected. I mean, I expect the image to have approximately constant contrast in background regions, but it is black, and the edges are white. There is also a lot of noise, even after the Gaussian filter. I filter the image using a Gaussian filter and then apply the Laplacian. I think what I want is done a different way.
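
    For what it's worth, the result described sounds correct: the Laplacian is a second-derivative filter, so flat background regions come out near zero (displayed as black) and edges produce strong positive and negative swings. To get the textbook-looking picture, the signed output has to be rescaled, for example by taking the absolute value or offsetting it so zero maps to mid-grey; in OpenCV the usual pipeline is GaussianBlur, then Laplacian with a signed depth such as CV_16S, then convertScaleAbs. A hedged, self-contained C# sketch of the same pipeline on a grayscale array, with the remapping step included:

        using System;

        // Hedged sketch: Laplacian of Gaussian on a grayscale image stored as double[h, w],
        // with the signed result mapped so that zero becomes mid-grey (128).
        static double[,] Convolve3x3(double[,] src, double[,] k)
        {
            int h = src.GetLength(0), w = src.GetLength(1);
            var dst = new double[h, w];
            for (int y = 1; y < h - 1; y++)
                for (int x = 1; x < w - 1; x++)
                {
                    double sum = 0;
                    for (int j = -1; j <= 1; j++)
                        for (int i = -1; i <= 1; i++)
                            sum += k[j + 1, i + 1] * src[y + j, x + i];
                    dst[y, x] = sum;
                }
            return dst;
        }

        static byte[,] LaplacianOfGaussian(double[,] gray)
        {
            var gauss = new double[,] { { 1/16.0, 2/16.0, 1/16.0 },
                                        { 2/16.0, 4/16.0, 2/16.0 },
                                        { 1/16.0, 2/16.0, 1/16.0 } };
            var laplace = new double[,] { { 0,  1, 0 },
                                          { 1, -4, 1 },
                                          { 0,  1, 0 } };
            double[,] log = Convolve3x3(Convolve3x3(gray, gauss), laplace);

            int h = log.GetLength(0), w = log.GetLength(1);
            var view = new byte[h, w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    view[y, x] = (byte)Math.Max(0, Math.Min(255, 128 + log[y, x]));   // zero -> mid-grey
            return view;
        }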

    Read the article

  • Why aren't we programming on the GPU???

    - by Chris
    So I finally took the time to learn CUDA and get it installed and configured on my computer, and I have to say I'm quite impressed! Here's how it does rendering the Mandelbrot set at 1280 x 678 pixels on my home PC with a Q6600 and a GeForce 8800GTS (max of 1000 iterations):

        Maxing out all 4 CPU cores with OpenMP: 2.23 fps
        Running the same algorithm on my GPU: 104.7 fps

    And here's how fast I got it to render the whole set at 8192 x 8192 with a max of 1000 iterations:

        Serial implementation on my home PC: 81.2 seconds
        All 4 CPU cores on my home PC (OpenMP): 24.5 seconds
        32 processors on my school's supercomputer (MPI with master-worker): 1.92 seconds
        My home GPU (CUDA): 0.310 seconds
        4 GPUs on my school's supercomputer (CUDA with static domain decomposition): 0.0547 seconds

    So here's my question: if we can get such huge speedups by programming the GPU instead of the CPU, why is nobody doing it? I can think of so many things we could speed up like this, and yet I don't know of many commercial apps that are actually doing it. Also, what other kinds of speedups have you seen by offloading your computations to the GPU?

    Read the article

  • How to use gnlcomposition to concatenate video files?

    - by Hardy
    Hi all, I am trying to concatenate two video files with the gnonlin components of GStreamer. The pipeline I am using is:

        gst-launch-0.10 gnlcomposition {
            gnlfilesource name="s1" location="/home/s1.mp4" start=0          duration=2000000000 media-start=0 media-duration=2000000000
            gnlfilesource name="s2" location="/home/s2.mp4" start=2000000000 duration=2000000000 media-start=0 media-duration=2000000000
        } ! queue ! videorate ! progressreport name="Merging Progress" ! ffmpegcolorspace ! ffenc_mpeg4 ! ffmux_mp4 ! filesink location="/home/merge.mp4"

    As a result I am getting only the second file, for the duration specified in the parameters. I have tried several things and also searched on Google, but I could not figure out the problem with the above command. Can anyone point out what I am doing wrong? Any other way of concatenating multiple files into one based on time is welcome too. Thanks

    Read the article

  • OCR for Devanagari (Hindi / Marathi / Sanskrit)

    - by Egon
    Does anybody have any idea about recent work being done on optical character recognition for Indian scripts using modern machine-learning techniques? I know of some research being done at ISI, Calcutta, but to the best of my knowledge nothing new has come up in the last 3-4 years, and OCR for Devanagari is sadly lacking!

    Read the article

  • Image Recognition (Shape recognition)

    - by mqpasta
    I want to recognize the shapes in a picture by template matching. Is the "ExhaustiveTemplateMatching" class provided in AForge.Net the right option for this purpose? Has anyone tried this class and found it to work correctly? How accurate is it, and is it the right choice for what I'm trying to achieve? Please suggest any other methods or algorithms for recognizing shapes by template matching as well, for example identifying a ComboBox in a picture.
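
    ExhaustiveTemplateMatching is a plain sliding-window correlation, so it only finds the template at (roughly) the scale and rotation at which it appears in the picture; for a UI element such as a ComboBox rendered at a fixed size that is usually acceptable, but it will not handle rotated or rescaled shapes. A hedged usage sketch (AForge type and member names are recalled from the library's documentation and worth double-checking; both images normally need to share the same pixel format):

        using System.Drawing;
        using AForge.Imaging;

        // Hedged sketch: find occurrences of a template (e.g. a ComboBox screenshot) in a larger image.
        static Rectangle[] FindTemplate(Bitmap source, Bitmap template, float similarityThreshold)
        {
            // Matches scoring below the threshold are discarded; 1.0f would mean a pixel-perfect hit.
            var matcher = new ExhaustiveTemplateMatching(similarityThreshold);
            TemplateMatch[] matches = matcher.ProcessImage(source, template);

            var found = new Rectangle[matches.Length];
            for (int i = 0; i < matches.Length; i++)
                found[i] = matches[i].Rectangle;       // matches[i].Similarity holds the score
            return found;
        }

    For shapes that may be rotated or scaled, contour/blob analysis or feature-based methods are usually a better fit than exhaustive template matching.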

    Read the article

  • computing matting Laplacian matrix of an Image

    - by ajith
    Hi everyone, I need to compute the matting Laplacian matrix L for an n x n image in OpenCV. The computation goes as follows: summing over all windows wk that contain both pixels i and j, each term

        δij − (1/|wk|) · [1 + (Ii − µk)(Ij − µk) / (σk² + ε/|wk|)]

    contributes to the (i,j)-th element of L. Here δij is the Kronecker delta, µk and σk² are the mean and variance of the intensities in the window wk around pixel k, and |wk| is the number of pixels in that window (wk is a 3x3 window). I am not clear about two things: 1. What will be the size of L — n x n, or (n x n) x (n x n)? 2. How do I select Ii and Ij separately in a 2D image?

    Read the article

  • Recognize objects in image

    - by DoomStone
    Hello, I am in the process of doing a school project where we have a robot driving on the ground between Flamingo plates. We need to create an algorithm that can identify the locations of these plates so we can create paths around them (we are using A* for that). So far we have worked with the AForge library and have created the following class. The only problem is that when it creates the rectangles it does not take into account that the plates are not always parallel with the camera border, and in that case it just creates a rectangle that covers the whole plate. So we need some way to find the rotation of the object, or another way to identify this. I have created an image that might help explain the problem: http://img683.imageshack.us/img683/9835/imagerectangle.png Any help on how I can do this would be greatly appreciated. Any other information or ideas are always welcome.

        public class PasteMap
        {
            private Bitmap image;
            private Bitmap processedImage;
            private Rectangle[] rectangels;

            public void initialize(Bitmap image)
            {
                this.image = image;
            }

            public void process()
            {
                processedImage = image;
                processedImage = applyFilters(processedImage);
                processedImage = filterWhite(processedImage);
                rectangels = extractRectangles(processedImage);
                //rectangels = filterRectangles(rectangels);
                processedImage = drawRectangelsToImage(processedImage, rectangels);
            }

            public Bitmap getProcessedImage
            {
                get { return processedImage; }
            }

            public Rectangle[] getRectangles
            {
                get { return rectangels; }
            }

            private Bitmap applyFilters(Bitmap image)
            {
                image = new ContrastCorrection(2).Apply(image);
                image = new GaussianBlur(10, 10).Apply(image);
                return image;
            }

            private Bitmap filterWhite(Bitmap image)
            {
                Bitmap test = new Bitmap(image.Width, image.Height);
                for (int width = 0; width < image.Width; width++)
                {
                    for (int height = 0; height < image.Height; height++)
                    {
                        if (image.GetPixel(width, height).R > 200 &&
                            image.GetPixel(width, height).G > 200 &&
                            image.GetPixel(width, height).B > 200)
                        {
                            test.SetPixel(width, height, Color.White);
                        }
                        else
                            test.SetPixel(width, height, Color.Black);
                    }
                }
                return test;
            }

            private Rectangle[] extractRectangles(Bitmap image)
            {
                BlobCounter bc = new BlobCounter();
                bc.FilterBlobs = true;
                bc.MinWidth = 5;
                bc.MinHeight = 5;
                // process binary image
                bc.ProcessImage(image);
                Blob[] blobs = bc.GetObjects(image, false);
                // process blobs
                List<Rectangle> rects = new List<Rectangle>();
                foreach (Blob blob in blobs)
                {
                    if (blob.Area > 1000)
                    {
                        rects.Add(blob.Rectangle);
                    }
                }
                return rects.ToArray();
            }

            private Rectangle[] filterRectangles(Rectangle[] rects)
            {
                List<Rectangle> Rectangles = new List<Rectangle>();
                foreach (Rectangle rect in rects)
                {
                    if (rect.Width > 75 && rect.Height > 75)
                        Rectangles.Add(rect);
                }
                return Rectangles.ToArray();
            }

            private Bitmap drawRectangelsToImage(Bitmap image, Rectangle[] rects)
            {
                BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height),
                                                 ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
                foreach (Rectangle rect in rects)
                    Drawing.FillRectangle(data, rect, Color.Red);
                image.UnlockBits(data);
                return image;
            }
        }
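
    One way to get the rotation without leaving the binary image is to compute each blob's central image moments: the orientation of the major axis is 0.5 * atan2(2*mu11, mu20 - mu02). A hedged, self-contained sketch that works on the same kind of white-on-black mask filterWhite produces (GetPixel is used for clarity; LockBits would be faster):

        using System;
        using System.Drawing;

        // Hedged sketch: orientation, in degrees, of the white pixels inside a blob's bounding
        // rectangle, computed from the central image moments of the binary mask.
        static double BlobOrientation(Bitmap mask, Rectangle blob)
        {
            double m00 = 0, m10 = 0, m01 = 0;
            for (int y = blob.Top; y < blob.Bottom; y++)
                for (int x = blob.Left; x < blob.Right; x++)
                    if (mask.GetPixel(x, y).R > 128) { m00++; m10 += x; m01 += y; }

            if (m00 == 0) return 0;
            double cx = m10 / m00, cy = m01 / m00;              // centroid of the blob

            double mu20 = 0, mu02 = 0, mu11 = 0;
            for (int y = blob.Top; y < blob.Bottom; y++)
                for (int x = blob.Left; x < blob.Right; x++)
                    if (mask.GetPixel(x, y).R > 128)
                    {
                        double dx = x - cx, dy = y - cy;
                        mu20 += dx * dx; mu02 += dy * dy; mu11 += dx * dy;
                    }

            // Angle of the blob's major axis relative to the x axis.
            return 0.5 * Math.Atan2(2 * mu11, mu20 - mu02) * 180.0 / Math.PI;
        }

    With the centre, size and angle of each blob you can describe the plates as rotated rectangles instead of axis-aligned bounding boxes and feed those into the A* obstacle map.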

    Read the article

  • Finding edge and corner values of an image in matlab

    - by James
    Hi, this problem links to two other questions i've asked on here. I am tracing the outline of an image and plotting this to a dxf file. I would like to use the bwboundaries function to find the coordinates of the edges of the image, find the corner coordinates using the cornermetric function and then remove any edge coordinates that are not a corner. The important thing I need to be able to do is keep the order of the corner elements obtained from bwboundaries, so that the section traces properly. The dxf function I have that draws from the coordinates draws lines between coordinates that are next to each other, so the line has to be drawn "around" the section rather than straight between the corner points. The reason I am doing this is because there are less coordinates obtained this way, so it is easier to amend the dxf file (as there are less points to manipulate). The code I have so far is: %# Shape to be traced bw = zeros(200); bw(20:40,20:180) = 1; bw(20:180,90:110) = 1; bw(140:180,20:185) = 1; %# Boundary Finding Section [Boundary] = bwboundaries(bw); %Traces the boundary of each section figure, imshow(bw); hold on; colors=['b' 'g' 'r' 'c' 'm' 'y']; for k=1:length(Boundary) perim = Boundary{k}; %Obtains perimeter coordinates (as a 2D matrix) from the cell array cidx = mod(k,length(colors))+1;% Obtains colours for the plot plot(perim(:,2), perim(:,1),... colors(cidx),'LineWidth',2); end Coordmat = cell2mat(Boundary) %Converts the traced regions to a matrix X = Coordmat(:,1) Y = Coordmat(:,2) % This gives the edge coordinates in matrix form %% Corner Finding Section (from Jonas' answer to a previous question %# get corners cornerProbability = cornermetric(bw); cornerIdx = find(cornerProbability==max(cornerProbability(:))); %# Label the image. bwlabel puts 1 for the first feature, 2 for the second, etc. %# Since concave corners are placed just outside the feature, grow the features %# a little before labeling bw2 = imdilate(bw,ones(3)); labeledImage = bwlabel(bw2); %# read the feature number associated with the corner cornerLabels = labeledImage(cornerIdx); %# find all corners that are associated with feature 1 corners_1 = cornerIdx(cornerLabels==1) [Xcorners, Ycorners] = ind2sub(200,corners_1) % Convert subscripts The code I have is, to give a matrix Xfin for the final x coordinates (which are on the edge AND at a corner. Xfin = zeros(length(X),1) for i = Xcorners XFin(i) = Xcorners if i~= Xcorners XFin(i) = [] end end However, this does not work correctly, because the values in the solution are sorted into order, and only one of each value remains. As I said, I would like the corner elements to be in the same order as obtained from bwboundaries, to allow the image to trace properly. Thanks

    Read the article

  • Is there any super fast algorithm for finding LINES on picture?

    - by Ole Jak
    So I have an image like this, and I need some super fast algorithm for finding all straight lines in it. I want to give the algorithm parameters such as minimum length and maximum line distortion, and get back the start and end points of the lines in picture pixel coordinates. On this picture that means finding all the lines between the floor slabs and those two black lines on top. So I need an algorithm for very fast detection of straight lines of different colors in a picture. Is there any such algorithm? (super duper fast =)

    Read the article

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo simulation that calculates capital adequacy figures by region for a set of obligors. It is running about 10x too slow for where it will need to be in production, given the number of daily runs required. Additionally, the granularity of the result figures will need to be improved down to desk, possibly book, level at some stage; the code I've been given is basically a prototype which is used by business units in a semi-production capacity.

    The application is currently single threaded, so I'll need to make it multi-threaded. I may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library, but I'm constrained to .NET 2 on the server at this bank, so I may have to consider this guy's port, http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1, but it's a major exercise in an organisation of this size and might not be possible in my contract time frames.

    I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones? A lot of the execution time is spent generating uniform random numbers and then translating them to normally distributed random numbers. They are using a C# Mersenne Twister implementation. I am not sure where they got it, or whether it's the best way to go about generating the uniform random numbers (or the best implementation). These are then translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet).

    Also, what is your experience with the following? http://quantlib.org http://www.qlnet.org (C# port of QuantLib) or http://www.boost.org Any alternatives you know of? I'm a C# developer, so I would prefer C#, but a wrapper to C++ shouldn't be a problem, should it? Maybe it would even be faster leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step, and they may have other functions that will be helpful in the subsequent calculations.

    Also, the computer this runs on is a quad-core Opteron 275 with 8 GB of memory, but Windows Server 2003 Enterprise 32 bit. Should I advise them to upgrade to a 64-bit OS? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
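
    On the uniform-to-normal step specifically, the usual choices are the Box-Muller transform (or its polar/Marsaglia variant), inverse-CDF methods such as Moro's or Acklam's algorithm (often preferred in finance because they pair well with quasi-random sequences), and the ziggurat method for raw speed. A hedged sketch of the polar variant sitting on top of any uniform generator, Mersenne Twister included (the Random parameter is just a stand-in for whatever uniform source is in use):

        using System;

        // Hedged sketch: polar (Marsaglia) Box-Muller transform. Turns pairs of uniforms into pairs
        // of independent standard normals, caching the spare value between calls.
        class NormalGenerator
        {
            private readonly Random _uniform;     // swap in the Mersenne Twister here
            private double _spare;
            private bool _hasSpare;

            public NormalGenerator(Random uniform) { _uniform = uniform; }

            public double Next()
            {
                if (_hasSpare) { _hasSpare = false; return _spare; }

                double u, v, s;
                do
                {
                    u = 2.0 * _uniform.NextDouble() - 1.0;
                    v = 2.0 * _uniform.NextDouble() - 1.0;
                    s = u * u + v * v;
                }
                while (s >= 1.0 || s == 0.0);

                double factor = Math.Sqrt(-2.0 * Math.Log(s) / s);
                _spare = v * factor;
                _hasSpare = true;
                return u * factor;                // mean 0, standard deviation 1
            }
        }

    Note that each generator instance (and its underlying uniform source) should stay on a single thread, or be instantiated per thread, when the simulation is parallelised.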

    Read the article

  • JPEG artifacts removal in C#

    - by Arcturus
    Hi all, I am building a website for a club that is part of a mother organisation. I am downloading (leeching ;)) the images that were put on the profile pages of the mother organisation to show on my own page. But their website has a nice white background, and my website has a nice gray gradient as the background, which does not match nicely. So my idea was to edit the images before saving them to my server. I am using GDI+ to enhance my images, and when I use Bitmap's MakeTransparent method it does work and does what it's supposed to do, but I still have these white JPEG artifacts all over the place. The artifacts make the image look so bad that I am better off not making the image transparent and just leaving it white, but that's really ugly on my own website. I can always add a nice border with a white background, of course, but I would rather change the background to transparent. So I was wondering whether and how I can remove some simple JPEG artifacts in C#. Has anyone ever done this before? Thanks for your time. (Example image and transformed image omitted.)
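
    Bitmap.MakeTransparent only knocks out pixels that exactly match the given colour, and JPEG compression smears the white background into a halo of almost-white pixels, which is why the fringe survives. A hedged sketch that clears anything within a tolerance of white instead (the tolerance is a guess to tune per image; GetPixel/SetPixel are used for clarity):

        using System.Drawing;
        using System.Drawing.Imaging;

        // Hedged sketch: make every pixel that is "close enough" to white fully transparent.
        static Bitmap RemoveNearWhite(Bitmap source, int tolerance)
        {
            var result = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppArgb);
            for (int y = 0; y < source.Height; y++)
                for (int x = 0; x < source.Width; x++)
                {
                    Color c = source.GetPixel(x, y);
                    bool nearWhite = c.R >= 255 - tolerance &&
                                     c.G >= 255 - tolerance &&
                                     c.B >= 255 - tolerance;
                    result.SetPixel(x, y, nearWhite ? Color.Transparent : c);
                }
            return result;        // save this as PNG; JPEG cannot store the alpha channel
        }

    A light blur or median filter applied to the resulting alpha channel softens the cut-out edge further; alternatively, compositing the image onto the site's gray gradient server-side avoids transparency altogether.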

    Read the article

  • Float PPM image file format?

    - by Luca
    I've found a PPM image with the header starting with PF. The resolution number is stored as a floating point value (-1.000). No comments are included to indicate how it was produced. Going by the resolution, each pixel is composed of 12 bytes (4 bytes per component)... I suppose they are float or integer numbers. The problem is that I cannot get a clear image. Has anyone come across this kind of image before?
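
    A header of "PF" together with a negative scale line (-1.000) is almost certainly the Portable FloatMap (PFM) cousin of PPM rather than a plain PPM: "PF" means three 32-bit floats per pixel (the 12 bytes observed), "Pf" would mean one, and the sign of the scale encodes the byte order (negative = little-endian). A hedged reader sketch under those assumptions; note that PFM scanlines are conventionally stored bottom-up and the float values usually need tone mapping before they look like a sensible 8-bit image:

        using System;
        using System.IO;
        using System.Text;

        // Hedged sketch: read a little-endian colour PFM ("PF") file into float[height, width, 3].
        static float[,,] ReadPfm(string path, out int width, out int height)
        {
            using var reader = new BinaryReader(File.OpenRead(path));

            string ReadToken()                        // whitespace-separated header tokens
            {
                var sb = new StringBuilder();
                char c;
                while (char.IsWhiteSpace(c = (char)reader.ReadByte())) { }
                do { sb.Append(c); } while (!char.IsWhiteSpace(c = (char)reader.ReadByte()));
                return sb.ToString();
            }

            if (ReadToken() != "PF") throw new InvalidDataException("not a colour PFM file");
            width = int.Parse(ReadToken());
            height = int.Parse(ReadToken());
            float scale = float.Parse(ReadToken(), System.Globalization.CultureInfo.InvariantCulture);
            if (scale > 0) throw new NotSupportedException("big-endian PFM not handled in this sketch");

            var pixels = new float[height, width, 3];
            for (int row = height - 1; row >= 0; row--)           // bottom-up raster
                for (int x = 0; x < width; x++)
                    for (int c = 0; c < 3; c++)
                        pixels[row, x, c] = reader.ReadSingle();  // BinaryReader reads little-endian
            return pixels;
        }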

    Read the article

  • Programming for Multi core Processors

    - by Chathuranga Chandrasekara
    As far as I know, the multi-core architecture of a processor does not affect the program; the actual instruction execution is handled in a lower layer. My question is: given that you have a multicore environment, can I use any programming practices to utilize the available resources more effectively? How should I change my code to gain more performance in multicore environments?
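
    That is exactly the catch: a single-threaded program never uses more than one core at a time, no matter how many are present, so the work has to be split explicitly. The usual practices are to find data-parallel loops or independent work items, keep shared mutable state (and locking) off the hot path, and let a library distribute the iterations. A hedged C# sketch of the most common pattern, a data-parallel loop with per-thread partial results (ExpensiveFunction is a stand-in for the real work):

        using System.Threading.Tasks;

        // Hedged sketch: sum an expensive function over a large array using all available cores.
        // Each worker accumulates into its own subtotal, so there is no lock in the inner loop.
        static double ParallelSum(double[] data)
        {
            object gate = new object();
            double total = 0;

            Parallel.For(0, data.Length,
                () => 0.0,                                                  // per-thread initial subtotal
                (i, state, subtotal) => subtotal + ExpensiveFunction(data[i]),
                subtotal => { lock (gate) { total += subtotal; } });        // merge once per worker

            return total;
        }

        static double ExpensiveFunction(double x) => x * x;

    Whether this helps depends on the workload: the speedup only appears when the per-item work dominates the cost of scheduling and memory traffic.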

    Read the article

  • Acordex Image viewer throws out of memory exception in CITRIX environment

    - by neha
    We have a .NET 2.0 application. In the .aspx page we call a Java applet, and this applet launches the Acordex image viewer. In the production environment, users are facing "out of memory" or "insufficient memory" errors when they try to open an image or magnify it in the Acordex viewer. Strangely, when the users log out and log in again they are able to see the same image without any errors. The website is hosted in a Citrix environment. We have access to this environment, but we are not able to reproduce the issue on the test servers or local machines, and we don't know what is causing it. What should we do to troubleshoot the issue? Do we have to increase the memory allotted to the users in Citrix? The server has around 4 GB of RAM, there are 10-13 simultaneous users, and images are at most 2 MB. Following is the code used to call the Acordex image viewer: (code not included in this excerpt)

    Read the article

  • fft understanding

    - by maximus
    Can somebody give a good explanation of the FFT image transform? How can the FFT-transformed image and its Re^2 + Im^2 (magnitude) image be analyzed? I just want to understand what I am looking at when I view an image and its frequency-domain representation.

    Read the article

  • Crystal Reports Programmatic Image Resizing... Scale?

    - by C. Griffin
    I'm working with a Crystal Reports object in Visual Studio 2008 (C#). The report is building fine and the data is binding correctly. However, when I try to resize an IBlobFieldObject from within the source, the scale gets skewed. Two notes about this scenario: the source image is 1024x768, and my maximum width and height are 720x576, so my math should give a new image size of 720x540 (to fit within the maximum width and height guidelines). The ratio is wrong when I do this, though:

        img = Image.FromFile(path);
        newWidth = img.Size.Width;
        newHeight = img.Size.Height;

        if ((img.Size.Width > 720) || (img.Size.Height > 576))
        {
            double ratio = Convert.ToDouble(img.Size.Width) / Convert.ToDouble(img.Size.Height);

            if (ratio > 1.25) // Adjust width to 720, height will fall within range
            {
                newWidth = 720;
                newHeight = Convert.ToInt32(Convert.ToDouble(img.Size.Height) * 720.0 / Convert.ToDouble(img.Size.Width));
            }
            else // Adjust height to 576, width will fall within range
            {
                newHeight = 576;
                newWidth = Convert.ToInt32(Convert.ToDouble(img.Size.Width) * 576.0 / Convert.ToDouble(img.Size.Height));
            }

            imgRpt.Section3.ReportObjects["image"].Height = newHeight;
            imgRpt.Section3.ReportObjects["image"].Width = newWidth;
        }

    I've stepped through the code to make sure that the values coming out of the math are correct, and I've even saved the image file out to make sure that the aspect ratio is correct (it was). No matter what I try, though, the image is squashed, almost as if the Scale values were off in the Crystal Reports designer (they're not). Thanks in advance for any help!

    Read the article
