Search Results

Search found 7955 results on 319 pages for 'signal processing'.

Page 58 of 319

  • Problem with xor operation

    - by gavishna
    Kindly tell me why the original image does not come back with this code. The resulting image I receive is yellowish in colour instead of being similar to the image Img_new.
        Img = imread('lena_color.tif');
        Img_new = rgb2gray(Img);
        Send = zeros(size(Img_new));
        Receive = zeros(size(Img_new));
        Mask = rand(size(Img_new));
        for i = 1:256
            for j = 1:256
                Send(i,j) = xor(Img_new(i,j), Mask(i,j));
            end
        end
        image(Send); imshow(Send);
        for i = 1:256
            for j = 1:256
                Receive(i,j) = xor(Send(i,j), Mask(i,j));
            end
        end
        image(Receive); imshow(Receive);
    Please help.
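
    For reference, a minimal sketch of the same XOR-mask round trip in Python/NumPy (illustrative only, not from the post): the mask has to be integer-valued (e.g. uint8) for the XOR to be exactly reversible.
        import numpy as np

        img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in for the grayscale image
        mask = np.random.randint(0, 256, img.shape, dtype=np.uint8)   # integer mask rather than rand() floats

        sent = img ^ mask        # element-wise XOR hides the image
        received = sent ^ mask   # XOR with the same mask restores it exactly

        assert np.array_equal(received, img)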

    Read the article

  • Creating new image in a loop using OpenCV

    - by user565415
    I am programming some image conversion code with OpenCV and I don't know how I can create an image memory buffer to load an image into on every iteration. I have the number of iterations (maxImNumber) and I have an input image. In every loop the program must create an image that is a resized and modified version of the input image. Here is some basic code (concept):
        for (int imageIndex = 0; imageIndex < maxImNumber; imageIndex++) {
            cvCopy(inputImage, images[imageIndex], 0);
            cvReleaseImage(&inputImage);
            images[imageIndex+1] = cvCreateImage(cvSize((images[imageIndex]->width)/2, images[imageIndex]->height), IPL_DEPTH_8U, 1);
            for (i = 1; i < images[imageIndex]->height; i++) {
                index = 0;
                for (j = 0; j < images[imageIndex]->width; j = j + 2) {
                    // doing some basic mathematical operation on the image content and storing it in the new image
                    images[imageIndex+1][i][index] = (images[imageIndex][i][j] + images[imageIndex][i][j+2]) / 2;
                    index++;
                }
            }
            inputImage = cvCreateImage(cvSize(images[imageIndex+1]->width, images[imageIndex]->height), IPL_DEPTH_8U, 1);
            cvCopy(images[imageIndex+1], inputImage, 0);
        }
    Can somebody please explain how I can create this image buffer (images[]) and allocate memory for it? Also, how can I access any image in this buffer? Thank you very much in advance!
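
    A sketch of the same idea with the modern Python OpenCV bindings, for reference (illustrative only, not from the post): a plain list serves as the image buffer, and each iteration appends a resized copy of the previous image.
        import cv2
        import numpy as np

        max_im_number = 5
        input_image = np.zeros((480, 640), dtype=np.uint8)   # stand-in for the loaded grayscale image

        images = [input_image]                 # the "buffer" is just a list of arrays
        for i in range(max_im_number):
            prev = images[-1]
            h, w = prev.shape
            # halve the width; cv2.resize does the averaging and bookkeeping
            half = cv2.resize(prev, (max(w // 2, 1), h), interpolation=cv2.INTER_AREA)
            images.append(half)

        print([im.shape for im in images])     # any image is reachable as images[k]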

    Read the article

  • Action<T> synchronous and asynchronous

    - by raffaeu
    Hi everybody. I have a ContextMenuStrip control that allows you to execute an action in two different flavours: sync and async. I am trying to convert everything to use generics, so I did this:
        public class BaseContextMenu<T> : IContextMenu {
            private T executor;
            ...
            public void Exec(Action<T> action) {
                action.Invoke(this.executor);
            }
            public void ExecAsync(Action<T> asyncAction) {
                ...
            }
        }
    How can I write the async method so that it executes the generic action and 'does something' with the menu in the meanwhile? I saw that the signature of BeginInvoke is something like:
        asyncAction.BeginInvoke(this.executor, IAsyncCallback, object);

    Read the article

  • Image rotate OpenCV error

    - by avd
    When I use this code to rotate the image, the destination image size remains the same and hence the image gets clipped. Please suggest a way/code snippet to resize the destination accordingly (like MATLAB does in imrotate) so that the image does not get clipped and the outlying pixels get filled with white instead of black.
        void imrotate(std::string imgPath, std::string angleStr, std::string outPath)
        {
            size_t found1, found2;
            found1 = imgPath.find_last_of('/');
            found2 = imgPath.size() - 4;
            IplImage* src = cvLoadImage(imgPath.c_str(), -1);
            IplImage* dst = cvCloneImage(src);
            int angle = atoi(angleStr.c_str());
            CvMat* rot_mat = cvCreateMat(2, 3, CV_32FC1);
            CvPoint2D32f center = cvPoint2D32f(src->width/2, src->height/2);
            double scale = 1;
            cv2DRotationMatrix(center, angle, scale, rot_mat);
            cvWarpAffine(src, dst, rot_mat);
            char angStr[4];
            sprintf(angStr, "%d", angle);
            cvSaveImage(string(outPath + imgPath.substr(found1+1, found2-found1-1) + "_" + angStr + ".jpg").c_str(), dst);
            cvReleaseImage(&src);
            cvReleaseImage(&dst);
            cvReleaseMat(&rot_mat);
        }
    Original image and rotated image: (attached screenshots)
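
    For reference, a sketch of the usual fix with the Python OpenCV bindings (illustrative only, not from the post): enlarge the output canvas to the rotated bounding box, shift the rotation matrix accordingly, and fill the border with white.
        import cv2
        import numpy as np

        def imrotate_no_clip(img, angle_deg):
            h, w = img.shape[:2]
            center = (w / 2.0, h / 2.0)
            M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)

            # bounding box of the rotated image
            cos, sin = abs(M[0, 0]), abs(M[0, 1])
            new_w = int(h * sin + w * cos)
            new_h = int(h * cos + w * sin)

            # shift the transform so the image centre maps to the new centre
            M[0, 2] += new_w / 2.0 - center[0]
            M[1, 2] += new_h / 2.0 - center[1]

            return cv2.warpAffine(img, M, (new_w, new_h),
                                  borderMode=cv2.BORDER_CONSTANT,
                                  borderValue=(255, 255, 255))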

    Read the article

  • Opencv video frame not showing Sobel output

    - by user1016950
    This is a continuation of the question "Opencv video frame giving me an error"; I think I closed that one off, as I'm new to Stack Overflow. I have the code below, and I'm trying to view its Sobel edge image. The program runs, but the output is just a grey screen, and if I mouse over it the cursor disappears. Does anyone see the error, or is it a misunderstanding of the data structures I'm using?
        IplImage *frame, *frame_copy = 0;
        // capture frames from video
        CvCapture *capture = cvCaptureFromFile("lightinbox1.avi");
        // allows access to video properties
        cvQueryFrame(capture);
        // get the number of frames
        int nframe = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FRAME_COUNT);
        // name window
        cvNamedWindow("video:", 1);
        // start loop
        for (int i = 0; i < nframe; i++) {
            // prepare capture frame extraction
            cvGrabFrame(capture);
            cout << "We are on frame " << i << "\n";
            // get this frame
            frame = cvRetrieveFrame(capture);
            con2txt(frame);
            frame_copy = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, frame->nChannels);
            // show and destroy frame
            cvCvtColor(frame, frame, CV_RGB2GRAY);
            // create Sobel output
            frame_copy1 = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_16S, 1);
            cvSobel(frame_copy, frame_copy1, 2, 2, 3);
            cvShowImage("video:", frame_copy1);
            cvWaitKey(33);
        }
        cvReleaseCapture(&capture);
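
    A sketch of the same pipeline with the Python OpenCV bindings, for reference (illustrative only, not from the post): the key point is that Sobel output is signed 16-bit and has to be converted back to 8-bit before it displays sensibly.
        import cv2

        cap = cv2.VideoCapture("lightinbox1.avi")   # file name taken from the post
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Sobel in x and y, kept as signed 16-bit so negative gradients are not truncated
            gx = cv2.Sobel(gray, cv2.CV_16S, 1, 0, ksize=3)
            gy = cv2.Sobel(gray, cv2.CV_16S, 0, 1, ksize=3)
            edges = cv2.addWeighted(cv2.convertScaleAbs(gx), 0.5,
                                    cv2.convertScaleAbs(gy), 0.5, 0)
            cv2.imshow("video:", edges)             # 8-bit image displays correctly
            if cv2.waitKey(33) & 0xFF == 27:        # Esc to quit early
                break
        cap.release()
        cv2.destroyAllWindows()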

    Read the article

  • copying the contents of an image file

    - by Ganesh
    I am designing an image decoder, and as a first step I tried to just copy the file using C, i.e. open the file and write its contents to a new file. Below is the code that I used:
        while ((c = getc(fp)) != EOF)
            fprintf(fp1, "%c", c);
    where fp is the source file and fp1 is the destination file. The program executes without any error, but the image file (".bmp") is not copied properly. I have observed that the size of the copied file is smaller and only 20% of the image is visible; the rest is black. When I tried it with simple text files, the copy was complete. Do you know what the problem is?
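
    As a point of comparison, a binary-safe copy in Python (illustrative only, not from the post): opening both files in binary mode ("rb"/"wb") avoids the text-mode newline/EOF translation that can clip binary data on some platforms.
        import shutil

        # copy an arbitrary binary file byte for byte
        with open("input.bmp", "rb") as src, open("copy.bmp", "wb") as dst:
            shutil.copyfileobj(src, dst)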

    Read the article

  • output image is not displayed

    - by gerry chocolatos
    I am doing image detection where I have to process each of the red, green and blue elements to get an edge map, and then combine them into one image to show as the output, but the output image is not displayed. Would anyone please be kind enough to help me? Here is my code so far.
        // get the red element
        process_red = new int[width * height];
        counter = 0;
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int clr = buff_red.getRGB(j, i);
                int red = (clr & 0x00ff0000) >> 16;
                red = (0xFF << 24) | (red << 16) | (red << 8) | red;
                process_red[counter] = red;
                counter++;
            }
        }
        // set threshold value for red element
        int threshold = 100;
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                int bin = (buff_red.getRGB(x, y) & 0x000000ff);
                if (bin < threshold) bin = 0;
                else bin = 255;
                buff_red.setRGB(x, y, 0xff000000 | bin << 16 | bin << 8 | bin);
            }
        }
    I do the same for the green and blue elements, and then I want to combine the three of them this way:
        // combine the three elements
        process_combine = new int[width * height];
        counter = 0;
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int clr_a = buff_red.getRGB(j, i);
                int ar = clr_a & 0x000000ff;
                int clr_b = buff_green.getRGB(j, i);
                int bg = clr_b & 0x000000ff;
                int clr_c = buff_blue.getRGB(j, i);
                int cb = clr_b & 0x000000ff;
                int alpha = 0xff000000;
                int combine = alpha | (ar << 16) | (bg << 8) | cb;
                process_combine[counter] = combine;
                counter++;
            }
        }
        buff_rgb = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
        Graphics rgb;
        rgb = buff_rgb.getGraphics();
        rgb.drawImage(output_rgb, 0, 0, null);
        rgb.dispose();
        repaint();
    To show the output from the combining step I use a draw method:
        g.drawImage(buff_rgb, 800, 100, this);
    but it still doesn't show the image. Can anyone please help me? Your help is really appreciated. Thanks.
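
    A compact sketch of the per-channel threshold-and-recombine idea in Python/NumPy, for reference (illustrative only, not from the post):
        import numpy as np

        rgb = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)   # stand-in for the input image
        threshold = 100

        # threshold each channel independently into a 0/255 edge map
        maps = [(rgb[:, :, c] >= threshold).astype(np.uint8) * 255 for c in range(3)]

        # stack the three maps back into one RGB output image
        combined = np.dstack(maps)
        print(combined.shape, combined.dtype)   # (256, 256, 3) uint8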

    Read the article

  • Parallel doseq for Clojure

    - by andrew cooke
    I haven't used multithreading in Clojure at all, so I am unsure where to start. I have a doseq whose body can run in parallel. What I'd like is for there always to be 3 threads running (leaving 1 core free) that evaluate the body in parallel until the range is exhausted. There's no shared state, nothing complicated; the equivalent of Python's multiprocessing would be just fine. So something like:
        (dopar 3 [i (range 100)]   ; repeated 100 times in 3 parallel threads...
          ...)
    Where should I start looking? Is there a command for this? A standard package? A good reference? So far I have found pmap and could use that, but how do I restrict it to 3 at a time? It looks like it uses 32 at a time (no, the source says 2 + the number of processors). It seems like this is a basic primitive that should already exist somewhere. Clarification: I really would like to control the number of threads. I have processes that are long-running and use a fair amount of memory, so creating a large number of threads and hoping things work out OK isn't a good approach (for example, when each uses a significant chunk of available memory). Update: I am starting to write a macro that does this, and I need a semaphore (or a mutex, or an atom I can wait on). Do semaphores exist in Clojure? Or should I use a ThreadPoolExecutor? It seems odd to have to pull so much in from Java; I thought parallel programming in Clojure was supposed to be easy... Maybe I am thinking about this completely the wrong way? Hmmm. Agents?
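
    Since the post mentions that the equivalent of Python's multiprocessing would be fine, here is a sketch of that equivalent for reference (illustrative only, not from the post): a pool capped at three workers evaluates the body over the range.
        from concurrent.futures import ThreadPoolExecutor

        def body(i):
            # the long-running work for one value of i goes here
            return i * i

        # at most 3 threads evaluate body in parallel until the range is exhausted
        with ThreadPoolExecutor(max_workers=3) as pool:
            results = list(pool.map(body, range(100)))

        print(results[:5])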

    Read the article

  • How do I save an altered image in matlab?

    - by ef-i-blinky
    So I am using the code located here: http://wwwx.cs.unc.edu/~sjguy/CompVis/Features/BlobDetect.m and I was wondering how to save the final blob-detected image. The image that I am doing the blob detection on gets shown, and then the script manually draws the lines on the image here:
        Xbar = cx1 + X.*cos(alpha) + Y.*sin(alpha);
        Ybar = cy1 + Y.*cos(alpha) - X.*sin(alpha);
        line(Xbar', Ybar', 'Color', color, 'LineWidth', ln_wid);
    I then want to save this image using something like imwrite. I have been reading around and it seems that no one really has an answer to this problem. Thanks for any help you can give me, Josh

    Read the article

  • How to convert a BufferedImage to 8 bit?

    - by Zach Sugano
    I was looking at the ImageConverter class, trying to figure out how to convert a BufferedImage to 8-bit colour, but I have no idea how I would do this. I was also searching around the internet and could find no simple answer; they were all talking about 8-bit grayscale images. I simply want to convert the colours of an image to 8 bit, nothing else: no resizing, no nothing. Does anyone mind telling me how to do this?
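
    For comparison, a sketch of the same conversion in Python with Pillow (illustrative only, not from the post; the file names are made up): quantizing to a 256-colour palette gives an 8-bit colour image without touching the dimensions.
        from PIL import Image

        img = Image.open("input.png").convert("RGB")
        # reduce to an adaptive 256-colour palette: 8 bits per pixel, same width and height
        img8 = img.quantize(colors=256)
        img8.save("output_8bit.png")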

    Read the article

  • Image classification: recognizing various features of many buildings from images

    - by el chief
    So, let's say I have the long/lat or address of many buildings and can get satellite images, "street view", and perhaps 3D/perspective views of the buildings. I want to find: height, number of floors, and floor area (maximum building footprint) of each building, for about 200k buildings. Is there a library for recognizing buildings from satellite shots or pictures? Kind of like face detection, I suppose. Any other suggestions? Thanks!

    Read the article

  • Looking for a managed image parser library (JPEG, BMP, PNG, GIF)

    - by usr
    I am writing discussion board software that will have "avatar" images for the users. I want to resize any picture that gets uploaded to a reasonable size. I could easily do that with System.Drawing, but that relies on GDI+, which has had security problems before. The problem is that the images are untrusted. So I thought of using a fully managed library to solve that problem, because managed code cannot escape the sandbox (of course it can, but only if the code is user-supplied, which it is not in my case). So does anybody know of a managed image parser library for JPEG, BMP, PNG and GIF? If some format is missing then I will have to live with that. Edit: Paint.NET also relies on GDI+. You might be interested in the discussion below, too.

    Read the article

  • Is it possible to add layers to .tiff images with .NET?

    - by Voyta
    Are there any .NET libraries etc. which allow adding layers (with text) to .tiff images? Something like annotations, so that it would be possible to separate them from the image afterwards. I tried DotImage - it allows me to add annotations, save them as embedded in the image and load them afterwards, but no other image viewer seems to recognize that they are there.

    Read the article

  • How to convert a one column integer data file into a mask for image

    - by gavishna
    I have a data file which contains integers, say in the range 0-255: about 1000 integers which are random in nature. I want to use that as a mask, or to multiply it with an image which is in RGB and another image which is in gray format. How do I go about this? How do I convert/represent this data file in a matrix format matching the image dimensions? Kindly suggest. Also, is it possible to obtain a 3D histogram?
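
    A sketch of one way to build such a mask in Python/NumPy, for reference (illustrative only, not from the post; the file name and sizes are made up): repeat the 1-D values until they cover the image, reshape to the image dimensions, and multiply element-wise.
        import numpy as np

        values = np.loadtxt("mask_values.txt", dtype=np.int64)          # ~1000 integers in 0-255
        gray = np.random.randint(0, 256, (256, 256), dtype=np.uint8)    # stand-in for the gray image

        # repeat the values until they fill the image, then reshape to the image dimensions
        mask = np.resize(values, gray.shape).astype(np.float64) / 255.0

        masked = (gray * mask).astype(np.uint8)   # element-wise multiply
        print(masked.shape)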

    Read the article

  • How can I compress jpeg images in Java without losing any metadata in that image?

    - by guitarpoet
    I want to compress JPEG files using Java. I do it like this: read the image as a BufferedImage, then write the image to another file with a compression rate. OK, that seems easy, but I find that the ICC colour profile and the EXIF information are gone in the new file, and the DPI of the image has dropped from 240 to 72. It looks different from the original image. A tool like Preview on OS X can perfectly well change the quality of the image without affecting the other information. Can I do this in Java? At least keep the ICC colour profile and let the image colours look the same as the original photo?
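
    For comparison, a sketch of how the same re-compression is commonly done in Python with Pillow while carrying the EXIF block and ICC profile over (illustrative only, not from the post; file names are made up):
        from PIL import Image

        img = Image.open("original.jpg")
        save_kwargs = {"quality": 75, "dpi": img.info.get("dpi", (240, 240))}
        if "exif" in img.info:                          # raw EXIF bytes, if present
            save_kwargs["exif"] = img.info["exif"]
        if "icc_profile" in img.info:                   # embedded ICC profile, if present
            save_kwargs["icc_profile"] = img.info["icc_profile"]

        # re-encode at the new quality without dropping the metadata
        img.save("compressed.jpg", **save_kwargs)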

    Read the article

  • How to convert Bitmap to byte[,,] faster?

    - by Miko Kronn
    I wrote this function:
        public static byte[,,] Bitmap2Byte(Bitmap image) {
            int h = image.Height;
            int w = image.Width;
            byte[,,] result = new byte[w, h, 3];
            for (int i = 0; i < w; i++) {
                for (int j = 0; j < h; j++) {
                    Color c = image.GetPixel(i, j);
                    result[i, j, 0] = c.R;
                    result[i, j, 1] = c.G;
                    result[i, j, 2] = c.B;
                }
            }
            return result;
        }
    But it takes almost 6 seconds to convert an 1800x1800 image. Can I do this faster?
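
    The general fix is to read the pixel buffer in bulk rather than one GetPixel call per pixel. As a rough illustration of the bulk approach in Python with Pillow and NumPy (illustrative only, not from the post; the file name is made up):
        import numpy as np
        from PIL import Image

        img = Image.open("input.png").convert("RGB")

        # one bulk copy of the whole pixel buffer instead of width*height individual reads
        arr = np.asarray(img)                      # shape (height, width, 3), dtype uint8

        # reorder to (width, height, 3) to mirror the byte[w, h, 3] layout in the post
        result = np.transpose(arr, (1, 0, 2)).copy()
        print(result.shape)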

    Read the article

  • .NET 4 ... Parallel.ForEach() question

    - by CirrusFlyer
    I understand that the new TPL (Task Parallel Library) implements Parallel.ForEach() so that it works with "expressed parallelism". Meaning, it does not guarantee that your delegates will run in multiple threads, but rather it checks whether the host platform has multiple cores, and if true, only then does it distribute the work across the cores (essentially 1 thread per core). If the host system does not have multiple cores (it is getting harder and harder to find such a computer), then it will run your code sequentially, like a "regular" foreach loop would. Pretty cool stuff, frankly. Normally I would do something like the following to place my long-running operation on a background thread from the ThreadPool:
        ThreadPool.QueueUserWorkItem(new WaitCallback(targetMethod), new Object2PassIn());
    In a situation where the host computer only has a single core, does the TPL's Parallel.ForEach() automatically place the invocation on a background thread? Or should I manually invoke any TPL calls from a background thread, so that if I am executing on a single-core computer at least that logic will be off the GUI's dispatching thread? My concern is that if I leave the TPL in charge of all this, I want to ensure that if it determines it's a single-core box, it still marshals the code inside the Parallel.ForEach() loop onto a background thread like I would have done, so as not to block my GUI. Thanks for any thoughts or advice you may have ...

    Read the article

  • Implementing "drawing modes" in a graphics library?

    - by banister
    I would like to implement 'drawing modes' (in my own graphics library), that is, drawing with AND, OR, etc. However, I am storing colours using floats, with each channel between 0.0 and 1.0. Do I have to first convert each colour channel to 0-255 before I can use the AND, OR, etc. drawing modes, and then convert back to float (0.0-1.0)? Or is there another way of doing it? Thanks.
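
    A minimal sketch of the scale-to-integer round trip in Python/NumPy, for reference (illustrative only, not from the post): bitwise modes only make sense on integer channels, so the float channels are quantized, combined, and converted back.
        import numpy as np

        src = np.array([0.25, 0.5, 1.0])   # one RGB colour, channels in 0.0-1.0
        dst = np.array([0.5, 0.5, 0.0])

        def blend_bitwise(a, b, op):
            ai = (np.clip(a, 0.0, 1.0) * 255).astype(np.uint8)   # quantize to 0-255
            bi = (np.clip(b, 0.0, 1.0) * 255).astype(np.uint8)
            return op(ai, bi).astype(np.float64) / 255.0         # back to 0.0-1.0

        print(blend_bitwise(src, dst, np.bitwise_and))
        print(blend_bitwise(src, dst, np.bitwise_or))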

    Read the article

  • divide the image into 3*3 blocks

    - by Jayanth Silesh
    I have a matrix whose dimensions may or may not be multiples of 3. How can I divide the entire image into blocks of 3*3 matrices? (We can ignore the last ones which do not fit into the 3*3 multiples.) Also, the 3*3 matrices should be saved in arrays.
        a = 3; b = 3;        % window size
        x = size(f,1)/a;
        y = size(f,2)/b;     % f is the original image
        m = a*ones(1,x);
        n = b*ones(1,y);
        I = mat2cell(f,m,n);
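
    The same crop-and-reshape idea in Python/NumPy, for reference (illustrative only, not from the post):
        import numpy as np

        f = np.arange(10 * 11).reshape(10, 11)   # stand-in image; not a multiple of 3 in either axis
        a, b = 3, 3                              # block size

        # crop the trailing rows/columns that do not fill a complete 3x3 block
        h, w = (f.shape[0] // a) * a, (f.shape[1] // b) * b
        cropped = f[:h, :w]

        # reshape into an array of 3x3 blocks: shape (h//3, w//3, 3, 3)
        blocks = cropped.reshape(h // a, a, w // b, b).swapaxes(1, 2)
        print(blocks.shape)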

    Read the article

  • Separating text and graphics in an image

    - by avd
    I don't know whether I should post this question here or not, but if someone knows the answer, please reply. What are the algorithms for determining which regions in an image are text and which are graphics (figures or diagrams)? That is, how can such regions be separated?

    Read the article

  • About curse of dimensionality

    - by Dan
    My question is about this topic I've been reading about a bit. Basically, my understanding is that in higher dimensions all points end up being very close to each other. The doubt I have is whether this means that calculating distances the usual way (Euclidean, for instance) is valid or not. If it were still valid, this would mean that when comparing vectors in high dimensions, the two most similar wouldn't differ much from a third one, even when this third one could be completely unrelated. Is this correct? Then, in this case, how would you be able to tell whether you have a match or not?
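
    A tiny numeric illustration of this distance-concentration effect (not from the post): as the dimension grows, the gap between the nearest and farthest Euclidean distance from a query point shrinks relative to the distances themselves.
        import numpy as np

        rng = np.random.default_rng(0)
        for d in (2, 10, 100, 1000):
            points = rng.random((1000, d))   # 1000 random points in the unit cube
            query = rng.random(d)
            dist = np.linalg.norm(points - query, axis=1)
            # relative contrast (max - min) / min shrinks as the dimension grows
            print(f"d={d:5d}  relative contrast={(dist.max() - dist.min()) / dist.min():.3f}")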

    Read the article
