Search Results

Search found 40595 results on 1624 pages for 'string processing'.


  • tfidf, am I understanding it right?

    - by alskndalsnd
    Hey everyone, I am interested in doing some document clustering, and right now I am considering using TF-IDF for this. If I'm not mistaken, TF-IDF is mainly used for evaluating the relevance of a document given a query. If I do not have a particular query, how can I apply TF-IDF to clustering?
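
    Note: TF-IDF itself does not require a query; each document simply becomes a vector of term weights, and those vectors can be fed to any clustering algorithm. A minimal sketch using scikit-learn (the library choice, the toy documents and the cluster count are illustrative assumptions, not part of the question):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.cluster import KMeans

        docs = ["the cat sat on the mat",
                "dogs chase cats",
                "stock markets fell sharply",
                "investors sold their shares"]

        # Turn each document into an L2-normalized TF-IDF term-weight vector.
        X = TfidfVectorizer().fit_transform(docs)

        # Cluster the document vectors; no query is involved anywhere.
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
        print(labels)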

    Read the article

  • Gradient Mapping in .NET

    - by Otaku
    Is there a way in .NET to perform the same technique Photoshop uses for Gradient Mapping (Image - Adjustments - Gradient Map [Gradient Editor])? Any ideas, links, code, etc. would be welcome.
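
    Not a .NET answer, but as a rough sketch of the underlying technique: gradient mapping converts the image to luminance and pushes each luminance value through a lookup table built from the gradient's colour stops. An illustration in Python with NumPy/Pillow (the libraries, the two-stop gradient and the file names are assumptions for illustration only):

        import numpy as np
        from PIL import Image

        def gradient_map(img, start=(0, 0, 128), end=(255, 220, 0)):
            """Map image luminance through a two-colour linear gradient."""
            gray = np.asarray(img.convert("L"))                # luminance, 0..255
            start = np.array(start, dtype=np.float32)
            end = np.array(end, dtype=np.float32)
            # 256-entry lookup table interpolated between the gradient stops.
            lut = start + (end - start) * np.linspace(0.0, 1.0, 256)[:, None]
            return Image.fromarray(lut[gray].astype(np.uint8))

        gradient_map(Image.open("photo.jpg")).save("mapped.png")

    A multi-stop gradient works the same way; the lookup table is just built piecewise between consecutive stops.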

    Read the article

  • RabbitMQ serializing messages from queue with multiple consumers

    - by Refefer
    Hi there, I'm having a problem where I have a queue set up in shared mode and multiple consumers bound to it. The issue is that RabbitMQ appears to be serializing the messages, that is, only one consumer at a time is able to run. I need this to be parallel, but I can't seem to figure out how. Each consumer is running in its own process, and there are plenty of messages in the queue. I'm using py-amqplib to interface with RabbitMQ. Any thoughts?
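
    One thing commonly worth checking in this situation is the per-consumer prefetch (QoS) setting: with an unbounded prefetch, a single consumer can be handed a large batch of messages at once. A hedged sketch, assuming py-amqplib's basic_qos(prefetch_size, prefetch_count, a_global) call and acknowledgements being used; adapt to the client version actually in play:

        from amqplib import client_0_8 as amqp

        conn = amqp.Connection(host="localhost", userid="guest", password="guest")
        chan = conn.channel()

        # Hand this consumer at most one unacknowledged message at a time,
        # so the other consumers on the shared queue can receive work too.
        chan.basic_qos(prefetch_size=0, prefetch_count=1, a_global=False)

        def handle(msg):
            # ... do the actual work here ...
            chan.basic_ack(msg.delivery_tag)

        chan.basic_consume(queue="work-queue", callback=handle)  # queue name is a placeholder
        while True:
            chan.wait()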

    Read the article

  • C# .NET : Is using the .NET Image Conversion enough?

    - by contactmatt
    I've seen a lot of people try to code their own image conversion techniques. It often seems very complicated, ending up with GDI+ function calls and manipulation of the image bits. This has me wondering whether I'm missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have:

        Bitmap tempBmp = new Bitmap(@"c:\temp\img.jpg");
        Bitmap bmp = new Bitmap(tempBmp, 800, 600);
        bmp.Save(@"c:\temp\img.bmp",   // extension depends on the format
                 ImageFormat.Bmp);     // or Gif, Jpeg, Png, Tiff, Wmf --
                                       // these are all the ImageFormats I allow
                                       // conversion to within the program

    This is all I'm doing in my image conversion: just setting the location where the image should be saved and passing in an ImageFormat value. I've tested it as best I can, but I'm wondering whether I'm missing anything in this simple format conversion, or whether this suffices.

    Read the article

  • How to sort my paws?

    - by Ivo Flipse
    In my previous question I got an excellent answer that helped me detect where a paw hit a pressure plate, but now I'm struggling to link these results to their corresponding paws. I manually annotated the paws (RF = right front, RH = right hind, LF = left front, LH = left hind). As you can see there's clearly a repeating pattern, and it comes back in almost every measurement. Here's a link to a presentation of 6 trials that were manually annotated. My initial thought was to use heuristics to do the sorting, like:

    - There's a ~60-40% ratio in weight bearing between the front and hind paws;
    - The hind paws are generally smaller in surface;
    - The paws are (often) spatially divided into left and right.

    However, I'm a bit skeptical about my heuristics, as they would fail on me as soon as I encounter a variation I hadn't thought of. They also won't be able to cope with measurements from lame dogs, which probably have rules of their own. Furthermore, the annotation suggested by Joe sometimes gets messed up and doesn't take into account what the paw actually looks like. Based on the answers I received to my question about peak detection within the paw, I'm hoping there are more advanced solutions for sorting the paws, especially because the pressure distribution and its progression differ for each paw, almost like a fingerprint. I hope there's a method that can use this to cluster my paws, rather than just sorting them in order of occurrence. So I'm looking for a better way to sort the results by their corresponding paw. For anyone up to the challenge, I have pickled a dictionary with all the sliced arrays that contain the pressure data of each paw (bundled by measurement) and the slice that describes their location (on the plate and in time). To clarify: walk_sliced_data is a dictionary that contains ['ser_3', 'ser_2', 'sel_1', 'sel_2', 'ser_1', 'sel_3'], which are the names of the measurements. Each measurement contains another dictionary, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10] (example from 'sel_1'), which represents the impacts that were extracted. Also note that 'false' impacts, such as where the paw is only partially measured (in space or time), can be ignored; they are only useful because they can help in recognizing a pattern, but they won't be analyzed. And for anyone interested, I'm keeping a blog with all the updates regarding the project!
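
    As a starting point for the heuristic route, the left/right and front/hind splits described above can be expressed directly as per-impact features. A rough sketch (the tuple layout, coordinate convention and median-based thresholds are assumptions for illustration, not taken from the pickled data):

        import numpy as np

        def label_paws(impacts):
            """impacts: list of (pressure, (x, y)) per impact, in time order;
            pressure is a rows x cols x frames array, (x, y) the plate position."""
            xs = np.array([pos[0] for _, pos in impacts])
            loads = np.array([p.sum() for p, _ in impacts])

            # Left vs right: split on the median lateral coordinate.
            side = np.where(xs > np.median(xs), "L", "R")
            # Front vs hind: front paws tend to carry more load (~60-40 split).
            part = np.where(loads > np.median(loads), "F", "H")
            return [s + p for s, p in zip(side, part)]

    This obviously inherits every weakness of the heuristics themselves; clustering the pressure-distribution "fingerprints" directly would replace the two median splits with a learned grouping.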

    Read the article

  • rotating bitmaps. In code.

    - by Marco van de Voort
    Is there a faster way to rotate a large bitmap by 90 or 270 degrees than simply doing a nested loop with inverted coordinates? The bitmaps are 8bpp and typically 2048x2400 pixels. Currently I do this by simply copying with inverted arguments, roughly (pseudocode):

        for x = 0 to 2048-1
          for y = 0 to 2048-1
            dest[x][y] = src[y][x];

    (In reality I do it with pointers, for a bit more speed, but that is roughly the same magnitude.) GDI is quite slow with large images, and GPU load/store times for textures (GF7 cards) are in the same magnitude as the current CPU time. Any tips or pointers? An in-place algorithm would be even better, but speed is more important than being in-place. The target is Delphi, but it is more an algorithmic question. SSE(2) vectorization is no problem; it is a big enough problem for me to code it in assembler. Duplicate: How do you rotate a two dimensional array?

    Follow-up to Nils' answer:

    - Image: 2048x2700 -> 2700x2048
    - Compiler: Turbo Explorer 2006 with optimization on
    - Windows: power scheme set to "Always on" (important!)
    - Machine: Core2 6600 (2.4 GHz)
    - Time with the old routine: 32 ms (step 1)
    - Time with step size 8: 12 ms
    - Time with step size 16: 10 ms
    - Time with step size 32+: 9 ms

    Meanwhile I also tested on an Athlon 64 X2 (5200+, iirc), and the speedup there was slightly more than a factor of four (80 to 19 ms). The speedup is well worth it, thanks. Maybe during the summer months I'll torture myself with an SSE(2) version. However, I have already thought about how to tackle that, and I think I'll run out of SSE2 registers for a straight implementation:

        for n := 0 to 7 do
        begin
          load r0, <source+n*rowsize>
          shift byte from r0 into r1
          shift byte from r0 into r2
          ..
          shift byte from r0 into r8
        end;
        store r1, <target>
        store r2, <target+1*rowsize>
        ..
        store r8, <target+7*rowsize>

    So 8x8 needs 9 registers, but 32-bit SSE only has 8. Anyway, that is something for the summer months :-) Note that the pointer thing is something I do out of instinct, but there could actually be something to it: if your dimensions are not hardcoded, the compiler can't turn the multiplication into a shift. While multiplications per se are cheap nowadays, they also generate more register pressure, afaik.

    The code (validated by subtracting the result from the "naive" rotate1 implementation):

        const
          stepsize = 32;

        procedure rotatealign(Source: tbw8image; Target: tbw8image);
        var
          stepsx, stepsy, restx, resty: Integer;
          RowPitchSource, RowPitchTarget: Integer;
          pSource, pTarget, ps1, ps2: pchar;
          x, y, i, j: integer;
          rpstep: integer;
        begin
          RowPitchSource := source.RowPitch;  // bytes to jump to next line. Can be negative (includes alignment)
          RowPitchTarget := target.RowPitch;
          rpstep := RowPitchTarget * stepsize;
          stepsx := source.ImageWidth div stepsize;
          stepsy := source.ImageHeight div stepsize;
          // check if mod 16 = 0 here for both dimensions, if so -> SSE2.
          for y := 0 to stepsy - 1 do
          begin
            psource := source.GetImagePointer(0, y * stepsize);  // gets pointer to pixel x,y
            ptarget := Target.GetImagePointer(target.imagewidth - (y + 1) * stepsize, 0);
            for x := 0 to stepsx - 1 do
            begin
              for i := 0 to stepsize - 1 do
              begin
                ps1 := @psource[rowpitchsource * i];  // (0, i)
                ps2 := @ptarget[stepsize - 1 - i];    // (maxx - i, 0)
                for j := 0 to stepsize - 1 do
                begin
                  ps2[0] := ps1[j];
                  inc(ps2, RowPitchTarget);
                end;
              end;
              inc(psource, stepsize);
              inc(ptarget, rpstep);
            end;
          end;
          // 3 more areas to do, with dimensions
          // - stepsy*stepsize * restx   // rightmost column of restx width
          // - stepsx*stepsize * resty   // bottom row with resty height
          // - restx*resty               // bottom-right rectangle
          restx := source.ImageWidth mod stepsize;   // typically zero because width is typically 1024 or 2048
          resty := source.Imageheight mod stepsize;
          if restx > 0 then
          begin
            // one loop less, since we know this fits in one line of "blocks"
            psource := source.GetImagePointer(source.ImageWidth - restx, 0);  // gets pointer to pixel x,y
            ptarget := Target.GetImagePointer(Target.imagewidth - stepsize, Target.imageheight - restx);
            for y := 0 to stepsy - 1 do
            begin
              for i := 0 to stepsize - 1 do
              begin
                ps1 := @psource[rowpitchsource * i];  // (0, i)
                ps2 := @ptarget[stepsize - 1 - i];    // (maxx - i, 0)
                for j := 0 to restx - 1 do
                begin
                  ps2[0] := ps1[j];
                  inc(ps2, RowPitchTarget);
                end;
              end;
              inc(psource, stepsize * RowPitchSource);
              dec(ptarget, stepsize);
            end;
          end;
          if resty > 0 then
          begin
            // one loop less, since we know this fits in one line of "blocks"
            psource := source.GetImagePointer(0, source.ImageHeight - resty);  // gets pointer to pixel x,y
            ptarget := Target.GetImagePointer(0, 0);
            for x := 0 to stepsx - 1 do
            begin
              for i := 0 to resty - 1 do
              begin
                ps1 := @psource[rowpitchsource * i];  // (0, i)
                ps2 := @ptarget[resty - 1 - i];       // (maxx - i, 0)
                for j := 0 to stepsize - 1 do
                begin
                  ps2[0] := ps1[j];
                  inc(ps2, RowPitchTarget);
                end;
              end;
              inc(psource, stepsize);
              inc(ptarget, rpstep);
            end;
          end;
          if (resty > 0) and (restx > 0) then
          begin
            // another loop less, since only one block
            psource := source.GetImagePointer(source.ImageWidth - restx, source.ImageHeight - resty);  // gets pointer to pixel x,y
            ptarget := Target.GetImagePointer(0, target.ImageHeight - restx);
            for i := 0 to resty - 1 do
            begin
              ps1 := @psource[rowpitchsource * i];  // (0, i)
              ps2 := @ptarget[resty - 1 - i];       // (maxx - i, 0)
              for j := 0 to restx - 1 do
              begin
                ps2[0] := ps1[j];
                inc(ps2, RowPitchTarget);
              end;
            end;
          end;
        end;

    Read the article

  • WPF WriteableBitmap

    - by Sam
    I'm using WriteableBitmap on an image of type Bgra32 to change the pixel value of certain pixels. I'm setting the value to 0x77CCCCCC. After calling WritePixels, the pixels I set to 0x77CCCCCC show up with a value of 0x77FFFFFF. Why does this happen? How do I make the pixels have the correct value?

    Read the article

  • mean image filter

    - by turmoil
    I'm starting to learn image filtering and I'm stumped on a question I found on a website: applying a 3×3 mean filter twice does not produce quite the same result as applying a 5×5 mean filter once. However, a 5×5 convolution kernel can be constructed which is equivalent. What does this kernel look like? I would appreciate help so that I can understand the subject better. Thanks.
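
    One way to see the equivalent kernel, rather than deriving it by hand, is to convolve the 3×3 mean kernel with itself, since applying two filters in sequence is (up to border handling) the same as applying their convolution once. A small check in Python, assuming NumPy and SciPy are available:

        import numpy as np
        from scipy.signal import convolve2d

        box3 = np.full((3, 3), 1.0 / 9.0)   # the 3x3 mean-filter kernel

        # Convolving the kernel with itself yields the single 5x5 kernel that
        # matches applying the 3x3 mean filter twice.
        box5 = convolve2d(box3, box3)       # 'full' mode -> 5x5 result
        print(np.round(box5 * 81))          # integer pattern, scaled by 81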

    Read the article

  • imagejpeg memory exhaustion

    - by 0plus1
    I'm creating thumbnails by cycling through a lot of images; when I hit a large image I get:

        Fatal error: Allowed memory size of 33554432 bytes exhausted (tried to allocate 13056 bytes)

    I already know how to circumvent this with ini_set('memory_limit', '-1'); what I want to know is why it exhausts the memory. Are there debug tools that will show me exactly when memory is being exhausted, and specifically which variables/arrays are killing my memory? Or is there a better way to resize images than:

        $thumb = imagecreatetruecolor($newwidth, $newheight);
        $source = imagecreatefromjpeg($imgfile);
        imagecopyresampled($thumb, $source, 0, 0, 0, 0, $newwidth, $newheight, $width, $height);
        imagejpeg($thumb, $destinationfile, 85);

    Thank you very much!
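
    As a rough rule of thumb (an illustrative calculation, not specific to GD's internals): the memory cost comes from the decoded bitmap, not from the JPEG file size, and is on the order of width × height × bytes-per-pixel, plus overhead. For example:

        # Approximate RAM needed to hold one decoded truecolor image.
        width, height = 4000, 3000            # e.g. a 12-megapixel photo
        bytes_per_pixel = 4                   # truecolor RGBA
        approx = width * height * bytes_per_pixel
        print(approx / 2**20, "MiB")          # ~45.8 MiB, already past a 32 MiB limit

    So a single large source image (plus the thumbnail buffer) can blow through the default limit even though the allocation that finally fails is tiny.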

    Read the article

  • How to "smart resize" a displayed image to original aspect ratio

    - by Paul Sasik
    I have an application in which end users can size and position images in a designer. Since the spec calls for the image to be "stretched" to the containing control, the end user can end up with an awkwardly stretched image. To help the user with image sizing I am thinking of implementing a smart-resize function which would allow the user to easily fix the aspect ratio of the picture so that it no longer appears stretched. The quick way to solve this is to actually provide two options: 1) scale from width, 2) scale from height. The user chooses the method and the algorithm adjusts the size of the picture using the original aspect ratio. For example: a picture is displayed as 200x200 in the designer but the original image is 1024x768 pixels. The user chooses "Smart Size from width" and the new size becomes ~200x150, since the original aspect ratio is ~1.333. That's OK, but how could I make the algorithm smarter and not bother the user by asking which dimension the recalculation should be based on?
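
    A sketch of the two options described above (plain Python for illustration; the function names are made up):

        def scale_from_width(display_w, display_h, orig_w, orig_h):
            """Keep the displayed width, recompute the height from the original ratio."""
            return display_w, round(display_w * orig_h / orig_w)

        def scale_from_height(display_w, display_h, orig_w, orig_h):
            """Keep the displayed height, recompute the width from the original ratio."""
            return round(display_h * orig_w / orig_h), display_h

        print(scale_from_width(200, 200, 1024, 768))    # (200, 150), the example above
        print(scale_from_height(200, 200, 1024, 768))   # (267, 200)

    One possible automatic rule (a design choice, not an established answer) is to pick whichever option keeps the result inside the current bounding box, i.e. the smaller of the two, so the image never grows past the space the user already laid out.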

    Read the article

  • Using PHP script to fill in XML

    - by Yaaqov
    Hi - we are all familiar with "embedding" PHP scripts in HTML pages to do tasks like displaying form results, but how can that be done in a way that displays XML? For example, if I wrote:

        <? xml version='1.0' ?>
        <Responses>
        <?php
        $body = $_GET['Body'];
        $fromPh = $_GET['From'];
        echo echo "<Msg>Your number is: $fromPh, and you typed: $body.</Msg>"
        ?>
        </Response>

    I get a message that states "parse error, unexpected T_STRING on line 1". Any help or tutorials would be appreciated.

    Read the article

  • MPIexec.exe Access Denied

    - by shake
    I have installed Microsoft Compute Cluster and MPI.NET, and now I have trouble running a program using mpiexec.exe on the cluster. When I try to run it from the console I get the message "Access Denied", and a pop-up saying "mpiexec.exe is not a valid Win32 application". I tried googling it but found nothing. Please help. :)

    Read the article

  • Video match analysis

    - by Mohammad
    Hi everybody, I am looking for an algorithm to detect a pattern in a given video file. Specifically, I want to index the moments in a tennis match video at which a service (the first stroke of each point) is hit. PS1: sorry for the broken English. PS2: I DO NOT know anything about tennis except that you need a ball to play!!

    Read the article

  • HMM for perspective estimation in document image, can't understand the algorithm

    - by maximus
    Hello! Here is a paper (PDF document) about estimating the perspective of a binary image containing text plus some noise or non-text objects. The algorithm uses a Hidden Markov Model with two states: T (text) and B (background, i.e. noise). The algorithm itself is hard to understand. I've read about Hidden Markov Models and I know they rely on probabilities that must be known, but in this algorithm I can't understand how, if they use an HMM, they obtain those probabilities (the probability of changing from state S1 to another state, for example S2). I also didn't find anything about training in the paper. So, if somebody understands it, please tell me. Also, is it possible to use an HMM without knowing the state-change probabilities?
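
    On the narrower question of where transition probabilities come from: when labelled state sequences are available, they can be estimated simply by counting transitions (maximum likelihood); without labels they are typically learned with the Baum-Welch (EM) algorithm, so an HMM can indeed be used without knowing them in advance. The paper may well fix them differently; the following is only a generic illustration of the counting estimate:

        from collections import Counter

        # A labelled sequence of hidden states (T = text, B = background).
        states = list("TTTBBTTTTBBBTT")

        pair_counts = Counter(zip(states, states[1:]))   # each observed transition
        exit_counts = Counter(states[:-1])               # how often each state is left

        trans = {(a, b): pair_counts[(a, b)] / exit_counts[a]
                 for a in "TB" for b in "TB"}
        print(trans)   # estimates of P(T->T), P(T->B), P(B->T), P(B->B)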

    Read the article

  • CPU Affinity Masks (Putting Threads on different CPUs)

    - by hahuang65
    I have 4 threads, and I am trying to set thread 1 to run on CPU 1, thread 2 on CPU 2, etc. However, when I run my code below, the affinity masks are returning the correct values, but when I do a sched_getcpu() on the threads, they all return that they are running on CPU 4. Anybody know what my problem here is? Thanks in advance!

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <pthread.h>
        #include <stdlib.h>
        #include <sched.h>
        #include <errno.h>

        void *pthread_Message(char *message)
        {
            printf("%s is running on CPU %d\n", message, sched_getcpu());
        }

        int main()
        {
            pthread_t thread1, thread2, thread3, thread4;
            pthread_t threadArray[4];
            cpu_set_t cpu1, cpu2, cpu3, cpu4;
            char *thread1Msg = "Thread 1";
            char *thread2Msg = "Thread 2";
            char *thread3Msg = "Thread 3";
            char *thread4Msg = "Thread 4";
            int thread1Create, thread2Create, thread3Create, thread4Create, i, temp;

            CPU_ZERO(&cpu1);
            CPU_SET(1, &cpu1);
            temp = pthread_setaffinity_np(thread1, sizeof(cpu_set_t), &cpu1);
            printf("Set returned by pthread_getaffinity_np() contained:\n");
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu1))
                    printf("CPU1: CPU %d\n", i);

            CPU_ZERO(&cpu2);
            CPU_SET(2, &cpu2);
            temp = pthread_setaffinity_np(thread2, sizeof(cpu_set_t), &cpu2);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu2))
                    printf("CPU2: CPU %d\n", i);

            CPU_ZERO(&cpu3);
            CPU_SET(3, &cpu3);
            temp = pthread_setaffinity_np(thread3, sizeof(cpu_set_t), &cpu3);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu3))
                    printf("CPU3: CPU %d\n", i);

            CPU_ZERO(&cpu4);
            CPU_SET(4, &cpu4);
            temp = pthread_setaffinity_np(thread4, sizeof(cpu_set_t), &cpu4);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu4))
                    printf("CPU4: CPU %d\n", i);

            thread1Create = pthread_create(&thread1, NULL, (void *)pthread_Message, thread1Msg);
            thread2Create = pthread_create(&thread2, NULL, (void *)pthread_Message, thread2Msg);
            thread3Create = pthread_create(&thread3, NULL, (void *)pthread_Message, thread3Msg);
            thread4Create = pthread_create(&thread4, NULL, (void *)pthread_Message, thread4Msg);

            pthread_join(thread1, NULL);
            pthread_join(thread2, NULL);
            pthread_join(thread3, NULL);
            pthread_join(thread4, NULL);

            return 0;
        }

    Read the article

  • Number of threads and thread numbers in Grand Central Dispatch

    - by raphgott
    I am using C and Grand Central Dispatch to parallelize some heavy computations. How can I get the number of threads used by GCD? Also is it possible to know on which thread a piece of code is currently running on? Basically I'd like to use sprng (parallel random numbers) with multiple streams and for that I need to know what stream id to use (and therefore what thread is being used).

    Read the article

  • Files not waiting for each other

    - by Sunny
    I have two batch files, where file2.bat depends on file1.bat's output.

    file1.bat:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (
        FOR /f "delims=" %%a IN ( Source.txt ) DO (
          ECHO %%a|FIND "Appprocess.exe" >NUL
          IF NOT ERRORLEVEL 1 SET keystring1=%%a
          FOR %%b IN (App1 App2 App3 App4 App5 App6 ) DO (
            ECHO %%a|FIND "%%b" >NUL
            IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
          )
        )
        )>result.txt
        GOTO :EOF

    file2.bat:

        @echo off
        setlocal enabledelayedexpansion
        (for /f "tokens=1,2" %%a in (memory.txt) do (
          for /f "tokens=5" %%c in ('find " %%a " ^< result.txt ') do echo %%c %%b
        ))> new.txt

    file1.bat usually takes 60 seconds to complete its execution. In master.bat I am calling the two files as:

        call file1.bat
        call file2.bat

    but file2.bat is not waiting for file1.bat to complete its execution. I even tried to call file2.bat from within file1.bat, as below, but it still does not wait for file1.bat to finish:

        @ECHO OFF
        setlocal enabledelayedexpansion
        SET "keystring1="
        (
        FOR /f "delims=" %%a IN ( Source.txt ) DO (
          ECHO %%a|FIND "HsvDataSource.exe" >NUL
          IF NOT ERRORLEVEL 1 SET keystring1=%%a
          FOR %%b IN (EUHFMPROD USHFMPROD TL2TEST GSHFMPROD TL2PROD GSARCH1213 TL2FY13) DO (
            ECHO %%a|FIND "%%b" >NUL
            IF NOT ERRORLEVEL 1 IF DEFINED keystring1 CALL ECHO(%%keystring1%% %%b&SET "keystring1="
          )
        )
        )>file2.txt
        GOTO :EOF
        call file1.bat

    I also tried the start option below, but it had no effect:

        start file1.bat /wait
        call file2.bat

    I can't figure out why this is happening.

    Read the article

  • deciding between subprocess, multiprocessing and thread in Python?

    - by user248237
    I'd like to parallelize my Python program so that it can make use of multiple processors on the machine that it runs on. My parallelization is very simple, in that all the parallel "threads" of the program are independent and write their output to separate files. I don't need the threads to exchange information, but it is imperative that I know when the threads finish, since some steps of my pipeline depend on their output. Portability is important, in that I'd like this to run on any Python version on Mac, Linux and Windows. Given these constraints, which is the most appropriate Python module for implementing this? I am trying to decide between thread, subprocess and multiprocessing, which all seem to provide related functionality. Any thoughts on this? I'd like the simplest solution that's portable. Thanks.
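
    For the case described (independent workers, separate output files, and a parent that must know when everything is done), a sketch of the multiprocessing route; the task contents and file names are placeholders:

        import multiprocessing

        def run_task(task_id):
            # Each worker is independent and writes its own output file.
            out_path = "output_%d.txt" % task_id
            with open(out_path, "w") as f:
                f.write("result of task %d\n" % task_id)
            return out_path

        if __name__ == "__main__":            # required for Windows portability
            pool = multiprocessing.Pool()     # one worker process per CPU by default
            outputs = pool.map(run_task, range(8))   # blocks until every task finishes
            pool.close()
            pool.join()
            # Everything has finished here, so the next pipeline step can start.
            print(outputs)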

    Read the article

  • Training sets for AdaBoost algorithm

    - by palau1
    How do you find the negative and positive training data sets of Haar features for the AdaBoost algorithm? So say you have a certain type of blob that you want to locate in an image and there are several of them in your entire array - how do you go about training it? I'd appreciate a nontechnical explanation as much as possible. I'm new to this. Thanks.

    Read the article
