Search Results

Search found 7955 results on 319 pages for 'signal processing'.

Page 20/319

  • Resize images to specific height value in ImageMagick?

    - by Jason
    I've looked around for this, and can't find an easily implemented solution. Currently I'm working on an application that deals with panoramas. As they come out of the batch stitch process, the dimensions average 18000x4000. Using ImageMagick, how can I downscale those images to a specific height value while maintaining aspect ratio? According to the manual, the convert operation takes in both height and width to resize to while maintaining the same aspect ratio. What I'd like is to put in 600 and 1000 in my existing resize script function and have both a regular viewable image as well as a reduced size.
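
    A height-only geometry does exactly this in ImageMagick: -resize with the height prefixed by "x" scales to that height and keeps the aspect ratio. A minimal sketch, assuming a hypothetical input file pano.jpg and output names of your choosing:

        convert pano.jpg -resize x1000 pano_view.jpg    # 1000 px tall viewable copy, width follows the aspect ratio
        convert pano.jpg -resize x600  pano_small.jpg   # reduced 600 px tall copy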

    Read the article

  • How do I throttle a command in a terminal window?

    - by To Do
    I needed to run convert with a lot of images at the same time. The command took quite a while, but that doesn't bother me. The issue is that the command rendered my computer unusable while it was running (for about 15 minutes). So is it possible to throttle the command by limiting the resources (processor and memory) available to it, directly from the command line? This can only work if I add something to the same line before pressing Enter, because once the process starts the computer slows down so much that it is impossible, for example, to switch to System Monitor and reduce its priority.

    Edit: top and iotop results. I managed to run top and sudo iotop > iotop.txt while doing one of these convert operations. (The iotop.txt file produced is difficult to read.)

    Results of top:

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+   COMMAND
        14275 username 20 0  4043m 3.0g 1448 D 7.0  80.4 0:16.45 convert

    Results of iotop:

        Total DISK READ: 1269.04 K/s | Total DISK WRITE: 0.00 B/s
        TID   PRIO USER     DISK READ   DISK WRITE  SWAPIN   IO      COMMAND
        2516  be/4 username 350.08 K/s  0.00 B/s    0.00 %   0.00 %  zeitgeist-datahub
        7394  be/4 username 568.88 K/s  0.00 B/s    77.41 %  0.00 %  --rendere~.530483991
        14275 idle username 350.08 K/s  0.00 B/s    37.49 %  0.00 %  convert S~f test.pdf
        2048  be/4 root     0.00 B/s    0.00 B/s    0.00 %   0.00 %  [kworker/3:2]
        1     be/4 root     0.00 B/s    0.00 B/s    0.00 %   0.00 %  init

    Furthermore, even after the process ends, the computer does not return to its previous performance. I found a way around this by running sudo swapoff -a followed by sudo swapon -a.
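
    One approach, assuming the command is started from a shell, is to launch it at reduced CPU and I/O priority and cap ImageMagick's memory use up front rather than trying to renice it afterwards; the file names and limit values below are placeholders:

        nice -n 19 ionice -c2 -n7 convert input.pdf -resize x600 output.jpg
        # additionally cap ImageMagick's own memory so it does not push the whole system into swap
        nice -n 19 ionice -c2 -n7 convert -limit memory 512MiB -limit map 1GiB input.pdf -resize x600 output.jpg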

    Read the article

  • How to make a text search template?

    - by Flipper
    I am not really sure what to call this, but I am looking for a way to have a "template" for my code to follow when searching for text. I am working on a project where a summary of a piece of text is supplied to the user. I want to allow the user to select a piece of text on the page so that the next time they come across a similar page I can find that text. For instance, let's say somebody goes to foxnews.com and selects the article like in the image below. Then whenever they go to any other foxnews.com article I would be able to identify the text of the article and summarize it for them. An issue I see with this is a site like Stack Exchange, where there are multiple comments to be selected (like below), which means I would have to be able to recursively search for all the separate pieces of text.
    Requirements:
    - Be able to keep pieces of text separate from each other.
    Possible issues:
    - DIVs may not contain ids, classes, or names.
    - A piece of text may span multiple DIVs.
    - How to recognize where an old piece of text ends and a new one begins.
    - How to store this information for later searching?

    Read the article

  • No proper kmeans clustering of images in matlab

    - by user3237134
    I have 1200 face images in my training set and 2989 test face images. I am using eigenfaces (PCA) for feature extraction and kmeans for clustering. Source code I tried:

        IDX = kmeans(z, 5);
        clustercount = accumarray(IDX, ones(size(IDX)));
        disp(clustercount);

    Problem: the images are not clustered properly. Images of the same face should end up in the same cluster, but different faces are being grouped together.
    Questions: Should I use more face images for training? How can clustering accuracy be improved? What is the solution?

    Read the article

  • Effective and simple matching for 2 unequal small-scale point sets

    - by Pavlo Dyban
    I need to match two sets of 3D points, but the number of points in each set can differ. Most algorithms seem to be designed to align images and are tuned to work with hundreds of thousands of points; my case is 50 to 150 points in each of the two sets. So far I have acquainted myself with the Iterative Closest Point (ICP) and Procrustes matching algorithms. Implementing a Procrustes algorithm seems like total overkill for this small quantity. ICP has many implementations, but I haven't found any ready-made version that accounts for so-called "outliers" - points without a matching pair. Besides the implementation expense, algorithms like Fractional and Sparse ICP use statistical information to discard points that are considered outliers, and for sets of 50 to 150 points statistical measures are often biased or statistical-significance criteria are not met. I know of the assignment problem in linear optimization, but it is not suitable for cases with unequal sets of points. Are there other, small-scale algorithms that solve the problem of matching two point sets? I am looking for algorithm names, scientific papers, or C++ implementations - I just need some hints on where to start my search.
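
    One small-scale option is to treat it as an assignment problem after all and make the sets equal by padding with dummy points whose matching cost is a fixed outlier penalty, then run the Hungarian method. A rough Python sketch of the idea - the penalty value is an assumption to tune, and this only solves correspondence for roughly pre-aligned sets, so it would sit inside an ICP-style loop if rotation and translation still need estimating:

        import numpy as np
        from scipy.spatial.distance import cdist
        from scipy.optimize import linear_sum_assignment

        def match_points(a, b, outlier_penalty=5.0):
            """Match two 3D point sets of unequal size; unmatched points pair with dummies."""
            cost = cdist(a, b)                          # (len(a), len(b)) Euclidean distances
            n = max(len(a), len(b))
            padded = np.full((n, n), outlier_penalty)   # dummy rows/columns cost the outlier penalty
            padded[:len(a), :len(b)] = cost
            rows, cols = linear_sum_assignment(padded)
            # keep only pairs of real points whose cost beats the outlier penalty
            return [(r, c) for r, c in zip(rows, cols)
                    if r < len(a) and c < len(b) and cost[r, c] < outlier_penalty]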

    Read the article

  • Possible applications of algorithm devised for differentiating between structured vs random text

    - by rooznom
    I have written a program that can rapidly (within 5 seconds on a desktop with 2 GB RAM and a 2.33 GHz CPU) differentiate between structured text (e.g. English text) and random alphanumeric strings. It can also provide a probability score for the prediction. Are there any practical applications or uses for such a program? Note that the program is based on entropy models and does not do any dictionary comparisons in its workflow. Thanks in advance for your responses.
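
    For readers wondering what an entropy model looks like in practice, the core measurement can be as small as a character-level Shannon entropy score; the Python sketch below is illustrative only and is not the program described above, which presumably uses a richer model:

        import math
        from collections import Counter

        def char_entropy(text):
            """Shannon entropy, in bits per character, of the text's character distribution."""
            counts = Counter(text)
            total = len(text)
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        print(char_entropy("this is an ordinary english sentence"))   # natural text tends to score lower
        print(char_entropy("x7Kq9Zp2mW4rT8vB1nL6cJ3hD5sF0gYtRuVi"))   # random alphanumerics tend to score higher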

    Read the article

  • Applying the Knuth-Plass algorithm (or something better?) to read two books with different length and amount of chapters in parallel

    - by user147133
    I have a Bible reading plan that covers the whole Bible in 180 days. Most of the time I read 5 chapters in the Old Testament and 1 or 2 (on average 1.5) chapters in the New Testament each day. The problem is that some chapters are longer than others (for example Psalm 119, which is 7 times longer than an average chapter), and the plan I'm following doesn't take that into account. I end up with some days having a lot more to read than others, so I thought I could use programming to make myself a better plan.

    I have a data structure with a list of all chapters in the Bible and their length in number of lines. (I found that the number of lines is the best criterion, but it could have been the number of verses or words as well.) I then started to think of this as a line-wrap problem: a chapter is a word, a day is a line, and the whole plan is a paragraph. The "length" of a word (a chapter) is the number of lines in that chapter. I could then generate the best possible reading plan by applying a simplified Knuth-Plass algorithm to find the best breakpoints.

    This works well if I want to read the Bible from beginning to end, but I want to read a little from the New Testament each day in parallel with the Old Testament. Of course I can run the Knuth-Plass algorithm on the Old Testament first, then on the New Testament, and get two separate plans, but those plans merged are not an optimal plan: worst-case days (days with extra-heavy reading) in the New Testament plan will randomly land on the same days as the worst-case days in the Old Testament. Since the New Testament has about 180*1.5 chapters, that plan is essentially to read one chapter the first day, two the second, one the third, and so on, and I would like the plan for the Old Testament to compensate for this alternating length. So I need a new and better algorithm, or a way to use the Knuth-Plass algorithm that I haven't figured out. I think this could be an interesting and challenging nut for people interested in algorithms, so I wanted to see if any of you have a good solution in mind.
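
    One way to get that compensation without inventing a new algorithm is to keep the simplified Knuth-Plass idea but score each day by its combined load: fix the New Testament chapters per day first, then let the dynamic program that breaks the Old Testament add that day's known NT line count before measuring badness. A rough, unoptimised Python sketch of that idea (the inputs and the squared-deviation badness are assumptions):

        import math

        def plan_ot_days(ot_lines, nt_per_day, days, target):
            """Split ot_lines (lines per OT chapter, in reading order) into `days` days.
            Badness of a day = (OT lines that day + that day's fixed NT lines - target)^2."""
            n = len(ot_lines)
            prefix = [0]
            for x in ot_lines:
                prefix.append(prefix[-1] + x)
            best = [[math.inf] * (n + 1) for _ in range(days + 1)]   # best[d][i]: first i chapters in d days
            back = [[0] * (n + 1) for _ in range(days + 1)]
            best[0][0] = 0.0
            # brute force, O(days * n^2): slow in pure Python, but building the plan is a one-off job
            for d in range(1, days + 1):
                for i in range(1, n + 1):
                    for j in range(i):
                        if best[d - 1][j] == math.inf:
                            continue
                        load = prefix[i] - prefix[j] + nt_per_day[d - 1]
                        cand = best[d - 1][j] + (load - target) ** 2
                        if cand < best[d][i]:
                            best[d][i], back[d][i] = cand, j
            breaks, i = [], n
            for d in range(days, 0, -1):
                breaks.append(i)        # per day, the index just after its last OT chapter
                i = back[d][i]
            return list(reversed(breaks))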

    Read the article

  • How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list?

    - by gunshor
    How can I "bulk paste" a clipboard string of multi-line text into a readable ordered list? I'm trying to demonstrate how to turn any string of multi-line text into an ordered list. The script (preferably JS) needs to respect: - carriage returns at the end of a line, to mean "that line ends here" - indentations at the beginning of a line, to mean "this is part of the item above it" - dashes at the beginning of a line, to mean "this is a task, and the line above it is its project"

    Read the article

  • Would opencv be a good choice for image colour summarization?

    - by codecowboy
    I would like to analyse a set of hundreds of thousands of product images (clothing, electronic goods etc) and retrieve the dominant colours in each. I'm only interested in the top 3 or 4 colours. The aim is to achieve a degree of certainty that x image is mostly red or image y is mostly orange and blue. The images are likely to be colour jpegs of reasonable quality and approximately 100kb in size. I would like to use C# and the solution should run on a Linux server, preferably using open source libraries. Would opencv be a good choice for this? What other libraries or specific algorithms might be helpful?
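
    OpenCV can do this with its own k-means on the pixel values: downscale the image, reshape the pixels into an N x 3 float array, and take the 3-4 cluster centres as the dominant colours. A Python sketch of that recipe is below; a C#-on-Linux stack would reach the same calls through a binding such as Emgu CV or OpenCvSharp, and the resize dimensions and cluster count here are assumptions:

        import cv2
        import numpy as np

        def dominant_colours(path, k=4):
            """Return the k dominant BGR colours of an image, most common first."""
            img = cv2.imread(path)
            small = cv2.resize(img, (100, 100))                 # colour balance survives downscaling
            pixels = small.reshape(-1, 3).astype(np.float32)
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
            _, labels, centres = cv2.kmeans(pixels, k, None, criteria, 5, cv2.KMEANS_RANDOM_CENTERS)
            counts = np.bincount(labels.flatten(), minlength=k)
            order = np.argsort(counts)[::-1]                    # largest clusters first
            return [tuple(map(int, centres[i])) for i in order]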

    Read the article

  • What is the simplest way for a slippy SVG visualization?

    - by totymedli
    I have a big SVG file representing a complicated graph with hundreds of points, and I want to present it on a web page. My idea is to make it behave like Google Maps: a slippy, draggable, movable map. I'm looking for an easy and fast JavaScript library that can do the work. What I need for my "map" is drag/move and zoom, plus some way to click on the points of the picture so that a little piece of information appears about that point, like Google Maps markers. I'm looking for a free/open-source library. I have seen some solutions but I'm uncertain about them, and none of them seemed perfect:
    - Polymaps - I love the technique it uses, but I don't know much about this library.
    - Leaflet - I love its simplicity, but I don't know how I could apply it to my SVG.
    - Raphael - I have heard about its awesomeness, but it seems like a lot of work for this task.
    What would be the best/easiest solution for my problem, and what is your opinion about the above libraries?

    Read the article

  • What Shading/Rendering techniques are being used in this image?

    - by Rhakiras
    My previous question wasn't clear enough. From a rendering point of view, what kind of techniques are used in this image? I would like to apply a similar style (I'm using OpenGL, if that matters): http://alexcpeterson.com/ My specific questions are:
    - How is that sun glare made?
    - How does the planet get its "cartoon"-like look?
    - How does the space around the planet look warped/misted?
    - How does the water look that good?
    I'm a beginner, so any information/keywords on each question would be helpful so I can go off and learn more. Thanks

    Read the article

  • how to use a batch file to delete a line of text in a bunch of text files? [on hold]

    - by wbt
    I have a bunch of .txt files on my D drive, placed randomly in different locations; some files also contain symbols. I want a batch file that deletes a specific line from all of them at the same time, without my doing it one by one for each file. Please suggest a method that does not create a new text file at some other location with the changes incorporated - I do not want the input.txt/output.txt approach. I just need the original files to be replaced with the changes as soon as I run the batch file. For example, given D:\abc\1.txt and D:\xyz\2.txt, I want the 3rd line of both erased completely with a single click, and each changed file saved with the same name in the same location, i.e. the new text files must replace the old ones. Ideally it would use some sort of *.txt pattern so that a single batch file (perhaps on another drive) can change every file with the .txt extension on a drive, rather than my placing a batch file in each folder separately and running them. Alternatively, a VBS file is also welcome. Sorry for the long message, but I have been wandering all over the internet for the last two days just for this one batch file, and all the information I find is jargon to me, as I am not a scripting geek. Please describe the code too; your help is much appreciated.
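
    Safe in-place edits are awkward in pure batch (for /f skips blank lines and mishandles some special characters), so a small script may be easier to adapt. The sketch below is an illustrative Python version that walks a root folder, drops one line from every .txt file, and rewrites each file in place; the root path and line number are assumptions, and it is worth testing on copies first:

        import os

        ROOT = "D:\\"         # drive or folder to process (assumption)
        LINE_TO_DROP = 3      # 1-based number of the line to remove

        for folder, _, files in os.walk(ROOT):
            for name in files:
                if not name.lower().endswith(".txt"):
                    continue
                path = os.path.join(folder, name)
                with open(path, "rb") as f:
                    lines = f.read().splitlines(True)    # True keeps the original line endings
                if len(lines) >= LINE_TO_DROP:
                    del lines[LINE_TO_DROP - 1]
                    with open(path, "wb") as f:          # rewrite the same file, same name, same place
                        f.write(b"".join(lines))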

    Read the article

  • Signal "0" error while scrolling a tableview with images

    - by Amitkumar
    Hi, I have a problem while scrolling images on tableview. I am getting a Signal "0" error. I think it is due to some memory issues but I am not able to find out the exact error. The code is as follows:

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [travelSummeryPhotosTable dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault reuseIdentifier:CellIdentifier] autorelease];
            }

            // Photo ImageView
            UIImageView *photoTag = [[UIImageView alloc] initWithFrame:CGRectMake(5.0, 5.0, 85.0, 85.0)];
            NSString *rowPath = [[imagePathsDictionary valueForKey:[summaryTableViewDataArray objectAtIndex:indexPath.section]] objectAtIndex:indexPath.row];
            photoTag.image = [UIImage imageWithContentsOfFile:rowPath];
            [cell.contentView addSubview:photoTag];
            [photoTag release];

            // Image Caption
            UILabel *labelImageCaption = [[UILabel alloc] initWithFrame:CGRectMake(110.0, 15.0, 190.0, 50.0)];
            labelImageCaption.textAlignment = UITextAlignmentLeft;
            NSString *imageCaptionText = [[imageCaptionsDictionary valueForKey:[summaryTableViewDataArray objectAtIndex:indexPath.section]] objectAtIndex:indexPath.row];
            labelImageCaption.text = imageCaptionText;
            [cell.contentView addSubview:labelImageCaption];
            [labelImageCaption release];

            return cell;
        }

    Thanks in advance.

    Read the article

  • iPhone OpenGLES: Textures are consuming too much memory and the program crashes with signal "0"

    - by CustomAppsMan
    I am not sure what the problem is. My app runs fine on the simulator, but when I try to run it on the iPhone it crashes, with or without debugging, with signal "0". I am using Texture2D.m and OpenGLES2DView.m from the examples provided by Apple. I profiled the app on the iPhone with Instruments using the memory tracer from the library, and when the app died the final memory consumed was about 60 MB real and 90+ MB virtual. Is there some other problem, or is the iPhone just killing the application because it has consumed too much memory? If you need any information, please say so and I will try to provide it. I am creating thousands of textures at load time, which is why the memory consumption is so high, and I really can't do anything about reducing the number of pictures being loaded. I was previously using just UIImage, but it was giving me really low frame rates, and I read on this site that I should use OpenGL ES for higher frame rates. A sub-question: is there any way to avoid using UIImage to load the PNG file before using the provided Texture class to create the texture for the OpenGL ES drawing functions? Is there some function in OpenGL ES that will create a texture straight from a PNG file?

    Read the article

  • receiving signal: EXC_BAD_ACCESS in web service call function

    - by murali
    I'm new to iPhone development and I'm using Xcode 4.2. When I click the save button, I get values from the HTML page and my web service processes them, and then I get the error "program received signal: EXC_BAD_ACCESS" in my web service call function. Here is my code:

        NSString *val = [WebviewObj stringByEvaluatingJavaScriptFromString:@"save()"];
        NSLog(@"return value:: %@", val);
        [adict setObject:[NSString stringWithFormat:@"%i", userid5] forKey:@"iUser_Id"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:0] forKey:@"vImage_Url"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:1] forKey:@"IGenre_Id"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:2] forKey:@"vTrack_Name"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:3] forKey:@"vAlbum_Name"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:4] forKey:@"vMusic_Url"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:5] forKey:@"iTrack_Duration_min"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:6] forKey:@"iTrack_Duration_sec"];
        [adict setObject:[[val componentsSeparatedByString:@","] objectAtIndex:7] forKey:@"vDescription"];
        NSLog(@"dict==%@", [adict description]);

        NSString *URL2 = @"http://184.164.156.55/Music/Track.asmx/AddTrack";
        obj = [[UrlController alloc] init];
        obj.URL = URL2;
        obj.InputParameters = adict;
        [obj WebserviceCall];
        obj.delegate = self;

    This is my function; it works for many other calls:

        - (void)WebserviceCall {
            webData = [[NSMutableData alloc] init];
            NSMutableURLRequest *urlRequest = [[NSMutableURLRequest alloc] initWithURL:[NSURL URLWithString:URL]];
            NSString *httpBody = @"";
            for (id key in InputParameters) {
                if ([httpBody length] == 0) {
                    httpBody = [httpBody stringByAppendingFormat:@"&%@=%@", key, [InputParameters valueForKey:key]];
                } else {
                    httpBody = [httpBody stringByAppendingFormat:@"&%@=%@", key, [InputParameters valueForKey:key]];
                }
            }
            // Here I am getting EXC_BAD_ACCESS -- likely because httpBody is used as its own format
            // string, so any '%' in the collected values is read as a format specifier with no
            // matching argument; the line also just appends the body to itself.
            httpBody = [httpBody stringByAppendingFormat:httpBody];
            [urlRequest setHTTPMethod:@"POST"];
            [urlRequest setHTTPBody:[httpBody dataUsingEncoding:NSUTF8StringEncoding]];
            [urlRequest setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"content-type"];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:urlRequest delegate:self];
        }

    Can anyone help me, please? Thanks in advance.

    Read the article

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that generates a report using PHPExcel from data queried from a MySQL DB. Currently the processing is linear: it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once and there is very little processing done on the PHP side. The query returns hundreds of rows in less than .001 seconds, so that part is fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output. I would like to do this:
    - Spawn a thread/process to read the template
    - Spawn a thread/process to fetch the data
    - Return to the parent thread, which waits until both are complete
    - Proceed as normal
    My main questions are: is this possible, and is it worth it? If yes to both, how would you tackle it? Also, this is PHP 5 on CentOS.
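
    It is possible: on the command line the classic PHP routes are pcntl_fork, or launching two worker scripts asynchronously and having the parent wait for both results; whether it is worth it depends on the two steps being independent, which reading the template and fetching the data are. The fork-and-wait shape is sketched below in Python for illustration only (the worker bodies are placeholders; a PHP equivalent would typically use pcntl_fork/pcntl_waitpid or a job queue):

        from multiprocessing import Pool

        def read_template():
            return "template"        # placeholder for loading the Excel template

        def fetch_data():
            return ["row1", "row2"]  # placeholder for the MySQL query

        if __name__ == "__main__":
            with Pool(2) as pool:
                template_job = pool.apply_async(read_template)
                data_job = pool.apply_async(fetch_data)
                template = template_job.get()   # parent blocks here until each job is done
                data = data_job.get()
                # proceed as normal: write `data` into `template`, then output the report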

    Read the article

  • How to solve "NullPointerException" with "Server.processing" error while we are using Flex Builder 3

    - by Teerasej
    I am using Flex Builder 3, BlazeDS, and Java with the Spring and Hibernate frameworks. I am using a remote object to load a string from Spring's configuration files, but in testing I get this fault event:

        RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"

    I have checked the configuration in remote-config.xml and services-config.xml and it looks good. Some people have discussed this problem around the Internet, so I think you can help me and them. My environment:
    - Flex Builder 3
    - BlazeDS 3.2.0
    - JBoss server
    Full stack trace:

        [RPC Fault faultString="java.lang.NullPointerException" faultCode="Server.Processing" faultDetail="null"]
            at mx.rpc::AbstractInvoker/http://www.adobe.com/2006/flex/mx/internal::faultHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AbstractInvoker.as:220]
            at mx.rpc::Responder/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\Responder.as:53]
            at mx.rpc::AsyncRequest/fault()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\rpc\AsyncRequest.as:103]
            at NetConnectionMessageResponder/statusHandler()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\channels\NetConnectionChannel.as:569]
            at mx.messaging::MessageResponder/status()[C:\autobuild\3.2.0\frameworks\projects\rpc\src\mx\messaging\MessageResponder.as:222]

    Read the article

  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity, and each record contains multiple sub-types. (The same record format is currently processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is: what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read each entire line into a string and then parse the data from the string into the classes I've created, but I'm not sure whether I'll get better results working with the characters of the string as an array or with the string itself. I guess that's the specific question - string vs. char[] - but I would appreciate any other pointers anyone has. Thanks.
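
    Whatever the string-versus-char[] choice turns out to be, a maintainable shape is a declarative layout (field name, offset, length) that drives the slicing rather than index arithmetic scattered through the parser; in .NET that would typically be a Substring call per field. A language-agnostic sketch of the layout-driven idea, shown in Python with invented field names and offsets:

        LAYOUT = {
            # field: (offset, length) -- illustrative values, not the real copybook
            "record_type": (0, 2),
            "entity_id":   (2, 10),
            "name":        (12, 30),
            "balance":     (42, 11),
        }

        def parse_record(line):
            """Slice one fixed-width record into a dict according to LAYOUT."""
            return {field: line[start:start + length].rstrip()
                    for field, (start, length) in LAYOUT.items()}

        sample = "01" + "ABC1234567" + "Jane Example".ljust(30) + "00000123.45"
        print(parse_record(sample))   # {'record_type': '01', 'entity_id': 'ABC1234567', ...}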

    Read the article

  • Return data from subroutine while the subroutine is still processing

    - by Perl QuestionAsker
    Is there any way to have a subroutine send data back while it is still processing? For instance (this example is just to illustrate): a subroutine reads a file, and while it is reading through the file, if some condition is met, it should "return" that line and keep processing. I know some will answer "why would you want to do that?" and "why don't you just ...?", but I really would like to know if this is possible. Thank you so much in advance.
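
    It is possible; the usual Perl idioms are to pass a callback (code reference) into the subroutine and invoke it for each matching line, or to return a closure that acts as an iterator. The control flow is the same one generators provide; the Python sketch below is purely to illustrate that shape (file name and condition are placeholders):

        def matching_lines(path, predicate):
            """Yield each matching line as soon as it is found, while the file is still being read."""
            with open(path) as f:
                for line in f:
                    if predicate(line):
                        yield line   # hands the line back now; resumes here when the caller asks for more

        # the caller receives results incrementally instead of waiting for the whole file
        for hit in matching_lines("server.log", lambda l: "ERROR" in l):
            print(hit, end="")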

    Read the article

  • Error processing response in .net web service with WSE3 mutualCertificate10Security Assertion

    - by Maeloc
    I am securing a .NET web service (framework 2.0) with the WSE3 mutualCertificate10Security assertion. When requests are valid all is fine and the response is well formed, but when a request is invalid (because of an invalid signature, a failed check, or a SoapException being thrown), the web server isn't able to process the response to send to the client. The error in the application event log is:

        An error occurred processing an outgoing fault response.
        Details of the error causing the processing failure:
        System.InvalidOperationException: Send security filter on the server could not retrieve the operation protection requirements from the operation state.
            at Microsoft.Web.Services3.Security.SecureConversationServiceSendSecurityFilter.SecureMessage(SoapEnvelope envelope, Security security)
            at Microsoft.Web.Services3.Security.SendSecurityFilter.ProcessMessage(SoapEnvelope envelope)
            at Microsoft.Web.Services3.Pipeline.ProcessOutputMessage(SoapEnvelope envelope)
            at Microsoft.Web.Services3.WseProtocol.GetFilteredResponseEnvelope(SoapEnvelope outputEnvelope)

    All certificate permissions are OK (when the request is OK the web service is able to sign the response). The error occurs only when a SOAP fault must be returned in the response. Any ideas?

    Read the article

  • LINQ to SQL: On load processing of lazy loaded associations

    - by Matt Holmes
    If I have an object that lazy-loads an association with very large objects, is there a way I can do processing at the time the lazy load occurs? I thought I could use AssociateWith or LoadWith from DataLoadOptions, but there are very, very specific restrictions on what you can do in those. Basically I need to be notified when an EntitySet<T> decides it's time to load the associated object, so I can catch that event and do some processing on the loaded object. I don't want to simply walk through the EntitySet when I load the parent object, because that would force all the lazy-loaded items to load (defeating the purpose of lazy loading entirely).

    Read the article

  • Processing a property in linq to sql

    - by Mostafa
    Hi, this is my first LINQ to SQL project, so my question may well be naive. Until now I have created an extra property in the business object beside every DateTime property, because I need to do some processing on the DateTime value and show it in a special string format for binding to UI controls. Like this:

        private DateTime _insertDate;

        /// <summary>
        /// I have an "InertDate" field in my table in the database.
        /// </summary>
        public DateTime InsertDate
        {
            get { return _insertDate; }
            set { _insertDate = value; }
        }

        // Because I need to do some processing, I create a read-only string property that
        // passes InsertDate to a utility method and returns the specially formatted date.
        public string PInsertDate
        {
            get { return Utility.ToSpecialDate(_insertDate); }
        }

    My question is that I don't know how to do this in LINQ. I tried the following but I get a runtime error:

        ToosDataContext db = new ToosDataContext();
        var newslist = from p in db.News
                       select new { p.NewsId, p.Title, tarikh = MD.Utility.ToSpecialDate(p.ReleaseDate) };
        GridView1.DataSource = newslist;
        GridView1.DataBind();

    Read the article

  • Is there a substitute for blockproc in Matlab?

    - by SetchSen
    I've been using blockproc for processing images blockwise. Unfortunately, blockproc is part of the Image Processing Toolbox, which I don't have on my personal computer. Is there a combination of functions in base Matlab that can substitute for blockproc? My initial guess was to use im2col to transform each block into columns, and then arrayfun to process each column. Then I realized that im2col is also a part of the Image Processing Toolbox, so that doesn't solve my problem.

    Read the article

  • get output of last Process on SSAS cube

    - by Raj More
    I have processed an SSAS cube. After it was done processing, I hit the close button - and then realized that I should have saved the output. I think SSAS stores the processing log as a text or XML file, but I do not know which folder to look in. Can someone direct me to where the processing logs can be retrieved?

    Read the article

  • Problem processing large data using Applet-Servlet communication

    - by Marquinio
    Hi everyone. I have an applet that makes a request to a servlet. The servlet uses a PrintWriter to write the response back to the applet:

        out.println("Field1|Field2|Field3|Field4|Field5......|Field10");

    There are about 15000 records, so out.println() gets executed about 15000 times. The problem is that when the applet gets the response from the servlet, it takes about 15 minutes to process the records. I added System.out.println calls and saw that processing pauses at around 5000 records, then after 15 minutes it continues and finishes. Has anyone faced a similar problem? The servlet takes about 2 seconds to execute, so it seems the browser/applet is too slow to process the records. Any ideas appreciated. Thanks.

    Read the article
