Search Results



  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have a legacy code to maintain and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in Java Script, with extensive usage of jQuery + different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting a result of a remote AJAX request. It also uses callbacks a lot and pretty complicated "by convention" programming style (lots of events handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered in the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all etc. PROBLEM: How to approach this code base, so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools. Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page logic, this particular flow of function calls was executed in a particular order). Not to mention to be able to run a coverage analysis, identifying potentially orphaned code fragments. I would like to stress out once more, that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...) An IDE of some sort that would aid in extending that code would be also great, but I am currently looking into possibility to use Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say - 70% Java Script with jQuery, 30% ASP). I have obviously tried FireBug, but I was unable to find a way to define a breakpoint or step into the code, which is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking an URL and injects it to the client local code). Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.


  • Should Application_End fire on an automatic App Pool Recycle?

    - by Laramie
    I have read this, this, this and this plus a dozen other posts/blogs. I have an ASP.NET app in shared hosting that is frequently recycling. We use NLog and have the following code in global.asax:

        void Application_Start(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION STARTING\r\n\r\n");
        }

        protected void Application_OnEnd(Object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nAPPLICATION_OnEnd\r\n\r\n");
        }

        void Application_End(object sender, EventArgs e)
        {
            HttpRuntime runtime = (HttpRuntime)typeof(System.Web.HttpRuntime).InvokeMember("_theRuntime",
                BindingFlags.NonPublic | BindingFlags.Static | BindingFlags.GetField, null, null, null);
            if (runtime == null)
                return;

            string shutDownMessage = (string)runtime.GetType().InvokeMember("_shutDownMessage",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            string shutDownStack = (string)runtime.GetType().InvokeMember("_shutDownStack",
                BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.GetField, null, runtime, null);
            ApplicationShutdownReason shutdownReason = System.Web.Hosting.HostingEnvironment.ShutdownReason;

            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug(String.Format("\r\n\r\nAPPLICATION END\r\n\r\n_shutDownReason = {2}\r\n\r\n _shutDownMessage = {0}\r\n\r\n_shutDownStack = {1}\r\n\r\n",
                shutDownMessage, shutDownStack, shutdownReason));
        }

        void Application_Error(object sender, EventArgs e)
        {
            NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
            logger.Debug("\r\n\r\nApplication_Error\r\n\r\n");
        }

    Our log file is littered with "APPLICATION STARTING" entries, but neither Application_OnEnd, Application_End, nor Application_Error is ever fired during these spontaneous restarts. I know the handlers work because entries do appear when we touch web.config or the /bin files. We also ran a memory overload test and can trigger an OutOfMemoryException, which is caught in Application_Error.

    We are trying to determine whether the virtual memory limit is causing the recycling. We have added GC.GetTotalMemory(false) throughout the code, but this is for all of .NET, not just our app's pool, correct? We've also tried:

        var oPerfCounter = new PerformanceCounter();
        oPerfCounter.CategoryName = "Process";
        oPerfCounter.CounterName = "Virtual Bytes";
        oPerfCounter.InstanceName = "iisExpress";
        logger.Debug("Virtual Bytes: " + oPerfCounter.RawValue + " bytes");

    but we don't have permission in shared hosting. I've monitored the app on a dev server, with the same requests that caused the recycles in production and ANTS Memory Profiler attached, and can't seem to find a culprit. We have also run it with a debugger attached in dev to check for uncaught exceptions in spawned threads that might cause the app to abort.

    My questions are these: How can I effectively monitor memory usage in shared hosting to tell how much my application is consuming prior to an application recycle? Why are the Application_[End/OnEnd/Error] handlers in global.asax not being called? How else can I determine what is causing these recycles? Thanks.


  • Fairness: Where can it be better handled?

    - by Srinivas Nayak
    Hi, I would like to share one of my practical experiences with multiprogramming here. Yesterday I wrote a multiprogram. Modifications to shareable resources were put under critical sections protected by P(mutex) and V(mutex), and that critical-section code was put in a common library. The library will be used by concurrent applications (of my own). I had three applications that use the common code from the library and do their stuff independently.

    My library:

        work_on_shared_resource {
            P(mutex)
            get_shared_resource
            work_with_it
            V(mutex)
        }

    My applications (*[...] denotes a loop):

        application1 {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }

        application2 {
            *[
                work_on_shared_resource
                do_something_else_non_critical
            ]
        }

        application3 {
            *[
                work_on_shared_resource
            ]
        }

    I had to run the applications on Linux. I had a thought in my mind, hanging over years, that the OS shall schedule all the processes running under it with all fairness. In other words, it will give all the processes their pie of resource usage equally well. When the first two applications were put to work, they ran perfectly well without deadlock. But when the third application started running, it always got the resource: since it does nothing in its non-critical region, it grabs the shared resource again while the other tasks are doing something else. So the other two applications were found almost totally halted. When the third application was terminated forcefully, the previous two applications resumed their work as before. I think this is a case of starvation: the first two applications had to starve.

    Now how can we ensure fairness? I have started believing that the OS scheduler is innocent and blind; it depends on who wins the race, and the winner gets the largest pie of CPU and resource. Shall we attempt to ensure fairness among resource users in the critical-section code in the library? Or shall we leave it up to the applications to ensure fairness by being liberal, not greedy? To my knowledge, adding code to ensure fairness in the common library would be an overwhelming task. On the other hand, relying on the applications will also never ensure 100% fairness: the application which does very little work after using the shared resource will win the race, whereas the application which does heavy processing after its work with the shared resource will always starve. What is the best practice in this case? Where do we ensure fairness, and how? Sincerely, Srinivas Nayak
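
    A fair lock is one way to attack this inside a single process. As a minimal Java sketch of the idea (illustrative only, since the question's three applications are separate processes), java.util.concurrent's ReentrantLock can be created in fair mode so the longest-waiting thread gets the lock next, which stops a tight loop like application3's from reacquiring it immediately:

        import java.util.concurrent.locks.ReentrantLock;

        public class SharedResource {
            // 'true' requests a fair lock: waiting threads acquire it in FIFO order,
            // so a caller that immediately loops back cannot starve the others.
            private final ReentrantLock mutex = new ReentrantLock(true);

            public void workOnSharedResource() {
                mutex.lock();
                try {
                    // get_shared_resource / work_with_it from the pseudocode goes here
                } finally {
                    mutex.unlock();
                }
            }
        }

    Fair locks trade some throughput for ordering, and across separate processes the equivalent would be a FIFO-ordered primitive (for example a ticket lock built on top of the shared mutex) rather than relying on the scheduler.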


  • Running out of memory with UIImage creation on an offscreen Bitmap Context by NSOperation

    - by sigsegv
    I have an app with multiple UIView subclasses that acts as pages for a UIScrollView. UIViews are moved back and forth to provide a seamless experience to the user. Since the content of the views is rather slow to draw, it's rendered on a single shared CGBitmapContext guarded by locks by NSOperation subclasses - executed one at once in an NSOperationQueue - wrapped up in an UIImage and then used by the main thread to update the content of the views. -(void)main { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc]init]; if([self isCancelled]) { return; } if(nil == data) { return; } // Buffer is the shared instance of a CG Bitmap Context wrapper class // data is a dictionary CGImageRef img = [buffer imageCreateWithData:data]; UIImage * image = [[UIImage alloc]initWithCGImage:img]; CGImageRelease(img); if([self isCancelled]) { [image release]; return; } NSDictionary * result = [[NSDictionary alloc]initWithObjectsAndKeys:image,@"image",id,@"id",nil]; // target is the instance of the UIView subclass that will use // the image [target performSelectorOnMainThread:@selector(updateContentWithData:) withObject:result waitUntilDone:NO]; [result release]; [image release]; [pool release]; } The updateContentWithData: of the UIView subclass performed on the main thread is just as simple -(void)updateContentWithData:(NSDictionary *)someData { NSDictionary * data = [someData retain]; if([[data valueForKey:@"id"]isEqualToString:[self pendingRequestId]]) { UIImage * image = [data valueForKey:@"image"]; [self setCurrentImage:image]; [self setNeedsDisplay]; } // If the image has not been retained, it should be released together // with the dictionary retaining it [data release]; } The drawLayer:inContext: method of the subclass will just get the CGImage from the UIImage and use it to update the backing layer or part of it. No retain or release is involved in the process. The problem is that after a while I run out of memory. The number of the UIViews is static. CGImageRef and UIImage are created, retained and released correctly (or so it seems to me). Instruments does not show any leaks, just the free memory available dip constantly, rise a few times, and then dip even lower until the application is terminated. The app cycles through about 2-300 of the aforementioned pages before that, but I would expect to have the memory usage reach a more or less stable level of used memory after a bunch of pages have been already skimmed at fast speed or, since the images are up to 3MB in size, deplete way earlier. Any suggestion will be greatly appreciated.


  • Server configuration problem: a PHP script just dies with no error log and no reason

    - by Roberto
    Hi (first of all, thanks for your attention and sorry for my bad English. Also, I don't think this is a programming error; I think it is an error in some configuration of the server or something else, but I don't know what.)

    I have a PHP script (running as a Linux process, not in the web browser) that sends SMS via SMPP on port 2055 (using sockets in PHP) and then inserts about 10,000 rows into a MySQL database; the script gets the data from an XML file. At first it ran on a shared server (HostGator is our hosting provider) and in the beginning it worked fine, with no trouble, but five months later an error appeared: the process just dies for no reason. The script had only sent and inserted 700 rows into the table, the process didn't show any warning or error, nothing appears in the error logs, and I didn't make any change to the script. HostGator never helped us, so we decided to move the script from the shared server to a dedicated server. I thought it was a memory problem or something like that, but when we moved the script to the dedicated server the problem just got worse: the script dies when it has sent and inserted only 40 to 50 rows into the database.

    Some information about this error: the shared server is on Red Hat 4.1.2-46 and the dedicated server is on CentOS 5.4. I have commented out the line that sends the SMS, and the problem remains. On the shared server the script was fine at the beginning, then it started to die after inserting about 700 rows, and now it dies after inserting about 2,500 rows; that is better, but we didn't change anything. On the dedicated server the script dies after inserting about 40 rows into the table. Before it dies, the script turns into a zombie process, and we don't know why. Memory usage appears to be 0.3%, and CPU usage 0.7% to 1%. I have changed PHP's memory limit to 128 MB, and even to -1 (so PHP has no limit), but the problem remains. We have a limit of 50 simultaneous MySQL connections, so I don't think that is the problem. I'm using mysqli to connect from PHP to MySQL. HostGator reports that they haven't made any change or update on the servers.

    What could be the problem? What should I do? What should I search for? Is there something in the logic I'm missing? What steps do I have to follow when managing and debugging problems with processes on Linux? Thank you very much. I think this is not a programming problem, but you have more experience than me, so you can tell me. Thanks! Bye! :)


  • Does this mySQL Stored Procedure Work?

    - by Laxmidi
    Hi, I got the following stored procedure from http://dev.mysql.com/doc/refman/5.1/en/functions-that-test-spatial-relationships-between-geometries.html. Does this work?

        CREATE FUNCTION myWithin(p POINT, poly POLYGON) RETURNS INT(1)
        DETERMINISTIC
        BEGIN
            DECLARE n INT DEFAULT 0;
            DECLARE pX DECIMAL(9,6);
            DECLARE pY DECIMAL(9,6);
            DECLARE ls LINESTRING;
            DECLARE poly1 POINT;
            DECLARE poly1X DECIMAL(9,6);
            DECLARE poly1Y DECIMAL(9,6);
            DECLARE poly2 POINT;
            DECLARE poly2X DECIMAL(9,6);
            DECLARE poly2Y DECIMAL(9,6);
            DECLARE i INT DEFAULT 0;
            DECLARE result INT(1) DEFAULT 0;

            SET pX = X(p);
            SET pY = Y(p);
            SET ls = ExteriorRing(poly);
            SET poly2 = EndPoint(ls);
            SET poly2X = X(poly2);
            SET poly2Y = Y(poly2);
            SET n = NumPoints(ls);

            WHILE i < n DO
                SET poly1 = PointN(ls, (i+1));
                SET poly1X = X(poly1);
                SET poly1Y = Y(poly1);
                IF ( ( ( ( poly1X <= pX ) && ( pX < poly2X ) )
                       || ( ( poly2X <= pX ) && ( pX < poly1X ) ) )
                     && ( pY > ( poly2Y - poly1Y ) * ( pX - poly1X ) / ( poly2X - poly1X ) + poly1Y ) ) THEN
                    SET result = !result;
                END IF;
                SET poly2X = poly1X;
                SET poly2Y = poly1Y;
                SET i = i + 1;
            END WHILE;

            RETURN result;
        END;

    Usage:

        SET @point = PointFromText('POINT(5 5)');
        SET @polygon = PolyFromText('POLYGON((0 0,10 0,10 10,0 10))');
        SELECT myWithin(@point, @polygon) AS result;

    I'm using phpMyAdmin and it blows up when using stored procedures. If this one works, then I'll try to figure out how to call it in PHP instead. Thanks, Laxmidi
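
    As a quick sanity check of the algorithm outside MySQL, the same even-odd ray-casting test can be written in a few lines of plain Java (an illustrative re-implementation, with the point and polygon hard-coded to match the usage example above):

        public class PointInPolygon {
            // Even-odd (ray casting) test: toggle for every edge the point's
            // vertical ray crosses; an odd number of crossings means "inside".
            static boolean myWithin(double px, double py, double[][] poly) {
                boolean inside = false;
                double prevX = poly[poly.length - 1][0];
                double prevY = poly[poly.length - 1][1];
                for (double[] vertex : poly) {
                    double curX = vertex[0];
                    double curY = vertex[1];
                    boolean straddlesX = (curX <= px && px < prevX) || (prevX <= px && px < curX);
                    // Same edge test as the stored function's IF clause
                    if (straddlesX && py > (prevY - curY) * (px - curX) / (prevX - curX) + curY) {
                        inside = !inside;
                    }
                    prevX = curX;
                    prevY = curY;
                }
                return inside;
            }

            public static void main(String[] args) {
                double[][] square = { {0, 0}, {10, 0}, {10, 10}, {0, 10} };
                System.out.println(myWithin(5, 5, square)); // expected: true
            }
        }

    If it prints true for (5 5), the algorithm itself behaves as expected for the sample polygon.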


  • How do I serialize a large graph of .NET objects into a SQL Server BLOB without creating a large buffer?

    - by Ian Ringrose
    We have code like:

        ms = New IO.MemoryStream
        bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
        bin.Serialize(ms, largeGraphOfObjects)
        dataToSaveToDatabase = ms.ToArray()
        // put dataToSaveToDatabase in a SQL Server BLOB

    But the memory stream allocates a large buffer from the large object heap, which is giving us problems. So how can we stream the data without needing enough free memory to hold the serialized objects? I am looking for a way to get a Stream from SQL Server that can then be passed to bin.Serialize(), so avoiding keeping all the data in my process's memory. Likewise for reading the data back.

    Some more background: this is part of a complex numerical processing system that processes data in near real time looking for equipment problems etc.; the serialization is done to allow a restart when there is a problem with data quality from a data feed etc. (We store the data feeds and can rerun them after the operator has edited out bad values.) Therefore we serialize the objects a lot more often than we deserialize them. The objects we are serializing include very large arrays, mostly of doubles, as well as a lot of small "more normal" objects. We are pushing the memory limit on a 32-bit system and making the garbage collector work very hard. (Efforts are being made elsewhere in the system to improve this, e.g. reusing large arrays rather than creating new arrays.) Often the serialization of the state is the last straw that causes an out-of-memory exception; our peak memory usage is while this serialization is being done. I think we get large object heap fragmentation when we deserialize the object, and I expect there are also other problems with large object heap fragmentation given the size of the arrays. (This has not yet been investigated, as the person that first looked at this is a numerical processing expert, not a memory management expert.)

    Our customers use a mix of SQL Server 2000, 2005 and 2008, and we would rather not have different code paths for each version of SQL Server if possible. We can have many active models at a time (in different processes, across many machines), and each model can have many saved states. Hence the saved state is stored in a database BLOB rather than a file. As the spread of saving the state is important, I would rather not serialize the object to a file and then put the file in a BLOB one block at a time.

    Other related questions I have asked: How to Stream data from/to SQL Server BLOB fields? Is there a SqlFileStream like class that works with Sql Server 2005?


  • Error processing Spree sample images - file not recognized by identify command in paperclip geometry.rb:29

    - by purpletonic
    I'm getting an error when I run the Spree sample data. It occurs when Spree tries to load in the product data, specifically the product images. Here's the error I'm getting: * Execute db:load_file loading ruby <GEM DIR>/sample/lib/tasks/../../db/sample/spree/products.rb -- Processing image: ror_tote.jpeg rake aborted! /var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-21549-2rktq1 is not recognized by the 'identify' command. <GEM DIR>/paperclip-2.7.1/lib/paperclip/geometry.rb:31:in `from_file' <GEM DIR>/spree/core/app/models/spree/image.rb:35:in `find_dimensions' I've made sure ImageMagick is installed correctly, as previously I was having problems with it. Here's the output I'm getting when running the identify command directly. $ identify Version: ImageMagick 6.7.7-6 2012-10-06 Q16 http://www.imagemagick.org Copyright: Copyright (C) 1999-2012 ImageMagick Studio LLC Features: OpenCL ... other usage info omitted ... I also used pry with the pry-debugger and put a breakpoint in geometry.rb inside of Paperclip. Here's what that section of geometry.rb looks like: # Uses ImageMagick to determing the dimensions of a file, passed in as either a # File or path. # NOTE: (race cond) Do not reassign the 'file' variable inside this method as it is likely to be # a Tempfile object, which would be eligible for file deletion when no longer referenced. def self.from_file file file_path = file.respond_to?(:path) ? file.path : file raise(Errors::NotIdentifiedByImageMagickError.new("Cannot find the geometry of a file with a blank name")) if file_path.blank? geometry = begin silence_stream(STDERR) do binding.pry Paperclip.run("identify", "-format %wx%h :file", :file => "#{file_path}[0]") end rescue Cocaine::ExitStatusError "" rescue Cocaine::CommandNotFoundError => e raise Errors::CommandNotFoundError.new("Could not run the `identify` command. Please install ImageMagick.") end parse(geometry) || raise(Errors::NotIdentifiedByImageMagickError.new("#{file_path} is not recognized by the 'identify' command.")) end At the point of my binding.pry statement, the file_path variable is set to the following: file_path => "/var/folders/91/63kgbtds2czgp0skw3f8190r0000gn/T/ror_tote.jpeg20121007-22732-1ctl1g1" I've also double checked that this exists, by opening my finder in this directory, and opened it with preview app; and also that the program can run identify by running %x{identify} in pry, and I receive the same version Version: ImageMagick 6.7.7-6 2012-10-06 Q16 as before. Removing the additional digits (is this a timestamp?) after the file extension and running the Paperclip.run command manually in Pry gives me a different error: Cocaine::ExitStatusError: Command 'identify -format %wx%h :file' returned 1. Expected 0 I've also tried manually updating the Paperclip gem in Spree to 3.0.2 and still get the same error. So, I'm not really sure what else to try. Is there still something incorrect with my ImageMagick setup?


  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multinode trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree such that the branch beginning at a node are all boards that can follow from that node, and the children of a node are boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solvable problem where a perfect player will never lose, so it seemed an easy AI to code for my first time trying an AI. Now when I first implemented the AI, I went back and added two fields to each node upon generation: the # of times X will win & the # of times O will win in all children below that node. I figured the best solution was to simply have my AI on each move choose and go down the subtree where it wins the most times. Then I discovered that while it plays perfect most of the time, I found ways where I could beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose it's path. Then I decided to have it choose the tree with either the maximum wins for the computer or the maximum losses for the human, whichever was more. This made it perform BETTER, but still not perfect. I could still beat it. So I have two ideas and I'm hoping for input on which is better: 1) Instead of maximizing the wins or losses, instead I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the tree with the highest value will be the best move because that next node can't be a move that results in a loss. It's an easy change in the board generation, but it retains the same search space and memory usage. Or... 2) During board generation, if there is a board such that either X or O will win in their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and then generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine if there is a one move win and I think that can only be done in linear time (making board generation a lot slower I think?) Which is better, or is there an even better solution?
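
    Option 1 is essentially minimax scoring: propagate +1/0/-1 up the tree and let each side pick the child that is best for it, assuming the opponent does the same. A minimal sketch of that idea (in Java, with a hypothetical Node type rather than the poster's actual tree classes):

        import java.util.List;

        class Node {
            List<Node> children;   // boards reachable in one move
            boolean isTerminal;    // win, loss or draw position
            int terminalScore;     // +1 X wins, -1 O wins, 0 draw (leaves only)
        }

        public class TicTacToeAI {
            // Value of 'node' with both sides playing perfectly;
            // xToMove == true means X (the maximizer) moves next.
            static int minimax(Node node, boolean xToMove) {
                if (node.isTerminal) {
                    return node.terminalScore;
                }
                int best = xToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
                for (Node child : node.children) {
                    int value = minimax(child, !xToMove);
                    best = xToMove ? Math.max(best, value) : Math.min(best, value);
                }
                return best;
            }

            // The AI move: pick the child whose subtree value is best for the side to move.
            static Node bestMove(Node current, boolean xToMove) {
                Node best = null;
                int bestValue = xToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
                for (Node child : current.children) {
                    int value = minimax(child, !xToMove);
                    if (best == null || (xToMove ? value > bestValue : value < bestValue)) {
                        best = child;
                        bestValue = value;
                    }
                }
                return best;
            }
        }

    Because every value already assumes optimal replies, the win-count heuristic that let the original AI be beaten is no longer involved, and option 2's forced-block generation becomes an optional optimization rather than a correctness fix.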


  • Creating a spam list with a web crawler in python

    - by user313623
    Hey guys, I'm not trying to do anything malicious here, I just need to do some homework. I'm a fairly new programmer, I'm using Python 3.0, and I'm having difficulty using recursion for problem solving. I've been stuck on this question for quite a while. Here's the assignment:

    Write a recursive method spam(url, n) that takes a url of a web page as input and a non-negative integer n, collects all the email addresses contained in the web page and adds them to a global dictionary variable spam_dict, and then recursively calls itself on every http hyperlink contained in the web page. You will use a dictionary so only one copy of every email address is saved; your dictionary will store (key, value) pairs (email, email). The recursive call should use the parameter n-1 instead of n. If n = 0, you should collect the email addresses but no recursive calls should be made. The parameter n is used to limit the recursion to at most depth n. You will need to use the solutions of the two above problems; your method spam() will call the methods links2() and emails() and possibly other functions as well.

    Notes: 1. Running spam() directly will produce no output on the screen; to find your spam_dict, you will need to read the value of spam_dict, and you will also need to reset it to the empty dictionary before every run of spam. 2. Recall how global variables are used.

    Usage:

        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 0)
        spam_dict.keys()
        dict_keys([])
        spam_dict = {}
        spam('http://reed.cs.depaul.edu/lperkovic/csc242/test1.html', 1)
        spam_dict.keys()
        dict_keys(['[email protected]', '[email protected]'])

    So far, I've written a function that traverses web pages and puts all the links in a nice little list, and what I wanted to do was call that function. And why would I use recursion on a dictionary? And how? I don't understand how n ties into all of this.

        def links2(url):
            content = str(urlopen(url).read())
            myparser = MyHTMLParser()
            myparser.feed(content)
            lst = myparser.get()
            mergelst = []
            for link in lst:
                mergelst.append(urljoin(lst[0], link))
            print(mergelst)

    Any input (except why spam is bad) would be greatly appreciated. Also, I realize that the above function could probably look better; if you have a way to do it, I'm all ears. However, all I need is for the program to produce the proper output.
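
    The shape of the recursion the assignment asks for is language-independent: collect on every call, and only recurse with n-1 while n > 0. Sketched here in Java purely to illustrate the control flow (the assignment itself requires Python, and emails()/links2() below are placeholder stubs for the helpers from the earlier problems):

        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class SpamCrawler {
            // Global collection of unique addresses, mirroring spam_dict in the assignment.
            static Map<String, String> spamDict = new HashMap<>();

            // Placeholder stubs standing in for emails(url) and links2(url).
            static List<String> emails(String url) { return List.of(); }
            static List<String> links2(String url) { return List.of(); }

            // n is the remaining depth budget: collect on every call,
            // but only recurse while n > 0.
            static void spam(String url, int n) {
                for (String address : emails(url)) {
                    spamDict.put(address, address);   // duplicates collapse onto one key
                }
                if (n == 0) {
                    return;                           // depth exhausted: no recursive calls
                }
                for (String link : links2(url)) {
                    spam(link, n - 1);                // each level down gets one less
                }
            }
        }

    The dictionary is not what the recursion runs on; it is just the accumulator, and n is simply how many more levels of links you are allowed to follow.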


  • Accidental Complexity in OpenSSL HMAC functions

    - by Hassan Syed
    SSL Documentation Analaysis This question is pertaining the usage of the HMAC routines in OpenSSL. Since Openssl documentation is a tad on the weak side in certain areas, profiling has revealed that using the: unsigned char *HMAC(const EVP_MD *evp_md, const void *key, int key_len, const unsigned char *d, int n, unsigned char *md, unsigned int *md_len); From here, shows 40% of my library runtime is devoted to creating and taking down **HMAC_CTX's behind the scenes. There are also two additional function to create and destroy a HMAC_CTX explicetly: HMAC_CTX_init() initialises a HMAC_CTX before first use. It must be called. HMAC_CTX_cleanup() erases the key and other data from the HMAC_CTX and releases any associated resources. It must be called when an HMAC_CTX is no longer required. These two function calls are prefixed with: The following functions may be used if the message is not completely stored in memory My data fits entirely in memory, so I choose the HMAC function -- the one whose signature is shown above. The context, as described by the man page, is made use of by using the following two functions: HMAC_Update() can be called repeatedly with chunks of the message to be authenticated (len bytes at data). HMAC_Final() places the message authentication code in md, which must have space for the hash function output. The Scope of the Application My application generates a authentic (HMAC, which is also used a nonce), CBC-BF encrypted protocol buffer string. The code will be interfaced with various web-servers and frameworks Windows / Linux as OS, nginx, Apache and IIS as webservers and Python / .NET and C++ web-server filters. The description above should clarify that the library needs to be thread safe, and potentially have resumeable processing state -- i.e., lightweight threads sharing a OS thread (which might leave thread local memory out of the picture). The Question How do I get rid of the 40% overhead on each invocation in a (1) thread-safe / (2) resume-able state way ? (2) is optional since I have all of the source-data present in one go, and can make sure a digest is created in place without relinquishing control of the thread mid-digest-creation. So, (1) can probably be done using thread local memory -- but how do I resuse the CTX's ? does the HMAC_final() call make the CTX reusable ?. (2) optional: in this case I would have to create a pool of CTX's. (3) how does the HMAC function do this ? does it create a CTX in the scope of the function call and destroy it ? Psuedocode and commentary will be useful.


  • cuda/thrust: Trying to sort_by_key 2.8GB of data in 6GB of gpu RAM throws bad_alloc

    - by Sven K
    I have just started using thrust, and one of the biggest issues I have so far is that there seems to be no documentation as to how much memory operations require. So I am not sure why the code below is throwing bad_alloc when trying to sort (before the sorting I still have 50% of GPU memory available, and I have 70GB of RAM available on the CPU). Can anyone shed some light on this?

        #include <thrust/device_vector.h>
        #include <thrust/sort.h>
        #include <thrust/random.h>

        void initialize_data(thrust::device_vector<uint64_t>& data) {
            thrust::fill(data.begin(), data.end(), 10);
        }

        #define BUFFERS 3

        int main(void) {
            size_t N = 120 * 1024 * 1024;
            char line[256];
            try {
                std::cout << "device_vector" << std::endl;
                typedef thrust::device_vector<uint64_t> vec64_t;

                // Each buffer is 900MB
                vec64_t c[3] = {vec64_t(N), vec64_t(N), vec64_t(N)};
                initialize_data(c[0]);
                initialize_data(c[1]);
                initialize_data(c[2]);

                std::cout << "initialize_data finished... Press enter";
                std::cin.getline(line, 0);
                // nvidia-smi reports 48% memory usage at this point (2959MB of 6143MB)

                std::cout << "sort_by_key col 0" << std::endl;
                // throws bad_alloc
                thrust::sort_by_key(c[0].begin(), c[0].end(),
                    thrust::make_zip_iterator(thrust::make_tuple(c[1].begin(), c[2].begin())));

                std::cout << "sort_by_key col 1" << std::endl;
                thrust::sort_by_key(c[1].begin(), c[1].end(),
                    thrust::make_zip_iterator(thrust::make_tuple(c[0].begin(), c[2].begin())));
            } catch(thrust::system_error &e) {
                std::cerr << "Error: " << e.what() << std::endl;
                exit(-1);
            }
            return 0;
        }


  • Help with Arrays in Objective C.

    - by NJTechie
    Problem : Take an integer as input and print out number equivalents of each number from input. I hacked my thoughts to work in this case but I know it is not an efficient solution. For instance : 110 Should give the following o/p : one one zero Could someone throw light on effective usage of Arrays for this problem? #import <Foundation/Foundation.h> int main (int argc, const char * argv[]) { NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init]; int input, i=0, j,k, checkit; int temp[i]; NSLog(@"Enter an integer :"); scanf("%d", &input); checkit = input; while(input > 0) { temp[i] = input%10; input = input/10; i++; } if(checkit != 0) { for(j=i-1;j>=0;j--) { //NSLog(@" %d", temp[j]); k = temp[j]; //NSLog(@" %d", k); switch (k) { case 0: NSLog(@"zero"); break; case 1: NSLog(@"one"); break; case 2: NSLog(@"two"); break; case 3: NSLog(@"three"); break; case 4: NSLog(@"four"); break; case 5: NSLog(@"five"); break; case 6: NSLog(@"six"); break; case 7: NSLog(@"seven"); break; case 8: NSLog(@"eight"); break; case 9: NSLog(@"nine"); break; default: break; } } } else NSLog(@"zero"); [pool drain]; return 0; }
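
    The lookup-table idea being asked about (replace the ten-case switch with an array indexed by the digit) looks like this as a minimal sketch, written in Java rather than Objective-C purely to show the shape of the solution:

        public class NumberToWords {
            private static final String[] WORDS = {
                "zero", "one", "two", "three", "four",
                "five", "six", "seven", "eight", "nine"
            };

            // Prints the word for each digit of 'input', most significant digit first.
            static void printDigits(int input) {
                if (input == 0) {
                    System.out.println("zero");
                    return;
                }
                int[] digits = new int[10];         // enough for any 32-bit int
                int count = 0;
                while (input > 0) {
                    digits[count++] = input % 10;   // peel off the last digit
                    input /= 10;
                }
                for (int i = count - 1; i >= 0; i--) {
                    System.out.println(WORDS[digits[i]]);   // digit indexes straight into the table
                }
            }

            public static void main(String[] args) {
                printDigits(110);   // prints: one, one, zero (one per line)
            }
        }

    In Objective-C the table can be a static array of C strings or an NSArray of NSString literals; the indexing logic is identical, and the whole switch disappears.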


  • WPF Application Slow and Unresponsive when demonstrating using remote sharing software

    - by Kev
    After spending 14 hours on this I think its time to share my woes and see if anyone has experienced this issue before. Ill describe the issue and tests I have done to rule out certain things. Ok so I have a WPF application which loads in data from an SQL database. I am using DevExpress Components for datagrids, ribbons etc.. and FluentNhibernate to provide a session for database operations. I am also using log4net to log events to a textfile. Using the application on my laptop with SQL Express 2008 works fine.. the application starts up, retrieves 1000 records and I can tab through the controls on the ribbon. Now, I decided to demo the application to a third party and used remote login/sharing software online to share my desktop with the other person so as I could load the application on my laptop and they could view me using the application. Now, the application takes approx 45 seconds to load... 30 seconds with a blank database where as, when im not sharing out my screen using the online software the application loads in about 7-10 seconds. As well as that, even using the controls in the application during the demo were very sticky, slow and unresponsive. During the sharing session though however I was able to use other applications without any problems.. everything else worked fine. But I cannot understand how my application works ok under normal conditions , even browsing the net at the same time etc... BUT totally fails to perform correctly when I am sharing a session with another user... the CPU usage shot up to 100% too at times when the application was trying to start up... Please see below a list of 3rd party dlls I am using as references in my project. DevExpress dlls FluidKit PixelLab.WPF PixelLab.Common Galasoft WPF Kit FluentNHibernate NHibernate Nhibernate.ByteCode.Castle Skype4ComLib TXTEXTControl log4net LinqKit All of these DLLs are in the output folder with the application dlls created from the class assemblys in the project. So when installed via an installer on a machine the dlls will be in the same application folder as the application file itself. Many thanks


  • Running a network application on a port gives an error on the server side. How can I solve it?

    - by Phsika
    if i run Server App. Exception occurs: on Dinle.Start() System.Net.SocketException - Only one usage of each socket address (protocol/network address/port) is normally permitted How can i solve this error? Server.cs using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using System.IO; using System.Net; using System.Net.Sockets; using System.Threading; namespace Server { public partial class Server : Form { Thread kanal; public Server() { InitializeComponent(); try { kanal = new Thread(new ThreadStart(Dinle)); kanal.Start(); kanal.Priority = ThreadPriority.Normal; this.Text = "Kanla Çalisti"; } catch (Exception ex) { this.Text = "kanal çalismadi"; MessageBox.Show("hata:" + ex.ToString()); kanal.Abort(); throw; } } private void Server_Load(object sender, EventArgs e) { Dinle(); } private void btn_Listen_Click(object sender, EventArgs e) { Dinle(); } void Dinle() { // IPAddress localAddr = IPAddress.Parse("localhost"); // TcpListener server = new TcpListener(port); // server = new TcpListener(localAddr, port); //TcpListener Dinle = new TcpListener(localAddr,51124); TcpListener Dinle = new TcpListener(51124); try { while (true) { Dinle.Start(); Exception is occured. Socket Baglanti = Dinle.AcceptSocket(); if (!Baglanti.Connected) { MessageBox.Show("Baglanti Yok"); } else { TcpClient tcpClient = Dinle.AcceptTcpClient(); if (tcpClient.ReceiveBufferSize 0) { byte[] Dizi = new byte[250000]; Baglanti.Receive(Dizi, Dizi.Length, 0); string Yol; saveFileDialog1.Title = "Dosyayi kaydet"; saveFileDialog1.ShowDialog(); Yol = saveFileDialog1.FileName; FileStream Dosya = new FileStream(Yol, FileMode.Create); Dosya.Write(Dizi, 0, Dizi.Length - 20); Dosya.Close(); listBox1.Items.Add("dosya indirildi"); listBox1.Items.Add("Dosya Boyutu=" + Dizi.Length.ToString()); listBox1.Items.Add("Indirilme Tarihi=" + DateTime.Now); listBox1.Items.Add("--------------------------------"); } } } } catch (Exception ex) { MessageBox.Show("hata:" + ex.ToString()); } } } }
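
    The SocketException text ("Only one usage of each socket address...") is what you typically get when the same port is bound more than once; in the code above, Dinle() can be started both from the constructor's thread and from Server_Load / btn_Listen_Click, and Dinle.Start() also sits inside the while loop. The usual structure, sketched below in Java since the pattern is the same in any socket API, is to bind and listen exactly once and only accept inside the loop:

        import java.io.IOException;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ListenOnce {
            public static void main(String[] args) throws IOException {
                // Bind port 51124 exactly once; a second ServerSocket on the same port
                // would fail with "Address already in use", the equivalent error in Java.
                try (ServerSocket listener = new ServerSocket(51124)) {
                    while (true) {
                        try (Socket client = listener.accept()) {   // accept many clients, bind once
                            // read the uploaded bytes here, as the C# code does with Receive()
                            client.getInputStream().transferTo(System.out);
                        } catch (IOException e) {
                            System.err.println("client failed: " + e.getMessage());
                        }
                    }
                }
            }
        }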


  • Square Peg Web: Gets you the traffic to where it matters most: Your Website!

    - by demetriusalwyn
    Have you decided to start your business online or is your business not reaching the targeted audience? Come to Square Peg Web; where you will find what you want to make your business reach new heights. The team at Square Peg Web is professionals who understand what you want and make sure you get it right. Our confidence stems from the fact of thousands of satisfied clients who keep referring friends and business associates to us and we do not let our clients down. Many companies promise the sky but how far is does their work live up to the promises? We do not know about the others however, we are sure that we strive to put together all our ideas and thoughts to make your website rank among the top. Web hosting is something that needs to have a personal touch; Square Peg Web customizes everything to suit your requirements so that you do not have to look further. With Square Peg Web you have a host of features to make your Business go viral. Some of the product details that are offered with Square Peg Web are unlimited product options/ variants/ properties giving you an option on price modifiers. You get unlimited customized input fields for your products and you can also Customer-define the prices. Square Peg Web provides you an option of using multiple product images with zoom features and one can also list a particular product in several categories. There are other aspects which make Square Peg Web the best choice for your website needs; every sale of yours’ is important to you and to us. We make sure that each sale is tracked by the product and also the list of bestsellers that appeal to the audience. Other comprehensive statistics of Square Peg Web includes searchable order data, an interface for shipments and order fulfillments, export sales & customer data for usage in a spreadsheet and the ability to export orders to QuickBooks format. With Square Peg Web; Admin Panel is a lot simpler. Administrative access is completely password protected and any changes done are all in real-time. You can have absolute control on the cart from anywhere around the world using your web browser and the topping on the cake is the unlimited amount of admin accounts that can be created for you. Square Peg Web offers you a world of experience with the options of choosing from marketing websites to e-commerce and from customized applications to community oriented sites. Some of the projects which appear in the portfolio of Square Peg Web are Online Marketing Web Sites, E-Commerce Web Sites, customized web applications, Blog designing and programming, video sharing and the option of downloading web sites, online advertisements, flash animation, customer and product support web sites, web site re-designing and planning and complete information architecture.


  • gcc/g++: error when compiling large file

    - by Alexander
    Hi, I have a auto-generated C++ source file, around 40 MB in size. It largely consists of push_back commands for some vectors and string constants that shall be pushed. When I try to compile this file, g++ exits and says that it couldn't reserve enough virtual memory (around 3 GB). Googling this problem, I found that using the command line switches --param ggc-min-expand=0 --param ggc-min-heapsize=4096 may solve the problem. They, however, only seem to work when optimization is turned on. 1) Is this really the solution that I am looking for? 2) Or is there a faster, better (compiling takes ages with these options acitvated) way to do this? Best wishes, Alexander Update: Thanks for all the good ideas. I tried most of them. Using an array instead of several push_back() operations reduced memory usage, but as the file that I was trying to compile was so big, it still crashed, only later. In a way, this behaviour is really interesting, as there is not much to optimize in such a setting -- what does the GCC do behind the scenes that costs so much memory? (I compiled with deactivating all optimizations as well and got the same results) The solution that I switched to now is reading in the original data from a binary object file that I created from the original file using objcopy. This is what I originally did not want to do, because creating the data structures in a higher-level language (in this case Perl) was more convenient than having to do this in C++. However, getting this running under Win32 was more complicated than expected. objcopy seems to generate files in the ELF format, and it seems that some of the problems I had disappeared when I manually set the output format to pe-i386. The symbols in the object file are by standard named after the file name, e.g. converting the file inbuilt_training_data.bin would result in these two symbols: binary_inbuilt_training_data_bin_start and binary_inbuilt_training_data_bin_end. I found some tutorials on the web which claim that these symbols should be declared as extern char _binary_inbuilt_training_data_bin_start;, but this does not seem to be right -- only extern char binary_inbuilt_training_data_bin_start; worked for me.


  • How safe is my safe rethrow?

    - by gustafc
    (Late edit: This question will hopefully be obsolete when Java 7 comes, because of the "final rethrow" feature which seems like it will be added.)

    Quite often, I find myself in situations looking like this:

        do some initialization
        try {
            do some work
        } catch any exception {
            undo initialization
            rethrow exception
        }

    In C# you can do it like this:

        InitializeStuff();
        try {
            DoSomeWork();
        } catch {
            UndoInitialize();
            throw;
        }

    For Java, there's no good substitution, and since the proposal for improved exception handling was cut from Java 7, it looks like it'll take at best several years until we get something like it. Thus, I decided to roll my own: (Edit: Half a year later, final rethrow is back, or so it seems.)

        public final class Rethrow {
            private Rethrow() {
                throw new AssertionError("uninstantiable");
            }

            /** Rethrows t if it is an unchecked exception. */
            public static void unchecked(Throwable t) {
                if (t instanceof Error)
                    throw (Error) t;
                if (t instanceof RuntimeException)
                    throw (RuntimeException) t;
            }

            /** Rethrows t if it is an unchecked exception or an instance of E. */
            public static <E extends Exception> void instanceOrUnchecked(
                    Class<E> exceptionClass, Throwable t) throws E, Error, RuntimeException {
                Rethrow.unchecked(t);
                if (exceptionClass.isInstance(t))
                    throw exceptionClass.cast(t);
            }
        }

    Typical usage:

        public void doStuff() throws SomeException {
            initializeStuff();
            try {
                doSomeWork();
            } catch (Throwable t) {
                undoInitialize();
                Rethrow.instanceOrUnchecked(SomeException.class, t);
                // We shouldn't get past the above line as only unchecked or
                // SomeException exceptions are thrown in the try block, but
                // we don't want to risk swallowing an error, so:
                throw new SomeException("Unexpected exception", t);
            }
        }

        private void doSomeWork() throws SomeException { ... }

    It's a bit wordy, catching Throwable is usually frowned upon, I'm not really happy about using reflection just to rethrow an exception, and I always feel a bit uneasy writing "this will not happen" comments, but in practice it works well (or seems to, at least). What I wonder is: Do I have any flaws in my rethrow helper methods? Some corner cases I've missed? (I know that the Throwable may have been caused by something so severe that my undoInitialize will fail, but that's OK.) Has someone already invented this? I looked at Commons Lang's ExceptionUtils but that does other things.

    Edit: finally is not the droid I'm looking for. I'm only interested in doing stuff when an exception is thrown. Yes, I know catching Throwable is a big no-no, but I think it's the lesser evil here compared to having three catch clauses (for Error, RuntimeException and SomeException, respectively) with identical code. Note that I'm not trying to suppress any errors; the idea is that any exceptions thrown in the try block will continue to bubble up through the call stack as soon as I've rewound a few things.
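
    One commonly cited alternative keeps the undo-on-failure behaviour without catching Throwable at all: record whether the work completed and undo in finally only when it did not. The sketch below reuses the question's own initializeStuff/undoInitialize names and, like the original, runs the cleanup only on the failure path, so the exception (checked or unchecked) propagates untouched and no reflection is needed:

        public void doStuff() throws SomeException {
            initializeStuff();
            boolean succeeded = false;
            try {
                doSomeWork();
                succeeded = true;          // reached only if no exception escaped
            } finally {
                if (!succeeded) {
                    undoInitialize();      // failure path only; the exception keeps propagating
                }
            }
        }

    The trade-off is that the failure path never sees the exception object itself; when that matters (for logging, say), the Rethrow helper above still earns its keep.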


  • Constructor Injection and when to use a Service Locator

    - by Simon
    I'm struggling to understand parts of StructureMap's usage. In particular, in the documentation a statement is made regarding a common anti-pattern, the use of StructureMap as a Service Locator only instead of constructor injection (code samples straight from Structuremap documentation): public ShippingScreenPresenter() { _service = ObjectFactory.GetInstance<IShippingService>(); _repository = ObjectFactory.GetInstance<IRepository>(); } instead of: public ShippingScreenPresenter(IShippingService service, IRepository repository) { _service = service; _repository = repository; } This is fine for a very short object graph, but when dealing with objects many levels deep, does this imply that you should pass down all the dependencies required by the deeper objects right from the top? Surely this breaks encapsulation and exposes too much information about the implementation of deeper objects. Let's say I'm using the Active Record pattern, so my record needs access to a data repository to be able to save and load itself. If this record is loaded inside an object, does that object call ObjectFactory.CreateInstance() and pass it into the active record's constructor? What if that object is inside another object. Does it take the IRepository in as its own parameter from further up? That would expose to the parent object the fact that we're access the data repository at this point, something the outer object probably shouldn't know. public class OuterClass { public OuterClass(IRepository repository) { // Why should I know that ThingThatNeedsRecord needs a repository? // that smells like exposed implementation to me, especially since // ThingThatNeedsRecord doesn't use the repo itself, but passes it // to the record. // Also where do I create repository? Have to instantiate it somewhere // up the chain of objects ThingThatNeedsRecord thing = new ThingThatNeedsRecord(repository); thing.GetAnswer("question"); } } public class ThingThatNeedsRecord { public ThingThatNeedsRecord(IRepository repository) { this.repository = repository; } public string GetAnswer(string someParam) { // create activeRecord(s) and process, returning some result // part of which contains: ActiveRecord record = new ActiveRecord(repository, key); } private IRepository repository; } public class ActiveRecord { public ActiveRecord(IRepository repository) { this.repository = repository; } public ActiveRecord(IRepository repository, int primaryKey); { this.repositry = repository; Load(primaryKey); } public void Save(); private void Load(int primaryKey) { this.primaryKey = primaryKey; // access the database via the repository and set someData } private IRepository repository; private int primaryKey; private string someData; } Any thoughts would be appreciated. Simon


  • Good working habits to observe in project development?

    - by Will Marcouiller
    As my development experience grows, I see fit to stick to best practices from here and there to build somehow my own working practices while observing the conventions, etc. I'm currently working on a project which my goals is to graduate the security access model from an environment's Active Directory to another environment's automatically. I don't know for any of you, but as far as I'm concerned, I meet some real difficulties sticking to only one way, then develop. I mean, I learn something new everyday while visiting SO, and recently wanted to get acquainted with generics. On the other hand, I better know the Façade pattern which proved to be very practical in transactional programming in process systems. This seems to be less practical for desktop application as there are plenty of variables to consider in a desktop application that you don't have to care in transactional programming, as you're playing only with information data. As for my current project, I have: Groups; Organizational Units; Users. Which are all considered an entry in the Active Directory. This points out to be a good candidate for generics, as also approached this way by Bart de Smett's Linq to AD on CodePlex. He has a DirectorySource<T>, and to manage let's say groups, then he instantiate a source with the proper type: var groups = new DirectorySource<Group>(); This seems to be very a good way of doing. Despite, I seem to go from one pattern to another and I don't seem to be able to strictly stick to one. While I'm aware that one must not stay with only one way of doing, since each pattern statisfies certain advantages, while also illustrating disadvantages under some usage conditions, I seem to want to develop with both patterns having a singleton Façade class with the underlying factories which represent the sub systems: GroupsFactory; UsersFactory; OrganizationalUnitsFactory. Each of the factories offers the possible operations for their respective entity (group, user, OU). To make a very long story short, I often have plenty of ideas while developping and this causes me some trouble, as I go from an idea to another feeling completely lost after a while. Yet I understand the advantages and disavantages, I have no trouble choosing from one pattern to another depending on the situation. Nevertheless, when it comes to programming itself, if I'm not part of a team, I feel sometimes like I can't do anything good. That is, because I can't stand not doing something "perfect" the first time. The role I play within the project is both: the project manager and the programmer. I am more comfortable in the project manager role, architectural role, analytical role than the developer's. Has any of you some good habbits to observe in project development? Thanks to you all! =)


  • deleted gen folder, eclipse isn't generating it now :(

    - by LuxuryMode
    I accidentally deleted my gen folder and now, predictably, my resources are all messed up. I just created a gen folder myself and tried to project clean - that didn't work. Tried right-clicking project and going to android tools fix project properties - didn't work. Tried unchecking build automatically...didn't work. cleaned, closed project, closed eclipse, restarted, etc, etc. Nothing is working and I keep seeing this error: gen already exists but is not a source folder. Convert to a source folder or rename it. EDIT - OK was able to generate R.java, but now I'm getting crazy stuff in the console: [2011-06-14 17:06:11 - fastapp] Conversion to Dalvik format failed with error 1 [2011-06-14 17:06:42 - fastapp] Dx trouble processing "java/awt/font/NumericShaper.class": Ill-advised or mistaken usage of a core class (java.* or javax.*) when not building a core library. This is often due to inadvertently including a core library file in your application's project, when using an IDE (such as Eclipse). If you are sure you're not intentionally defining a core class, then this is the most likely explanation of what's going on. However, you might actually be trying to define a class in a core namespace, the source of which you may have taken, for example, from a non-Android virtual machine project. This will most assuredly not work. At a minimum, it jeopardizes the compatibility of your app with future versions of the platform. It is also often of questionable legality. If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine distribution, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message. If you go ahead and use "--core-library" but are in fact building an application, then be forewarned that your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem. If you are legitimately using some code that happens to be in a core package, then the easiest safe alternative you have is to repackage that code. That is, move the classes in question into your own package namespace. This means that they will never be in conflict with core system classes. JarJar is a tool that may help you in this endeavor. If you find that you cannot do this, then that is an indication that the path you are on will ultimately lead to pain, suffering, grief, and lamentation. [2011-06-14 17:06:42 - fastapp] Dx 1 error; aborting [2011-06-14 17:06:42 - fastapp] Conversion to Dalvik format failed with error 1 And eclipse can't resolve the import of my resources import com.me.fastapp.R;


  • Failed to obtain JDBC Driver for MySQL under Tomcat environment

    - by Michael Mao
    Hi all: I've been trying to obtain the Driver class for a JDBC connection to MySQL. The workstation is running Linux, Fedora 10. I have manually set up the classpath variable for Java on the command line like this:

        bash-3.2$ echo $CLASSPATH
        /home/cmao/public_html/jsp/mysql-connector-java-5.1.12-bin.jar

    This shows that I've added the latest MySQL connector jar archive to my CLASSPATH variable. I've created a test JSP page which can be found here. And the source code for this page is:

        <%@page language="java"%>
        <%@page import="java.sql.*"%>
        <%@page import="java.util.*"%>
        <html>
        <head>
            <title>UTS JDBC MySQL connection test page</title>
        </head>
        <body>
        <%
            Connection con = null;
            out.print("Java version is : " + System.getProperty("java.version") + "<br />");
            out.print("Tomcat version is : " + application.getServerInfo() + "<br />");
            out.print("Servlet version is: " + application.getMajorVersion() + "<br />");
            out.print("JSP version is : " + JspFactory.getDefaultFactory().getEngineInfo().getSpecificationVersion() + "<br />");
            //out.print("Java classpath is : " + System.getProperty("java.class.path") + "<br />");
            //out.print("JSP classpath is : " + appliaction.getAttribute("org.apache.catalina.jsp_classpath") + "<br />");
            //out.print("Tomcat classpath is : " + System.getProperty("org.apache.tomcat.common.classpath") + "<br />");
            try {
                Class c = Class.forName("com.mysql.jdbc.Driver");
            } catch(Exception e) {
                out.println("Error! Failed to obtain JDBC driver for MySQL... Missing class \"com.mysql.jdbc.Driver\"<br />");
            }
        %>
        </body>
        </html>

    None of the commented-out lines would work; various JasperExceptions would be thrown. You can check those error pages from the following links: classpath error page, catalina error page, tomcat error page.

    It seems, from my limited knowledge of JSP and servlets, that the Tomcat environment "ignores" my Java CLASSPATH? In which case I cannot configure the MySQL JDBC package to let my servlets (a JSP is but a servlet anyway) work. I am not sure how to fix this issue. Would it be better to use an IDE like Eclipse or NetBeans and create a real Java "web app", so that everything can be "self-configured" through a web.xml configuration file and I can bypass this Tomcat environment restriction? Many thanks for the suggestions in advance.
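
    For what it's worth, Tomcat ignores the shell's CLASSPATH; web application libraries are picked up from WEB-INF/lib (or Tomcat's shared lib directory) instead. Assuming the connector jar has been copied there, a minimal check of the driver and connection, sketched as a standalone Java class rather than a JSP (host, schema and credentials are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class JdbcSmokeTest {
            public static void main(String[] args) throws Exception {
                // Succeeds only if mysql-connector-java-*.jar is on the runtime classpath
                // (for a webapp: WEB-INF/lib), regardless of the shell's $CLASSPATH.
                Class.forName("com.mysql.jdbc.Driver");

                // Placeholder connection details; replace with the real host, schema and credentials.
                try (Connection con = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/test", "user", "password")) {
                    System.out.println("Connected: " + con.getMetaData().getDatabaseProductVersion());
                }
            }
        }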


  • Performance tuning of a Hibernate+Spring+MySQL project operation that stores images uploaded by user

    - by Umar
    Hi, I am working on a web project that is Spring + Hibernate + MySQL based. I am stuck at a point where I have to store images uploaded by a user into the database. Although I have written some code that works well for now, I believe that things will mess up when the project goes live. Here's my domain class that carries the image bytes:

        @Entity
        public class Picture implements java.io.Serializable {
            long id;
            byte[] data;
            ...
            // getters and setters
        }

    And here's my controller that saves the file on submit:

        public class PictureUploadFormController extends AbstractBaseFormController {
            ...
            protected ModelAndView onSubmit(HttpServletRequest request, HttpServletResponse response,
                                            Object command, BindException errors) throws Exception {
                MultipartFile file;
                // getting MultipartFile from the command object
                ...
                // beginning hibernate transaction
                ...
                Picture p = new Picture();
                p.setData(file.getBytes());
                pictureDAO.makePersistent(p); // this method simply calls getSession().saveOrUpdate(p)
                // committing hibernate transaction
                ...
            }
            ...
        }

    Obviously a bad piece of code. Is there any way I could use InputStream or Blob to save the data, instead of first loading all the bytes from the user into memory and then pushing them into the database? I did some research on Hibernate's support for Blob, and found this in the Hibernate in Action book:

    "java.sql.Blob and java.sql.Clob are the most efficient way to handle large objects in Java. Unfortunately, an instance of Blob or Clob is only useable until the JDBC transaction completes. So if your persistent class defines a property of java.sql.Clob or java.sql.Blob (not a good idea anyway), you'll be restricted in how instances of the class may be used. In particular, you won't be able to use instances of that class as detached objects. Furthermore, many JDBC drivers don't feature working support for java.sql.Blob and java.sql.Clob. Therefore, it makes more sense to map large objects using the binary or text mapping type, assuming retrieval of the entire large object into memory isn't a performance killer. Note you can find up-to-date design patterns and tips for large object usage on the Hibernate website, with tricks for particular platforms."

    Now apparently Blob cannot be used, as it is not a good idea anyway; what else could be used to improve the performance? I couldn't find any up-to-date design pattern or any useful information on the Hibernate website. So any help/recommendations from stackoverflowers will be much appreciated. Thanks
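
    If the entity maps the column as java.sql.Blob instead of byte[], Hibernate (3.6 and later) can wrap the upload's InputStream so the bytes are handed to the JDBC driver rather than buffered in the heap first. A minimal sketch under that assumption; the entity and the session handling are simplified placeholders, and whether MySQL's driver truly streams or buffers internally still has to be verified:

        import java.sql.Blob;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.Lob;

        import org.hibernate.Session;
        import org.springframework.web.multipart.MultipartFile;

        @Entity
        class Picture {
            @Id
            long id;

            @Lob
            Blob data;   // mapped as a BLOB column; not held as a byte[] on the entity
        }

        class PictureSaver {
            void save(Session session, MultipartFile file) throws Exception {
                Picture p = new Picture();
                // createBlob(InputStream, length) hands the stream to the JDBC driver
                // instead of copying the whole upload into a byte array first.
                p.data = session.getLobHelper().createBlob(file.getInputStream(), file.getSize());
                session.save(p);
            }
        }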


  • Regular expression does not find the first occurrence

    - by scharan
    I have the following input to a perl script and I wish to get the first occurrence of NAME="..." strings in each of the <TABLE>...</TABLE> structures. The entire file is read into a single string and the reg exp acts on that input. However, the regex always returns the LAST occurrence of NAME="..." strings. Can anyone explain what is going on and how this can be fixed?

    Input file:

        ADSDF
        <TABLE>
        NAME="ORDERSAA"
        line1
        line2
        NAME="ORDERSA"
        line3
        NAME="ORDERSAB"
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSB"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSC"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="ORDERSD"
        line3
        line3
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES2"
        line3
        NAME="QUOTES3"
        NAME="QUOTES4"
        line3
        NAME="QUOTES5"
        line3
        </TABLE>
        <TABLE>
        line1
        line2
        NAME="QUOTES6"
        NAME="QUOTES7"
        NAME="QUOTES8"
        NAME="QUOTES9"
        line3
        line3
        </TABLE>
        <TABLE>
        NAME="MyName IsKhan"
        </TABLE>

    Perl Code starts here:

        use warnings;
        use strict;

        my $nameRegExp = '(<table>((NAME="(.+)")|(.*|\n))*</table>)';

        sub extractNames($$){
            my ($ifh, $ofh) = @_;
            my $fullFile;
            read ($ifh, $fullFile, 1024); #Hardcoded to read just 1024 bytes.
            while( $fullFile =~ m#$nameRegExp#gi){
                print "found: ".$4."\n";
            }
        }

        sub main(){
            if( ($#ARGV + 1 ) != 1){
                die("Usage: extractNames infile\n");
            }
            my $infileName = $ARGV[0];
            my $outfileName = $ARGV[1];
            open my $inFile, "<$infileName" or die("Could not open log file $infileName");
            my $outFile;
            #open my $outFile, ">$outfileName" or die("Could not open log file $outfileName");
            extractNames( $inFile, $outFile );
            close( $inFile );
            #close( $outFile );
        }

        #call
        main();
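
    The underlying issue is not Perl-specific: when a capturing group sits inside a repetition, the group keeps only the text of its final iteration, so $4 ends up holding the last NAME in each table. One fix is to match each <TABLE>...</TABLE> block non-greedily first and then search for the first NAME inside that block; a sketch of the two-step approach in Java's regex API (with a shortened copy of the input inlined):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class FirstNamePerTable {
            public static void main(String[] args) {
                String fullFile = "ADSDF\n"
                        + "<TABLE> NAME=\"ORDERSAA\" line1 NAME=\"ORDERSA\" </TABLE>\n"
                        + "<TABLE> line1 NAME=\"ORDERSB\" line3 </TABLE>\n";

                // Outer pattern: each table block, matched lazily so one match cannot
                // swallow several tables. DOTALL lets '.' cross newlines.
                Pattern table = Pattern.compile("<table>(.*?)</table>",
                        Pattern.CASE_INSENSITIVE | Pattern.DOTALL);
                // Inner pattern: the first NAME="..." inside the current block.
                Pattern name = Pattern.compile("NAME=\"(.*?)\"");

                Matcher tables = table.matcher(fullFile);
                while (tables.find()) {
                    Matcher first = name.matcher(tables.group(1));
                    if (first.find()) {                      // find() stops at the first hit
                        System.out.println("found: " + first.group(1));
                    }
                }
            }
        }

    The same two steps port straight back to Perl: loop with a non-greedy m#<table>(.*?)</table>#sgi over the file, then apply a second non-greedy NAME="(.*?)" match to $1 and take only the first hit.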


