Search Results

Search found 22041 results on 882 pages for 'kill process'.

Page 636/882

  • IIS 7 with PHP 5.2 - Error 500

    - by Razor
    I have a fresh install of IIS 7 - I just added the Web Platform Installer and installed PHP 5.2 through it. However, when trying to access a simple test.php file (it just contains phpinfo()), I get the following list of errors:
    • IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
    • IIS was not able to process configuration for the Web site or application.
    • The authenticated user does not have permission to use this DLL.
    • The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
    Any idea what I'm doing wrong here?

    Read the article

  • Python text file processing speed issues

    - by Anonymouslemming
    Hi all, I'm having a problem with processing a largish file in Python. All I'm doing is:
        f = gzip.open(pathToLog, 'r')
        for line in f:
            counter = counter + 1
            if (counter % 1000000 == 0):
                print counter
        f.close()
    This takes around 10m25s just to open the file, read the lines and increment the counter. In Perl, dealing with the same file and doing quite a bit more (some regular expression stuff), the whole process takes around 1m17s. Perl code:
        open(LOG, "/bin/zcat $logfile |") or die "Cannot read $logfile: $!\n";
        while (<LOG>) {
            if (m/.*\[svc-\w+\].*login result: Successful\.$/) {
                $_ =~ s/some regex here/$1,$2,$3,$4/;
                push @an_array, $_;
            }
        }
        close LOG;
    Can anyone advise what I can do to make the Python solution run at a similar speed to the Perl solution? I've tried just uncompressing the file and dealing with it using open instead of gzip.open, but that made a very small difference to the overall time.
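    Since the Perl version reads from a /bin/zcat pipe rather than decompressing in-process, one thing worth timing from the Python side is the same pipeline. The sketch below is only an illustration of that idea, not the original code; the count_lines name and the progress print are assumptions of my own:
        import subprocess

        def count_lines(path_to_log):
            # Decompress in a separate zcat process, mirroring the Perl open("/bin/zcat $logfile |") approach
            proc = subprocess.Popen(['zcat', path_to_log], stdout=subprocess.PIPE)
            counter = 0
            for line in proc.stdout:
                counter += 1
                if counter % 1000000 == 0:
                    print(counter)
            proc.stdout.close()
            proc.wait()
            return counter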

    Read the article

  • How to avoid writing a lot of repetitive code when mapping?

    - by JPCF
    I have a data access layer (DAL) using Entity Framework, and I want to use AutoMapper to communicate with upper layers. I will have to map data transfer objects (DTOs) to entities as the first operation in every method, process my inputs, then map back from entities to DTOs. What would you do to avoid writing this code in every method? As an example, see this:
        // This is a common method in my DAL
        public CarDTO getCarByOwnerAndCreditStatus(OwnerDTO ownerDto, CreditDTO creditDto)
        {
            // I want to automate this code on all methods similar to this
            Mapper.CreateMap<OwnerDTO, Owner>();
            Mapper.CreateMap<CreditDTO, Credit>();
            Owner owner = Mapper.Map<OwnerDTO, Owner>(ownerDto);
            Credit credit = Mapper.Map<CreditDTO, Credit>(creditDto);

            // ... some code processing the mapped entities (this is where ownedCar comes from) ...

            // I want to automate this code on all methods similar to this
            Mapper.CreateMap<Car, CarDTO>();
            CarDTO carDto = Mapper.Map<Car, CarDTO>(ownedCar);
            return carDto;
        }

    Read the article

  • Oracle: delete suddenly taking a long time

    - by Damo
    Hi, we have a feed process which runs every day of the year. As part of that, we delete every row from a table (approx 1 million rows) every day, repopulate it using 5 different stored procedures and then commit the transaction. This is the only commit statement that we call. All of a sudden the delete has started taking about 2 hours to complete. The delete is also very simple (delete from T_PROFILE_WORK). This has worked perfectly well for the past year, but in the past week I have noticed this issue. Any help on this is greatly appreciated. Thanks, Damien

    Read the article

  • Why does Indy 10's echo server have high CPU usage when the client disconnects?

    - by Virtuo
    When I disconnect the echo client like this: EchoClient1.Disconnect; the client disconnects fine... but the EchoServer does NOT EVEN register the client disconnection, and it ends up with high process usage!? In every example and every doc it says that calling EchoClient.Disconnect is sufficient! Anyone, any idea? (I am working on Win7, could that be a problem?) Server code:
        procedure TForm2.EServerConnect(AContext: TIdContext);
        begin
          SrvMsg.Lines.Add('ECHO Client connected !');
        end;

        procedure TForm2.EServerDisconnect(AContext: TIdContext);
        begin
          SrvMsg.Lines.Add('ECHO Client disconnected !');
        end;
    The problem is that "TForm2.EServerDisconnect" never executes!?

    Read the article

  • How to start an open source software project

    - by harigm
    I have an idea to start an open source project using Java and PHP, and I am sure the software will help many individuals in their daily routine. Can anyone suggest how to start the process? Do we need to register somewhere and submit our idea for approval before we start development? Is there any license we need to get? How do we invite people from the open source development community, if they are interested? Do contributors need to sign any agreement? Once the open source product is stabilized, who will have ownership?

    Read the article

  • Multilanguage Support In C#

    - by Pramodh
    Dear all, I've developed a sample Windows application in C#. How do I make it a multilingual application? For example: one of the message boxes displays "Welcome to sample application". I installed the software on a Chinese OS, but it displays the message in English only. I'm currently using a string table for this, but in the string table I need to create an entry for each message in English and Chinese, which is a time-consuming process. Is there any other way to do this?

    Read the article

  • Add widgets to custom WordPress sidebars on theme activation?

    - by McGirl
    I'm working on a WordPress 3.0 multi-site installation. Each new blog will use the same theme with slight modifications (a custom Thesis install, if it matters). I'm trying to automate the set-up process for each new blog as much as possible. To that end, I'd like to automatically add widgets to my custom sidebars and widget-enabled footers. It'd be even better if the widgets could have pre-set parameters/content that I or the blog owner could then edit in the Widgets panel. I've searched high and low and haven't been able to figure out a way to make this work. Any and all suggestions are welcome. Thanks so much!

    Read the article

  • The cost of finalize in .Net

    - by Jules
    (1) I've read a lot of questions about IDisposable where the answers recommend not using Finalize unless you really need to, because of the processing time involved. What I haven't seen is how much this cost is and how often it's paid: every millisecond? Second? Hour, day, etc.? (2) Also, it seems to me that Finalize is handy when it's not always known whether an object can be disposed. For instance, the framework Font class: a control can't dispose of it because it doesn't know if the font is shared. The font is usually created at design time, so the user won't know to dispose of it; therefore Finalize kicks in to finally get rid of it when there are no references left. Is that a correct impression?

    Read the article

  • Good overview tool / board for visualizing Subversion branch activity?

    - by Sam
    Our team sometimes finds it a bit confusing and time-consuming to figure out which operations have been performed on our different branches in Subversion. For example: when was the Development branch last merged into the trunk? When was this particular tag created, and based on what branch? All of this information can of course be extracted from the Subversion log, but that's always a manual, time-consuming and error-prone process. The simplest solution seems to be a whiteboard with a visualization of all the different branches/tags/trunk in Subversion, with people drawing on it whenever something significant happens. But we're not averse to finding some kind of digital solution as well, stored centrally. Obviously both systems depend on people actually maintaining the model, but you'll always more or less have that. What do you use as best practice for keeping a clear view of all Subversion operations in the current sprint (or beyond)?

    Read the article

  • Perl Regex Mismatch Issue

    - by Russell C.
    This is a really basic regex question, but since I can't seem to figure out why the match is failing in certain circumstances, I figured I'd post it to see if anyone else can point out what I'm missing. I'm trying to pull out the 2 sets of digits from strings of the form:
        12309123098_102938120938120938
        1321312_103810312032123
        123123123_10983094854905490
        38293827_1293120938129308
    I'm using the following code to process each string:
        if($string && $string =~ /^(\d)+_(\d)+$/) {
            if(IsInteger($1) && IsInteger($2)) {
                print "success ('$1','$2')";
            } else {
                print "fail";
            }
        }
    where the IsInteger() function is as follows:
        sub IsInteger {
            my $integer = shift;
            if($integer && $integer =~ /^\d+$/) {
                return 1;
            }
            return;
        }
    This function seems to work most of the time but fails on the following for some reason:
        1287123437_1268098784380
        1287123437_1267589971660
    Any ideas on why these fail while others succeed? Thanks in advance for your help!
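    For what it's worth, the pattern above repeats a one-character group, so each capture keeps only its last repetition. The short sketch below (written in Python rather than the poster's Perl, purely to illustrate the grouping behaviour) contrasts (\d)+ with (\d+):
        import re

        s = "1287123437_1268098784380"

        # Repeating a one-character group: each capture keeps only its last repetition.
        m1 = re.match(r"^(\d)+_(\d)+$", s)
        print(m1.groups())   # ('7', '0')

        # Repeating inside the group: each capture keeps the whole run of digits.
        m2 = re.match(r"^(\d+)_(\d+)$", s)
        print(m2.groups())   # ('1287123437', '1268098784380')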

    Read the article

  • Is referencing a cached selector faster in jQuery than re-running the selector? If so, how much of a difference does it make?

    - by anthonypliu
    Hi, I have this code:
        $('#preview-button').click(...);
        $('#preview-button').slide(...);
        $('#preview-button').whatever(...);
    Is it a better practice to do this:
        var previewButton = $('#preview-button');
        previewButton.click(...);
        previewButton.slide(...);
        previewButton.whatever(...);
    It probably would be better practice to do this for the sake of keeping code clean and modular, BUT does it make a difference performance-wise? Does one take longer to process than the other? Thanks guys.

    Read the article

  • Writing to CSV issue in Spyder

    - by 0003
    I am doing the Kaggle Titanic beginner contest. I generally work in the Spyder IDE, but I came across a weird issue. The expected output is supposed to be 418 rows. When I run the script from the terminal, the output I get is 418 rows (as expected). When I run it in the Spyder IDE, the output is 408 rows, not 418. When I re-run it in the current Python process, it outputs the expected 418 rows. I posted a redacted portion of the code that has all of the relevant bits. Any ideas?
        import csv
        import numpy as np

        csvFile = open("/train.csv", "ra")
        csvFile = csv.reader(csvFile)
        header = csvFile.next()

        testFile = open("/test.csv", "ra")
        testFile = csv.reader(testFile)
        testHeader = testFile.next()

        writeFile = open("/gendermodelDebug.csv", "wb")
        writeFile = csv.writer(writeFile)

        count = 0
        for row in testFile:
            if row[3] == 'male':
                # do something to row
                writeFile.writerow(row)
                count += 1
            elif row[3] == 'female':
                # do something to row
                writeFile.writerow(row)
                count += 1
            else:
                raise ValueError("Did not find a male or female in %s" % row)
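    A common cause of a short output file when a script is re-run inside the same interpreter session (as Spyder does) is that the CSV writer's buffer never gets flushed because the underlying file object is never closed. The following is only a rough sketch of that idea in the same Python 2 style, not the original code; test_handle and write_handle are names of my own choosing. Keeping the file handles in their own variables and letting a with-block close (and therefore flush) them avoids the problem:
        import csv

        with open("/test.csv", "rb") as test_handle, \
             open("/gendermodelDebug.csv", "wb") as write_handle:
            reader = csv.reader(test_handle)
            writer = csv.writer(write_handle)
            next(reader)              # skip the header row
            for row in reader:
                writer.writerow(row)
        # both files are closed, and their buffers flushed, when the with-block exits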

    Read the article

  • Workflow Foundation: Asynchronous operations (lengthy network I/O)

    - by StormianRootSolver
    I have to create an application that will be started a few times per day (it's non-interactive). To operate, it needs LARGE amounts of data from the Internet (megabytes) via a rather slow connection, so the WCF service calls take quite some time. At the same time, it needs to perform local calculations and has a sophisticated initialization process. So, what I want to do is create a workflow that asynchronously fetches the data (which takes a few minutes) while already initializing / calculating locally. Is there a way to accomplish this?

    Read the article

  • Puzzle: find the minimum number of weights

    - by avd
    I came across this question: say you are given two weights, 1 and 3; you can weigh 1, 2 (by 3-1), 3 and 4 (by 3+1). Now find the minimum number of weights needed so that you can weigh anything from 1 to 1000. The answer is 1, 3, 9, 27, ... I want to know how you arrive at such a solution, i.e. powers of 3. What is the thought process? Source: http://classic-puzzles.blogspot.com/search/label/Google%20Interview%20Puzzles Solution: http://classic-puzzles.blogspot.com/2006/12/solution-to-shopkeeper-problem.html
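    As a quick sanity check of the powers-of-3 idea (a sketch of my own, not part of the puzzle or its linked solution): each weight can sit on the same pan as the object, stay off the scale, or sit on the opposite pan, which is exactly a balanced-ternary digit of -1, 0 or +1, so the first four weights 1, 3, 9, 27 already cover every value from 1 to 40:
        from itertools import product

        weights = [1, 3, 9, 27]
        reachable = set()
        for signs in product((-1, 0, 1), repeat=len(weights)):
            # -1: weight on the same pan as the object, 0: not used, +1: weight on the opposite pan
            reachable.add(sum(s * w for s, w in zip(signs, weights)))

        print(all(n in reachable for n in range(1, 41)))   # True: every value 1..40 can be balanced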

    Read the article

  • Import orders file on Magento Enterprise or Community (products with custom options)

    - by wil
    Hello, we need to import an orders file into Magento Enterprise. In our file, products contain custom options. We tried to write an extension, but we have problems importing the custom options: the import of standard products succeeds, but not of products with custom options, because the "info_buyRequest" value is missing in the database. Magento's technical support responded that "the import process currently can't handle importing products with custom options". Magento uses custom options when a product with custom options is ordered on the website. Which functions does Magento use to fill in the "info_buyRequest" and "Product_options" fields when ordering? Have you seen an extension for importing an orders file where products contain custom options? Thanks for your help.

    Read the article

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word processor, should the test read:
    1. Using the mouse, open the File menu
    2. Choose "Open File..." in the File menu
    3. In the open file dialog that appears, navigate to x and double-click the document called y
    or:
    1. Bring up the file open dialog
    2. Open the file y
    Now I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here: if the test steps are too specific, do we risk a) making the testing process too laborious and tedious and, more importantly, b) missing something because we wrote down too specific a path to achieve a goal? Alternatively, if we make it broad, do we depend too much on the whims of the tester at the time and lose crucial testing of paths that are more common for customers/clients?

    Read the article

  • What is the fastest way to compare 2 rows in SQL?

    - by Swoosh
    I have 2 different databases. Upon changing something in the big one (which I don't have access to), I get some rows imported into a similar HUGE table in my database. I have a job checking for records in this table, and if there are any, it executes a stored procedure, processes the data and deletes it from the table. Performance matters (huge amount of data). I would like to know the fastest way to tell whether something has changed, using, let's say, 2 imported rows with 100 columns each. We don't have FKs and don't need them. Chances are that even though I have records in my table, nothing has actually changed. Also, let's say something actually has changed: is it possible, for example, to check for changes only inside the datetime columns? Thanks

    Read the article

  • asynchronous writing and reading of a file

    - by tazim
    Hi, I have two processes. 1) One process redirects the output of some Unix command to a file on the server side; the data is always appended to the file, e.g. find / > tmp.txt 2) Another process opens and reads the same file, stores it in a string and sends the entire string to the client. These two things happen simultaneously. I am using Python. Any suggestions as to possible ways to implement this scenario? Please explain with sample code. Thanks in advance. Tazim.
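    As a rough sketch of one way to approach the reader side (my own illustration, assuming that polling the file is acceptable; follow and send_to_client are hypothetical names, the latter standing in for whatever actually pushes data to the client): keep the file open, read whatever has been appended since the last read, and sleep briefly when nothing new has arrived:
        import time

        def follow(path, send_to_client, poll_interval=1.0):
            # Forward newly appended data from a growing file as it arrives.
            with open(path, 'r') as f:
                while True:
                    chunk = f.read()
                    if chunk:
                        send_to_client(chunk)      # hypothetical callback that pushes data to the client
                    else:
                        time.sleep(poll_interval)  # nothing new yet; wait for the writer to append more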

    Read the article

  • Please help me create an update query

    - by Rajesh Rolen- DotNet Developer
    I have a table which contains 5 columns, and a query requirement: update row no 8 (or id=8) so that its column 2 and column 3 values come from the column 2 and column 3 values of id 9. That is, all values of columns 2 and 3 should be shifted up one row (starting from row no 8), and the last row's columns 2 and 3 will become null. For example, with just 3 rows, the first row is untouched, the second to (N-1)th rows are shifted once, and the Nth row has nulls.
    id   math   science   sst    hindi   english
    1    11     12        13     14      15
    2    21     22        23     24      25
    3    31     32        33     34      35
    The result of the query for id=2 should be:
    id   math   science   sst    hindi   english
    1    11     12        13     14      15
    2    31     32        23     24      25      (values of columns 2, 3 from row 3 shifted to row 2)
    3    null   null      33     34      35
    This process should run for all rows whose id >= 2. Please help me create this update query. I am using MS SQL Server 2005.

    Read the article

  • Google Contacts Data API and PHP

    - by pako
    I'm developing a PHP application to retrieve the list of contacts from a Gmail account. I'm looking for a solution which would enable the user of my application to provide the login and password of their Gmail account in my application (as opposed to being redirected to Google) and then automatically do the retrieval. The retrieval process can run in PHP or in JavaScript (which would then feed the list of contacts back to PHP using Ajax). Is it possible to do that? Which JavaScript API should I use for that? Can someone point me at the right chapter in the Google Contacts Data API documentation?

    Read the article

  • JNI AttachCurrentThread NULLs the jenv

    - by Damg
    Hello all, I'm currently in the process of adding JNI functionality to a legacy Delphi app. In a single-threaded environment everything works fine, but as soon as I move to a multi-threaded environment, things start to get hairy. My problem is that calling JavaVM^.AttachCurrentThread( JavaVM, @JEnv, nil ); returns 0, but leaves the JEnv pointer set to nil. I have no idea why jvm.dll should return a NULL pointer. Is there anything I am missing? Thank you in advance -- damg
    PS:
    • Environment: WinXP + JDK 1.6
    • Using JNI.pas from http://www.pacifier.com/~mmead/jni/delphi/

    Read the article

  • Creating an n-tiered application

    - by aaron
    I am researching the architecture for a project that will be started next year. It is mainly a C# web app, but there will be a service layer so that it can talk to our Facebook/iPhone apps. There are a few long-running processes, which means that I will be creating a Windows service to handle those. I'm thinking of putting the entire app in the Windows service instead of just the long-running processes: ASP - WCF - BLL vs ASP - BLL. I know this will be more scalable, but it is probably overkill, as everything will be running on the same box, even the database. This could change down the road if the server can't handle the traffic like marketing says it will. I don't have access to production hardware, just my crappy testing box and my local machine. Has anyone decided to go down this route? But mostly, what is the best way to test both approaches and get some metrics?

    Read the article

  • better for-loop syntax for detecting empty sequences?

    - by Dmitry Beransky
    Hi, is there a better way to write the following?
        row_counter = 0
        for item in iterable_sequence:
            # do stuff with the item
            row_counter += 1
        if not row_counter:
            # handle the empty-sequence case
    Please keep in mind that I can't use len(iterable_sequence) because 1) not all sequences have known lengths; 2) in some cases calling len() may trigger loading of the sequence's items into memory (as would be the case with SQL query results). The reason I ask is that I'm simply curious whether there is a way to make the above more concise and idiomatic. What I'm looking for is along the lines of:
        for item in sequence:
            # process item
        *else*:
            # handle the empty sequence case
    (assuming "else" here worked only on empty sequences, which I know it doesn't)
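    One common pattern for this (a sketch of my own, not something from the question; process_all and _sentinel are names I made up) is to pull the first item off the iterator with next() and a sentinel, which detects emptiness without len() and without loading the whole sequence into memory:
        from itertools import chain

        _sentinel = object()

        def process_all(iterable_sequence):
            iterator = iter(iterable_sequence)
            first = next(iterator, _sentinel)
            if first is _sentinel:
                # handle the empty-sequence case
                return
            for item in chain([first], iterator):
                pass  # do stuff with the item here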

    Read the article

  • How to have partial incremental synchronizations based on a GUID?

    - by Gonçalo Veiga
    I need to synchronize a SQL Server database to Oracle through an Oracle Transparent Gateway. The synchronization is performed in batches, so I need to get the next set of data from the point where I left off. The problem I'm having is that the only field I have in the source to help me is a GUID. If it were a number, I could just order by it, record the last one processed, and restart the process by fetching the records greater than my recorded number. This won't work with a GUID. Any ideas?

    Read the article
