Search Results

Search found 74197 results on 2968 pages for 'part time'.

  • python filter can't output

    - by Jesse Siu
    I created a Python filter for a log file whose lines look like this:

        Sat Jun 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"
        Sat Jun 2 03:32:13 2012 [pid 12460] [ftp] OK LOGIN: Client "66.249.68.236", anon password "[email protected]"
        Sat Jun 2 03:32:14 2012 [pid 12462] [ftp] OK DOWNLOAD: Client "66.249.68.236", "/pub/10.5524/100001_101000/100022/readme.txt", 451 bytes, 1.39Kbyte/sec

    The script is:

        import time

        lines = []
        f = open("/opt/CLiMB/Storage1/log/vsftp.log")
        line = f.readline()
        lines = [line for line in f]

        def OnlyRecent(line):
            if time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y") < time.time() - (60*60*24*2):
                return True
            return False

        print "\n".join(filter(OnlyRecent, lines))
        f.close()

    But when I run this script it keeps running and doesn't show anything until I stop it. Why doesn't it show the records from the last two days?
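
    A minimal sketch of one possible fix (untested, and it assumes the goal is to print entries from the last two days): time.strptime returns a struct_time, which cannot be meaningfully compared with the float returned by time.time(), so convert it with time.mktime first and compare in the right direction.

        import time

        TWO_DAYS = 60 * 60 * 24 * 2
        cutoff = time.time() - TWO_DAYS

        def is_recent(line):
            # "Sat Jun 2 03:32:13 2012 [pid ...]" -> keep only the part before the first "["
            stamp = line.split("[")[0].strip()
            try:
                entry_time = time.mktime(time.strptime(stamp, "%a %b %d %H:%M:%S %Y"))
            except ValueError:
                return False  # skip lines that do not start with a timestamp
            return entry_time >= cutoff  # keep entries from the last two days

        with open("/opt/CLiMB/Storage1/log/vsftp.log") as f:
            for line in f:
                if is_recent(line):
                    print(line.rstrip())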

  • Separating two graphs based on connectivity and coordinates

    - by martin
    I would like to separate existing vertex and edge data into two or more graphs that are not connected to each other. As an example, imagine two hexagons lying on top of each other but at different Z.

    Hexagon 1 has the vertices A(0,0,1), B(1,0,2), C(2,1,2), D(1,2,1), E(0,2,1), F(-1,2,1), with connectivity A-B, B-C, C-D, D-E, E-F, F-A. This is graph 1, since all of these vertices are connected within this layer.

    Hexagon 2 has the vertices A1(0,0,6), B1(1,0,7), C1(2,1,7), D1(1,2,8), E1(0,2,7), F1(-1,2,6), with connectivity A1-B1, B1-C1, C1-D1, D1-E1, E1-F1, F1-A1. This is graph 2.

    My data comes as a list of vertices and a list of edges that I can build graphs from. I would like to eliminate graph 2 and pass only the vertices and connectivity of graph 1 to the polygon-determination part of my algorithm. My real data contains around 1000 connected polygons as graph 1 and around 100 (much larger in area) polygons as graph 2, and I want to eliminate graph 2. I am programming this in Python. Thanks in advance.
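
    A minimal sketch of one way to do the split (untested; it assumes vertices are hashable labels and edges are pairs of those labels): compute connected components with a breadth-first search, then keep whichever component you want, for example the largest one.

        from collections import defaultdict, deque

        def connected_components(vertices, edges):
            """Group vertices into connected components using BFS."""
            neighbours = defaultdict(set)
            for a, b in edges:
                neighbours[a].add(b)
                neighbours[b].add(a)

            unseen = set(vertices)
            components = []
            while unseen:
                start = unseen.pop()
                queue = deque([start])
                component = {start}
                while queue:
                    v = queue.popleft()
                    for n in neighbours[v]:
                        if n in unseen:
                            unseen.discard(n)
                            component.add(n)
                            queue.append(n)
                components.append(component)
            return components

        # Example with the two hexagons from the question:
        vertices = ["A", "B", "C", "D", "E", "F", "A1", "B1", "C1", "D1", "E1", "F1"]
        edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("F", "A"),
                 ("A1", "B1"), ("B1", "C1"), ("C1", "D1"), ("D1", "E1"), ("E1", "F1"), ("F1", "A1")]

        components = connected_components(vertices, edges)
        graph1 = max(components, key=len)  # or select by whatever criterion separates the groups
        graph1_edges = [e for e in edges if e[0] in graph1]

    If an extra dependency is acceptable, networkx.connected_components does the same job.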

  • Setting timeout for embedded Lua

    - by skyeagle
    I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take a practically infinite time to complete). Basically, I want to be able to set a time interval, and if the script has not finished running by the end of that interval, I want to be able to kill the Lua script engine (gracefully, if possible). Does anyone know of a best-practice way to do this?

  • How do I find the install directory of a Windows Service, using C#?

    - by endian
    I'm pretty sure that a Windows service gets C:\winnt (or similar) as its working directory when installed using InstallUtil.exe. Is there any way I can access, or otherwise capture (at install time), the directory from which the service was originally installed? At the moment I'm manually entering that into the app.exe.config file, but that's horribly manual and feels like a hack. Is there a programmatic way, either at run time or install time, to determine where the service was installed from?

  • Problem with WordPress-like delete rows in jQuery

    - by Mac Taylor
    Hey guys, I'm trying to implement a technique to delete stories with a jQuery animation, exactly like WordPress does. This is my script part:

        $(function(){
            $('#jqdelete').click(function() {
                $(this).parents('tr.box')
                    .animate({ backgroundColor: '#cb5555' }, 500)
                    .animate({ height: 0, paddingTop: 0, paddingBottom: 0 }, 500, function() {
                        $(this).css({ 'display' : 'none' });
                    });
            });
        });

    And this is my HTML part:

        <tr class='box'>
            <td><a href=\"edit.php&pid=$pid\">$title</font></a></td>
            <td>$eki</td>
            <td><a href='javascript:void(0)' id='jqdelete'><img src=\"images/delete.gif\"></a></td>
        </tr>

    But it's not working. Am I wrong in any part of my code? Note: if I use div instead of tr and td it works fine, but I want to use this effect on my table rows. Thanks.

  • Why can't I rename a data frame column inside a list?

    - by Moreno Garcia
    I would like to rename some columns from CPU_Usage to the process name before I merge the dataframes in order to make it more legible.

        names(byProcess[[1]])
        # [1] "Time" "CPU_Usage"
        names(byProcess[1])
        # [1] "CcmExec_3344"
        names(byProcess[[1]][2]) <- names(byProcess[1])
        names(byProcess[[1]][2])
        # [1] "CPU_Usage"
        names(byProcess[[1]][2]) <- 'test'
        names(byProcess[[1]][2])
        # [1] "CPU_Usage"
        lapply(byProcess, names)
        # $CcmExec_3344
        # [1] "Time" "CPU_Usage"
        #
        # ... (removed several entries to make it more readable)
        #
        # $wrapper_1604
        # [1] "Time" "CPU_Usage"

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ... // other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?

  • Anything wrong with my cURL code (http status of 0)?

    - by Ilya
    I'm consistently getting a status of 0, even though if I copy and paste the URL being sent into my browser, I get a JSON object right back.

        <?php
        $mainUrl = "https://api.xxxx.com/?";
        $co = "xxxxx";
        $pa = "xxxx";
        $par = "xxxx";
        $part = "xxxx";
        $partn = "xxxx";
        $us = "xxx";
        $fields_string;
        $fields = array(
            'co' => urlencode($co),
            'pa' => urlencode($pa),
            'par' => urlencode($par),
            'part' => urlencode($part),
            'partn' => urlencode($partn),
            'us' => urlencode($us)
        );
        foreach ($fields as $key => $value) {
            $fields_string .= $key . '=' . $value . '&';
        }
        $fields_string = rtrim($fields_string, "&");
        $fields_string = "?" . $fields_string;
        $url = "https://api.xxxxx.com/" . $fields_string;
        $request = $url;
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_TIMEOUT, '3');
        $content = trim(curl_exec($ch));
        $http_status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        print $url;
        print $http_status;
        print $content;
        ?>

  • Selecting row in SSMS causes Entity Framework 4 to Fail

    - by Eric J.
    I have a simple Entity Framework 4 unit test that creates a new record, saves it, attempts to find it, then deletes it. All works great, unless... I open up SQL Server Management Studio while stopped at a breakpoint in the unit test and execute a SELECT statement that returns the row I just created (not SELECT FOR UPDATE, not WITH (updlock), no transaction, just a plain SELECT).

    If I do that before attempting to find the row I just created, I don't find the row. If I instead do that after finding the row but before deleting the row, I do find the row but get an OptimisticConcurrencyException. This is consistently repeatable. Unit test:

        [TestMethod()]
        public void CreateFindDeleteActiveParticipantsTest()
        {
            // Setup this test
            Participant utPart = CreateUTParticipant();
            ctx.Participants.AddObject(utPart);
            ctx.SaveChanges();

            // External SELECT Point #1:
            //   part is null

            // Find participant
            Participant part = ParticipantRepository.Find(UT_SURVEY_ID, UT_TOKEN);
            Assert.IsNotNull(part, "Expected to find a participant");

            // External SELECT Point #2:
            //   SaveChanges throws OptimisticConcurrencyException

            // Cleanup this test
            ctx.Participants.DeleteObject(utPart);
            ctx.SaveChanges();
        }

  • Write/read count to txt file

    - by Brian
    Hi, I need a batch file that writes a count number to a txt file. The next time the batch file is run, it should read the current count from the txt file, add 1 to it, and save the new value back to the txt file (nothing else is in the txt file). When the count is 5 it should start from 1 again.

    Example:
    Count.bat runs the 1st time: count.txt has no count yet, so Count.bat saves the value 1 in count.txt.
    Count.bat runs the 2nd time: Count.bat reads 1 from count.txt and saves the new value 2 to count.txt.
    When Count.bat is run for the 6th time, it should start over by saving the value 1 in count.txt.

    I think this should be easy to do, but I'm not used to batch commands, so hopefully someone here can help me.
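
    The question asks for a batch file; purely to pin down the read/increment/wrap logic, here is the same behaviour sketched in Python (count.txt is assumed to contain nothing but the number):

        import os

        COUNT_FILE = "count.txt"

        # Read the previous count, if any.
        count = 0
        if os.path.exists(COUNT_FILE):
            with open(COUNT_FILE) as f:
                count = int(f.read().strip() or 0)

        # Increment, wrapping back to 1 after 5.
        count = 1 if count >= 5 else count + 1

        with open(COUNT_FILE, "w") as f:
            f.write(str(count))

        print(count)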

  • cleartool question

    - by chuanose
    Let's say I have a directory at \testfolder, and the latest is currently at /main/10. I know that the operation resulting in testfolder@@/main/6 was to remove a file named test.txt. What's a sequence of cleartool operations that can be done in a script that will take "testfolder@@/main/6" and "test.txt" as input, and will cat out the contents of test.txt as of that time? One way I can think of is to get the time of the /main/6 operation, create a view with the config spec -time set to that time, and then cat test.txt in that directory. But I'm wondering if I can do this in an easier way that doesn't involve manipulating config specs, perhaps through "cleartool find" and extended path names.

  • load balance timeout SQL connection string

    - by george9170
    It seems that if there is a SQL memory leak somewhere and you don't have time to find it, you can use the load balance timeout option in a SQL connection string to destroy the connection after x seconds. Am I right in assuming I can set the load balance timeout to 30-40 seconds and then hunt for the leak later, while in the meantime the leak will not affect my application too much?

  • updating a column in a table only if after the update it won't be negative and identifying all updat

    - by Azeem
    Hello all, I need some help with a SQL query. Here is what I need to do; I'm lost on a few aspects as outlined below. I have four relevant tables:

    Table A has the price per unit for all resources (I can look up the price using a resource id).
    Table B has the funds available to a given user.
    Table C has the resource production information for a given user (including the number of units to produce every day).
    Table D has the number of units ever produced by any given user (identified by user id and resource id).

    Having said that, I need to run a batch job on a nightly basis to do the following:

    a. For all users, identify whether they have the funds needed to produce the number of resources specified in table C, and deduct the funds from table B if they are available (calculating the cost using table A).
    b. Start the process to produce resources and, after the resource production is complete, update table D using values from table C.

    I figured the second part can be done using an UPDATE with a subquery. However, I'm not sure how I should go about doing part a. I can only think of using a cursor to fetch each row, examine it, and update. Is there a single SQL statement that will help me avoid having to process each row manually? Additionally, if any rows weren't updated, the part b SQL should not produce resources for that user. Basically, I'm attempting to change the SQL for this logic, which currently lives in a stored procedure, into something that will run a lot faster (and won't process each row separately). Please let me know any ideas and thoughts. Thanks! - Azeem

  • Stopping long-running requests in Pylons

    - by Jack
    I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
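
    One possible approach, sketched in generic Python rather than anything Pylons-specific (concurrent.futures needs Python 3.2+ or the futures backport; expensive_calculation and the controller usage are illustrative names): run the expensive calculation in a worker process and give up on it after a time budget, returning an error to the client instead. Note that abandoning the future does not kill a calculation that has already started; for that you would need to terminate the worker process itself.

        from concurrent.futures import ProcessPoolExecutor, TimeoutError

        executor = ProcessPoolExecutor(max_workers=4)  # shared pool for heavy calculations

        def run_with_timeout(func, args=(), timeout=10.0):
            """Run func(*args) in a worker process; raise TimeoutError if it exceeds the budget."""
            future = executor.submit(func, *args)
            return future.result(timeout=timeout)

        # Illustrative use inside a controller action:
        # try:
        #     result = run_with_timeout(expensive_calculation, (params,), timeout=30)
        # except TimeoutError:
        #     abort(503, "Calculation took too long")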

  • Problem with a loop in MATLAB

    - by Jessy
    My txt file looks like this:

        no  time  scores
        1   10    123
        2   11    22
        3   12    22
        4   50    55
        5   60    22
        6   70    66
        .   .     .
        .   .     .
        n   n     n

    Above is the content of my txt file (thousands of lines). The 1st column is the sample number, the 2nd column is the time (accumulated from beginning to end), and the 3rd column is the score. I want to create a new file which contains, for every three samples, the total of the scores divided by the time difference within those samples, e.g.:

        (123+22+22) / (12-10) = 167/2 = 83.5
        (55+22+66) / (70-50) = 143/20 = 7.15

    New txt file:

        83.5
        7.15
        .
        .
        n

    So far I have this code:

        fid = fopen('data.txt')
        data = textscan(fid, '%*d %d %d')
        time = (data{1})
        score = (data{2})
        for sample = 1:length(score)
            ..... // I'm stuck here ..
        end
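
    The question asks for MATLAB, so this is only a sketch of the grouping arithmetic in Python/NumPy (assuming whitespace-separated columns in data.txt with the one-line header shown above): reshape the columns into rows of three, sum the scores per row, and divide by the time difference between the first and last sample of that row. The same reshape-and-sum idea carries over to MATLAB.

        import numpy as np

        # Columns: sample number, accumulated time, score; skiprows=1 skips the header line.
        data = np.loadtxt("data.txt", skiprows=1)
        time_col, score = data[:, 1], data[:, 2]

        n_groups = len(score) // 3  # any trailing partial group is ignored
        t = time_col[:n_groups * 3].reshape(n_groups, 3)
        s = score[:n_groups * 3].reshape(n_groups, 3)

        result = s.sum(axis=1) / (t[:, 2] - t[:, 0])  # group total / time span within the group
        np.savetxt("result.txt", result, fmt="%.2f")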

  • Best way to do one-to-many "JOIN" in CouchDB

    - by mit
    There are CouchDB documents that are list elements:

        { "type" : "el", "id" : "1", "content" : "first" }
        { "type" : "el", "id" : "2", "content" : "second" }
        { "type" : "el", "id" : "3", "content" : "third" }

    There is one document that defines the list:

        { "type" : "list", "elements" : ["2","1"], "id" : "abc123" }

    As you can see, the third element was deleted; it is no longer part of the list, so it must not be part of the result. Now I want a view that returns the content elements including the right order. The result could be:

        { "content" : ["second", "first"] }

    In this case the order of the elements is already as it should be. Another possible result:

        { "content" : [{"content" : "first", "order" : 2}, {"content" : "second", "order" : 1}] }

    I started writing the map function:

        map = function (doc) {
            if (doc.type === 'el') {
                emit(doc.id, {"content" : doc.content}); // emit the id and the content
                exit;
            }
            if (doc.type === 'list') {
                for (var i = 0, l = doc.elements.length; i < l; ++i) {
                    emit(doc.elements[i], { "order" : i }); // emit the id and the order
                }
            }
        }

    This is as far as I can get. Can you correct my mistakes and write a reduce function? Remember that the third document must not be part of the result. Of course you can write a different map function as well, but the structure of the documents (one defining "list" document and an entry document for each element) cannot be changed.

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

    - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
    - the 'generic' team developed the parts that were common/generic for all applications (user-interface related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy:

    - the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team
    - the 'generic' team's focus seems to have become more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed there; instead they continuously have to support all the functionality donated by the application teams.

    All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...).

    How do you split up the work between different teams if you have different applications?

    - everybody can write generic code and donates it to a central 'generic' team?
    - everybody can write generic code, but nobody 'manages' this generic code (everybody is the owner)?
    - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
    - there is no overlap in code between the different applications?
    - some other way?

    Notice that the advantage of the mixed approach (allowing everybody to write everywhere in the code) is that:

    - code is written in a more flexible way
    - it's easier to debug the code since you can easily step into the 'generic' code in the debugger

    But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if there is no clear team that manages it anymore. What is your vision?

  • http connection timeout issues

    - by Mark
    I'm running into an issue when I try to use HttpClient to connect to a URL. The HTTP connection takes longer to time out than it should, even after I set a connection timeout:

        int timeoutConnection = 5000;
        HttpConnectionParams.setConnectionTimeout(httpParameters, timeoutConnection);
        int timeoutSocket = 5000;
        HttpConnectionParams.setSoTimeout(httpParameters, timeoutSocket);

    It works perfectly most of the time. However, every once in a while the HTTP connection runs forever and ignores the setConnectionTimeout, especially when the phone is connected to wifi and the phone was idling. After the phone has been idling, the first time I try to connect the HTTP connection ignores the setConnectionTimeout and runs forever; after I cancel it and try again, it works like a charm every time. But the one time it doesn't work it creates a thread timeout error. I tried using a different thread, which works, but I know that the thread keeps running for a long time. I understand that wifi goes to sleep on idle, but I don't understand why it's ignoring the setConnectionTimeout. If anyone can help, I'd really appreciate it.

  • Makefile rule that depends on a change in the number of files instead of a change in file contents

    - by goathens
    I'm using a makefile to automate some document generation. I have several documents in a directory, and one of my makefile rules will generate an index page of those files. The list of files itself is loaded on the fly using

        list := $(shell ls documents/*.txt)

    so I don't have to manually edit the makefile every time I add a document. Naturally, I want the index-generation rule to trigger when the number/titles of files in the documents directory change, but I don't know how to set up the prerequisites to work this way. I could use .PHONY or something similar to force the index generation to run every time, but I'd rather not waste the cycles. I tried piping ls to a file, list.txt, and using that as a prerequisite for my index-generation rule, but that would require either editing list.txt manually (trying to avoid that) or auto-generating it in the makefile (this changes the creation time, so I can't use list.txt as the prerequisite because it would trigger the rule every time).
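
    One common workaround, sketched here as a small Python helper rather than in make syntax (file names are taken from the question): regenerate the file list on every make run, but only overwrite list.txt when its contents actually changed. With list.txt as the prerequisite of the index rule, the rule then fires only when a document is added or removed.

        #!/usr/bin/env python
        """Regenerate list.txt, but only rewrite it when the set of documents changed,
        so its mtime (and any make rule that depends on it) only updates when a
        document is added or removed."""
        import glob
        import os

        new_list = "\n".join(sorted(glob.glob("documents/*.txt"))) + "\n"

        old_list = None
        if os.path.exists("list.txt"):
            with open("list.txt") as f:
                old_list = f.read()

        if new_list != old_list:
            with open("list.txt", "w") as f:
                f.write(new_list)

    The same effect is often achieved directly in a makefile by writing the listing to a temporary file and only moving it over list.txt when cmp reports a difference.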

  • Threaded Python port scanner

    - by Amnite
    I am having issues with a port scanner I'm editing to use threads. This is the basics of the original code:

        for i in range(0, 2000):
            s = socket(AF_INET, SOCK_STREAM)
            result = s.connect_ex((TargetIP, i))
            if(result == 0) :
                c = "Port %d: OPEN\n" % (i,)
            s.close()

    This takes approx 33 minutes to complete. So I thought I'd thread it to make it run a little faster. This is my first threading project, so it's nothing too extreme, but I've run the following code for about an hour and get no exceptions yet no output. Am I just doing the threading wrong, or what?

        import threading
        from socket import *
        import time

        a = 0
        b = 0
        c = ""
        d = ""

        def ScanLow():
            global a
            global c
            for i in range(0, 1000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0) :
                    c = "Port %d: OPEN\n" % (i,)
                s.close()
                a += 1

        def ScanHigh():
            global b
            global d
            for i in range(1001, 2000):
                s = socket(AF_INET, SOCK_STREAM)
                result = s.connect_ex((TargetIP, i))
                if(result == 0) :
                    d = "Port %d: OPEN\n" % (i,)
                s.close()
                b += 1

        Target = raw_input("Enter Host To Scan:")
        TargetIP = gethostbyname(Target)
        print "Start Scan On Host ", TargetIP
        Start = time.time()

        threading.Thread(target = ScanLow).start()
        threading.Thread(target = ScanHigh).start()

        e = a + b
        while e < 2000:
            f = raw_input()

        End = time.time() - Start
        print c
        print d
        print End
        g = raw_input()
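
    For what it's worth, the while loop in the code above never exits because e = a + b is evaluated once, before either thread has finished, and raw_input() then blocks waiting for keyboard input. A minimal sketch of an alternative structure (Python 2 to match the original; the 1-second timeout and 50 threads are arbitrary choices): let worker threads pull ports from a shared Queue, collect the open ones, and join the threads before printing.

        import threading
        from Queue import Queue, Empty
        from socket import socket, AF_INET, SOCK_STREAM, gethostbyname

        def worker(target_ip, ports, open_ports):
            while True:
                try:
                    port = ports.get_nowait()
                except Empty:
                    return
                s = socket(AF_INET, SOCK_STREAM)
                s.settimeout(1.0)  # don't hang on filtered ports
                if s.connect_ex((target_ip, port)) == 0:
                    open_ports.append(port)  # list.append is thread-safe in CPython
                s.close()

        target_ip = gethostbyname(raw_input("Enter Host To Scan: "))
        ports = Queue()
        for p in range(0, 2000):
            ports.put(p)

        open_ports = []
        threads = [threading.Thread(target=worker, args=(target_ip, ports, open_ports))
                   for _ in range(50)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        for p in sorted(open_ports):
            print "Port %d: OPEN" % p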

  • Simple php statement, I can't get it to work!

    - by eberswine
    How do I get $schedule = true and $schedule2 = true to work? I know this is easy and I'm overlooking something simple!

        else {
            if ($schedule) {
                $where[] = 'date > ' . (time() + $config_date_adjust * 60 - 432000);
            } else {
                $where[] = 'date < ' . (time() + $config_date_adjust * 60);
            }
            $schedule = false;
        }
        else {
            if ($schedule2) {
                $where[] = 'date > ' . (time() + $config_date_adjust * 60 - 86400);
            } else {
                $where[] = 'date < ' . (time() + $config_date_adjust * 60);
            }
            $schedule2 = false;
        }
