Search Results

Search found 66916 results on 2677 pages for 'real time strategy'.


  • How to make the Eclipse IDE build faster

    - by Solitaire
    Hi all, I am using the Eclipse IDE for development, and the IDE is taking too much time to build. It hangs when the build percentage reaches 78, and it shows "refreshing workspace" several times, which eats up lots of time. Please tell me how to disable the unwanted "refreshing workspace" and other time-consuming activities, and make the build faster. Thanks

    Read the article

  • Io exception: There is no process to read data written to a pipe.

    - by Srikanth
    I'm using Hibernate 3.2 + WebSphere 6.0 + Struts 1.3. After deploying, the application works fine, but after some idle time I get this error repeatedly and am not able to log in at all. I'm not using any connection pooling. I suspect that after the idle time it is not able to connect to the database again; if I restart the server, everything works fine for some time, and then it's the same story. Please help me out.

    Read the article

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, it's usually NULL for InnoDB tables. In MySQL 4.1, the ctime for an InnoDB table in its SHOW TABLE STATUS row was usually its actual last update time, but that doesn't seem to be true in MySQL 5.1.

    There is a DATETIME field in the table, but it only shows when a row has been modified; it cannot show the deletion time of a row that's not there anymore! So I really cannot use MAX(update_time).

    Here's the really tricky part: I have a number of replicas that I do reads from. Can I determine a version of the table's state that doesn't rely on when the changes have actually been applied?

    My conclusion after working on this for a while is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time that I expect the table to change (it's updated once a day), and let the query cache help out where it can.
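
    A minimal sketch of that fallback plan, assuming the python-memcached client and a hypothetical expensive_query() helper: since the table changes once a day at a roughly known time, the cache entry can simply be set to expire at the next expected update, sidestepping the last-changed-time problem entirely:

        import time
        import memcache

        mc = memcache.Client(["127.0.0.1:11211"])

        def cached_results(key, next_update_epoch):
            # Serve from memcached until the expected daily table update.
            results = mc.get(key)
            if results is None:
                results = expensive_query()  # hypothetical heavy query/processing
                # An absolute Unix timestamp is a valid memcached expiry value.
                mc.set(key, results, time=int(next_update_epoch))
            return results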

    Read the article

  • "|" pipe operator not working in command line in C++

    - by user332024
    I have a Windows application that interacts with a DB2 database. In my application I have code that executes some DB2 commands through the command line, using the Windows API function ShellExecuteEx(). The following is the code written to execute a DB2 command through the command line:

        string command = "/c /w /i DB2 UNCATALOG NODE DB_DATABASE \"\" >> test.log | echo %date% %time% >> test.log";

        SHELLEXECUTEINFO shellInfo;
        ZeroMemory(&shellInfo, sizeof(shellInfo));
        shellInfo.cbSize = sizeof(shellInfo);
        shellInfo.fMask = SEE_MASK_FLAG_NO_UI | SEE_MASK_NOCLOSEPROCESS;
        shellInfo.lpFile = "db2cmd";
        shellInfo.lpParameters = command.c_str();

    The code executes successfully; however, if test.log is inspected, I only get the result of the DB2 command and not the date and time. As you can see, the command uses the "|" pipe operator and echo to log the date and time to test.log. Please note that if I execute the same DB2 command separately from the command line, i.e. not through code, I am able to see the date and time logged along with the DB2 command result in test.log. This is the full command I executed from the command line:

        DB2CMD /c /i /w DB2 UNCATALOG NODE DB_DATABASE "" >> test.log | echo %date% %time% >> test.log

    Since the DB2 command itself executes successfully through code, it seems the problem is only with the usage of the "|" pipe operator or the echo command.

    Read the article

  • Python implementation of avro slow?

    - by lazy1
    I'm reading some data from an Avro file using the avro library. It takes about a minute to load 33K objects from the file. This seems very slow to me, especially since the Java version reads the same file in about 1 second. Here is the code; am I doing something wrong?

        import avro.datafile
        import avro.io
        from time import time

        def load(filename):
            fo = open(filename, "rb")
            reader = avro.datafile.DataFileReader(fo, avro.io.DatumReader())
            for i, record in enumerate(reader):
                pass
            return i + 1

        def main(argv=None):
            import sys
            from argparse import ArgumentParser

            argv = argv or sys.argv
            parser = ArgumentParser(description="Read avro file")

            start = time()
            num_records = load("events.avro")
            end = time()

            print("{0} records in {1} seconds".format(num_records, end - start))

        if __name__ == "__main__":
            main()
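
    One possible remedy, offered as a suggestion rather than something from the question, assuming the third-party fastavro package is an acceptable dependency: its compiled reader usually processes the same container files far faster than the pure-Python avro library:

        from time import time
        from fastavro import reader  # assumes fastavro is installed

        def load(filename):
            count = 0
            # fastavro.reader iterates the records of an Avro container file.
            with open(filename, "rb") as fo:
                for record in reader(fo):
                    count += 1
            return count

        start = time()
        num_records = load("events.avro")
        print("{0} records in {1} seconds".format(num_records, time() - start))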

    Read the article

  • Easy Python input question

    - by Josh K
    I'd like to have something similar to the following pseudocode:

        while input is not None and timer < 5:
            input = getChar()
            timer = time.time() - start

        if timer >= 5:
            print "took too long"
        else:
            print input

    Is there any way to do this without threading? I would like an input method that returns whatever has been entered since the last time it was called, or None (null) if nothing was entered.
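
    One threading-free sketch, assuming a Unix-like platform (select() does not work on console handles on Windows): poll stdin with a timeout instead of blocking on it:

        import select
        import sys

        def read_with_timeout(timeout=5):
            # Wait until stdin has data, or give up after `timeout` seconds.
            ready, _, _ = select.select([sys.stdin], [], [], timeout)
            if ready:
                return sys.stdin.readline().rstrip("\n")
            return None  # nothing was entered in time

        line = read_with_timeout(5)
        print("took too long" if line is None else line)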

    Read the article

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following two procedures that I use as the source for a report. As of now, I'm presenting two different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together, as they belong to two different datasets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that?

    NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog.

        /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/
        /*** The detail/row values are per 30-minute interval ***/
        ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [cName]
              ,sum([nAnswered]) SumNAnswered
              ,sum([nAnsweredAcd]) SumNAnsweredAcd
              ,sum([tTalkAcd]) SumTTalkAcd
              ,sum([nHoldAcd]) SumNHoldAcd
              ,sum([tHoldAcd]) SumTHoldAcd
              ,sum([tAcw]) SumTAcw
        FROM [I3_IC].[dbo].[IAgentQueueStats]
        WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
            AND CHARINDEX(cName, @p_Agents) > 0
            AND cReportGroup <> '*'
            AND cHKey3 = '*' and cHKey4 = '*'
            AND nEnteredAcd > 0
            AND cReportGroup <> 'CCFax Email'
        GROUP BY cName

    And here is the second one:

        /*** Aggregate values for Call Center Agents for status/activity time ***/
        /*** The detail/row values are per start-time/end-time ***/
        ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [UserId], [StatusCategory], SUM([StateDuration]) [StatusDuration]
        FROM (
            SELECT [UserId]
                  ,[StatusGroup]
                  ,[StatusKey]
                  ,CASE [StatusKey]
                       WHEN 'Available'             THEN 'Productive'
                       WHEN 'Follow Up'             THEN 'Productive'
                       WHEN 'Campaign Call'         THEN 'Productive'
                       WHEN 'Awaiting Callback'     THEN 'Productive'
                       WHEN 'In a Meeting'          THEN 'Not Your Fault'
                       WHEN 'Project Work'          THEN 'Not Your Fault'
                       WHEN 'At a Training Session' THEN 'Not Your Fault'
                       WHEN 'System Issues'         THEN 'Not Your Fault'
                       WHEN 'Test'                  THEN 'Not Your Fault'
                       WHEN 'At Lunch'              THEN 'Non Productive'
                       WHEN 'Available, Forward'    THEN 'Non Productive'
                       WHEN 'Available, Follow-Me'  THEN 'Non Productive'
                       WHEN 'At Play'               THEN 'Non Productive'
                       WHEN 'AcdAgentNotAnswering'  THEN 'Non Productive'
                       WHEN 'Do Not Disturb'        THEN 'Non Productive'
                       WHEN 'Available, No ACD'     THEN 'Non Productive'
                       WHEN 'Away from desk'        THEN 'Non Productive'
                       ELSE [StatusKey]
                   END StatusCategory
                  ,stateduration
            FROM [I3_IC].[dbo].[AgentActivityLog]
            WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
                AND CHARINDEX([UserId], @p_Agents) > 0
                AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town')
        ) a
        GROUP BY [UserId], [StatusCategory]
        ORDER BY [UserId], [StatusCategory] desc

    BTW, if I take some time to comment/reply on your posts, it's not for lack of interest, but of understanding...

    Read the article

  • How to Implement Dynamic Timestamp in Web Page?

    - by Morgan Cheng
    On Facebook and Twitter, we can see that there is a timestamp like "23 seconds ago" or "1 hour ago" for each event and tweet. If we leave the page open for some time, the timestamp changes accordingly. Since it is possible that the user's machine doesn't have the same system time as the server machine, how can the dynamic timestamp be made accurate?

    My idea is to always base it on server time: when the request for the page is sent to the server, a timestamp T1 (seconds since 1970/1/1) is rendered into an inline JavaScript variable, and the displayed timestamp ("23 seconds ago") is calculated from T1 instead of the local time. I'm not sure whether this is how Facebook/Twitter do it. Is there any better idea?
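
    For the rendering half of that idea, here is a minimal sketch of a relative-timestamp formatter, written in Python as a server-side helper (the same arithmetic applies to a client-side script working from the rendered T1):

        import time

        def relative_time(event_epoch, now_epoch=None):
            # Format the age of an event as "23 seconds ago", "1 hour ago", ...
            now = now_epoch if now_epoch is not None else time.time()
            delta = max(int(now - event_epoch), 0)
            for unit_seconds, name in ((86400, "day"), (3600, "hour"),
                                       (60, "minute"), (1, "second")):
                if delta >= unit_seconds:
                    n = delta // unit_seconds
                    return "{0} {1}{2} ago".format(n, name, "s" if n != 1 else "")
            return "just now"

        print(relative_time(time.time() - 23))    # "23 seconds ago"
        print(relative_time(time.time() - 3600))  # "1 hour ago"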

    Read the article

  • Why can I not send more than one request?

    - by Doug
        function stateChanged(idname) {
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    document.getElementById(idname).value = xmlhttp.responseText;
                }
            }
        }

        function openSend(php, idname) {
            stateChanged(idname);
            xmlhttp.open("GET", php, true);
            xmlhttp.send();
        }

        function showHint() {
            // Note: all three requests share this single global xmlhttp object.
            if (window.XMLHttpRequest) {
                xmlhttp = new XMLHttpRequest();
            } else {
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            openSend("time.php", "Time");
            openSend("date1.php", "Date1");
            openSend("date2.php", "Date2");
            return;
        }

    These two say "aborted" (in Firebug) and don't return a value. Why is that? Is it because I can't send more than one request?

        openSend("time.php", "Time");
        openSend("date1.php", "Date1");

    If I can't, how could I achieve 3 requests with only one invocation?

    Read the article

  • Randomly sorting an array

    - by Cam
    Does there exist an algorithm which, given an ordered list of symbols {a1, a2, a3, ..., ak}, produces in O(n) time a new list of the same symbols in a random order without bias? "Without bias" means the probability that any symbol s will end up in some position p in the list is 1/k.

    Assume it is possible to generate an unbiased integer from 1-k inclusive in O(1) time. Also assume that O(1) element access/mutation is possible, and that it is possible to create a new list of size k in O(k) time.

    In particular, I would be interested in a 'generative' algorithm: one that has O(1) initial overhead and then produces a new element for each slot in the list, taking O(1) time per slot.

    If no solution exists to the problem as described, I would still like to know about solutions that do not meet my constraints in one or more of the following ways (and/or in other ways if necessary):

      - the time complexity is worse than O(n);
      - the algorithm is biased with regard to the final positions of the symbols;
      - the algorithm is not generative.

    I should add that this problem appears to be the same as randomly sorting the integers from 1-k, since we can sort the list of integers from 1-k and then, for each integer i in the new list, produce the symbol ai.
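
    For reference, the usual answer here is the Fisher-Yates (Knuth) shuffle; the inside-out variant below, sketched in Python, runs in O(k) time and is unbiased given an unbiased random integer, though it is not strictly generative in the sense above, since earlier slots may still be overwritten:

        import random

        def inside_out_shuffle(symbols):
            # Builds an unbiased random permutation in one left-to-right pass.
            result = []
            for i, s in enumerate(symbols):
                j = random.randint(0, i)  # unbiased integer in [0, i]
                if j == i:
                    result.append(s)
                else:
                    result.append(result[j])  # move an earlier element here...
                    result[j] = s             # ...and put the new one in slot j
            return result

        print(inside_out_shuffle(["a1", "a2", "a3", "a4", "a5"]))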

    Read the article

  • Countdown timer using jQuery or Google App Engine?

    - by john
    Hi everybody, I need to make a countdown clock that counts down the days, hours, minutes and seconds left until a date of my choice, using jQuery or Google App Engine (Python). I created a timer using JavaScript, but it used the system time. I need to use server time instead. Can anybody give me ideas for building a countdown timer that uses server UTC time?
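
    A minimal server-side sketch of the arithmetic in Python, assuming a hypothetical target date; because the remaining time is computed from the server's UTC clock, the client's system time never enters into it:

        from datetime import datetime

        TARGET = datetime(2010, 12, 31)  # hypothetical target date, in UTC

        def remaining(now=None):
            # Days/hours/minutes/seconds left until TARGET, by server UTC time.
            now = now or datetime.utcnow()
            delta = TARGET - now
            seconds = max(delta.days * 86400 + delta.seconds, 0)
            days, rest = divmod(seconds, 86400)
            hours, rest = divmod(rest, 3600)
            minutes, secs = divmod(rest, 60)
            return days, hours, minutes, secs

        print("%dd %02dh %02dm %02ds" % remaining())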

    Read the article

  • Speed up fetching lots of entities and getting unique values (Google App Engine, Python)

    - by user291071
    OK, this is a two-part question. I've seen and searched for several methods of getting a list of unique values for a model class, and haven't been happy with any of them so far. Does anyone have simple example code for getting unique values, for instance for the model below? Here is my super-slow example:

        class LinkRating2(db.Model):
            user = db.StringProperty()
            link = db.StringProperty()
            rating2 = db.FloatProperty()

        def uniqueLinkGet(tabl):
            start = time.time()
            dic = {}
            query = tabl.all()
            for obj in query:
                dic[obj.link] = 1
            end = time.time()
            print end - start
            return dic

    My second question: is iterating over a query slower than calling fetch()? Is there a faster way to write the code below, especially when the number of matching entities is larger than 1000?

        query = LinkRating2.all()
        link1 = 'some random string'
        a = query.filter('link = ', link1)
        adic = {}
        for itema in a:
            adic[itema.user] = itema.rating2
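
    One workaround sketch, offered as an assumption rather than a tested recipe: maintain the set of unique links incrementally (here in memcache) as ratings are written, so reads never have to scan every LinkRating2 entity; note that the read-modify-write below is racy under concurrent writers:

        from google.appengine.api import memcache

        UNIQUE_LINKS_KEY = "unique-links"  # hypothetical cache key

        def record_rating(user, link, rating):
            LinkRating2(user=user, link=link, rating2=rating).put()
            links = memcache.get(UNIQUE_LINKS_KEY) or set()
            if link not in links:
                links.add(link)
                memcache.set(UNIQUE_LINKS_KEY, links)

        def unique_links():
            links = memcache.get(UNIQUE_LINKS_KEY)
            if links is None:
                # Cache miss: fall back to the slow full scan once.
                links = set(r.link for r in LinkRating2.all())
                memcache.set(UNIQUE_LINKS_KEY, links)
            return links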

    Read the article

  • How to deal with large result sets with Linq to Entities?

    - by user169867
    I have a fairly complex LINQ to Entities query that I display on a website. It uses paging, so I never pull down more than 50 records at a time for display. But I also want to give the user the option to export the full results to Excel or some other file format. My concern is that a potentially large number of records would all be loaded into memory at one time to do this. Is there a way to process a LINQ result set one record at a time, like you could with a DataReader, so that only one record is really kept in memory at a time? I'd appreciate any help. Thanks

    Read the article

  • Books on data-intensive enterprise integration patterns

    - by Tristan
    I'm trying to understand design patterns used by data-intensive enterprise applications. A classic example is the financial industry, where systems must consume, analyze, and execute on real-time financial data while providing information and configuration options to a broad set of traders and analysts. One can imagine similar systems in airlines, major supply chain operations, and utility providers. Are there good books that provide an inside view of how these systems work? Enterprise Integration Patterns is one example, but I'm looking for something with more real-world applications, particularly in finance.

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be, e.g., of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fitted. Then I thought that using a map, in my case map<string, symbol>, would be better than iterating through the vector all the time, but:

    This part is a bit hard to explain, but I'll try. When a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (currently using the vector). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable:

        SymbolTable[ myVar.Name ]

    But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like:

        SymbolTable.at( myVar.CachedPosition )

    Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?

    Read the article

  • Convert a form_tag select_datetime to SQL datetime

    - by Mitchell
    Hi, I am trying to make a simple search form that uses a startTime and endTime to specify a time range. The db has a datetime field, time, that is compared against. So far, when I try to use params[:startTime] in the controller, I get an array of values, which won't work with:

        :conditions => ['time < ?', params[:endTime]]

    Is there a simple solution for parsing the form's datetime into an SQL datetime?

    Read the article

  • Is there any way to prevent the display of unmatched xml tags using xslt?

    - by StevenWilkins
    Here is a contrived example of an XML document. In my real-world case, the XML is fairly complex, with multiple nested levels.

        <alphabet>
          <a>A</a>
          <b>B</b>
          <c>C</c>
          ... and so on
        </alphabet>

    Using XSLT, I want to transform the document so that only the vowels are printed. In my real-world case, we're using empty template match rules to block the display, but that's too verbose for my liking.

    Read the article

  • How to make a random number generator in MATLAB that is based on percentages?

    - by Ben Fossen
    I am currently using the built-in random number generator, for example:

        nAsp = randi([512, 768],[1,1]);

    512 is the lower bound and 768 is the upper bound; the random number generator chooses a number between these two values. What I want is to have two ranges for nAsp, where one of them gets called 25% of the time and the other 75% of the time; the result then gets plugged into the equation. Does anyone have any ideas how to do this, or is there already a built-in function in MATLAB? For example:

        nAsp = randi([512, 768],[1,1]);   % should get called 25% of the time
        nAsp = randi([690, 720],[1,1]);   % should get called 75% of the time
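
    The standard approach is to draw one uniform number to choose the range and then draw from the chosen range; sketched here in Python for illustration, the same two-step idea maps directly onto rand() and randi() in MATLAB:

        import random

        def nasp():
            # Choose the first range 25% of the time, the second 75%.
            if random.random() < 0.25:
                return random.randint(512, 768)
            return random.randint(690, 720)

        print([nasp() for _ in range(10)])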

    Read the article

  • SQL Group by Minute- Expanded

    - by Barnie
    I am working on something similar to this post: TS SQL - group by minute. However, mine pulls from a message queue, and I need an accurate count of the traffic the message queue is creating/sending, and at what time:

        SELECT * FROM MessageQueue mq

    My expanded version of this is the following:

    A) The user defines a start time and an end time (easy enough using DECLAREs @StartTime and @EndTime).
    B) Give the user the option of choosing the "grouping": will it be broken out by 1-minute, 5-minute, 15-minute, or 30-minute (max) intervals? (I had thought to do this with a CASE statement, but my test queries fall apart on me.)
    C) Display the data to accurately show a count of what happened during the selected interval (grouping).

    This is where I am at so far:

        DECLARE @StartTime datetime
        DECLARE @EndTime datetime

        SELECT DATEPART(n, mq.cre_date)/5 as Time  -- trying to group by 5-minute intervals
              ,CONVERT(VARCHAR(10), mq.Cre_Date, 101)
              ,COUNT(*) as results
        FROM dbo.MessageQueue mq
        WHERE mq.cre_date BETWEEN @StartDate AND @EndDate
        GROUP BY DATEPART(n, mq.cre_date)/5  -- trying to group by 5-minute intervals
               , eq.Cre_Date

    This is the output I would like to achieve:

        [Time]  [Date]        [Message Count]
        1300    06/26/2012    5
        1305    06/26/2012    1
        1310    06/26/2012    100

    Read the article

  • How can I filter a date of a DateTimeField in Django?

    - by Xidobix
    I am trying to filter a DateTimeField by comparing it with a date:

        MyObject.objects.filter(datetime_attr=datetime.date(2009,8,22))

    I get an empty queryset as an answer, because (I think) I am not considering the time, but I want "any time". Is there an easy way in Django to do this? (I do have the time set in the datetime; it is not 00:00.)
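
    Two common ways to do this with Django's standard field lookups, sketched below; both match any time of day on the given date:

        import datetime

        day = datetime.date(2009, 8, 22)

        # Option 1: year/month/day lookups.
        qs = MyObject.objects.filter(
            datetime_attr__year=day.year,
            datetime_attr__month=day.month,
            datetime_attr__day=day.day,
        )

        # Option 2: a half-open range covering the whole day.
        start = datetime.datetime.combine(day, datetime.time.min)
        end = start + datetime.timedelta(days=1)
        qs = MyObject.objects.filter(datetime_attr__gte=start,
                                     datetime_attr__lt=end)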

    Read the article

  • What's the best/easiest way to compare two times in Objective-C?

    - by Andy
    I've got a string representation of a time, like "11:13 AM". This was produced using an NSDateFormatter and the stringFromDate: method. I'd like to compare this time to the current time, but when I use the dateFromString: method to turn the string back into a date, a year, month, and day are added, which I don't want. I just need to know whether right now is earlier or later than the time stored in the string. What's going to be the best way to handle that? Thanks in advance for your help.

    Read the article

  • Need help retrieving a date format with AM/PM in CodeIgniter

    - by JigneshMistry
    I have got one problem, which is as follows. I have converted a date to my local time as below:

        $this->date_string = "%Y/%m/%d %h:%i:%s";
        $timestamp = now();
        $timezone = 'UP45';
        $daylight_saving = TRUE;

        $time = gmt_to_local($timestamp, $timezone, $daylight_saving);
        $this->updated_date = mdate($this->date_string, $time);

    and I am storing this field in the database. Now, at retrieval time, I want a format like "11-04-2011 4:50:00 PM". I have used this code:

        $timestamp = strtotime($rs->updated_date);
        $date1 = "%d-%m-%Y %h:%i:%s %a";
        $updat1 = date($date1, $timestamp);

    but this gives me only "11-04-2011 4:50:00 AM", even though I stored it when it was PM. Can anyone help me out? Thanks.

    Read the article

  • Java: writing a matrix to a file using column information

    - by Dmitry
    Hello, everybody! I have a file (a RandomAccessFile) in which a matrix is stored by columns: the i-th row of the file holds the i-th column of the real matrix. For example, if the i-th row in the file is "1 2 3 4", then the real matrix has (1 2 3 4) (transposed) as its i-th column. I need to save this matrix the natural way (by rows) in a new file, which I will then open with a FileReader and display in a TextArea. Do you know how to do that? If so, please help =)
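
    The core step is a transposition; here is a sketch in Python for brevity (it assumes a whitespace-separated text layout rather than the binary RandomAccessFile layout, so treat it as the shape of the algorithm, not a drop-in solution): element j of input line i becomes element i of output line j:

        def transpose_file(src_path, dst_path):
            # Each input line holds one column of the real matrix.
            with open(src_path) as src:
                columns = [line.split() for line in src if line.strip()]
            with open(dst_path, "w") as dst:
                # zip(*columns) regroups the data row by row.
                for row in zip(*columns):
                    dst.write(" ".join(row) + "\n")

        transpose_file("matrix_by_columns.txt", "matrix_by_rows.txt")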

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First: Python newbie; be patient/kind. Once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine whether a record is duplicated, I strip all special characters from the part number (except . and #) to create a compressed part number; the test for duplicates is then the supplier code / compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator); if the part is a duplicate, I put "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) and get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this:

        # Compress the full part to a compressed part and check for duplicates
        # on the supplier code and compressed part combination.
        import sys
        import re
        import time

        start = time.time()

        try:
            file1 = open("C:\\Accounting\\May Accounting\\May.txt", "r")
        except IOError:
            print >> sys.stderr, "Cannot Open Read File"
            sys.exit(1)

        try:
            file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
        except IOError:
            print >> sys.stderr, "Cannot Open Write File"
            sys.exit(1)

        hdrList = "CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
        file2.write(hdrList + chr(10))

        lines_seen = set()
        affirm = "YES"
        records = file1.readlines()

        for record in records:
            fields = record.split(chr(124))
            if fields[0] == "CIGSupplier":
                continue  # if the incoming file has a header line, skip it
            file2.write(fields[0] + "|")  # Supplier Code
            file2.write(fields[1] + "|")  # Full_Part
            file2.write(fields[2] + "|")  # Part Status
            file2.write(fields[3] + "|")  # Alias Flag
            file2.write(re.sub("[$\r\n]", "", fields[4]) + "|")  # Acquisition Flag
            file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1]) + "|")  # Compressed_Part
            dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
            if dupechk not in lines_seen:
                file2.write(chr(10))
                lines_seen.add(dupechk)
            else:
                file2.write(affirm + chr(10))

        print "it took", time.time() - start, "seconds."

        file2.close()
        file1.close()

    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self-join to locate the duplicates. Loading/querying/exporting a file this size in Access takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
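
    For the "get the previous record" part, one single-pass sketch (an assumption about the desired output format): replace the set with a dict that remembers the first record seen for each supplier/compressed-part key, and emit both records whenever a later duplicate shows up:

        import re

        def find_duplicates(in_path, dupe_path):
            # Maps supplier|compressed-part -> first full record with that key.
            first_seen = {}
            with open(in_path) as src, open(dupe_path, "w") as out:
                for record in src:
                    fields = record.rstrip("\r\n").split("|")
                    key = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
                    if key in first_seen:
                        out.write(first_seen[key])  # the earlier matching record
                        out.write(record)           # the duplicate itself
                    else:
                        first_seen[key] = record

        find_duplicates("May.txt", "May_duplicates.txt")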

    Read the article
