Search Results

Search found 61110 results on 2445 pages for 'generation time'.

Page 308/2445 | < Previous Page | 304 305 306 307 308 309 310 311 312 313 314 315  | Next Page >

  • Suggestions for opening the Rails toolbox to design a challenge game?

    - by keruilin
    How would you suggest designing a challenge system as part of a food-eating game so that it's as automated as possible? All RoR tools, design patterns and logic are at your disposal (e.g., admin consoles, crontab, arch, etc.). Prize goes to whoever can suggest the simplest and most-automated design! Here are the requirements (a sketch of the selection logic follows the list):
      - User has many challenges. Badge has many challenges. (A unique badge is awarded for each challenge won.)
      - Only one challenge can run at a time.
      - Each challenge runs for a limited number of days. For example, one challenge can run 3 days, while another runs 7 days.
      - Challenges can be seasonal. For example, "Eat 13 Pumpkins" only runs during the Fall.
      - New challenges are added to the game on an ongoing basis. For example, a new challenge every week.
      - Each challenge has a certain probability of being selected to run. For example, the "Eat 10 Pies" challenge has a 10% chance of being selected. As new challenges are added to the database, I want the probabilities to adjust dynamically; I want to avoid manually updating a database field just to change a probability from 10% to 5%.
      - Challenges act like Easter eggs: challenge icons pop up at different places on the webpage.
      - A user is awarded a badge for successfully completing a challenge, but only while it's active.
      - There is some wait time between challenges, between 1 and 7 days. The wait time is random, but a short wait should be far more likely than a long one.
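    One way to keep the selection probabilities self-adjusting is to store only a relative weight per challenge and normalize at pick time, and to skew the wait with geometric weights. A minimal Python sketch of that selection logic (the weight and season fields and the halving weights are illustrative assumptions, not part of the question):

      import random
      from datetime import date

      def current_season(d):
          # Northern-hemisphere meteorological seasons; purely illustrative.
          return {12: "winter", 1: "winter", 2: "winter",
                  3: "spring", 4: "spring", 5: "spring",
                  6: "summer", 7: "summer", 8: "summer"}.get(d.month, "fall")

      def pick_challenge(challenges, today=None):
          """Pick the next challenge. Each challenge dict carries a relative
          'weight'; normalizing at pick time means the effective probability
          of every challenge shifts automatically as rows are added."""
          today = today or date.today()
          eligible = [c for c in challenges
                      if c.get("season") in (None, current_season(today))]
          weights = [c["weight"] for c in eligible]
          return random.choices(eligible, weights=weights, k=1)[0]

      def wait_days():
          """Random wait of 1-7 days, heavily skewed toward short waits."""
          days = list(range(1, 8))
          weights = [0.5 ** d for d in days]   # 1 day is twice as likely as 2, etc.
          return random.choices(days, weights=weights, k=1)[0]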

    Read the article

  • Building a survey to put in a WordPress website using Python/Django

    - by chiurox
    So I've been given a task to build a survey to get data about the time-slot preferences of prospective students for a particular course. I know there are really quick solutions to this like Google Forms or SurveyMonkey, but since it's not unusually hard, I want to implement the survey myself in a totally new language as an opportunity to get started with it, and also to be able to customize it and provide dynamic info to the users who are voting. Although I have done some stuff in PHP, C++, JavaScript, etc., I'm pretty new to the Python+Django framework, but it's something I've been meaning to get into for a long time. Initially, what I want is to make a grid with the days of the week as columns and time durations as rows. In each cell I want to give users a way to choose how strong (high/medium/low) their preference for this particular day+time is. I also want to show how many "votes" have already been cast for this particular preference, because this will influence their decisions a lot and as a result make this process easier when we define the classes. I'll probably store the data in MySQL. Could anyone point me to some really good Python+Django tutorials for my particular purpose? Does anyone think I'm wasting my time on this trivial task by choosing new tools, and that I should just use something I already know (like PHP), or a free service or plugin for WordPress? Thanks!
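    Since the question is already Python+Django, a minimal sketch of models that could back the grid described above (model and field names are assumptions for illustration, not an established schema):

      from django.db import models

      class TimeSlot(models.Model):
          """One cell of the grid: a weekday plus a time range."""
          DAYS = [(i, d) for i, d in enumerate(
              ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"])]
          day = models.IntegerField(choices=DAYS)
          start_time = models.TimeField()
          end_time = models.TimeField()

      class Preference(models.Model):
          """A prospective student's strength of preference for one slot."""
          STRENGTH = [(1, "low"), (2, "medium"), (3, "high")]
          slot = models.ForeignKey(TimeSlot, on_delete=models.CASCADE)
          student_email = models.EmailField()
          strength = models.IntegerField(choices=STRENGTH)

      # Votes already cast for a slot, to display inside each cell:
      # Preference.objects.filter(slot=some_slot).count()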

    Read the article

  • Grep for 2 words after pattern found

    - by Dileep Ch
    The scenario is: I have a file that contains the string "the date and time is 2012-12-07 17:11:50". I searched and found the command
      grep 'the date and time is' 2012-12-07.txt | cut -d\ -f5
    but it just displays the 5th word and I need the combination of the 5th and 6th, so I tried
      grep 'the date and time is' 2012-12-07.txt | cut -d\ -f5 -f6
    but it gives an error. Now, how do I get the 5th and 6th words with one command? I just need output like
      2012-12-07 17:11:50

    Read the article

  • Extremely Difficult Problem with ASP.Net 4.0 WebForms app using Routing

    - by dudeNumber4
    I have a completed app running in a QA environment. Everything works fine under most circumstances. If you hit a plain URL (no identifying information in the URL), you see an intro page with a button (generated by an asp:LinkButton control) that posts back and directs you to another page. The markup looks the same when it fails and when it doesn't. When such a URL is followed from, e.g., Word and the default browser is IE, the intro page loads fine, but clicking the button causes an error. When not debugging, this behavior occurs every time. While debugging, the error occurs only about 1 in 10 times (closing the browser instance and starting over every time). When the error occurs, the intro page Page_Load fires and IsPostBack is false. Somehow, instead of a POST, a GET is being issued. When I run Fiddler to try to analyze the actual calls (can't use Firebug because it never happens using Firefox), everything works every time. I don't know whether this issue has anything to do with routing, and I've no idea even what to look at next. The strange thing is, when I debug, the intro page doesn't fully load every time. Only about 1 in 3 times does it fully load, even if I've just cleared the browser cache. When I run it through Fiddler, it fully loads and works fine every time.

    Read the article

  • Many users, many cpus, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-critical query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of:
      1) pricing
      2) latency resulting from:
         - slow CPUs, instance creation, JIT compiles, etc.
         - internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process)
         - communication between cloud and end user
      3) ease of deployment
    The usage scenario I am expecting is:
      - A typical user sends a query (XML of size around 1K) once every 30 seconds on average.
      - Each query requires a numerical computation of average time 0.2 sec and max time 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
      - The delay a user experiences between sending a query and receiving a response should be on average no more than 2 seconds and in general no more than 5 seconds.
      - A background save of the response to a DB should occur (not time critical).
      - There can be up to 30000 simultaneous users - i.e., on average 1000 queries a second, each requiring an average 0.2 sec calculation, so that would necessitate around 200 CPUs (a quick sanity check of that arithmetic follows below).
    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (speed and price optimization) as options. Where can I learn more about the right way to set up such a system? Past threads, different blogs, books, etc. BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.
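    As referenced in the last bullet, the CPU figure can be sanity-checked with a few lines of arithmetic; a throwaway Python sketch using only the numbers quoted above:

      users = 30000              # simultaneous users
      query_interval_s = 30.0    # one query per user every 30 seconds
      avg_cpu_s = 0.2            # average computation time per query (1 GHz Pentium)

      queries_per_second = users / query_interval_s          # 1000.0
      cpus_needed = queries_per_second * avg_cpu_s            # CPU-seconds per second
      print("queries/s: %.0f, CPUs needed at average load: %.0f"
            % (queries_per_second, cpus_needed))              # ~200 CPUs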

    Read the article

  • How can I select all records between two dates without using date fields?

    - by Hayden Bech
    Hi, I have a MySQL DB and I need to be able to store dates earlier than 1970 - in my case, as early as 0 AD and earlier too, so I need a custom way to store dates. I have thought to use this format:
      Year - int(6) | Month - int(2) | Day - int(2) | Time - time | AD - tinyint(1) | mya - int(11)
    But when it comes to actually using data in this format it becomes difficult. For example, if I want to get all records between two dates it would be something like (pseudocode, not SQL):
      get all where year between minYear and maxYear
      if year == minYear, month >= minMonth
      if year == maxYear, month <= maxMonth
      if month == minMonth, day >= minDay
      if month == maxMonth, day <= maxDay
      if day == minDay, time >= minTime
      if day == maxDay, time <= maxTime
    which seems like a right pain. I could store seconds before/after 0 AD, but that would take up way too much data! 2010 (EDIT: 2011) years is roughly 63 billion seconds since 0 AD. Does anybody have any ideas for this problem?
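    For what it's worth, the boundary logic in the pseudocode above collapses to a single lexicographic comparison if the components are treated as a tuple; a minimal Python sketch of that equivalence (column order follows the proposed format, and the AD flag and mya column are ignored here for brevity):

      def in_range(row, start, end):
          """row, start and end are (year, month, day, seconds_of_day) tuples for
          AD dates; tuple comparison is lexicographic, which is exactly the nested
          year/month/day/time boundary logic from the question."""
          return start <= row <= end

      # rows fetched from the proposed schema (year, month, day, time):
      rows = [(1492, 10, 12, 0), (1969, 7, 20, 72000), (2010, 5, 1, 3600)]
      start, end = (1400, 1, 1, 0), (1970, 1, 1, 0)
      print([r for r in rows if in_range(r, start, end)])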

    Read the article

  • Searching for duplicate records within a text file where the duplicate is determined by only two fields

    - by plg
    First, Python newbie; be patient/kind. Next, once a month I receive a large text file (think 7 million records) to test for duplicate values. This is catalog information. I get 7 fields, but the two I'm interested in are a supplier code and a full orderable part number. To determine if a record is duplicated, I strip all special characters from the part number (except . and #) and create a compressed part number. The test for duplicates becomes the supplier code and compressed part number combination. This part is fairly straightforward. Currently, I am just copying the original file with 2 new columns (compressed part and duplicate indicator). If the part is a duplicate, I put a "YES" in the last field. Now that this is done, I want to be able to go back (or better yet, at the same time) to get the previous record where there was a supplier code/compressed part number match. So far, my code looks like this (it compresses the full part to a compressed part and checks for duplicates on the supplier code + compressed part combination):
      import sys
      import re
      import time

      start = time.time()

      try:
          file1 = open("C:\Accounting\May Accounting\May.txt", "r")
      except IOError:
          print sys.stderr, "Cannot Open Read File"
          sys.exit(1)
      try:
          file2 = open(file1.name[0:len(file1.name)-4] + "_" + "COMPRESSPN.txt", "a")
      except IOError:
          print sys.stderr, "Cannot Open Write File"
          sys.exit(1)

      hdrList = "CIGSUPPLIER|FULL_PART|PART_STATUS|ALIAS_FLAG|ACQUISITION_FLAG|COMPRESSED_PART|DUPLICATE_INDICATOR"
      file2.write(hdrList + chr(10))
      lines_seen = set()
      affirm = "YES"
      records = file1.readlines()
      for record in records:
          fields = record.split(chr(124))
          if fields[0] == "CIGSupplier":
              continue  # If incoming file has a header line, skip it
          file2.write(fields[0] + "|"),  # Supplier Code
          file2.write(fields[1] + "|"),  # Full_Part
          file2.write(fields[2] + "|"),  # Part Status
          file2.write(fields[3] + "|"),  # Alias Flag
          file2.write(re.sub("[$\r\n]", "", fields[4]) + "|"),  # Acquisition Flag
          file2.write(re.sub("[^0-9a-zA-Z.#]", "", fields[1]) + "|"),  # Compressed_Part
          dupechk = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
          if dupechk not in lines_seen:
              file2.write(chr(10))
              lines_seen.add(dupechk)
          else:
              file2.write(affirm + chr(10))

      print "it took", time.time() - start, "seconds."

      file2.close()
      file1.close()
    It runs in less than 6 minutes, so I am happy with this part, even if it is not elegant. Right now, when I get my results, I import them into Access and do a self join to locate the duplicates. Loading/querying/exporting the results in Access for a file this size takes around an hour, so I would like to be able to export the matched duplicates to another text file or an Excel file. Confusing enough? Thanks.
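    To export the matched pairs directly instead of round-tripping through Access, one option is to remember the first record seen for each supplier/compressed-part key and write both members of a pair to a second file as soon as the key repeats. A minimal sketch of that idea (file names are placeholders; the field layout is assumed to match the script above):

      import re

      first_seen = {}   # key -> first full record seen with that supplier/compressed-part key

      with open("May.txt", "r") as infile, open("May_duplicates.txt", "w") as dupes:
          for record in infile:
              fields = record.split("|")
              if fields[0] == "CIGSupplier":
                  continue  # skip the header line
              key = fields[0] + "|" + re.sub("[^0-9a-zA-Z.#]", "", fields[1])
              if key not in first_seen:
                  first_seen[key] = record
              else:
                  # Write the remembered first record followed by the current duplicate;
                  # if a key repeats more than twice, the first record is written again
                  # before each later duplicate, which keeps the sketch simple.
                  dupes.write(first_seen[key])
                  dupes.write(record)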

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:
      - Hosting a MySQL database (for training and testing purposes)
      - FTP server
      - Personal mail server
      - Home media server
    So with this in mind I've done some research and found some viable solutions:
      - A standard PC with the appropriate software (either second hand or new)
      - A non-solid-state mini-ITX system
      - A solid-state, fanless mini-ITX system
    I've also noted the pros and cons of each system. A standard second-hand PC with old hardware would be the cheapest option, but it could have lacking processing power, not enough RAM and generally faulty hardware, plus huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment; but again, the main problems are power consumption, heat generation and noise levels. A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware, but it will generate noise and heat, which will be even worse because of the size. A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat; the main disadvantage is the read/write problems of flash memory. All in all I'm leaning towards a non-solid-state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are:
      - Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong.
      - Is any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest?
    Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have any experience with that? Do you recommend it? Do you recommend something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as the mail service. Any recommended software? Thank you very much.

    Read the article

  • Profiling and Graphics.updateDisplay0()

    - by AD
    I am profiling my BlackBerry application (a game) with the options 'In method only' and 'Time including native method'. Under the 'All methods' node (after a decent run of the game) the single most time-consuming function is Graphics.updateDisplay0(), with over 50% of the run time spent there. I don't quite know what the purpose of that function is, or whether I can do anything to reduce its processing time, so if anyone can shed some light on the matter I would be grateful. From what I understand, the 'In method only' option means the time is actually spent within that function itself. P.S. There is no information on that function in the documentation.

    Read the article

  • ASP.NET Forms Authorization: how to reduce duration?

    - by eddo
    I've got a web page which implements cookie-based ASP.NET Forms Authentication. Once the user has logged in to the page, he can edit some information using a form which is created from a partial view and returned to him as a dialog for editing. The action linked to the partial view is decorated as follows:
      [HttpGet]
      [OutputCache(Duration = 0, VaryByParam = "None")]
      [Authorize(Roles = "test")]
      public ActionResult changeTripInfo(int tripID, bool ovride=false)
      {
          ...
      }
    The problem I am experiencing is the latency between the request and the time when the dialog is shown to the user: the time ranges between 800 and 1100 ms, which is not justified by the complexity of the form. Investigating with Glimpse shows that the time to process the AuthorizeAttribute (see snip) sums up to at least 650 ms, which is troubling me. Looking at the SQL Server log, the call which checks the user roles takes, as expected, virtually nothing (duration 0). How can I reduce this time? Am I missing some optimization?

    Read the article

  • Write/read count to txt file

    - by Brian
    Hi, I need a batch file that writes a count number to a txt file. The next time the batch file is run, it should read the current count from the txt file, add 1 to it and save the new value back to the txt file (nothing else is in the txt file). When the count has reached 5, the next run should start from 1 again. Example:
      - Count.bat is run the 1st time: count.txt has no count yet, so Count.bat saves the value 1 in count.txt.
      - Count.bat is run the 2nd time: Count.bat reads 1 from count.txt and saves the new value 2 to count.txt.
      - When Count.bat is run for the 6th time, it should start over by saving the value 1 in count.txt.
    I think this should be easy to do, but I'm not used to batch commands, so hopefully someone here could help me.

    Read the article

  • cleartool question

    - by chuanose
    Let's say I have a directory at \testfolder, and the latest version is currently at /main/10. I know that the operation resulting in testfolder@@/main/6 was to remove a file named test.txt. What's a sequence of cleartool operations that can be done in a script that will take "testfolder@@/main/6" and "test.txt" as input, and will cat out the contents of test.txt as of that time? One way I can think of is to get the time of the /main/6 operation, create a view with the config spec -time set to that time, and then cat the test.txt in that directory. But I'm wondering if I can do this in an easier way that doesn't involve manipulating config specs, perhaps through "cleartool find" and extended path names.

    Read the article

  • Stopping long-running requests in Pylons

    - by Jack
    I'm working on an application using Pylons and I was wondering if there was a way to make sure it doesn't spend way too much time handling one request. That is, I would like to find a way to put a timer on each request such that when too much time elapses, the request just stops (and possibly returns some kind of error). The application is supposed to allow users to run some complex calculations but I would like to make sure that if a calculation starts taking too much time, we stop it to allow other calculations to take place.
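    One common low-tech approach on Unix is a SIGALRM-based timeout wrapped around the expensive calculation; a minimal sketch of that idea (the timeout value and the calculation function are placeholders, and signal handlers only work in the main thread, so this may need adapting to how the Pylons app is deployed):

      import signal

      class RequestTimeout(Exception):
          pass

      def _raise_timeout(signum, frame):
          raise RequestTimeout("calculation exceeded the time limit")

      def run_with_timeout(func, seconds, *args, **kwargs):
          """Run func(*args, **kwargs), aborting with RequestTimeout after `seconds`."""
          old_handler = signal.signal(signal.SIGALRM, _raise_timeout)
          signal.alarm(seconds)
          try:
              return func(*args, **kwargs)
          finally:
              signal.alarm(0)                              # cancel the pending alarm
              signal.signal(signal.SIGALRM, old_handler)

      # usage inside a controller action (illustrative):
      # try:
      #     result = run_with_timeout(do_complex_calculation, 30, params)
      # except RequestTimeout:
      #     abort(503, "calculation took too long")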

    Read the article

  • How do I find the install directory of a Windows Service, using C#?

    - by endian
    I'm pretty sure that a Windows service gets C:\winnt (or similar) as its working directory when installed using InstallUtil.exe. Is there any way I can access, or otherwise capture (at install time), the directory from which the service was originally installed? At the moment I'm manually entering that into the app.exe.config file, but that's horribly manual and feels like a hack. Is there a programmatic way, either at run time or install time, to determine where the service was installed from?

    Read the article

  • load balance timeout SQL connection string

    - by george9170
    It seems that if there is a SQL memory leak somewhere and you don't have time to find it, you can use the Load Balance Timeout option in a SQL connection string to destroy the connection after x seconds. Am I right in assuming I can set the Load Balance Timeout to 30-40 seconds and then hunt for the leak later, while in the meantime the leak will not affect my application too much?

    Read the article

  • python filter can't output

    - by Jesse Siu
    I created a filter in Python for a log file whose lines look like:
      Sat Jun 2 03:32:13 2012 [pid 12461] CONNECT: Client "66.249.68.236"
      Sat Jun 2 03:32:13 2012 [pid 12460] [ftp] OK LOGIN: Client "66.249.68.236", anon password "[email protected]"
      Sat Jun 2 03:32:14 2012 [pid 12462] [ftp] OK DOWNLOAD: Client "66.249.68.236", "/pub/10.5524/100001_101000/100022/readme.txt", 451 bytes, 1.39Kbyte/sec
    The script is:
      import time

      lines = []
      f = open("/opt/CLiMB/Storage1/log/vsftp.log")
      line = f.readline()
      lines = [line for line in f]

      def OnlyRecent(line):
          if time.strptime(line.split("[")[0].strip(), "%a %b %d %H:%M:%S %Y") < time.time() - (60*60*24*2):
              return True
          return False

      print "\n".join(filter(OnlyRecent, lines))
      f.close()
    But when I run this script it keeps running and doesn't show anything until I stop it. Why can't it show the records from the last 2 days?
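    For comparison, a hedged sketch of the same filter with both sides of the comparison converted to epoch seconds (the script above compares a struct_time against a float, which is not a chronological comparison) and with the file streamed line by line; the log path is kept from the question:

      import time

      TWO_DAYS = 60 * 60 * 24 * 2
      cutoff = time.time() - TWO_DAYS

      def only_recent(line):
          """Keep lines whose leading timestamp is within the last two days."""
          stamp = line.split("[")[0].strip()   # e.g. 'Sat Jun 2 03:32:13 2012'
          try:
              parsed = time.mktime(time.strptime(stamp, "%a %b %d %H:%M:%S %Y"))
          except ValueError:
              return False                     # line without a parseable timestamp
          return parsed >= cutoff

      with open("/opt/CLiMB/Storage1/log/vsftp.log") as f:
          for line in f:
              if only_recent(line):
                  print(line.rstrip())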

    Read the article

  • Simple aggregating query very slow in PostgreSQL, any way to improve?

    - by Ash
    Hi, I have a table which holds files and their types, such as
      CREATE TABLE files (
          id SERIAL PRIMARY KEY,
          name VARCHAR(255),
          filetype VARCHAR(255),
          ...
      );
    and another table for holding file properties, such as
      CREATE TABLE properties (
          id SERIAL PRIMARY KEY,
          file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
          size INTEGER,
          ...  -- other property fields
      );
    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to do aggregating queries, for example find the average size and standard deviation for all file types. But it's very slow - around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems like too much. Here's the query:
      SELECT f.filetype, avg(size), stddev(size)
      FROM files as f, properties as pr
      WHERE f.id = pr.file_id
      GROUP BY f.filetype;
    and the explain:
      HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
        ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
              Hash Cond: (f.id = pr.file_id)
              ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
              ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                    ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
      Total runtime: 74014.520 ms
    Any ideas why it is so slow, or how to make it faster?

    Read the article

  • How to change file contents in PHP ?

    - by Misha Moroshko
    I save some info about users in a file (like the number of times a user passed the login page, the last visited time, and so on). I want to read this info from the file and update it (add 1 to the counter, and change the last visited time). My question is: can I do it without opening the file twice? I open it the first time to read the contents, and then open it again to overwrite the contents with the updated ones. Thanks!

    Read the article

  • Understanding omission failure in distributed systems

    - by karthik A
    The following text says something I'm not quite able to agree with: client C sends a request R to server S. The time taken by a communication link to transport R over the link is D. P is the maximum time needed by S to receive, process and reply to R. If omission failure is assumed, then if no reply to R is received within 2(D+P), C will never receive a reply to R. Why is the time here 2(D+P)? As I understand it, shouldn't it be 2D+P?

    Read the article

  • Setting timeout for embedded Lua

    - by skyeagle
    I have embedded Lua in a C/C++ application. I want to be able to set a timeout value to prevent getting trapped by badly written scripts that can result in infinite loops (or even string searches that take a practically infinite time to complete). Basically, I want to be able to set a time interval, and if the script fails to finish running by the end of that interval, I want to be able to kill the Lua script engine (gracefully, if possible). Does anyone know of a best-practice way to do this?

    Read the article

  • Making alarm clock with NSTimer

    - by Alex G
    I just went through trying to make an alarm clock app with local notifications, but that didn't do the trick because I needed an alert to appear instead of it going into Notification Center in iOS 5+. So far I've been struggling greatly with NSTimer and its functions, so I was wondering if anyone could help out. I want an alert to be displayed at exactly the time the user selects (through a UIDatePicker, time only). I have figured out how to get the time from the UIDatePicker, but I do not know how to properly set the fire date of the NSTimer. This is what I have attempted so far; if anyone could help, it would be much appreciated. Thank you. Example (it keeps going into the function every second as opposed to at the certain time I told it to... not what I want):
      NSDate *timestamp;
      NSDateComponents *comps = [[[NSDateComponents alloc] init] autorelease];
      [comps setHour:2];
      [comps setMinute:8];
      timestamp = [[NSCalendar currentCalendar] dateFromComponents:comps];
      NSTimer *f = [[NSTimer alloc] initWithFireDate:timestamp interval:0 target:self selector:@selector(test) userInfo:nil repeats:YES];
      NSRunLoop *runner = [NSRunLoop currentRunLoop];
      [runner addTimer:f forMode:NSDefaultRunLoopMode];

    Read the article

  • Why can't I rename a data frame column inside a list?

    - by Moreno Garcia
    I would like to rename some columns from CPU_Usage to the process name before I merge the dataframes in order to make it more legible.
      names(byProcess[[1]])
      # [1] "Time"      "CPU_Usage"
      names(byProcess[1])
      # [1] "CcmExec_3344"
      names(byProcess[[1]][2]) <- names(byProcess[1])
      names(byProcess[[1]][2])
      # [1] "CPU_Usage"
      names(byProcess[[1]][2]) <- 'test'
      names(byProcess[[1]][2])
      # [1] "CPU_Usage"
      lapply(byProcess, names)
      # $CcmExec_3344
      # [1] "Time"      "CPU_Usage"
      #
      # ... (removed several entries to make it more readable)
      #
      # $wrapper_1604
      # [1] "Time"      "CPU_Usage"

    Read the article

  • Problem with loop MATLAB

    - by Jessy
    My txt file (thousands of lines) looks like this, where the 1st column is the sample number, the 2nd column is the time (accumulated from beginning to end) and the 3rd column is the score:
      no    time    scores
      1     10      123
      2     11      22
      3     12      22
      4     50      55
      5     60      22
      6     70      66
      .     .       .
      .     .       .
      n     n       n
    I want to create a new file which will contain the total of every three samples' scores divided by the time difference across those same samples, e.g.
      (123+22+22) / (12-10) = 167/2 = 83.5
      (55+22+66) / (70-50) = 143/20 = 7.15
    so the new txt file would be
      83.5
      7.15
      .
      .
      n
    So far I have this code:
      fid = fopen('data.txt')
      data = textscan(fid, '%*d %d %d')
      time = (data{1})
      score = (data{2})
      for sample = 1:length(score)
          ..... % I'm stuck here
      end
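    Independent of MATLAB, the grouping itself is just "sum each block of three scores and divide by that block's time span"; a small Python sketch of the arithmetic as a cross-check of the expected output (a MATLAB loop over the textscan output would follow the same pattern):

      def chunk_scores(times, scores, size=3):
          """For each consecutive group of `size` samples, return
          sum(scores in group) / (last time in group - first time in group)."""
          results = []
          for i in range(0, len(scores) - size + 1, size):
              t = times[i:i + size]
              s = scores[i:i + size]
              results.append(sum(s) / float(t[-1] - t[0]))
          return results

      # chunk_scores([10, 11, 12, 50, 60, 70], [123, 22, 22, 55, 22, 66])
      # -> [83.5, 7.15]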

    Read the article
