Search Results

Search found 532 results on 22 pages for 'intervals'.

Page 18/22

  • Parsing data with Clojure: interval problem

    - by Andrea Di Persio
    Hello! I'm writing a little parser in Clojure for learning purposes. Basically it's a TSV file parser whose output needs to go into a database, but I added a complication: the same file contains more than one interval. The file looks like this:

        ###andreadipersio 2010-03-19 16:10:00###
        USER     COMM              PID  PPID  %CPU  %MEM  TIME
        root     launchd             1     0   0.0   0.0  2:46.97
        root     DirectoryService   11     1   0.0   0.2  0:34.59
        root     notifyd            12     1   0.0   0.0  0:20.83
        root     diskarbitrationd   13     1   0.0   0.0  0:02.84
        ....
        ###andreadipersio 2010-03-19 16:20:00###
        USER     COMM              PID  PPID  %CPU  %MEM  TIME
        root     launchd             1     0   0.0   0.0  2:46.97
        root     DirectoryService   11     1   0.0   0.2  0:34.59
        root     notifyd            12     1   0.0   0.0  0:20.83
        root     diskarbitrationd   13     1   0.0   0.0  0:02.84

    I ended up with this code:

        (defn is-header?
          "Return true if a line is a header"
          [line]
          (> (count (re-find #"^\#{3}" line)) 0))

        (defn extract-fields
          "Return regex matches"
          [line pattern]
          (rest (re-find pattern line)))

        (defn process-lines [lines]
          (map process-line lines))

        (defn process-line [line]
          (if (is-header? line)
            (extract-fields line header-pattern))
          (extract-fields line data-pattern))

    My idea is that in process-line the interval needs to be merged with the data, so that for every row up to the next interval I get something like this:

        ('andreadipersio', '2010-03-19', '16:10:00', 'root', 'launchd', 1, 0, 0.0, 0.0, '2:46.97')

    but I can't figure out how to make this happen. I tried something like this:

        (def process-line [line]
          (if (is-header? line)
            (def header-data (extract-fields line header-pattern)))
          (cons header-data (extract-fields line data-pattern)))

    But this doesn't work as expected. Any hints? Thanks!
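
    A stateful pass that remembers the most recent header and prepends its fields to every data row is the usual shape for this; in Clojure that is a job for reduce or loop/recur rather than a bare map. Here is a minimal sketch of the logic in Python for illustration (the pattern and field handling are simplified assumptions, not from the original post):

        import re

        HEADER = re.compile(r"^###(\S+) (\S+) (\S+)###")

        def merge_intervals(lines):
            header = ()  # fields of the most recent ###...### line seen so far
            for line in lines:
                m = HEADER.match(line)
                if m:
                    header = m.groups()  # e.g. ('andreadipersio', '2010-03-19', '16:10:00')
                elif line.strip() and not line.startswith("USER"):
                    yield header + tuple(line.split())

        sample = [
            "###andreadipersio 2010-03-19 16:10:00###",
            "USER COMM PID PPID %CPU %MEM TIME",
            "root launchd 1 0 0.0 0.0 2:46.97",
        ]
        for row in merge_intervals(sample):
            print(row)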

  • How to access a TextView element in a BroadcastReceiver

    - by ric03uec
    Hello, I am testing a simple widget in Android, using alarms to update a TextView at regular intervals. The problem is that in the BroadcastReceiver class I cannot access the TextView element, which I want updated when the alarm expires. The class is being called properly, because the Toast I have put there shows the appropriate message. The following code is from the class where I configure the widget and set the timers:

        public void onCreate(Bundle bundle) {
            super.onCreate(bundle);
            Intent intent = getIntent();
            Bundle extras = intent.getExtras();
            if (extras != null) {
                mWidgetId = extras.getInt(AppWidgetManager.EXTRA_APPWIDGET_ID,
                        AppWidgetManager.INVALID_APPWIDGET_ID);
                AppWidgetManager appWidgetManager = AppWidgetManager.getInstance(WidgetConfigure.this);
                RemoteViews views = new RemoteViews(WidgetConfigure.this.getPackageName(),
                        R.layout.widget_layout);
                views.setTextViewText(R.id.quote, "Widget Loaded From Activity");
                appWidgetManager.updateAppWidget(mWidgetId, views);
                setTimer();     // set the timers...
                setResult();    // set the result...
            }
        }

    Now I want to update the same TextView when the BroadcastReceiver is called after the timer expires. I have tried the code provided in the ExampleAppWidget example in the Android API demos, and that isn't working out. How can I set the required text? Thanks in advance.

  • How to solve the delay problem in sprite animation?

    - by srikanth rongali
    My problem is that I wrote code for a sprite that should cycle its image through 1, 2, 3, like the countdown shown before a game starts. 1, 2 and 3 are three PNG images. But the images are not displayed at equal intervals of time: the time between (1 - 2) and (2 - 3) is not the same, it is random. Please help me with this, or suggest a better approach than what I am doing. (My animation should be: before the game starts, we see a countdown of 1, then 2, then 3, then GO.)

        -(id)init {
            if ((self = [super init])) {
                [[CCDirector sharedDirector] setAnimationInterval:60.0/60];
                [[CCDirector sharedDirector] setDisplayFPS:NO];
                CCAnimation* numberAnimation = [CCAnimation animationWithName:@"countDown" delay:60.0/60];
                for (int i = 1; i < 4; i++)
                    [numberAnimation addFrameWithFilename:[NSString stringWithFormat:@"number_%02d.png", i]];
                id numberAction = [CCAnimate actionWithAnimation:numberAnimation restoreOriginalFrame:NO];
                id action2 = [CCFadeOut actionWithDuration:0.5f];
                CCSprite *number;
                number = [CCSprite spriteWithFile:@"number.png"];
                ....
            }
        }

  • Advice needed: cold backup for SQL Server 2008 Express?

    - by Mikey Cee
    What are my options for achieving a cold backup server for a SQL Server Express instance running a single database?

    I have a SQL Server 2008 Express instance in production that currently represents a single point of failure for my application. I have a second physical box sitting at the installation that is currently doing nothing. I want to somehow replicate my database in near real time (a little bit of data loss is acceptable) to the second box. The database is very small and resources are utilized very lightly. In the case that the production server dies, I would manually reconfigure my application to point to the backup server instead.

    Although Express doesn't support log shipping, I am thinking that I could manually script a poor man's version of it, using batch files to take the logs, copy them across the network, and apply them to the second server at 5 minute intervals. Does anyone have any advice on whether this is technically achievable, or if there is a better way to do what I am trying to do?

    Note that I want to avoid having to pay for the full version of SQL Server and configure mirroring, as I think that is overkill for this application. I understand that other DB platforms may present suitable options (e.g. a MySQL cluster), but for the purposes of this discussion, let's assume we have to stick to SQL Server.

  • Most efficient method of detecting/monitoring DOM changes?

    - by Graza
    I need an efficient mechanism for detecting changes to the DOM. Preferably cross-browser, but if there's an efficient means that is not cross-browser, I can implement it with a fail-safe cross-browser fallback. In particular, I need to detect changes that would affect the text on a page, so any new, removed or modified elements, or changes to inner text (innerHTML), would need to be caught. I don't have control over the changes being made (they could be due to third-party javascript includes, etc.), so it can't be approached from that angle - I need to "monitor" for changes somehow.

    Currently I've implemented a "quick'n'dirty" method which checks body.innerHTML.length at intervals. This won't, of course, detect changes which result in the same length being returned, but in this case it's "good enough": the chances of that happening are extremely slim, and in this project failing to detect a change won't result in lost data.

    The problem with body.innerHTML.length is that it's expensive. It can take between 1 and 5 milliseconds on a fast browser, and this can bog things down a lot - I'm also dealing with a largish number of iframes and it all adds up. I'm pretty sure the expense comes from the fact that the innerHTML text is not stored statically by browsers, and needs to be computed from the DOM every time it is read.

    The types of answers I am looking for are anything from the "precise" (for example an event) to the "good enough" - perhaps something as "quick'n'dirty" as the innerHTML.length method, but that executes faster.

  • SQL Server log backups "stalling"

    - by MattK
    I have inherited a box running SQL Server 2008 on Windows 2003, and have had a few events where largish (35 GB) log backups "stall", both before and after the installation of SQL 2008 SP1.

    The server log-ships to a standby, so regular log backups are taken at 15 minute intervals. However, after an index reorg causes the log to grow to about 35 GB (on a DB with about 17 GB of data), the next log backup runs to ~95% completion, then seems to stop. The process shows as suspended, with a wait state of BACKUPIO. CPU, read, and write activity on the SPID also does not change, and the process stays in this state for hours, when normally a backup of this size should complete in about 20 minutes.

    This server has a single RAID-1 volume, thus the source database files and destination backup files are on the same volume. However, I cannot determine whether another process is blocking the backup. The backup SPID cannot be killed, and the only way to terminate the log backup and clear the lock on the backup file is to cycle the SQL Server service. There was one event where the backup terminated completely, with an error that another process had locked the backup file, but no details about what that process was.

    Can anyone suggest a cause or a diagnostic process for this situation?

  • Interpolating data points in Excel

    - by Niels Basjes
    Hi, I'm sure this is the kind of problem others have solved many times before. A group of people are going to take measurements (home energy usage, to be exact). All of them will do so at different times and at different intervals, so what I'll get from each person is a set of {date, value} pairs with dates missing from the set. What I need is a complete set of {date, value} pairs, where for each date within the range a value is known (either measured or calculated). I expect that a simple linear interpolation would suffice for this project.

    Assuming it must be done in Excel: what is the best way to interpolate in such a dataset (so I have a value for every day)? Thanks.

    NOTE: When these datasets are complete, I'll determine the slope (i.e. usage per day) and from that we can start doing home-to-home comparisons.

    ADDITIONAL INFO after the first few suggestions: I do not want to manually figure out where the holes are in my measurement set (there are too many incomplete measurement sets!!). I'm looking for something (existing) that does it for me automatically. So if my input is

        {2009-06-01, 10} {2009-06-03, 20} {2009-06-06, 110}

    then I expect to automatically get

        {2009-06-01, 10} {2009-06-02, 15} {2009-06-03, 20} {2009-06-04, 50} {2009-06-05, 80} {2009-06-06, 110}

    Yes, I can write software that does this. I am just hoping that someone already has a "ready to run" software (Excel) feature for this (rather generic) problem.
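
    The arithmetic for each missing day d between measurements (d1, v1) and (d2, v2) is v = v1 + (v2 - v1) * (d - d1) / (d2 - d1); Excel's FORECAST (or TREND) function applies the same formula once the daily date grid is laid out. A minimal sketch of the fill in Python, reproducing the example above:

        from datetime import date, timedelta

        def fill_daily(points):  # points: sorted list of (date, value) measurements
            out = []
            for (d1, v1), (d2, v2) in zip(points, points[1:]):
                days = (d2 - d1).days
                for i in range(days):  # interpolate linearly between the two measurements
                    out.append((d1 + timedelta(days=i), v1 + (v2 - v1) * i / days))
            out.append(points[-1])
            return out

        data = [(date(2009, 6, 1), 10), (date(2009, 6, 3), 20), (date(2009, 6, 6), 110)]
        for d, v in fill_daily(data):
            print(d, v)  # 10, 15, 20, 50, 80, 110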

  • Delphi: How to avoid EIntOverflow underflow when subtracting?

    - by Ian Boyd
    Microsoft already says, in the documentation for GetTickCount, that you should never compare tick counts to check if an interval has passed:

        Incorrect (pseudo-code):
        DWORD endTime = GetTickCount + 10000; // 10 s from now
        ...
        if (GetTickCount > endTime)
            break;

    The above code is bad because it is susceptible to rollover of the tick counter. For example, assume that the clock is near the end of its range:

        endTime = 0xfffffe00 + 10000
                = 0x00002510 // 9,488 decimal

    Then you perform your check:

        if (GetTickCount > endTime)

    which is satisfied immediately, since GetTickCount is larger than endTime:

        if (0xfffffe01 > 0x00002510)

    The solution: instead, you should always subtract the two time intervals:

        DWORD startTime = GetTickCount;
        ...
        if ((GetTickCount - startTime) > 10000) // if it's been 10 seconds
            break;

    Looking at the same math:

        if ((GetTickCount - startTime) > 10000)
        if ((0xfffffe01 - 0xfffffe00) > 10000)
        if (1 > 10000)

    This is all well and good in C/C++, where the compiler behaves a certain way. But what about Delphi? When I perform the same math in Delphi, with overflow checking on ({$Q+}, {$OVERFLOWCHECKS ON}), the subtraction of the two tick counts generates an EIntOverflow exception when the tick count rolls over:

        if (0x00000100 - 0xffffff00) > 10000
        0x00000100 - 0xffffff00 = 0x00000200

    What is the intended solution for this problem?

    Edit: I've tried to temporarily turn off OVERFLOWCHECKS:

        {$OVERFLOWCHECKS OFF}
        delta = GetTickCount - startTime;
        {$OVERFLOWCHECKS ON}

    But the subtraction still throws an EIntOverflow exception. Is there a better solution, involving casts and larger intermediate variable types?
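
    The subtraction trick works because unsigned 32-bit arithmetic is implicitly modulo 2^32; a quick sketch of the same computation in Python, where integers never wrap, so the 32-bit truncation has to be spelled out with a mask:

        MASK32 = 0xFFFFFFFF

        def tick_delta(now, start):
            # Elapsed ticks across rollover, exactly like unsigned DWORD subtraction.
            return (now - start) & MASK32

        print(hex(tick_delta(0x00000100, 0xFFFFFF00)))  # 0x200 -> 512 ticks elapsed

    In Delphi terms this suggests doing the subtraction in a wider intermediate type (e.g. Int64) and masking the result back to 32 bits, which avoids tripping the overflow check; treat that as a sketch of the idea rather than a tested Delphi fix.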

  • How to efficiently store and update binary data in MongoDB?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes. I was looking for some $append_bytes and $replace_bytes types of modifiers, but it appears that the best I can do is $push for arrays. It seems this would be doable with seek-write operations if I had access somehow to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit or add to it, and then update the document by rewriting the entire field, how costly would that be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes across thousands of documents. Worse yet, there is no easy way to spread these out (in time), and they will usually be happening close to one another on the 5 minute marks. Does anyone have a good feel for how disastrous this will be? Seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and PyMongo, so I was looking at PyTables.) I'd prefer to avoid this if possible.

    Is there any other alternative that I am overlooking here? Thanks in advance.
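
    For scale, the rewrite-the-whole-field approach looks like the sketch below in PyMongo (database, collection and field names are made up); every update ships the full 1-2 MB value to the server and rewrites the document, which is exactly the cost in question. GridFS is the usual in-MongoDB alternative to separate files on disk, though it too rewrites chunks rather than patching bytes in place.

        from pymongo import MongoClient
        from bson.binary import Binary

        coll = MongoClient()["mydb"]["blobs"]  # hypothetical database/collection

        doc = coll.find_one({"_id": 42}, {"data": 1})
        buf = bytearray(doc["data"])   # edit in memory
        buf += b"\x00\x01"             # append new bytes
        buf[0] = 0xFF                  # replace an existing byte

        # update_one is the modern PyMongo spelling; older versions used update()
        coll.update_one({"_id": 42}, {"$set": {"data": Binary(bytes(buf))}})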

  • Python 3, urllib ... Reset Connection Possible?

    - by Rhys
    In the larger scheme of my program, the goal of the code below is to filter out all dynamic HTML in a web page's source code:

        try:
            deepreq3 = urllib.request.Request(deepurl3)
            deepreq3.add_header("User-Agent", "etc......")
            deepdata3 = urllib.request.urlopen(deepurl3).read().decode("utf8", 'ignore')

    This code is looped 3 times in order to identify whether the target web page is dynamic (i.e. its source code changes at intervals) or not. If the page IS dynamic, the above code loops another 15 times and attempts to filter out the dynamic content.

    QUESTION: While this filtering method works 80% of the time, some pages will reload ALL 15 times and STILL contain dynamic code. HOWEVER, if I manually close down the Python shell and re-execute my program, the dynamic HTML that my refresh-page method could not shake off is no longer there; it's been replaced with new dynamic HTML that my refresh-page method cannot shake off.

    So I need to know: what is going on here? How does re-running my program cause the dynamic content of a page to change? And is there any way, any "reset connection" command, I can use to recreate this without manually restarting my app? Thanks for your response.
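
    urllib keeps no session object to "reset", but you can make each fetch arrive on a fresh TCP connection by building a new opener per request and asking the server not to reuse the socket; whether that changes the dynamic content depends on what the server keys its variation on (cookies, caches, load balancing). A sketch under those assumptions (header values are illustrative):

        import urllib.request

        def fetch_fresh(url):
            req = urllib.request.Request(url, headers={
                "User-Agent": "my-checker/1.0",  # illustrative
                "Connection": "close",           # don't keep the socket alive
                "Cache-Control": "no-cache",     # discourage cached copies
            })
            opener = urllib.request.build_opener()  # fresh opener, no shared state
            with opener.open(req) as resp:
                return resp.read().decode("utf8", "ignore")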

  • XMLHttpRequest leak

    - by Raja
    Hi everyone, below is my javascript code snippet. It's not running as expected; please help me with this.

        <script type="text/javascript">
        function getCurrentLocation() {
            console.log("inside location");
            navigator.geolocation.getCurrentPosition(function(position) {
                insert_coord(new google.maps.LatLng(position.coords.latitude, position.coords.longitude));
            });
        }

        function insert_coord(loc) {
            var request = new XMLHttpRequest();
            request.open("POST", "start.php", true);
            request.onreadystatechange = function() { callback(request); };
            request.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            request.send("lat=" + encodeURIComponent(loc.lat()) + "&lng=" + encodeURIComponent(loc.lng()));
            return request;
        }

        function callback(req) {
            console.log("inside callback");
            if (req.readyState == 4)
                if (req.status == 200) {
                    document.getElementById("scratch").innerHTML = "callback success";
                    //window.setTimeout("getCurrentLocation()", 5000);
                    setTimeout(getCurrentLocation, 5000);
                }
        }

        getCurrentLocation(); // called on body load
        </script>

    What I'm trying to achieve is to send my current location to the PHP page every 5 seconds or so. I can see a few of the coordinates in my database, but after some time it gets weird: Firebug shows very strange logs, like simultaneous POSTs at irregular intervals. (A Firebug screenshot accompanied the original post.) Is there a leak in the program? Please help.

    EDIT: The expected output in the Firebug console would be:

        inside location
        POST ....
        inside callback
        /* 5 secs later */
        inside location
        POST ...
        inside callback
        /* keep repeating */

  • PHP - Drilling down Data and Looping with Loops

    - by stogdilla
    I'm currently having difficulty making my loops work. I have a table of data with 15 minute values, and I need to present the data at several different granularities:

        $filters = Array('Yrs','Qtr','Day','60','30','15');

    I think I have a way of determining which levels a user can drill down to. The issue comes after the first loop over the outermost values. For example, if the user asks for hours, each hour should have a "+" that adds a new div showing the half-hour data, then each half hour should have a "+" showing the 15 minute data on request, and so on (see the sketch after this post).

    Now, I could just hard-code the output for each level (six different outputs) just in case... but isn't there a way I can make the drill-down itself a loop, so I only have to code one output and have it check whether there are any intervals below it? I'm sure I'm overlooking some very simple way of doing this, but my brain isn't being clever today. Sorry in advance if this is a simple solution.

    I guess the best way to think of it is as replies on a forum: how you would check whether a post is a reply of a reply, and whether that reply has any replies of its own... etc., for output. Can anyone help, or at least point me in the right direction? Or am I stuck coding each possible check? Thanks in advance!
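
    The forum-reply analogy is the right instinct: one recursive renderer can handle every level by printing the rows at the current granularity and recursing into the next-finer level for each row. A sketch of that shape in Python (the nesting structure and sample data are illustrative, not from the original table):

        LEVELS = ["Yrs", "Qtr", "Day", "60", "30", "15"]  # coarse -> fine

        def render(rows, level_idx, depth=0):
            # rows: list of (label, value, child_rows) at this granularity
            for label, value, children in rows:
                print("  " * depth + f"[+] {label}: {value}")
                if level_idx + 1 < len(LEVELS) and children:
                    render(children, level_idx + 1, depth + 1)  # same code at every level

        hours = [("13:00", 42, [("13:00", 20, []), ("13:30", 22, [])])]
        render(hours, LEVELS.index("60"))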

  • How to write to a text file in pipe delimited format from SQL Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (at regular intervals). All I want is the syntax, and possibly some code, for writing out data from a SQL Server database using ASP.NET. The code I have so far is:

        <%@ Import Namespace="System.IO" %>
        <script language="vb" runat="server">
        sub Page_Load(sender as Object, e as EventArgs)
            Dim FILENAME as String = Server.MapPath("Output.txt")
            Dim objStreamWriter as StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)
            objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString())
            objStreamWriter.Close()
            Dim objStreamReader as StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents as String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        end sub
        </script>
        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />

    With PHP it is a breeze, but with .NET I have no clue. If you could help me with the database connectivity and with writing the data in pipe-delimited format to an Output.txt file, that would be awesome. Thanks, guys!

  • How can I build a wrapper to wait for listening on a port?

    - by BillyBBone
    Hi, I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question: if a TCP listen socket is opened, the test passes; if the script dies before getting to that point, the test fails. The purpose of this is knowing whether a nightly build works (at least up to a point).

    I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port; I just want to intercept the request and use it as an indication that my test passed. I think it would be preferable to intercept the open-socket request rather than polling at set intervals (I hate polling!), but I'm a bit out of my depth as far as how exactly to do this.

    Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level?

    Thanks in advance, - B
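
    A sketch of the wrapper as a plain test harness (script path and port number are illustrative): start the script as a subprocess, then attempt to connect until a deadline. This does briefly poll the port, but only during startup; truly intercepting the listen() call would mean monkey-patching socket or asyncore inside the script's own process, which is far more invasive.

        import socket
        import subprocess
        import sys
        import time

        def test_listens(script, port, deadline=30.0):
            proc = subprocess.Popen([sys.executable, script])
            end = time.time() + deadline
            try:
                while time.time() < end:
                    if proc.poll() is not None:
                        return False  # script died before listening
                    try:
                        socket.create_connection(("127.0.0.1", port), timeout=1).close()
                        return True   # connect succeeded, so the listen socket is up
                    except OSError:
                        time.sleep(0.2)  # not up yet; retry shortly
                return False             # deadline passed without a listener
            finally:
                proc.terminate()

        sys.exit(0 if test_listens("server_under_test.py", 8888) else 1)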

  • AWK: compare apache dates without using regular expression

    - by smallmeans
    I'm writing a log-analysis application and want to grab Apache log records between two given dates. Assume that a date is formatted as such:

        22/Dec/2009:00:19    (day/month/year:hour:minute)

    Currently, I'm using a regular expression to replace the month name with its numeric value and remove the separators, so the above date is converted to 221220090019, making a date comparison trivial. But running a regex on each record of a large file, say one containing a quarter million records, is extremely costly. Is there any other method, not involving regex substitution? Thanks in advance.

    Edit: here's the function doing the conversion/comparison:

        function dateInRange(t, from, to) {
            sub(/[[]/, "", t);
            split(t, a, "[/:]");
            match("JanFebMarAprMayJunJulAugSepOctNovDec", a[2]);
            a[2] = sprintf("%02d", (RSTART + 2) / 3);
            s = a[3] a[2] a[1] a[4] a[5];
            return s >= from && s <= to;
        }

    "from" and "to" are the interval bounds in the aforementioned format, and "t" is the raw Apache log date/time field (e.g. [22/Dec/2009:00:19:36).
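
    The month trick in that function - find the name's offset in a packed string and derive the number from it - is already regex-free; the cost is the rest of the per-record work. The same lookup in Python, for illustration of the arithmetic:

        MONTHS = "JanFebMarAprMayJunJulAugSepOctNovDec"

        def log_time_key(t):  # t like "22/Dec/2009:00:19"
            day, mon, rest = t.split("/")
            year, hh, mm = rest.split(":")
            # "Dec" starts at offset 33 -> 33 // 3 + 1 == 12
            return f"{year}{MONTHS.index(mon) // 3 + 1:02d}{day}{hh}{mm}"

        print(log_time_key("22/Dec/2009:00:19"))  # 200912220019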

  • 2D Histogram in R: Converting from Count to Frequency within a Column

    - by Jac
    Would appreciate help with generating a 2D histogram of frequencies, where frequencies are calculated within a column. My main issue: converting from counts to column-based frequencies. Here's my starting code:

        # expected packages
        library(ggplot2)
        library(plyr)

        # generate example data corresponding to expected data input
        x_data = sample(101:200, 10000, replace = TRUE)
        y_data = sample(1:100, 10000, replace = TRUE)
        my_set = data.frame(x_data, y_data)

        # define x and y interval cut points
        x_seq = seq(100, 200, 10)
        y_seq = seq(0, 100, 10)

        # label samples as belonging within x and y intervals
        my_set$x_interval = cut(my_set$x_data, x_seq)
        my_set$y_interval = cut(my_set$y_data, y_seq)

        # determine count for each x,y block
        xy_df = ddply(my_set, c("x_interval", "y_interval"), "nrow")  # still need to convert for use with dplyr

        # convert from count to frequency based on formula:
        #   freq = count / sum(count in given x interval)
        ################ TRYING TO FIGURE OUT #################

        # plot results
        fig_count <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = nrow))  # count
        fig_freq <- ggplot(xy_df, aes(x = x_interval, y = y_interval)) +
            geom_tile(aes(fill = freq))  # frequency

    I would appreciate any help with how to calculate the frequency within a column. Thanks! jac

    EDIT: I think the solution will require the following steps:
    1) Calculate and store the overall count for each x-interval factor.
    2) Divide each individual bin count by its corresponding x-interval factor count to obtain the frequency.
    Not sure how to carry this out, though.

  • Is it possible to limit outside connections to a subdomain with .htaccess or similar?

    - by digidave0205
    I host a web application that serves static HTML pages which are refreshed at various intervals, some as often as every 30 seconds. At this time I have about 300 unique pages that are accessed via 300 unique subdomains.

    Some clients have at most 50 visitors to their unique page, which refreshes every 30 seconds: no problem. Other clients have 1000 or more visitors to their page, and these clients are killing my server. There was no predefined limit at signup, but now I have to impose such a limit to remain afloat financially.

    I would like to define a finite number of connections allowed for each individual subdomain in my hosting account, with connections attempted beyond that limit either rejected or redirected. I have access to .htaccess and php.ini. Is something of this nature possible? Oh, and I have a dedicated/managed server at 1and1.

  • MySQL query paralyzes site

    - by nute
    Once in a while, at random intervals, our website gets completely paralyzed. Looking at SHOW FULL PROCESSLIST, I've noticed that when this happens, there is a specific query that has been "Copying to tmp table" for a looong time (sometimes 350 seconds), and almost all the other queries are "Locked".

    The part I don't understand is that 90% of the time this query runs fine: I see it going through the process list and it finishes pretty quickly. The query is issued by an AJAX call on our homepage to display product recommendations based on your browsing history (à la Amazon). Just sometimes, randomly but too often, it gets stuck at "Copying to tmp table". Here is a caught instance of the query that had been running for 109 seconds when I looked:

        SELECT DISTINCT product_product.id, product_product.name, product_product.retailprice,
                        product_product.imageurl, product_product.thumbnailurl, product_product.msrp
        FROM product_product, product_xref, product_viewhistory
        WHERE (
                (product_viewhistory.productId = product_xref.product_id_1
                 AND product_xref.product_id_2 = product_product.id)
             OR (product_viewhistory.productId = product_xref.product_id_2
                 AND product_xref.product_id_1 = product_product.id)
              )
          AND product_product.outofstock = 'N'
          AND product_viewhistory.cookieId = '188af1efad392c2adf82'
          AND product_viewhistory.productId IN (24976, 25873, 26067, 26073, 44949,
                                                16209, 70528, 69784, 75171, 75172)
        ORDER BY product_xref.hits DESC
        LIMIT 10

    Of course the cookieId and the list of productIds change dynamically depending on the request. I use PHP with PDO.

  • PHP sleep() execution sequence while echoing

    - by Babiker
    I have the following:

        echo time()."<br>";
        sleep(1);
        echo time()."<br>";
        sleep(1);
        echo time()."<br>";

    I wrote the preceding code intending to echo the first time()."<br>", wait a second, echo the second one, wait a final second, and then echo the last one. Although the times being echoed are correct, with one-second intervals between the values, all of the echoes appear only after the total of the sleep durations. This is how the script runs: executes, waits 2 seconds, then echoes

        1275540664
        1275540665
        1275540666

    Notice the correct incrementing of time() in the output. My question is why it does not behave as expected - echo, wait a second, echo again, wait one final second, then echo the last value. I know my question is a little confusing due to my wording, but I will try my hardest to answer any comments regarding this. Thanks.
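
    This is output buffering rather than sleep() misbehaving: PHP and the web server typically hold output until the script finishes unless it is pushed out after each echo (in PHP, flush(), possibly preceded by ob_flush() when output buffering is on; the browser may still buffer small responses). The same effect is easy to reproduce in a Python sketch:

        import sys
        import time

        for _ in range(3):
            sys.stdout.write(str(int(time.time())) + "\n")
            sys.stdout.flush()  # drop this line and a pipe-buffered stdout
                                # shows all three values at once at the end
            time.sleep(1)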

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing will involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to happen for all records, ideally every three hours. Each new user of the site will add between 50 and 500 records to be processed this way, so the number will not be steady.

    The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records at this volume may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours.

    I'd like to know if anyone has experience or tips on similar problems. I've never worked at this magnitude before, and for all I know this will be trivial for the server. As long as ALL records are processed before the next three-hour period begins, I don't care whether they are processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch). So I've been wondering whether I should process in batches every 5 minutes, 15 minutes, hour, or whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?

  • R optimization: How can I avoid a for loop in this situation?

    - by chrisamiller
    I'm trying to do a simple genomic track intersection in R, and running into major performance problems, probably related to my use of for loops. In this situation, I have pre-defined windows at intervals of 100 bp, and I'm trying to calculate how much of each window is covered by the annotations in mylist. Graphically, it looks something like this:

                 0     100   200   300   400   500   600
        windows: |-----|-----|-----|-----|-----|-----|
        mylist:   |-|     |------------|

    So I wrote some code to do just that, but it's fairly slow and has become a bottleneck in my code:

        ## window for each 100-bp segment
        windows <- numeric(6)

        ## second track
        mylist = vector("list")
        mylist[[1]] = c(1, 20)
        mylist[[2]] = c(120, 320)

        ## do the intersection
        for (i in 1:length(mylist)) {
            st <- floor(mylist[[i]][1] / 100) + 1
            sp <- floor(mylist[[i]][2] / 100) + 1
            for (j in st:sp) {
                b <- max((j - 1) * 100, mylist[[i]][1])
                e <- min(j * 100, mylist[[i]][2])
                windows[j] <- windows[j] + e - b + 1
            }
        }
        print(windows)
        [1]  20  81 101  21   0   0

    Naturally, this is being used on data sets that are much larger than the example I provide here. Through some profiling, I can see that the bottleneck is in the for loops, but my clumsy attempt to vectorize it using *apply functions resulted in code that runs an order of magnitude more slowly. I suppose I could write something in C, but I'd like to avoid that if possible. Can anyone suggest another approach that will speed this calculation up?
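
    One vectorization that removes the inner loop entirely is to clip every interval against all window edges at once; a NumPy sketch of the identical computation for comparison (the R analogue would use pmax/pmin over the edge vectors the same way):

        import numpy as np

        def window_coverage(intervals, n_windows, width=100):
            cov = np.zeros(n_windows, dtype=int)
            edges = np.arange(n_windows + 1) * width      # boundaries: 0, 100, 200, ...
            for start, end in intervals:                  # loop over intervals, not windows
                b = np.maximum(edges[:-1], start)         # per-window overlap start
                e = np.minimum(edges[1:], end)            # per-window overlap end
                cov += np.maximum(e - b + 1, 0)           # inclusive count, as in the R code
            return cov

        print(window_coverage([(1, 20), (120, 320)], 6))  # [ 20  81 101  21   0   0]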

  • Granting administrator privileges to an application launched at startup without UAC prompt?

    - by iKenndac
    Background: I've written a small C#/.NET 4.0 application that syncs various settings from a game installed in Program Files to and from other copies of the same game on different machines (think Chrome bookmark sync, but for this game). The sync itself is a relatively simple affair, dealing with files stored inside the game's Program Files folder.

    On my machine, this works fine without elevating my application through UAC: Windows 7 applies Program Files virtualization to the game, and my application works fine with that. However, on a lot of testers' machines, I'm getting reports that the application either can't work with the files or, in some cases, can't even see the game's folder! Having the user right-click and "Run as Administrator" solves the problem in every case.

    So, do we just set the application's manifest to require admin privileges? That's fine (although not ideal) for when the user manually invokes the application or the sync process, because they'll be interacting with the application and ready to accept a UAC prompt. However, one of the features of my application is a "Sync Automatically" option, which allows the user to "set and forget" the application. With this set, the application registers itself under HKCU\Software\Microsoft\Windows\CurrentVersion\Run to run at startup, and sits in the system tray syncing the settings in the background as needed.

    Obviously, I need to be smarter here. Presenting a UAC prompt as soon as the user logs in to their account, or at random intervals afterwards, isn't the way forward. So, my question: what's the best way to approach a situation where an application run at startup needs administrator privileges? Is there a way to have the user authorize an installation that causes the system to automatically run the application with the correct privileges, without a prompt at startup/login?
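
    One common pattern matching this description is a scheduled task, created once from an elevated installer, that runs the application at logon with the highest available run level; tasks configured this way start without a UAC prompt. A sketch of the idea using the schtasks command (task name and path are made up; the same can be done from code via the Task Scheduler COM API):

        schtasks /Create /TN "GameSettingsSync" /TR "C:\Program Files\SyncApp\sync.exe" /SC ONLOGON /RL HIGHEST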

  • What happens if I just add a second IP to a domain?

    - by tntu
    We have two servers that are kept in constant sync, and two applications that connect to them, each app to a different server. We have released a new version of those apps that reads a DNS entry, gets a list of IP addresses, and tries them in order.

    Now the problem is the old apps: we have noticed that some people still use them even though the new versions are out. If we were to add two IPs to each domain, would the old clients receive the IPs in the order we set them, or in random order? Either way it would still work for us, but I'm curious. And if the first server goes offline, will the old client application try the other?

    To be noted for the old version:
    - An interruption does not affect in any way the continuation once the connection is re-established.
    - Each communication is independent of previous ones.
    - Applications connect at set intervals of anywhere between 5 seconds and 1 hour.
    - The connection is made simply via an HTTP POST to the URL in question.

  • Multithreading: Read from / write to a pipe

    - by Tero Jokinen
    I write some data to a pipe - possibly lots of data, at random intervals. How should I read the data from the pipe? Is this OK:

    - in the main thread (current process), create two more threads (2, 3)
    - the second thread sometimes writes to the pipe (and flushes the pipe?)
    - the third thread has an infinite loop which reads the pipe (and then sleeps for some time)

    Is this correct so far? Now, there are a few things I don't understand:

    - Do I have to lock (mutex?) the pipe on write?
    - IIRC, when writing to a pipe whose buffer is full, the write end will block until I read the already-written data, right?
    - How do I check for data in the pipe, not too often, not too rarely, so that the second thread won't block? Is there something like select for pipes?
    - Is it possible to set the pipe to unbuffered mode, or do I have to flush it regularly - and which is better?
    - Should I create one more thread, just for flushing the pipe after a write? Because flush blocks as well when the buffer is full, right? I just don't want the 1st and 2nd threads to block...

    [Edit] Sorry, I thought the question was platform-agnostic, but just in case: I'm looking at this from a Win32 perspective, possibly MinGW C...
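
    For what it's worth, with a single writer and a single reader an anonymous pipe needs no mutex, and the reader does not need to sleep-poll: a blocking read already waits until data arrives. A minimal sketch of the pattern in Python (the Win32 C version would use CreatePipe/ReadFile/WriteFile in the same roles):

        import os
        import threading
        import time

        r, w = os.pipe()  # read end, write end

        def writer():
            for i in range(5):
                os.write(w, f"chunk {i}\n".encode())  # pipe writes need no flush
                time.sleep(0.3)                       # writes at irregular intervals
            os.close(w)                               # EOF: reader's read() returns b""

        def reader():
            while True:
                data = os.read(r, 4096)  # blocks until data arrives - no polling loop
                if not data:             # write end closed
                    break
                print("got:", data.decode(), end="")

        t1 = threading.Thread(target=writer)
        t2 = threading.Thread(target=reader)
        t1.start(); t2.start()
        t1.join(); t2.join()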
