Search Results

Search found 606 results on 25 pages for 'frequent'.

Page 19/25 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25  | Next Page >

  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

        # Look for and purge old sessions every 30 minutes
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
            fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. On my CPU usage graph, the cleanup runs show as teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09- and 39-minute marks. At 15:00 I removed the 39-minute entry from cron, so a cleanup job twice the size now runs half as often (you can see the peaks get twice as wide and half as frequent). The corresponding graphs for IO time and disk operations tell the same story. At the peak, when there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one CPU core and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second, so why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual process "fuser" (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
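    For reference, a minimal sketch of the variant the asker says they are about to test: the same job with the fuser check dropped. The hedge matters, because fuser is forked once per candidate file via -execdir and scans /proc for open file descriptors each time, which would explain the cost; dropping it is only safe if no long-running request can still be holding a session file open.

        # /etc/cron.d/php5 -- hypothetical variant: delete expired sessions
        # without first checking (via fuser) that the file is not in use
        09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] \
            && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
            -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) -delete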

    Read the article

  • How can I get (g)Vim to display the character count of the current file?

    - by OwenP
    I like to write tutorials and articles for a programming forum I frequent. This forum has a character limit per post. I've used Notepad++ in the past to write posts and it keeps a live character count in the status bar. I'm starting to use gVim more and I really don't want to go back to Notepad++ at this point, but it is very useful to have this character count. If I go over the count, I usually end up pasting the post into Notepad++ so I can see when I've trimmed enough to get by the limit. I've seen suggestions that :set ruler would help, but this only gives the character count via the current column index on the current line. This would be great if I didn't use paragraph breaks, but I'm sure you'd agree that reading several thousand characters in one paragraph is not comfortable. I read the help and thought that rulerformat would work, but after looking over the statusline format it uses I didn't see anything that gives a character count for the current buffer. I've seen that there are plugins that add this, but I'm still dipping my toes into gVim and I'm not sure I want to load random plugins before I understand what they do. I'd prefer to use something built in to vim, but if it doesn't exist it doesn't exist. What should I do to accomplish my goal? If it involves a plugin, do you use it and how well does it work?
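    For reference, a minimal Vimscript sketch using only built-in features (no plugin). It counts bytes via the documented line2byte() trick, which matches the character count for plain ASCII text; the statusline format is just an example layout:

        " On demand: in normal mode, press g followed by Ctrl-G to print
        " column/line/word/byte counts for the buffer (or, in visual mode,
        " for the selection).

        " A live count in the status line: line2byte(line('$')+1)-1 is the
        " documented expression for the buffer size in bytes.
        set laststatus=2
        set statusline=%f\ %=chars:\ %{line2byte(line('$')+1)-1}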

    Read the article

  • Where does Firefox store certificates and how to delete one?

    - by majid4466
    Hi all, The root cause of my problem is not known to me; whatever it is, I experience frequent DNS failures, and when they happen I cannot browse to my Gmail inbox. I use two DNS settings: one is the public DNS server offered by OpenDNS, and the other is Google's free DNS server. When the failures happen, I switch from the active setting to the other one and the problem goes away. But there is a side effect. When Gmail fails to load and I switch the DNS, I receive an error saying the security certificate the site uses is only valid for OpenDNS. This is my wild guess at what is going on:

    1. OpenDNS fails to resolve mail.google.com to its IP.
    2. My ISP sends me a page showing search results for 'mail.google.com'.
    3. Since I have received some sort of page instead of a timeout, the browser mistakenly binds the certificate it has cached for 'mail.google.com' to the new domain. This search page is not served over https, so no exception is thrown by the wrong binding.
    4. After switching the DNS, the domain is correctly resolved to Gmail's server IP, and since this is on https the handshake is triggered.
    5. Now, because of the wrong binding, which passed quietly as no handshake was involved, I receive the error saying the certificate used by 'mail.google.com' is only good for OpenDNS.

    I don't know much about DNS, and less about https and the process of establishing a secure connection. How correct is my explanation? How can I delete the wrong association and/or the certificate? Thanks for listening. P.S. The problem goes away by itself, but sometimes it takes several hours before Gmail works again.

    Read the article

  • Why are my SharePoint downloads not completing for outside users?

    - by CT
    I am using WSS 3.0 with Microsoft Server 2003 and am running into the following problem: on a fairly frequent basis, outside users are having trouble downloading documents. Some downloads report as complete while the file is still incomplete. For instance, take a 17 MB PDF. If I download it from within the office, all 17 MB are downloaded and it opens. If I download it from an outside connection, it may download anywhere from 5-10 MB of the file and then say it is complete. When these partial downloads are opened, the user gets an error saying the file is corrupt and cannot be repaired. I have sometimes solved this by simply deleting the document and uploading a new copy, but that does not always work. Are there known bugs? Are there Internet settings that need to be modified on the outside user's machine? Does anyone else run into this?

    Read the article

  • Using db4o with multiple application instances under medium trust

    - by Anders Fjeldstad
    I recently stumbled over the object database engine db4o, which I think looks really interesting. I would like to use it in an ASP.NET MVC application that will be deployed to a shared hosting environment under medium trust. Because of the trust level, I'm restricted to using db4o in embedded/in-process mode. That in itself should be no problem, but the hosting provider also transparently runs each web application in multiple (load-balanced) server instances with shared storage, which I would say is normally a very nice feature for a $10/month hoster. However, since an instance of a db4o server with write access (whether in-process or networked) locks the underlying database file, having multiple instances of the application using the same file won't work (or at least I can't see how it would). So the question is: is it possible to use db4o in this specific environment? I have considered letting each application instance have its own database which is synchronized with a master database using replication (dRS), but that approach will most likely end up with very frequent bi-directional replication (read master changes at the beginning of each request, write to master after each change), which I don't think will be very efficient. Summary of the web application/environment characteristics:

    - Read-intensive (but not entirely read-only)
    - Some delay (a few seconds) is acceptable between the time a change is made and the time it shows up in all the application instances' data
    - Must run in medium trust
    - No guarantee that the load-balancer uses "sticky sessions"

    All suggestions are much appreciated!

    Read the article

  • Java and Tomcat vs ASP.NET and IIS

    - by Mark Cooper
    Until recently I'd considered myself a pretty good web programmer (coming up on 10 years' commercial experience on a variety of e-commerce, static and enterprise applications). I'm self-taught and have always used the Microsoft product stack (ASP, ASP.NET). My applications are always functional and relatively bug-free, but have never been lightning quick. As a frequent web user I always found this to be the norm: how fast are the websites from the big tech players (eBay, Facebook, Microsoft, IBM, Dell, Telerik etc.)? In truth, none are particularly fast. I always attributed this to "the way things are with web apps". Then I came across a product called Jira from Atlassian, and it has stopped me in my tracks. This application is fast, and I mean blindingly fast: too fast to time the switches between pages, with fully live content, lots of images and data and cross-references. I run this on an intranet, with a large application DB, on a very normal server (single processor, SATA HDD, 8 GB RAM). Am I missing something? Are my programming techniques that bad? I am wondering if this speed gain is down to it being written in Java and running on Tomcat. Does anyone have any benchmarks comparing JSP / ASP or Tomcat / IIS? Thanks, Mark. NOTE: this isn't a blatant plug for Jira. I don't work for them or have any affiliation with them... but I would like to be able to write applications like theirs :)

    Read the article

  • Getting Indy Error "Could not bind socket. Address and port are already in use"

    - by M Schenkel
    I have developed a Delphi web server application (TWebModule). It runs as an ISAPI DLL on Apache under Windows. The application in turn makes frequent https calls to other web sites using the Indy TIdHTTP component. Periodically I get this error when using the TIdHTTP.Get method:

        Could not bind socket. Address and port are already in use

    Here is the code:

        IdSSLIOHandlerSocket1 := TIdSSLIOHandlerSocketOpenSSL.Create(nil);
        IdHTTP := TIdHTTP.Create(nil);
        IdHTTP.HandleRedirects := True;
        IdHTTP.OnRedirect := DoRedirect;
        with IdSSLIOHandlerSocket1 do begin
          SSLOptions.Method := sslvSSLv3;
          SSLOptions.Mode := sslmUnassigned;
          SSLOptions.VerifyMode := [];
          SSLOptions.VerifyDepth := 2;
        end;
        with IdHTTP do begin
          IOHandler := IdSSLIOHandlerSocket1;
          ProxyParams.BasicAuthentication := False;
          Request.UserAgent := 'Test Google Analytics Interface';
          Request.ContentType := 'text/html';
          Request.Connection := 'keep-alive';
          Request.Accept := 'text/html, */*';
        end;
        try
          IdHTTP.Get('http://www.mysite.com......');
        except
          .......
        end;
        IdHTTP.Free;
        IdSSLIOHandlerSocket1.Free;

    I have read about the ReuseSocket setting, which can be set on both the TIdHTTP and TIdSSLIOHandlerSocketOpenSSL objects. Will setting this to rsTrue solve my problems? I ask because I have not been able to replicate this error; it just happens periodically. Other considerations: I know multiple instances of TWebModule are being spawned. Would wrapping all calls to TIdHTTP.Get within a TCriticalSection solve the problem?
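    For what it's worth, a minimal sketch of the ReuseSocket idea the question mentions, placed before the Get call. This is hedged, not a confirmed fix: whether it cures the bind error depends on why ports are being held, and the property names are taken from the question itself.

        // Hedged sketch: allow address reuse on both objects, as the
        // question describes. Not a confirmed fix for the bind error.
        IdHTTP.ReuseSocket := rsTrue;
        IdSSLIOHandlerSocket1.ReuseSocket := rsTrue;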

    Read the article

  • Is This a Valid Way to Use Blocks in Objective-C?

    - by Carter
    I've been building an HTTP client that uses web services to synchronize information between the client and server. I've been using blocks and NSURLConnection to achieve this on the client side, but I'm getting frequent EXC_BAD_ACCESS crashes in objc_msgSend(). From what I understand, this usually means that a stored block that has fallen off the stack has been called. I think I've coded things correctly to avoid this, but I'm still stuck. Here is conceptually what my code is doing. It starts by calling synchronizeWithWebServer. That method invokes listRootObjectsOnServerWithBlock:, which takes in a block to be called when the method returns. listRootObjectsOnServerWithBlock: initiates an NSURLConnection to the web server asynchronously. It too expects a block to be called when it returns. Inside that block I want to be able to execute the original block (so aptly named 'block'). This is only a simplified version of my code. The real synchronization process is more complex, but it's mostly more of the same as what you see below. Sometimes the code works perfectly, but about 80% of the time it crashes very early in the routine. It seems more vulnerable to crashing when my data set gets larger.

        - (void)synchronizeWithWebServer {
            [self listRootObjectsOnServerWithBlock:^(NSArray *results, NSError *error) {
                // Iterate over result objects and perform some other similar routines.
            }];
        }

        - (void)listRootObjectsOnServerWithBlock:(void (^)(NSArray *results, NSError *error))block {
            // Create NSURLRequest here.
            // Create connection asynchronously.
            block = [block copy];
            [NSURLConnection sendAsynchronousRequest:urlRequest
                                               queue:[NSOperationQueue currentQueue]
                                   completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                // Parse response from web server (stored in NSData *data)
                NSArray *results = .....
                // Call 'block'
                block(results, error);
                [block release];
            }];
        }

    Read the article

  • How to implement or emulate an "abstract" OCUnit test class?

    - by Quinn Taylor
    I have a number of Objective-C classes organized in an inheritance hierarchy. They all share a common parent which implements all the behaviors shared among the children. Each child class defines a few methods that make it work, and the parent class raises an exception for the methods designed to be implemented/overridden by its children. This effectively makes the parent a pseudo-abstract class (since it's useless on its own) even though Objective-C doesn't explicitly support abstract classes. The crux of this problem is that I'm unit testing this class hierarchy using OCUnit, and the tests are structured similarly: one test class that exercises the common behavior, with a subclass corresponding to each of the child classes under test. However, running the test cases on the (effectively abstract) parent class is problematic, since the unit tests will fail in spectacular fashion without the key methods. (The alternative of repeating the common tests across 5 test classes is not really an acceptable option.) The non-ideal solution I've been using is to check (in each test method) whether the instance is the parent test class, and bail out if it is. This leads to repeated code in every test method, a problem that becomes increasingly annoying if one's unit tests are highly granular. In addition, all such tests are still executed and reported as successes, skewing the number of meaningful tests that were actually run. What I'd prefer is a way to signal to OCUnit "Don't run any tests in this class, only run them in its child classes." To my knowledge, there isn't (yet) a way to do that, something similar to a +(BOOL)isAbstractTest method I can implement/override. Any ideas on a better way to solve this problem with minimal repetition? Does OCUnit have any ability to flag a test class in this way, or is it time to file a Radar? Edit: Here's a link to the test code in question. Notice the frequent repetition of if (...) return; to start a method, including use of the NonConcreteClass() macro for brevity.

    Read the article

  • implementing a download manager that supports resuming

    - by Idan K
    hi, I intend on writing a small download manager in C++ that supports resuming (and multiple connections per download). From the info I gathered so far, when sending the http request I need to add a header field with a key of "Range" and the value "bytes=startoff-endoff". Then the server returns an http response with the data between those offsets. So roughly what I have in mind is to split the file into the number of allowed connections per file and send one http request per split part with the appropriate "Range". So if I have a 4 MB file and 4 allowed connections, I'd split the file into 4 and have 4 http requests going, each with the appropriate "Range" field. Implementing the resume feature would involve remembering which offsets are already downloaded and simply not requesting those.

    1. Is this the right way to do this? What if the web server doesn't support resuming? (My guess is it will ignore the "Range" and just send the entire file.)
    2. When sending the http requests, should I specify in the range the entire split size? Or maybe ask for smaller pieces, say 1024k per request?
    3. When reading the data, should I write it immediately to the file or do some kind of buffering? I guess it could be wasteful to write small chunks.
    4. Should I use a memory-mapped file? If I remember correctly, it's recommended for frequent reads rather than writes (I could be wrong). Is it memory-wise? What if I have several downloads simultaneously?
    5. If I'm not using a memory-mapped file, should I open the file once per allowed connection? Or, when needing to write to the file, simply seek? (If I did use a memory-mapped file this would be really easy, since I could simply have several pointers.)

    Note: I'll probably be using Qt, but this is a general question so I left code out of it.
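    For reference, a sketch of the exchange the first paragraph describes (host, path and offsets are made up). A server that honors Range answers 206 Partial Content with a Content-Range header; a server that doesn't simply answers 200 OK with the whole file, which is also how a client can detect that resuming is unsupported:

        GET /files/demo.bin HTTP/1.1
        Host: example.com
        Range: bytes=1048576-2097151

        HTTP/1.1 206 Partial Content
        Content-Range: bytes 1048576-2097151/4194304
        Content-Length: 1048576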

    Read the article

  • How to apply minor updates to Drupal 6 on shared hosting

    - by marty.fried
    I've got Drupal working on a shared host, and I uploaded some modules from my home system successfully, but I've got a message that there is a security update for my version and that I should update immediately. I'm not sure how I'm supposed to do that: it seems like the update is an entire new installation. I originally installed it using the hosting company's installer, Fantastico. Should I simply overwrite the existing installation with the new files? Or ignore the message? I realize I shouldn't overwrite the sites folder, or anything I've modified. The instructions that come with the download seem to be for a major version upgrade, and are way too much trouble for frequent security updates. Searching Drupal's site shows many other methods, but no indication of anything official, and some were ridiculously error-prone and not really useful. I don't have shell access to the hosting site, although I can pay extra to get it if I really need to. Or maybe I can clone the site on my local Linux system, do the update using a script, then upload the whole thing. Does anyone have experience with this situation?
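    A hedged sketch of the local-clone idea from the last sentence: do the point-release swap on a local copy, then re-upload over FTP. "6.x" is a placeholder for the actual release number, the local paths are made up, and a full backup of files and database comes first:

        # Fetch and unpack the new point release (placeholder version).
        wget https://ftp.drupal.org/files/projects/drupal-6.x.tar.gz
        tar xzf drupal-6.x.tar.gz
        # Carry over the parts that belong to your site, not to core.
        cp -a mysite/sites drupal-6.x/     # settings, contrib modules, themes
        cp mysite/.htaccess drupal-6.x/    # plus anything else you modified
        # Upload drupal-6.x/ over the old docroot, then run /update.php as admin.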

    Read the article

  • SQLite for personal use

    - by ALife
    What are the applications for your personal use that need a small database like SQLite? I am thinking of trying a few popular databases, and SQLite is surely the first one I am planning to try, since I know barely anything about databases except some simple programming years ago. I learned that SQLite is good for personal use, but embarrassingly I do not see any application for it except maybe managing my list of phone numbers/contact info, which has probably a few hundred items. What's your experience? FYI, I use EndNote for my references and soft copies of books, and I feel iTunes' music/media management is OK since I am not a frequent user anyway. And others? I do lots of coding, but I just use some simple etags tools for that. And I pretty much use .txt files (sometimes in the asciidoc style) for my notes. I have quite a bunch of notes, but not that many either. So, really, what are your personal applications that need a small database instead of existing tools and plain text files?

    Read the article

  • java GC periodically enters into several full GC cycles

    - by Peter
    Environment: Sun JDK 1.6.0_16. VM settings:

        -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -Xms1024M -Xmx1024M
        -XX:MaxNewSize=448m -XX:NewSize=448m -XX:SurvivorRatio=4 (6 also checked)
        -XX:MaxPermSize=128M

    OS: Windows Server 2003. Processor: 4 cores of Intel Xeon 5130, 2 GHz. My application description: a high intensity of concurrent operations (java 5 concurrency used), each completed by a commit to Oracle. About 20-30 threads run non-stop, doing tasks, and the application runs in the JBoss web container. My GC starts out working normally: I see a lot of small GCs, and during that time the CPU shows a good load, with all 4 cores at 40-50% and a stable CPU graph. Then, after a minute of good work, the CPU drops to 0% on 2 of the 4 cores and its graph becomes unstable, going up and down ("teeth"). I see that my threads work more slowly (I have monitoring) and that the GC starts to produce a lot of FULL GCs. For the next 4-5 minutes the situation remains as is; then for a short period, like 1 minute, it gets back to normal, but shortly after that the whole bad pattern repeats. Question: why do I have such frequent full GCs? How can I prevent that? I played with SurvivorRatio; it does not help. I noticed that the application behaves normally until the first FULL GC occurs, while I have enough memory; then it runs badly. My GC log (starts well, then a long period with many FULL GCs):

        1027.861: [GC 942200K->623526K(991232K), 0.0887588 secs]
        1029.333: [GC 803279K(991232K), 0.0927470 secs]
        1030.551: [GC 967485K->625549K(991232K), 0.0823024 secs]
        1030.634: [GC 625957K(991232K), 0.0763656 secs]
        1033.126: [GC 969613K->632963K(991232K), 0.0850611 secs]
        1033.281: [GC 649899K(991232K), 0.0378358 secs]
        1035.910: [GC 813948K(991232K), 0.3540375 secs]
        1037.994: [GC 967729K->637198K(991232K), 0.0826042 secs]
        1038.435: [GC 710309K(991232K), 0.1370703 secs]
        1039.665: [GC 980494K->972462K(991232K), 0.6398589 secs]
        1040.306: [Full GC 972462K->619643K(991232K), 3.7780597 secs]
        1044.093: [GC 620103K(991232K), 0.0695221 secs]
        1047.870: [Full GC 991231K->626514K(991232K), 3.8732457 secs]
        1053.739: [GC 942140K(991232K), 0.5410483 secs]
        1056.343: [Full GC 991232K->634157K(991232K), 3.9071443 secs]
        1061.257: [GC 786274K(991232K), 0.3106603 secs]
        1065.229: [Full GC 991232K->641617K(991232K), 3.9565638 secs]
        1071.192: [GC 945999K(991232K), 0.5401515 secs]
        1073.793: [Full GC 991231K->648045K(991232K), 3.9627814 secs]
        1079.754: [GC 936641K(991232K), 0.5321197 secs]
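    For diagnosis, a hedged sketch of JDK 6 flags commonly added in this situation. The assumption is that the single-line log entries above hide the reason for the fallback; a detailed log would show whether a "concurrent mode failure" is forcing CMS into stop-the-world full collections:

        -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps

    If the detailed log does show concurrent mode failures, starting the concurrent cycle earlier sometimes helps; the threshold below is a starting guess, not a recommendation:

        -XX:+UseCMSInitiatingOccupancyOnly
        -XX:CMSInitiatingOccupancyFraction=60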

    Read the article

  • An Ideal Keyboard Layout for Programming

    - by Jon Purdy
    I often hear complaints that programming languages that make heavy use of symbols for brevity, most notably C and C++ (I'm not going to touch APL), are difficult to type because they require frequent use of the shift key. A year or two ago I got tired of it myself, downloaded Microsoft's Keyboard Layout Creator, made a few changes to my layout, and have not once looked back. The speed difference is astounding; with these few simple changes I am able to type C++ code around 30% faster, depending of course on how hairy it is; best of all, my typing speed in ordinary running text is not compromised. My questions are these: what alternate keyboard layouts have existed for programming, which have gained popularity, are any of them still in modern use, do you personally use any altered layout, and how can my layout be further optimised? I made the following changes to a standard QWERTY layout. (I don't use Dvorak, but there is a Programmer Dvorak layout worth mentioning.)

    - Swap numbers with symbols in the top row, because long or repeated literal numbers are typically replaced with named constants;
    - Swap backquote with tilde, because backquotes are rare in many languages but destructors are common in C++;
    - Swap minus with underscore, because underscores are common in identifiers;
    - Swap curly braces with square brackets, because blocks are more common than subscripts; and
    - Swap double quote with single quote, because strings are more common than character literals.

    I suspect this last is probably going to be the most controversial, as it interferes the most with running text by requiring use of shift to type common contractions. This layout has significantly increased my typing speed in C++, C, Java, and Perl, and somewhat increased it in LISP and Python.

    Read the article

  • Will this class cause memory leaks, and does it need a dispose method? (asp.net vb)

    - by Phil
    Here is the class to export a GridView to an Excel sheet:

        Imports System
        Imports System.Data
        Imports System.Configuration
        Imports System.IO
        Imports System.Web
        Imports System.Web.Security
        Imports System.Web.UI
        Imports System.Web.UI.WebControls
        Imports System.Web.UI.WebControls.WebParts
        Imports System.Web.UI.HtmlControls

        Namespace ExcelExport
            Public NotInheritable Class GVExportUtil
                Private Sub New()
                End Sub

                Public Shared Sub Export(ByVal fileName As String, ByVal gv As GridView)
                    HttpContext.Current.Response.Clear()
                    HttpContext.Current.Response.AddHeader("content-disposition", String.Format("attachment; filename={0}", fileName))
                    HttpContext.Current.Response.ContentType = "application/ms-excel"
                    Dim sw As StringWriter = New StringWriter
                    Dim htw As HtmlTextWriter = New HtmlTextWriter(sw)
                    Dim table As Table = New Table
                    table.GridLines = GridLines.Vertical
                    If (Not (gv.HeaderRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.HeaderRow)
                        table.Rows.Add(gv.HeaderRow)
                    End If
                    For Each row As GridViewRow In gv.Rows
                        GVExportUtil.PrepareControlForExport(row)
                        table.Rows.Add(row)
                    Next
                    If (Not (gv.FooterRow) Is Nothing) Then
                        GVExportUtil.PrepareControlForExport(gv.FooterRow)
                        table.Rows.Add(gv.FooterRow)
                    End If
                    table.RenderControl(htw)
                    HttpContext.Current.Response.Write(sw.ToString)
                    HttpContext.Current.Response.End()
                End Sub

                Private Shared Sub PrepareControlForExport(ByVal control As Control)
                    Dim i As Integer = 0
                    Do While (i < control.Controls.Count)
                        Dim current As Control = control.Controls(i)
                        If (TypeOf current Is LinkButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, LinkButton).Text))
                        ElseIf (TypeOf current Is ImageButton) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, ImageButton).AlternateText))
                        ElseIf (TypeOf current Is HyperLink) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, HyperLink).Text))
                        ElseIf (TypeOf current Is DropDownList) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, DropDownList).SelectedItem.Text))
                        ElseIf (TypeOf current Is CheckBox) Then
                            control.Controls.Remove(current)
                            control.Controls.AddAt(i, New LiteralControl(CType(current, CheckBox).Checked))
                        End If
                        If current.HasControls Then
                            GVExportUtil.PrepareControlForExport(current)
                        End If
                        i = (i + 1)
                    Loop
                End Sub
            End Class
        End Namespace

    Will this class cause memory leaks? And does anything here need to be disposed of? The code is working, but I am getting frequent crashes of the app pool when it is in use. Thanks.

    Read the article

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content    # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)        # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look the item up in memcache, (2) fetch the item from the datastore since no memcache entry exists, and (3) store the item in memcache. These calls build up quickly. I don't set an expiry time for the entries in the memcache, so this really only happens once in the morning, but the webpage takes long enough to load (~1 sec) that the browser reports it as not existing. Normally, my webpages take about 50 ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance
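    One sketch of a remedy, hedged: batch the lookups so a cold cache costs one memcache round trip plus one datastore batch get, instead of two or three calls per item. get_item_dicts is a hypothetical name; build_item_dict is the helper from the pseudocode above; get_multi/set_multi and db.get are the batch APIs in the GAE Python SDK:

        from google.appengine.api import memcache
        from google.appengine.ext import db

        def get_item_dicts(keys):
            key_names = [str(k) for k in keys]
            cached = memcache.get_multi(key_names)   # one RPC for all cache hits
            missing = [k for k in keys if str(k) not in cached]
            if missing:
                items = db.get(missing)              # one batch datastore fetch
                fresh = dict((str(item.key()), build_item_dict(item))
                             for item in items)
                memcache.set_multi(fresh)            # one RPC to backfill the cache
                cached.update(fresh)
            return [cached[str(k)] for k in keys]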

    Read the article

  • how to design a schema where the columns of a table are not fixed

    - by hIpPy
    I am trying to design a schema where the columns of a table are not fixed. Ex: I have an Employee table where the columns are not fixed and vary (the attributes of Employee are not fixed and vary). The options I see:

    1. Nullable columns in the Employee table itself, i.e. no normalization.
    2. Instead of adding nullable columns, separate those columns out into their own individual tables. Ex: if Address is a column to be added, then create table Address[EmployeeId, AddressValue].
    3. Create tables ExtensionColumnName [EmployeeId, ColumnName] and ExtensionColumnValue [EmployeeId, ColumnValue]. ExtensionColumnName would have ColumnName as "Address" and ExtensionColumnValue would have ColumnValue as the address value:

        Employee table
            EmployeeId
            Name

        ExtensionColumnName table
            ColumnNameId
            EmployeeId
            ColumnName

        ExtensionColumnValue table
            EmployeeId
            ColumnNameId
            ColumnValue

    There is a drawback to the first two options, as the schema changes with every new attribute. Note that adding a new attribute is frequent. I am not sure if this is a good or bad design. If someone has had a similar decision to make, please give an insight on things like foreign keys / data integrity, indexing, performance, reporting etc.
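    For concreteness, a hedged SQL sketch of option 3 exactly as laid out above, with the foreign keys spelled out. Column types are placeholder guesses, and whether ExtensionColumnName really needs the per-employee key (rather than being a shared dictionary of attribute names) is itself one of the design questions:

        CREATE TABLE Employee (
            EmployeeId INT PRIMARY KEY,
            Name       VARCHAR(100)
        );

        CREATE TABLE ExtensionColumnName (
            ColumnNameId INT PRIMARY KEY,
            EmployeeId   INT REFERENCES Employee(EmployeeId),
            ColumnName   VARCHAR(100)
        );

        CREATE TABLE ExtensionColumnValue (
            EmployeeId   INT REFERENCES Employee(EmployeeId),
            ColumnNameId INT REFERENCES ExtensionColumnName(ColumnNameId),
            ColumnValue  VARCHAR(400),
            PRIMARY KEY (EmployeeId, ColumnNameId)  -- one value per attribute per employee
        );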

    Read the article

  • Performance of Multiple Joins

    - by geeko
    Greetings Overflowers, I need to query against objects with many/complex spatial conditions. In relational databases that translates to many joins (possibly 10+). I'm new to this business and am wondering whether to go with MS SQL Server 2008 R2, or Oracle 11g, or a document-based solution such as RavenDB, or simply a spatial database (GIS)... Any thoughts? Regards. UPDATE: Thank you all for your answers. Would anybody opt for document/spatial databases? My database would consist of tens of millions to a few billion records, mostly read-only, with almost no updates except to correct mistakes in input; inserts happen overnight and are not that frequent. The join tables are predicted beforehand, but the number of self-joins (tables joining themselves multiple times) is not. Small pages of results from such queries are going to be viewed on a highly interactive website, so response time is critical. Any predictions on how this would perform on MS SQL Server 2008 R2 or Oracle 11g? I'm also concerned about boosting performance by adding more servers; which one scales better? How about PostgreSQL?

    Read the article

  • Refreshing a MKMapView when the user location is updated or using default values

    - by koichirose
    I'm trying to refresh my map view and load new data from the server when the device acquires the user location. Here's what I'm doing:

        - (void)viewDidLoad {
            CGRect bounds = self.view.bounds;
            mapView = [[MKMapView alloc] initWithFrame:bounds];
            mapView.showsUserLocation = YES;
            mapView.delegate = self;
            [self.view insertSubview:mapView atIndex:0];
            [self refreshMap];
        }

        - (void)mapView:(MKMapView *)theMapView didUpdateUserLocation:(MKUserLocation *)userLocation {
            // This is to avoid frequent updates: I just need one position and
            // don't need to continuously track the user's location.
            if (userLocation.location.horizontalAccuracy > self.myUserLocation.location.horizontalAccuracy) {
                self.myUserLocation = userLocation;
                CLLocationCoordinate2D centerCoord = {
                    userLocation.location.coordinate.latitude,
                    userLocation.location.coordinate.longitude
                };
                [theMapView setCenterCoordinate:centerCoord zoomLevel:10 animated:YES];
                [self refreshMap];
            }
        }

        - (void)refreshMap {
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:YES];
            [self.mapView removeAnnotations:self.mapView.annotations];
            // Default position
            NSString *lat = @"45.464161";
            NSString *lon = @"9.190336";
            if (self.myUserLocation != nil) {
                lat = [[NSNumber numberWithDouble:myUserLocation.coordinate.latitude] stringValue];
                lon = [[NSNumber numberWithDouble:myUserLocation.coordinate.longitude] stringValue];
            }
            NSString *url = [[NSString alloc] initWithFormat:@"http://myserver/geo_search.json?lat=%@&lon=%@", lat, lon];
            NSURLRequest *theRequest = [NSURLRequest requestWithURL:[NSURL URLWithString:url]
                                                        cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                    timeoutInterval:60.0];
            [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
            [url release];
            [[UIApplication sharedApplication] setNetworkActivityIndicatorVisible:NO];
        }

    I also have a - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data to create and add the annotations. Here's what happens: sometimes I get the location after a while, so I've already loaded data close to the default location. Then I get the location, and it should remove the old annotations and add the new ones; instead I keep seeing the old annotations, and the new ones are added anyway (let's say I receive 10 annotations each time, so I have 20 on my map when I should have 10). What am I doing wrong?
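    One hedged guess worth ruling out, not a confirmed diagnosis: annotations that arrive in the NSURLConnection delegate after refreshMap has already run are never removed, and removing self.mapView.annotations wholesale also strips the MKUserLocation annotation. A sketch that clears only the server pins just before adding fresh ones:

        // Remove every annotation except the user-location dot.
        NSMutableArray *stale = [NSMutableArray array];
        for (id<MKAnnotation> annotation in mapView.annotations) {
            if (![annotation isKindOfClass:[MKUserLocation class]]) {
                [stale addObject:annotation];
            }
        }
        [mapView removeAnnotations:stale];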

    Read the article

  • Tell bots apart from human visitors for stats?

    - by Pekka
    I am looking to roll my own simple web stats script. The only major obstacle on the road, as far as I can see, is telling human visitors apart from bots. I would like a solution for that which I don't need to maintain on a regular basis (i.e. I don't want to update text files with bot-related User-Agents). Is there any open service that does that, like Akismet does for spam? Or is there a PHP project that is dedicated to recognizing spiders and bots and provides frequent updates? To clarify: I'm not looking to block bots. I do not need 100% watertight results. I just want to exclude as many as I can from my stats. I know that parsing the User-Agent is an option, but maintaining the patterns to parse for is a lot of work. My question is whether there is any project or service that does that already. Bounty: I thought I'd push this as a reference question on the topic. The best / most original / most technically viable contribution will receive the bounty amount.

    Read the article

  • Migrating MachineKey from iis6 on old server to iis7 on new server

    - by MaseBase
    I am migrating our hosting environment to a totally new data center with new boxes, hardware and software... the whole deal. Our website cookies are encrypted using the machineKey, so when I make a request to my domain and point it at the new web server (by overriding the local hosts file), I get an error because the cookie cannot be decrypted, since the machine key is different. I'd like to avoid any problems a frequent user might have when they arrive at the new server for the first time. To the best of my knowledge, I need to set the same machineKey from our current servers on our new servers. This way, when past visitors with a cookie arrive at our website served by the new server, the cookie will be decrypted properly with the machineKey it was encrypted with, and they will be logged in properly. My question is: where do I find my machineKey value (in IIS 6 on Windows Server 2003) so I can set that value statically on my new servers? I've pulled up my machine.config file, but it doesn't specify the key; it only specifies a configSection where the key can be defined. It's not in my web.config for the app or elsewhere. I did find a great article on some machineKey and web garden woes (which could explain some other bugs I've been experiencing with regard to the machineKey). Update: I am back to this issue and am still faced with a similar problem. I have the machineKey auto-generated on the IIS 6 server, but I need to get that exact key so I can set it explicitly and not have it auto-generated anymore. Any help is appreciated...
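    For the second half of the task (setting the key explicitly on the new servers), this is the shape of the standard ASP.NET element; it goes in web.config or machine.config, and the values below are placeholders to be replaced with a pair generated once and copied to every server in the farm:

        <system.web>
          <machineKey
            validationKey="...64+ hex characters, identical on all servers..."
            decryptionKey="...48 hex characters, identical on all servers..."
            validation="SHA1" decryption="AES" />
        </system.web>

    Note the caveat implicit in the question: an AutoGenerate key is not written back into any config file, which is exactly why it cannot simply be read out of machine.config.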

    Read the article

  • Mysterious double execution of Click handler after change to Windows 2008 Server

    - by mjn
    After moving a Delphi 2009 application from a Windows 2003 / Citrix environment to a 64-bit Windows 2008 R2 environment (users now use RDP instead of Citrix), there are frequent runtime errors caused by a double execution of a menu item's click handler. The first call of the event handler opens a modal form; then the same event handler is triggered again, and a runtime exception occurs since the visible form cannot be made modal. The madExcept stack trace looks like this:

        main thread ($2dc):
        004ffa88 +06c Ladelist.exe Forms              TCustomForm.ShowModal
        0052f4ff +0bb Ladelist.exe DB                 TDataSet.SetActive
        008fc9a1 +089 Ladelist.exe u_ladeli  928 +16  TForm1.EditLoadingList
        008fadec +070 Ladelist.exe u_ladeli  504  +5  TForm1.OpenLoadingListClick  <----- 2
        004d6e27 +0a7 Ladelist.exe Menus              TMenuItem.Click
        004d857f +0ef Ladelist.exe Menus              DoClick
        004d866b +087 Ladelist.exe Menus              TMenu.IsShortCut
        005001c1 +04d Ladelist.exe Forms              TCustomForm.IsShortCut
        004e8dc0 +068 Ladelist.exe Controls           TWinControl.IsMenuKey
        004e8f89 +011 Ladelist.exe Controls           TWinControl.CNSysKeyDown
        004e247e +2d2 Ladelist.exe Controls           TControl.WndProc
        004e6983 +513 Ladelist.exe Controls           TWinControl.WndProc
        004fb5e8 +594 Ladelist.exe Forms              TCustomForm.WndProc
        004e20a4 +024 Ladelist.exe Controls           TControl.Perform
        004856e0 +014 Ladelist.exe Classes            StdWndProc
        004e609c +02c Ladelist.exe Controls           TWinControl.MainWndProc
        004856e0 +014 Ladelist.exe Classes            StdWndProc
        775b00e3 +02b ntdll.dll                       KiUserCallbackDispatcher
        008fc9a1 +089 Ladelist.exe u_ladeli  928 +16  TForm1.EditLoadingList
        008fadec +070 Ladelist.exe u_ladeli  504  +5  TForm1.OpenLoadingListClick  <----- 1
        004d6e27 +0a7 Ladelist.exe Menus              TMenuItem.Click
        004d857f +0ef Ladelist.exe Menus              DoClick
        004d866b +087 Ladelist.exe Menus              TMenu.IsShortCut
        005001c1 +04d Ladelist.exe Forms              TCustomForm.IsShortCut
        004e8dc0 +068 Ladelist.exe Controls           TWinControl.IsMenuKey
        004e8e0d +01d Ladelist.exe Controls           TWinControl.CNKeyDown
        004e247e +2d2 Ladelist.exe Controls           TControl.WndProc
        004e6983 +513 Ladelist.exe Controls           TWinControl.WndProc
        004fb5e8 +594 Ladelist.exe Forms              TCustomForm.WndProc
        004e20a4 +024 Ladelist.exe Controls           TControl.Perform
        004e609c +02c Ladelist.exe Controls           TWinControl.MainWndProc
        004856e0 +014 Ladelist.exe Classes            StdWndProc
        775b00e3 +02b ntdll.dll                       KiUserCallbackDispatcher
        753c3675 +010 kernel32.dll                    BaseThreadInitThunk

    Read the article

  • Reference remotely located assembly (web uri) from locally installed application?

    - by moonground.de
    Hi Stackoverflowers! :) We have a .NET application for Windows which is installed locally by Microsoft Installer. Now we need to use additional assemblies which are located online on our web servers. We'd like to refer to a remote URI like https://www.ourserver.com/OurProductName/ExternalLib.dll and reveal additional functionality, which is described roughly by a known common ("AddIn/Plugin") interface. These are not 3rd-party plugins; we just want to be able to exchange parts of the application frequently, without the need for frequent software updates. Our first idea was to add some kind of "remote reference" in Visual Studio by setting the path to the remote assembly URI, but Visual Studio downloaded the assembly immediately to a temporary directory and added a reference to that. Our second attempt, then, is simply using a WebRequest (or WebClient) to retrieve a binary stream of the assembly and loading it "from image" by using Assembly.Load(...). This actually works, but is not very elegant and requires more additional programming for verification etc. We hoped ClickOnce would provide useful techniques, but apparently it's suitable for standalone applications only. (Correct me?) Is there a way (.NET native or by framework/API) to reference remotely located assemblies? Thanks in advance and have a happy Easter!
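    For reference, a minimal C# sketch of the second approach described above (WebClient plus Assembly.Load from an in-memory image). The URL is the one from the question; error handling and any signature verification are deliberately omitted:

        using System.Net;
        using System.Reflection;

        static Assembly LoadRemoteAssembly(string uri)
        {
            using (var client = new WebClient())
            {
                byte[] image = client.DownloadData(uri); // raw bytes of the DLL
                return Assembly.Load(image);             // load from the in-memory image,
            }                                            // never touching the local disk
        }

        // Usage sketch:
        // Assembly addIn = LoadRemoteAssembly(
        //     "https://www.ourserver.com/OurProductName/ExternalLib.dll");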

    Read the article

  • Importing package as a submodule

    - by wecac
    Hi, I have a 3rd-party open source package "foo" that is in its beta phase, and I want to tweak it to my requirements. So I don't want it installed in /usr/local/lib/python or anywhere else on the current sys.path, as I can't make frequent changes to top-level packages.

        foo/
            __init__.py
            fmod1.py    # contains: import foo.mod2
            fmod2.py    # contains: pass

    I want to install the package "foo" as a sub-package of my namespace, say "team.my_pkg", so that the "full name" of the package becomes "team.my_pkg.foo" without changing the code in the inner modules that refer to "team.my_pkg.foo" as "foo".

        team/
            __init__.py
            my_pkg/
                __init__.py
                foo/
                    fmod1.py    # contains: import foo.mod2
                    fmod2.py    # contains: pass

    One way to do this is in team/my_pkg/__init__.py:

        import os.path
        import sys
        sys.path.append(os.path.dirname(__file__))

    But I think that is very unsafe. I hope there is some way that only fmod1.py and fmod2.py can refer to "foo" by its short name, while everything else must use its complete name "team.my_pkg.foo". I mean, this should fail outside team/my_pkg/foo:

        import team.my_pkg
        import foo

    But this should succeed outside team/my_pkg/foo:

        import team.my_pkg.foo

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25  | Next Page >