Search Results

Search found 2338 results on 94 pages for 'donut caching'.


  • Is there a max recommended size on bundling js/css files due to chunking or packet loss?

    - by George Mauer
    So we all have heard that it's good to bundle your JavaScript. Of course it is, but it seems to me that the story is too simple. See if my logic makes sense here. Obviously fewer HTTP requests means fewer round trips and hence better performance. However - and I don't know much about bare HTTP - aren't HTTP responses sent in chunks? And if a file is larger than one of those chunks, doesn't it have to be downloaded as multiple (possibly synchronous?) round trips? As opposed to this, several requests for files just under the chunk size would arrive much more quickly, since modern web browsers download resources like JavaScript files in parallel. Even if chunking is not an issue, it seems like there would be some maximum recommended size just due to the likelihood of packet loss alone, since a bundled file must wait until it is entirely downloaded to execute, versus the more lenient native rule that scripts must merely execute in order. Obviously there are also the matters of browser caching and code volatility to consider, but can someone confirm this or explain why I'm off base? Does anyone have any numbers to put to it?

    Read the article

  • A quick question on data returned by jquery.ajax() call (EDITED)

    - by recipriversexclusion
    EDIT: The original problem was due to a stupid syntax mistake somewhere else, which I fixed. I have a new problem though, as described below. I have the following jquery.ajax call:

        $.ajax({
            type: 'GET',
            url: servicesUrl + "/" + ID + "/tasks",
            dataType: "xml",
            success: createTaskListTable
        });

    The createTaskListTable function is defined as:

        function createTaskListTable(taskListXml) {
            $(taskListXml).find("Task").each(function() {
                alert("Found task");
            }); // each task
        }

    Problem is: this doesn't work. I get an error saying taskListXml is not defined. The jQuery documentation states that the success function gets passed three arguments, the first of which is the data. How can I pass the data returned by .ajax() to my function with a variable name of my own choosing? My problem now is that I'm getting the XML from a previous ajax call! How is this even possible? That previous function is defined as function convertServiceXmlDataToTable(xml), so they don't even use the same variable name. Utterly confused. Is this some caching issue? If so, how can I clear the browser cache to get rid of the earlier XML? Thanks!
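
    If stale responses are the issue, jQuery can bust the cache for you: setting cache: false on the call appends a throwaway _=<timestamp> parameter to each GET, so the browser can never reuse an earlier response. A minimal sketch reusing the same call as above (the success parameter name is entirely your choice):

        $.ajax({
            type: 'GET',
            url: servicesUrl + "/" + ID + "/tasks",
            dataType: "xml",
            cache: false, // adds "_=<timestamp>" to the query string, defeating the browser cache
            success: function(taskListXml) { // name this parameter whatever you like
                $(taskListXml).find("Task").each(function() {
                    alert("Found task");
                });
            }
        });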

    Read the article

  • I just need one dependent module, but don't need its dependencies (Maven)

    - by smallufo
    In my project, there is a Spring XML config that utilizes ehcache to cache method returns. XML snippet (schema locations plus the relevant elements):

        http://www.springmodules.org/schema/ehcache
        http://www.springmodules.org/schema/cache/springmodules-ehcache.xsd

        <ehcache:config configLocation="classpath:ehcache.xml"/>
        <ehcache:proxy id="explainRetriever" refId="explainRetrieverImpl">
            <ehcache:caching methodName="get*" cacheName="explainCache"/>
        </ehcache:proxy>

    But at runtime, the server complains that it cannot find the definitions of http://www.springmodules.org/schema/ehcache. I know I have to add "spring-modules-cache.jar" to the WEB-INF/lib directory. But my project is maintained by Maven, and if I add "spring-modules-cache" as a runtime dependency, it will bring a lot of transitive dependencies into my WAR file, filling it with unnecessary jars. I just need the one jar, not all of its dependencies... Is there any way to tell Maven not to include its dependencies in the project? Or... another way: when packaging the WAR, just copy a prepared spring-modules-cache.jar into WEB-INF/lib - how would I do that? Thanks!
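
    Maven can do this declaratively with <exclusions> on the dependency itself. A sketch - the groupId and version shown are assumptions from memory, so verify the coordinates in your repository:

        <dependency>
            <groupId>org.springmodules</groupId>           <!-- coordinates assumed; verify -->
            <artifactId>spring-modules-cache</artifactId>
            <version>0.8</version>
            <exclusions>
                <exclusion>                                <!-- one block per transitive jar to drop -->
                    <groupId>unwanted.group</groupId>
                    <artifactId>unwanted-artifact</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

    Newer Maven versions (3.2.1 and later) also accept * wildcards for the exclusion groupId/artifactId, which drops every transitive dependency in a single block.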

    Read the article

  • ASPX page renders differently when reached on intranet vs. internet?

    - by MattSlay
    This is so odd to me... I have IIS 5 running on XP, and it's hosting a small ASP.NET app for our LAN. We can access it by using the computer name, virtual directory, and page name (http://matt/smallapp/customers.aspx), but you can also hit that IIS server and page from the internet, because I have a public IP that my firewall routes to the "Matt" computer (like http://213.202.3.88/smallapp/customers.aspx [just a made-up IP]). Don't worry, I have Windows domain authentication in place to protect the app from anonymous users. So all the above parts work fine. But what's weird is that the borders of the divs on the page are rendered much thicker when you access the page from the intranet versus the internet (I'm using IE8), and also some of the div layout (stretching and such) behaves differently. Why would it render differently in the same browser based on whether it was reached from the LAN vs. the internet? It does NOT do this in Firefox, so it must be an IE8 thing. All the CSS for the divs is right in the HTML page, so I do not think it is a caching matter of a CSS file. Notice how the borders are different in these two images: Internet: http://twitpic.com/hxx91 . LAN: http://twitpic.com/hxxtv
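
    One commonly cited culprit for exactly this symptom is IE8's Compatibility View: it is enabled by default for Intranet-zone sites, so http://matt/... may render in IE7 document mode while the public IP renders in true IE8 mode. If that turns out to be the cause here, forcing a single document mode from the page is the usual fix - a sketch, placed early in the <head>:

        <meta http-equiv="X-UA-Compatible" content="IE=edge" />

    The same value can be sent as an HTTP response header from IIS instead of a meta tag; either way, both routes to the page then use the same rendering engine.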

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL2k8. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for the QuestionId and demographics for the RespondentId... Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export, and it's working... The problem is that it seems slow, and we don't have that much data (less than 50k respondents). Right now I'm thinking "why am I 'paying' to de-aggregate the data for each query? Why don't I cache that?" The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported, and then select their combined cached de-aggregated data. To me the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is, we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of a SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO. With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat)... Is there a better technology/approach for this? Thanks!
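
    A minimal sketch of the per-respondent caching step described above, assuming an xml column named CachedAnswers added to the Respondent table (the table and column names here are illustrative):

        UPDATE r
        SET    r.CachedAnswers = (SELECT d.QuestionId, d.Answer
                                  FROM   Data d
                                  WHERE  d.RespondentId = r.RespondentId
                                  FOR XML AUTO, TYPE)   -- TYPE keeps the result as the xml type
        FROM   Respondent r;

    The export then filters Respondent as usual and shreds CachedAnswers with .value() calls. It avoids re-running the pivot, though the per-row .value() cost on large result sets is exactly the thing worth benchmarking before committing to it.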

    Read the article

  • Java applet loading images from external jars

    - by Mathias
    I have a jar on a server, and users should be able to develop extensions for it. Therefore the jar's main class should be extended, and some resources should be added to a second, user-created jar which will be loaded from another server or locally. Now I have problems accessing the resources (images) from the user-loaded jars. Here is the structure:

        My server: game.jar, containing
            game.class
            images.class
            ...
            image1.png (...)

        Local: user.jar, containing
            user.class (extends game)
            userimage.png

    The extension is loaded via Greasemonkey: it modifies the "archive" attribute to "/home/username/user.jar, game.jar" and the "code" attribute to "user.class". The user should be able to overwrite already defined images. If an image does not exist in game.jar, it is loaded correctly from user.jar. But the images loaded early in the game are always loaded from game.jar; the others seem to be overwritten correctly by the user. Is there a way to make sure they are always loaded in the correct order? This might be because of some caching mechanism: because Greasemonkey removes the game from the page, changes the archive and code attributes, and reinserts it, the game is loaded without the mod for a brief second. In that time, images are loaded, as expected, from game.jar - and those are exactly the ones the user cannot overwrite. But how do I avoid that? Another thing: if I overwrite the "run" method in user.class, the game is unable to load any image at all - not from user.jar and not from game.jar. Java doesn't find the image, as the URL object getClass().getResource(imagename) returns null. I tried to overwrite images.class, but that doesn't fix the problem unless I overwrite every class from game.jar involved in calling images.class.

    Read the article

  • Is .NET 4.0 just a show?

    - by Will Marcouiller
    I went to a presentation about the .NET Framework and Visual Studio 2010 last night. The topics were:

    ASP.NET 4 - some of the new features of ASP.NET 4:
        - More control over ClientIDs in WebForms;
        - Output Caching;
        - ... // some other stuff I don't really remember, being more in the framework and WinForms world.

    Entity Framework 2.0 (.NET 4.0):
        - T4 templates;
        - Domain-driven development;
        - Data-driven development;
        - Contexts (edmx files);
        - Some real-world limitations of EF4 (projects with over 70 to 75 tables);
        - Better POCO support - there are still these hidden EntityObject and StructuralObject classes, but they are used differently than in EF 1.0, so they no longer take over your inheritance hierarchy;
        - Lets you easily choose how to persist the hierarchy into the underlying database;
        - Code only (start working with EF4 directly from your code!);
        - Design by Contract (DbC).

    The most interesting feature - and the only one, as far as I'm concerned - is that parallelism is made easier. Which really works! No additional assembly references to add. In conclusion, I'm far from impressed with .NET Framework 4.0, apart from the fact that it makes some things easier to do. When you're used to doing something a certain way, it doesn't really change much, in my opinion. Is it me who cannot foresee what .NET 4.0 has to offer? What would you guys base your decision to migrate to .NET 4.0 on, in practical terms?
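
    For reference, the "no extra references" parallelism claim is easy to illustrate - Parallel.For lives in the System.Threading.Tasks namespace in the .NET 4 core libraries. A minimal sketch:

        using System;
        using System.Threading.Tasks;

        class ParallelDemo
        {
            static void Main()
            {
                // runs loop bodies concurrently across the thread pool
                Parallel.For(0, 10, i =>
                {
                    Console.WriteLine("Processing item {0} on thread {1}",
                                      i, System.Threading.Thread.CurrentThread.ManagedThreadId);
                });
            }
        }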

    Read the article

  • Flushing writes in buffer of Memory Controller to DDR device

    - by Rohit
    At some point in my code, I need to push the writes in my code all the way to the DIMM or DDR device. My requirement is to ensure the write reaches the row, bank, and column of the DDR device on the DIMM. I need to read back what I've written to main memory, and I do not want caching to give me the value: after writing, I want to fetch this value from main memory (the DIMMs). So far I've been using Intel's x86 instruction wbinvd (write back and invalidate cache). However, this means the caches and TLB are flushed, and write-back requests go to main memory. But there is a reasonable amount of time during which this data might reside in the write buffer of the memory controller (Intel calls it the integrated memory controller, or IMC). The memory controller might take some more time, depending on the algorithm it runs to handle writes. Is there a way to force all existing or pending writes in the write buffer of the memory controller out to the DRAM devices? What I am looking for is something more direct and lower-level than wbinvd. If you could point me to the right documents or specs that describe this, I would be grateful. Generally, the IMC has several registers which can be written to or read from, but from looking at the specs for the chipset I could not find anything useful. Thanks for taking the time to read this.
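
    For the cache side of the problem (not the IMC buffer itself), a finer-grained tool than wbinvd is a per-line flush plus a fence. A sketch using the SSE2 intrinsics - note that neither instruction promises the memory controller has actually drained to the DRAM devices:

        #include <emmintrin.h>  /* _mm_clflush, _mm_mfence (SSE2) */

        /* Flush one cache line and order the flush against later loads/stores.
         * This evicts the line from every cache level, but the write may still
         * sit in the IMC's write buffer for a while afterwards. */
        static void flush_line(const void *p)
        {
            _mm_clflush(p);
            _mm_mfence();
        }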

    Read the article

  • SQL Server service broker reporting as off when I have written a query to turn it on

    - by dotnetdev
    I have made a small ASP.NET website that uses SqlCacheDependency. It fails with:

        The SQL Server Service Broker for the current database is not enabled, and as a result
        query notifications are not supported. Please enable the Service Broker for this
        database if you wish to use notifications.
        Description: An unhandled exception occurred during the execution of the current web
        request. Please review the stack trace for more information about the error and where
        it originated in the code.
        Exception Details: System.InvalidOperationException: The SQL Server Service Broker for
        the current database is not enabled, and as a result query notifications are not
        supported. Please enable the Service Broker for this database if you wish to use
        notifications.
        Source Error:
        Line 12: System.Data.SqlClient.SqlDependency.Start(connString);

    This is the erroneous line in my global.asax. However, in SQL Server (2005), I enabled Service Broker like so (I connect to and run the SQL Server service when I debug my site):

        ALTER DATABASE mynewdatabase SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE

    And this was successful. What am I missing? I am trying to use SQL caching dependency and have followed all the procedures. Thanks
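
    A quick sanity check worth running: confirm the flag actually stuck on the database your connection string points at - a common trap is enabling the broker on one database while the site actually connects to another (for example, an attached user-instance copy of the .mdf). A sketch:

        SELECT name, is_broker_enabled
        FROM   sys.databases
        WHERE  name = 'mynewdatabase';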

    Read the article

  • Best practices for withstanding launch day traffic burst

    - by Sam McAfee
    We are working on a website for a client that (for once) is expected to get a fair amount of traffic on day one. There are press releases, people are blogging about it, etc. I am a little concerned that we're going to fall flat on our face on day one. What are the main things you would look at to ensure (in advance, without real traffic data) that you can stay standing after a big launch? Details: this is a L/A/M/PHP stack, using an internally developed MVC framework. It is currently being launched on one server, with Apache and MySQL both on it, but we can break that up if need be. We are already installing memcached and doing as much PHP-level caching as we can think of. Some of the pages are rather query-intensive, and we are using Smarty as our template engine. Keep in mind there is no time to change any of these major aspects - this is just the setup. What sorts of things should we watch out for?
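
    For the memcached layer, the standard read-through pattern is worth baking in wherever a query is expensive. A minimal sketch assuming the pecl/memcached extension; build_home_page and the key name are hypothetical stand-ins for your own code:

        $m = new Memcached();
        $m->addServer('127.0.0.1', 11211);

        $page = $m->get('page:home');
        if ($page === false) {                // cache miss: rebuild once, then serve from cache
            $page = build_home_page();        // hypothetical expensive query/render
            $m->set('page:home', $page, 300); // cache for 5 minutes
        }
        echo $page;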

    Read the article

  • Amazon Product API ResponseGroups and Default results

    - by aboxy
    A. In our application, most of the data we work with is stored as free text, i.e. there is no categorization done as of now. We are using openNLP libraries to make sense of the data (extract keywords/classify) and then query Amazon Web Services to pull the results. We use searchindex=All and keywords=. Results are not always returned, and we basically get 'AWS.ECommerceService.NoExactMatches'. How do we avoid that? 1) Is there a way to specify default results if no match is found? E.g. the Amazon carousel widget does that: if the search query returns no results, it basically shows some computer items. 2) Should I always batch the request and add another search criterion to every request, so that if my first criterion pulls no results, we can be sure the second one always will (possibly caching?)? Here is one search criterion: 'Open Circle Hoop Earrings Polished Stainless Steel Open Circle Hoop Earrings Polished Stainless Steel DiamondShark'. This returns no results via the API. On the Amazon site, I get alternative suggestions with some results which are pretty relevant. Is there a way to pull those results? B. We just need a thumbnail image, a title, and a description for our app. Which ResponseGroup is appropriate? We are using Medium right now, but there is an awful lot of information even with that response group. Any help is appreciated. Thanks
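
    On question B, a sketch of a leaner request: the Small response group carries ASIN/title/links, Images carries the thumbnail URLs, and EditorialReview carries the product description, which together should be much lighter than Medium. The parameter values below are illustrative, and the signing parameters are elided:

        http://ecs.amazonaws.com/onca/xml?Service=AWSECommerceService
            &Operation=ItemSearch
            &SearchIndex=All
            &Keywords=stainless%20steel%20hoop%20earrings
            &ResponseGroup=Small,Images,EditorialReview
            &AWSAccessKeyId=[your key]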

    Read the article

  • C# wrapper for objects

    - by Haggai
    I'm looking for a way to create a generic wrapper for any object. The wrapper object will behave just like the class it wraps, but will be able to have more properties, variables, methods, etc., e.g. for object counting, caching, and so on. Say the wrapper class is called Wrapper, and the class to be wrapped is called Square, which has the constructor Square(double edge_len) and the properties/methods EdgeLength and Area. I would like to use it as follows:

        Wrapper<Square> mySquare = new Wrapper<Square>(2.5); /* or */ new Square(2.5);
        Console.Write("Edge {0} -> Area {1}", mySquare.EdgeLength, mySquare.Area);

    Obviously I could create such a wrapper class for each class I want to wrap, but I'm looking for a general solution, i.e. Wrapper<T>, which can handle both primitive and compound types (although in my current situation I would be happy with just wrapping my own classes). Suggestions? Thanks.
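
    A sketch of one general approach on .NET 4: derive from DynamicObject and forward member reads to the wrapped instance via reflection. The trade-off is that the variable must be typed dynamic, so compile-time checking is lost; AccessCount is just an example of extra wrapper state:

        using System;
        using System.Dynamic;

        public class Wrapper<T> : DynamicObject
        {
            private readonly T _inner;
            public int AccessCount { get; private set; }   // example extra wrapper state

            public Wrapper(params object[] args)
            {
                // build the wrapped instance from the forwarded constructor arguments
                _inner = (T)Activator.CreateInstance(typeof(T), args);
            }

            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                AccessCount++;
                var prop = typeof(T).GetProperty(binder.Name);
                result = prop == null ? null : prop.GetValue(_inner, null);
                return prop != null;
            }
        }

        // dynamic mySquare = new Wrapper<Square>(2.5);
        // Console.Write("Edge {0} -> Area {1}", mySquare.EdgeLength, mySquare.Area);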

    Read the article

  • hosting a high traffic facebook app (game)

    - by z3cko
    We are currently developing a high-traffic Facebook application. All the traffic will come within one month, during which 500,000 to 1,000,000 users are expected. After that month, the game is over and we have a winner, so the app will be archived. We are currently planning to develop the application with Ruby on Rails and are searching for hosting options that can deal with the traffic. The problem is not so much the users but the peak values: we will have around 500,000 requests coming in daily within a short timeframe (let's say within 3 minutes in the worst case). We are expecting 500,000 to 1,000,000 users of the application, with peaks at 1:00pm (timezone GMT+1), when most of the users (up to 80%) will send most of the requests. The requests run from the 11th of June to the 11th of July - after that, the app/game is closed/over. We are currently developing an aggressive caching mechanism, and at the moment we are thinking about 2 or 3 small apps/webservices that will handle the load, distributed as follows: a) main application, cached data (11 screens, 200k each); b) voting, every day until 1:00pm (timezone GMT+1) - every user votes, with about 10k of data sent, and highly concurrent peak values! Questions: is there any specific application setup that is recommendable? Are there any hosting partners that can be recommended? Thanks!

    Read the article

  • Firefox extension js object initialization

    - by Michael
    Note: this is about a Firefox extension, not a general JS question. In my Firefox extension project, I need my JavaScript object to be initialized just once per Firefox session, not once per window. Otherwise, each time I open a window, new timers will be engaged and new properties will be used, so everything starts from scratch. I hope the example below will demystify my question :)

        var StupidExtension = {
            statusBarValue: "Not Initialized Yet",
            startup: function () {
                ... // Show statusBarValue in Status Bar Panel
            },
            initTimerToRetrieveStatusBarValueFromNetwork: function () {
                ...
            }
        }

    So each time you hit Ctrl+N to open a new window, you will see "Not Initialized Yet", and then a new timer will be fired; after some time, once it retrieves data from the network, you will see the value on the second window too, and so on. Ideally there would be just a single timer function running, updating all status bar panels in all Firefox windows. Of course I can do some caching, like saving the value in prefs or some other storage, and then showing it from there. But that feels artificial. So the question is: is there a "native" technique for making some parts of the object static across all Firefox window instances?
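
    The era-appropriate native answer is a JavaScript code module: Firefox loads a .jsm exactly once per application, and every window that imports it shares the same object. A sketch - the file layout and the resource:// alias (which you would register in chrome.manifest) are illustrative:

        // modules/stupidextension.jsm
        var EXPORTED_SYMBOLS = ["StupidShared"];
        var StupidShared = {
            statusBarValue: "Not Initialized Yet",
            timerStarted: false   // lets the first window start the single timer
        };

        // in each window's overlay script
        Components.utils.import("resource://stupidextension/stupidextension.jsm");
        // every window now reads and writes the same StupidShared instance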

    Read the article

  • Telerik chart not loading correctly in new window (ajax issue?)

    - by Phillip Schmidt
    I have a page which contains user controls with Telerik charts (grids also, but they work fine). From this page, the user can click a button to be redirected to a "printer-friendly version" page, which opens a new window via JavaScript and goes through a slightly different view (for formatting and the like), but the Telerik code is all the same. The problem is, my chart displays just fine in the original window, but the new window displays a basically empty chart with no data. This bug is only present in IE; grids work fine, for whatever reason. I'm thinking this is due to differences in script caching between browsers - correct me if I'm wrong, I'm semi-new to client-directed web development. Anyway, I read somewhere that Telerik has issues with loading data and/or JS files when loaded via ajax, so maybe that's the problem? If so, how could I get around it? And if not, any ideas on what could be causing this issue? It's causing me a great deal of frustration, since a print-preview page seems like it should be the easiest of jobs. Edit: The charts are being rendered as HTML (if somebody can explain how to render them as images, that would be awesome). Dev tools show basically the same thing between Chrome and IE. Whenever my web service is back up, I'll WinMerge the two pages and look for any peculiarities/differences. In the meantime, the "render as an image" concept sounds promising: I could just save the image from the first page and insert it right into the print-preview page, right? And since it's a print-preview page, it won't need to be interactive or anything, so that would work out nicely. Another (important) edit: screenshots were attached here showing the suspected culprit scripts, a close-up of them, and a side-by-side of the chart working in Chrome and not working in IE.

    Read the article

  • Zend Framework - Deny access to folders other than public folder

    - by Vincent
    All, I have the following Zend application structure:

        helloworld
            application
                configs
                controllers
                models
                layouts
            include
            library
            public
                .htaccess
                index.php
                design
            .htaccess

    The .htaccess in the root folder has the following contents:

        #####################################################
        # CONFIGURE media caching
        #
        Header unset ETag
        FileETag None
        Header unset Last-Modified
        Header set Expires "Fri, 21 Dec 2012 00:00:00 GMT"
        Header set Cache-Control "max-age=7200, must-revalidate"
        SetOutputFilter DEFLATE
        #####################################################
        ErrorDocument 404 /custom404.php
        RedirectMatch permanent ^/$ /public/

    The .htaccess in the public folder has the following:

        Options -MultiViews
        ErrorDocument 404 /custom404.php
        RewriteEngine on
        # The leading %{DOCUMENT_ROOT} is necessary when used in VirtualHost context
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -s [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -l [OR]
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    My vhost configuration is as under:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            DocumentRoot "C:\xampp\htdocs\xampp\helloworld"
            ServerName helloworld
            ServerAlias helloworld
            <Directory "C:\xampp\htdocs\xampp\helloworld">
                Options Indexes FollowSymLinks
                AllowOverride all
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
        </VirtualHost>

    Currently, if the user visits http://localhost, my .htaccess files above make sure the request is routed to http://localhost/public automatically. But if the user visits any other folder apart from the public folder from the address bar, he gets a directory listing of that folder. How can I deny the user access to every folder except the public folder? I want the user to be redirected to the public folder if he visits any other folder. However, if the underlying code requests something from other folders (ex: ), it should still work. Thanks
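
    A sketch of the usual Zend Framework answer: instead of policing every folder, point the document root directly at public/, so nothing above it is URL-addressable at all, while PHP includes (which go through the filesystem, not HTTP) keep working unchanged:

        <VirtualHost *:80>
            ServerName helloworld
            DocumentRoot "C:\xampp\htdocs\xampp\helloworld\public"
            <Directory "C:\xampp\htdocs\xampp\helloworld\public">
                AllowOverride All
                Order Deny,Allow
                Deny from all
                Allow from 127.0.0.1
            </Directory>
        </VirtualHost>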

    Read the article

  • GWT separate modules with no code sharing

    - by Code freak
    Hi, I have to make a web application using GWT. The project has a core module that exposes a set of APIs to be used by other apps; each of these apps is unrelated, and each is loaded in a separate iframe. My idea was to compile the core into core.js, and each app would have its own app1.js, app2.js, and so on...

    App1:

        <script type="text/javascript" src="core.js"></script>
        <script type="text/javascript" src="app1.js"></script>

    With this design, thanks to browser caching, each app loads only its own app.js, which should be small - around 20kb. Making the core module is straightforward, but the apps are problematic: after compilation, each app contains the entire GWT library, which substantially increases the download size of the complete webapp. Can anyone suggest a way to get around this problem? I've checked similar questions on SO but failed to find a simple working answer for the problem. Thanks for any help.
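
    GWT's own mechanism for this shape of problem is code splitting (GWT 2.0+): everything reachable only inside a runAsync callback is compiled into a separate fragment that is downloaded on demand rather than bundled into the initial script. It does not let separately compiled modules literally share one GWT runtime, but it attacks the same oversized-download symptom. A sketch, where App1View is a hypothetical per-app entry point:

        GWT.runAsync(new RunAsyncCallback() {
            public void onFailure(Throwable reason) {
                Window.alert("Could not load this part of the app");
            }
            public void onSuccess() {
                new App1View().show();   // hypothetical per-app entry point
            }
        });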

    Read the article

  • Django Model Formset Pre-Filled Value Problem

    - by user552377
    Hi, I'm trying to use model formsets with Django. When I load the form's template, I see that it's filled up with previous values. Is there a caching mechanism that I should stop, or what? Thanks for your help; here is my code:

    models.py:

        class FooModel(models.Model):
            a_field = models.FloatField()
            b_field = models.FloatField()

            def __unicode__(self):
                return unicode(self.a_field)

    forms.py:

        from django.forms.models import modelformset_factory

        FooFormSet = modelformset_factory(FooModel)

    views.py:

        def foo_func(request):
            if request.method == 'POST':
                formset = FooFormSet(request.POST, request.FILES, prefix='foo_prefix')
                if formset.is_valid():
                    formset.save()
                    return HttpResponseRedirect('/true/')
                else:
                    return HttpResponseRedirect('/false/')
            else:
                formset = FooFormSet(prefix='foo_prefix')
            variables = RequestContext(request, {'formset': formset})
            return render_to_response('footemplate.html', variables)

    template:

        <form method="post" action="."> {% csrf_token %}
            <input type="submit" value="Submit" />
            <table id="FormsetTable" border="0" cellpadding="0" cellspacing="0">
                <tbody>
                {% for form in formset.forms %}
                    <tr>
                        <td>{{ form.a_field }}</td>
                        <td>{{ form.b_field }}</td>
                    </tr>
                {% endfor %}
                </tbody>
            </table>
            {{ formset.management_form }}
        </form>
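
    A likely explanation rather than caching: a formset from modelformset_factory renders a form for every existing row in the table by default. If the page should start with blank forms only, pass an empty queryset - a sketch:

        # views.py - blank forms only; saved FooModel rows no longer pre-fill the page
        formset = FooFormSet(queryset=FooModel.objects.none(), prefix='foo_prefix')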

    Read the article

  • How do I implement Hibernate pagination using a cursor (so the results stay consistent, despite new records)?

    - by hunterae
    Hey all, is there any way to maintain a database cursor using Hibernate between web requests? Basically, I'm trying to implement pagination, but the data being paged is constantly changing (i.e. new records are added to the database). We are trying to set it up so that when you do your initial search (returning a maximum of 5000 results) and you page through them, the same records always appear on the same page (i.e. we're not re-running the query each time the next and previous page buttons are clicked). The way we're currently implementing this is by selecting the (at most) 5000 primary keys of the rows we're paging, storing those keys in memory, and then using 20 primary keys at a time to fetch their details from the database. However, we want to get away from storing these keys in memory and would much prefer a database cursor that we keep going back to, moving backwards and forwards over it to generate pages. I tried doing this with Hibernate's ScrollableResults, but found that calling methods like next() and previous() causes an exception if you are within a different web request / Hibernate session (no surprise there). Is there any way to reattach a ScrollableResults object to a session, much the same way you would reattach a detached entity to make it persistent? Are there any other approaches to implementing this paging with consistent results, without caching the primary keys?
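
    One alternative sketch that avoids both a held-open cursor and the in-memory key list is keyset (seek) pagination: anchor each page on the last primary key the user saw, and pin the result set with a timestamp filter so rows inserted after the initial search never appear. The entity and property names here are illustrative:

        // first page: lastSeenId = 0; searchTime is captured when the user first searches
        List page = session.createQuery(
                "from Result r where r.id > :lastSeenId and r.createdAt <= :searchTime order by r.id")
            .setParameter("lastSeenId", lastSeenId)
            .setTimestamp("searchTime", searchTime)
            .setMaxResults(20)
            .list();

    Each request is then self-contained: the client only needs to send back the boundary key of the page it is on, in either direction.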

    Read the article

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be e.g. of type string, int, function, etc.). At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fitted. Then I thought that using a map - in my case map<string, symbol> - would be better than iterating through the vector all the time, but: (this part is a bit hard to explain, but I'll try) when a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (using the vector for now). If I iterate through the vector every time the line gets executed (think of a loop), it is terribly slow (as it currently is - nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable:

        SymbolTable[ myVar.Name ]

    But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like:

        SymbolTable.at( myVar.CachedPosition )

    Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
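
    A sketch of a middle ground, assuming a Symbol struct: hash the name once per first use with std::unordered_map (TR1/C++11; std::map works the same way at O(log n)), then cache the resulting pointer on the AST node, so repeated executions of the same line skip the lookup entirely:

        #include <string>
        #include <unordered_map>

        struct Symbol { /* name, tagged value, ... */ };

        std::unordered_map<std::string, Symbol> table;

        // First execution of a line: one O(1)-average hash lookup.
        Symbol* lookup(const std::string& name) {
            std::unordered_map<std::string, Symbol>::iterator it = table.find(name);
            return it == table.end() ? 0 : &it->second;
        }
        // The AST node then stores the Symbol* it got back. Unlike a growing vector,
        // map and unordered_map never invalidate element pointers on insert, so the
        // cached pointer stays valid as long as the symbol itself is not erased.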

    Read the article

  • Set a datetime for next or previous sunday at specific time

    - by Marc
    I have an app where there is always a current contest (defined by start_date and end_date datetimes). I have the following code in application_controller.rb as a before_filter:

        def load_contest
          @contest_last = Contest.last
          @contest_last.present? ? @contest_leftover = (@contest_last.end_date.utc - Time.now.utc).to_i : @contest_leftover = 0
          if @contest_last.nil?
            Contest.create(:start_date => Time.now.utc, :end_date => Time.now.utc + 10.minutes)
          elsif @contest_leftover < 0
            @winner = Organization.order('votes_count DESC').first
            @contest_last.update_attributes!(:organization_id => @winner.id, :winner_votes => @winner.votes_count) if @winner.present?
            Organization.update_all(:votes_count => 0)
            Contest.create(:start_date => @contest_last.end_date.utc, :end_date => Time.now.utc + 10.minutes)
          end
        end

    My questions: 1) I would like to change the :end_date to something that signifies the next Sunday at a certain time (e.g. next Sunday at 8pm). Similarly, I could then set the :start_date to the previous Sunday at a certain time. I saw that there is a sunday() method (http://api.rubyonrails.org/classes/Time.html#method-i-sunday), but I'm not sure how to specify a certain time on that day. 2) For this situation of always wanting the current contest, is there a better way of loading it in the app? Would caching it be better, and then reloading when a new one is created? I'm not sure how this would be done, but it seems more efficient. Thanks!
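
    For question 1, a sketch built on the ActiveSupport helpers the question links to: sunday gives this week's Sunday, change pins the clock time (resetting minutes and seconds to zero), and adding a week covers the case where that moment has already passed:

        def next_sunday_at(hour)
          t = Time.now.utc.sunday.change(:hour => hour)  # this week's Sunday at hour:00:00 UTC
          t > Time.now.utc ? t : t + 1.week              # already past? use next week's Sunday
        end

        # e.g. :end_date   => next_sunday_at(20)          # next Sunday at 8pm
        #      :start_date => next_sunday_at(20) - 1.week # the Sunday before it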

    Read the article

  • Fast path cache generation for a connected node graph

    - by Sukasa
    I'm trying to get a faster pathfinding mechanism in place in a game I'm working on, for a connected node graph. The nodes are classed into two types, "networks" and "routers". (In the picture that accompanied this question, blue circles represent routers and grey rectangles networks.) Each network keeps a list of which routers it is connected to, and vice versa. Routers cannot connect directly to other routers, and networks cannot connect directly to other networks. I need an algorithm that will map out a path, measured in the number of networks crossed, for each possible source and destination network, excluding paths where the source and destination are the same network. I have one right now, but it is unusably slow, taking about two seconds to map the paths, which becomes incredibly noticeable for all connected players. The current algorithm is a depth-first brute-force search (it was thrown together in about an hour just to get path caching working) which returns an array of networks in the order they are traversed, which explains why it's so slow. Are there any algorithms that are more efficient? As a side note, while the example graphs have four networks, the in-practice graphs have 55 networks and about 20 routers in use. Impossible paths can also occur, and the network/router graph topology can change at any time, requiring the path cache to be rebuilt. What approach/algorithm would likely provide the best results for this type of graph?
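
    Since every hop costs the same, this is the textbook case for breadth-first search: the first time BFS reaches a node, it has already found a shortest route to it, so one BFS per source network rebuilds the whole cache in O(nodes + edges) per source - trivial at 55 networks and 20 routers. A sketch in Python (the neighbours callback stands in for your network<->router adjacency lists):

        from collections import deque

        def shortest_paths_from(source, neighbours):
            """BFS from one source; neighbours(node) yields directly connected nodes."""
            parent = {source: None}
            queue = deque([source])
            while queue:
                node = queue.popleft()
                for nxt in neighbours(node):
                    if nxt not in parent:        # first visit == shortest route found
                        parent[nxt] = node
                        queue.append(nxt)
            return parent  # walk parent[] backwards from any destination to rebuild its path
                           # (destinations missing from parent[] are unreachable)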

    Read the article

  • Process data BEFORE a 301 Redirect?

    - by Jesse
    So, I've been working on a PHP link shortener (I know, just what the world needs). Basically, when the page loads, PHP determines where it needs to go and sends a 301 header to redirect the browser, like so:

        header("HTTP/1.1 301 Moved Permanently");
        header("Location: http://newsite.com");

    Now, I'm trying to add some tracking to my redirects and insert some custom analytics data into a MySQL table before the redirect happens. It works perfectly if I don't specify a redirect type and just use:

        header("Location: http://newsite.com");

    But, of course, as soon as you add in the 301 header, nothing else gets processed. Actually, on the first request the data is sent to MySQL, but on any subsequent requests there's no communication with the database. I assume it's a browser caching issue: once it's seen the 301, the browser decides there's no reason to request anything on future visits. But does anyone know if there's a way to get around this? I'd really like to keep it as a 301 for SEO purposes (I believe if you don't specify, it sends a 404 by default?). I thought about using .htaccess to prepend a file to the page that would do the MySQL work, but with the 301, wouldn't that just get ignored as well? Anyway, I'm not sure if there's any solution other than using a different type of redirect, but I'm not ready to give up just yet. So, any suggestions would be much appreciated. Thanks!
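
    For what the diagnosis implies, a sketch: a 301 is explicitly cacheable, so once the browser has seen it, later clicks may never reach PHP at all. A 302 (or 307) keeps every hit on the server, so the insert always runs before the redirect. $pdo and $shortId stand in for your own connection and URL-parsing code:

        // log the click first, then redirect with a non-cached status
        $stmt = $pdo->prepare("INSERT INTO clicks (short_id, hit_at) VALUES (?, NOW())");
        $stmt->execute(array($shortId));

        header("Location: http://newsite.com", true, 302); // third argument sets the status code
        exit;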

    Read the article

  • Repeated host lookups failing in urllib2

    - by reve_etrange
    I have code which issues many HTTP GET requests using Python's urllib2, in several threads, writing the responses into files (one per thread). During execution, it looks like many of the host lookups fail (causing a "name or service unknown" error; see the appended error log for an example). Is this due to a flaky DNS service? Is it bad practice to rely on DNS caching, if the host name isn't changing? I.e. should a single lookup's result be passed into urlopen?

        Exception in thread Thread-16:
        Traceback (most recent call last):
          File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
            self.run()
          File "/home/da/local/bin/ThreadedDownloader.py", line 61, in run
            page = urllib2.urlopen(url) # get the page
          File "/usr/lib/python2.6/urllib2.py", line 126, in urlopen
            return _opener.open(url, data, timeout)
          File "/usr/lib/python2.6/urllib2.py", line 391, in open
            response = self._open(req, data)
          File "/usr/lib/python2.6/urllib2.py", line 409, in _open
            '_open', req)
          File "/usr/lib/python2.6/urllib2.py", line 369, in _call_chain
            result = func(*args)
          File "/usr/lib/python2.6/urllib2.py", line 1170, in http_open
            return self.do_open(httplib.HTTPConnection, req)
          File "/usr/lib/python2.6/urllib2.py", line 1145, in do_open
            raise URLError(err)
        URLError: <urlopen error [Errno -2] Name or service not known>
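
    Whatever the resolver issue turns out to be, a small retry wrapper usually hides transient lookup failures. A Python 2.6 sketch using only the standard library:

        import time
        import urllib2

        def urlopen_with_retry(url, attempts=3, delay=2.0):
            """Retry transient failures (e.g. flaky DNS) with a short pause between tries."""
            for attempt in range(attempts):
                try:
                    return urllib2.urlopen(url)
                except urllib2.URLError:
                    if attempt == attempts - 1:
                        raise          # out of attempts: let the caller see the error
                    time.sleep(delay)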

    Read the article

  • Scalability 101: How can I design a scalable web application using PHP?

    - by Legend
    I am building a web application and have a couple of quick questions. From what I've learnt, one should not worry about scalability when initially building the app and should only start worrying when the traffic increases. However, this being my first web application, I am not quite sure if I should take an approach where I design things in an ad-hoc manner and later "fix" them. I have been reading stories about how people start off with an app that gets millions of users in a week or two. Not that I will face the same situation, but I can't help but wonder: how do these people do it? Currently, I bought a shared hosting account on Lunarpages and that got me started in building and testing the application. However, I am interested in learning how to build the same application in a scalable manner using the cloud, for instance Amazon's EC2. From my understanding, I can see a couple of components:

        - A load balancer that first receives requests and then decides where to route each request;
        - Server replicas that handle each request, update the database (if required), and send back the response to the client;
        - A caching mechanism like memcached that kicks in for similar requests and returns objects from the cache;
        - A black box that handles database replication.

    Specifically, I am trying to do the following:

        - Set up a load balancer (my homework revealed that HAProxy is one such load balancer);
        - Set up replication so that databases can be synchronized;
        - Use memcached;
        - Configure Apache to work with multiple web servers;
        - Partition the application to use Amazon EC2 and Amazon S3 (my application is something that will need a great deal of storage).

    Finally, how can I avoid burning myself when using Amazon services? Because this is just a learning phase, I can probably do with 2-3 servers, a simple load balancer, and replication, but I want to avoid accidentally paying loads of money. I am able to find resources on individual topics, but am unable to find something that starts off from the big picture. Can someone please help me get started?
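
    On the load-balancer step, a minimal HAProxy sketch of the front tier described above (the backend name and server addresses are illustrative):

        frontend www
            bind *:80
            default_backend webfarm

        backend webfarm
            balance roundrobin             # rotate requests across the replicas
            server web1 10.0.0.11:80 check # 'check' health-checks each server
            server web2 10.0.0.12:80 check

    Replicas can then be added or removed by editing the backend list, which is also where EC2 instances would slot in as you scale out.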

    Read the article
