Search Results

Search found 100587 results on 4024 pages for 'page load time'.

  • Network Load Balancing: efficiency and limits?

    - by Vimvq1987
    I'm about to study NLB on Windows Server 2003. It addresses both of my current interests: scalability and high availability. But I don't know how powerful it is in a production environment. Is NLB an efficient solution? How is it deployed in the real world? Is it popular? What are its limits? Thank you so much for answering my questions. :)

  • Varnish : Non-Cache/Data Fetch + Load-Balance

    - by xperator
    Someone commented on my previous question and said it's possible to do this with Varnish. Instead of the reply going back through the load balancer (Client -> Varnish LB -> Backend -> Varnish LB -> Client), I want the backend to reply directly to the client (Client -> Varnish LB -> Backend -> Client). This is not working:

        sub vcl_pass {
            if (req.http.host ~ "^(www.)?example.com$") {
                set req.backend = baz;
                return (pass);
            }
        }

  • Erratic response times with Apache 2.0.52 on Red Hat 4

    - by Kevin
    Under load, we've noticed response times from Apache vary greatly for the same 7 KB image: anywhere from 0.01 seconds to 25 seconds or more. Unfortunately, due to corporate policy constraints we are pretty much stuck on Apache 2.0.52. I'm at best an Apache novice, so I'm in over my head with this problem. My focus recently has turned to our choice of MPM module. We use the worker model on a dual-core, hyper-threaded blade. It doesn't appear that swapping is an issue, and I don't see any signs of a hardware problem. I've read that worker is optimal on hardware with many CPUs, while prefork is more suitable for our specific hardware profile. I can see conceptually how choosing the wrong MPM could result in this erratic behavior, but I'm not confident that it's the root cause here. Has anyone else seen this kind of range in response times for simple static content? What else should I be looking into?
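
    One hedged starting point, whichever MPM turns out to be right: make the worker sizing explicit rather than relying on compiled-in defaults, so thread starvation under load can be ruled out. The directives below are the standard Apache 2.0 worker MPM settings; the values are illustrative assumptions, not tuned recommendations.

        # httpd.conf - explicit worker MPM sizing (values are examples only)
        <IfModule worker.c>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>

    Enabling mod_status with ExtendedStatus On during a load test would also show whether all worker threads are busy at the moments the slow responses occur.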

  • Linux CentOS 6 becomes unavailable from time to time (OS/network issue)

    - by adoado0
    I am encountering the following problem. There is one server (DL160 G5) running CentOS 6.3 with the default kernel 2.6.32-220.2.1.el6.x86_64; the issue also appeared on an older version (6.1) with an older kernel (I do not remember exactly which). cPanel is installed, and from time to time the server becomes unavailable over the network. What I've checked (via KVMoIP):

    - load average is completely normal, and the server lacks neither memory nor disk space when the problem occurs
    - no console notifications
    - checked all access logs; no sign that a client script is the cause
    - cannot even access the local interface (127.0.0.1) or the main IP address
    - running tcpdump I can only see packets arriving at the server - no responses
    - all services seem to be running properly (mail, SQL, HTTP, SSH)
    - checked crontab and all clients' crontabs too
    - network port utilisation is low (up to several Mbit/s); arriving packet rate is low - hundreds per second (according to tcpdump)
    - console (via KVMoIP) works fine, no lags
    - there is no conntrack and no IPv6 on this server
    - flushing iptables and unloading modules does not resolve the problem; restarting the network does not resolve it either, and no errors appear
    - it occurs both when two separate networks (with multiple gateways) are configured and with a single IP and one default gateway - so it seems independent of network configuration
    - it seems to repeat randomly (independent of load, packet rate, or bandwidth usage)
    - checked the server with different rootkit detection tools - it seems to be clean
    - the server has been rebooted; it did not change anything
    - there are no interface errors
    - it appears randomly, anywhere from once a week to several times per day, and usually works fine again after 1-15 minutes

    What else can I check? It is definitely an OS issue - there is traffic at the interface in only one direction when the problem occurs, and I cannot even ping loopback. Any ideas? Recommended checks? Anything I have not checked above?
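
    One thing not on the list: capture kernel-side network state automatically while the hang is happening, so there is something to inspect after the next outage. A minimal sketch (file path, interface name and interval are assumptions), run every minute from cron:

        #!/bin/sh
        # net-snapshot.sh - append network diagnostics for post-mortem analysis
        {
            date
            ss -s                      # socket summary
            ip -s link show eth0       # interface counters, including drops
            ip neigh show | wc -l      # ARP/neighbour table size
            dmesg | tail -n 20         # recent kernel messages
        } >> /var/log/net-snapshot.log 2>&1

    A symptom pattern like this (packets arrive, replies never leave, clears by itself after minutes) can be produced by neighbour-table overflow, which would show up as "neighbour table overflow" lines in the dmesg section of the snapshot; that is an assumption to verify, not a diagnosis.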

  • Dynamically created controls and the ASP.NET page lifecycle

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine, and .NET takes care of managing ViewState for these controls.

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            RenderDynamicControls();
        }

        private void RenderDynamicControls()
        {
            //1. call service layer to retrieve form definition
            //2. create and add controls to page container
        }

    I have a new requirement: if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code:

        protected void MyButton_Click(object sender, EventArgs e)
        {
            RenderDynamicControlsALittleDifferently();
        }

        private void RenderDynamicControlsALittleDifferently()
        {
            //1. clear all controls from the page container added in RenderDynamicControls()
            //2. call service layer to retrieve form definition
            //3. create and add controls to page container
        }

    My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that would make that transition easier would be much appreciated. Thanks
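
    A hedged pattern that avoids rendering twice on most postbacks: keep the current "mode" in ViewState, have OnLoad always build whichever variant the flag says, and let the click handler just flip the flag and rebuild once for that response. The property, container and method names below are illustrative assumptions, not the poster's actual code:

        // mode flag round-tripped in ViewState (names are assumptions)
        private string FormMode
        {
            get { return (string)(ViewState["FormMode"] ?? "default"); }
            set { ViewState["FormMode"] = value; }
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            BuildDynamicControls(FormMode); // builds the right variant on every request
        }

        protected void MyButton_Click(object sender, EventArgs e)
        {
            FormMode = "alternate";              // remembered on later postbacks
            formContainer.Controls.Clear();      // drop the controls built in OnLoad
            BuildDynamicControls(FormMode);      // rebuild once for this response
        }

    The double build still happens on the click itself - that part is inherent to the lifecycle - but every subsequent postback builds the correct variant exactly once.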

  • One-page filter results in a new page in JavaScript

    - by Jake
    I have links set up on one page, and the relationship between the links is parent/child (for example, parent: All; children: Software, Hardware). These links lead the user to a new page that shows results from a populated table. Currently the links all go to similar destinations, differing only by a filter in the URL. The problem is that the destination page also has a JavaScript filter that lets the user choose between All, Software, or Hardware - so the URL can say the user is on the Software page while the on-page filter says Hardware, which doesn't look good IMO. What I want is for the links on the initial page to all go to the exact same destination, and for the new page to still know which link was clicked so it can run the JavaScript filter accordingly. Is there a way to find that out from JavaScript - i.e. to pass that value to the new page and retrieve it in JavaScript without showing it in the URL, so I can filter the table for the user based on that value?
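
    A hedged sketch of one way to carry the choice across pages without the URL, assuming sessionStorage is available in the browsers you target (the element ID and the applyFilter function are assumptions standing in for the page's existing code):

        // first page: remember which link was clicked
        document.getElementById('hardwareLink').onclick = function () {
            sessionStorage.setItem('filter', 'hardware');
        };

        // destination page: read the choice back and apply the existing filter
        var filter = sessionStorage.getItem('filter') || 'all';
        applyFilter(filter);

    A cookie would do the same job for older browsers; both keep the value out of the address bar.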

  • Rails paginate array items one-by-one instead of page-by-page

    - by hnovick
    Hi guys, I have a group of assets; let's call them "practitioners". I'm displaying these practitioners in the header of a calendar interface. There are 7 columns to the calendar, so 7 columns = 7 practitioners per view/page. Right now, if the first page shows you practitioners 1-7, the next page shows practitioners 8-15, the next 16-23, etc. I am wondering how to page the practitioners so that if the first page shows you practitioners 1-7, the next page shows 2-8, then 3-9, etc. I would greatly appreciate any help you can offer. Here is the Rails code I am working with. Best regards, Harris Novick

        # get the default sort order
        sort_order = RESOURCE_SORT_ORDER

        # if we've been given asset ids, start our list with them
        unless params[:asset_ids].blank?
          params[:asset_ids] = params[:asset_ids].values unless params[:asset_ids].is_a?(Array)
          sort_order = "#{params[:asset_ids].collect{|id| "service_provider_resources.id = #{id} DESC"}.join(",")}, #{sort_order}"
        end

        @asset_set = @provider.active_resources(:include => {:active_services => :latest_approved_version}).paginate(
          :per_page => RESOURCES_IN_DAY_VIEW,
          :page => params[:page],
          :order => sort_order
        )
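
    A hedged sketch of the sliding window: will_paginate advances in whole pages, so one option is to bypass it for this view, treat the page parameter as the index of the window's first practitioner, and slice the ordered collection yourself (the exact call shape mirrors the code above but is an assumption):

        # page 1 -> practitioners 1..7, page 2 -> 2..8, page 3 -> 3..9, ...
        start = [params[:page].to_i - 1, 0].max
        assets = @provider.active_resources(
          :include => {:active_services => :latest_approved_version},
          :order   => sort_order
        )
        @asset_set = assets[start, RESOURCES_IN_DAY_VIEW] || []

    This loads the whole collection to slice it, which is fine for a modest number of practitioners; an OFFSET/LIMIT query would be the alternative if the list is large.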

  • [php] Firefox refreshes page 1, even after redirection to page 2

    - by Znarkus
    Hi! This is a quite weird and annoying problem, reproduced with the script below. Say we have two pages: script.php and script.php?second. Page 1 creates some database entries and redirects to page 2. On page 2, the user is presented with an editor for said entries. If page 1 for some reason crashes on the first try and prints some error message, a strange thing happens: if we refresh page 1 (and this time it redirects fine), every subsequent refresh (of page 2) will actually re-run page 1 and again redirect to page 2. In the above example this would create new database entries on every refresh, which is exactly the problem I want to circumvent by redirecting to page 2.

        <?php
        header('Content-type: text/plain');
        session_start();

        if (!isset($_GET['second'])) {
            $_SESSION['counter'] = isset($_SESSION['counter']) ? $_SESSION['counter'] + 1 : 1;
            /*$_SESSION['counter'] = 0;
            exit('asd');*/
            header("Location: {$_SERVER['PHP_SELF']}?second", true, 303);
            exit;
        }

        echo "Counter: {$_SESSION['counter']}";

    To try the above complete script, first run it with the commented code intact, then enable the commented code. I've tried 301, 302 and 303 redirects. Does someone know why this is happening?
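
    A hedged guess at the cause: after the crash, Firefox may have cached a response for script.php and is replaying the redirect on later refreshes. Ruling caching out is cheap - send explicit no-store headers with the redirect and use an absolute URI in Location (a sketch, not a verified fix):

        <?php
        session_start();

        if (!isset($_GET['second'])) {
            // ... one-time work (database entries) here ...
            header('Cache-Control: no-store, no-cache, must-revalidate'); // don't let this response be cached
            header("Location: http://{$_SERVER['HTTP_HOST']}{$_SERVER['PHP_SELF']}?second", true, 303);
            exit;
        }

    If the behavior survives with caching disabled, the next suspect would be the session counter logic itself rather than the redirect.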

  • How can I replicate Google Page Speed's lossless image compression as part of my workflow?

    - by Keefer
    I love that Google's Page Speed is able to losslessly compress a lot of my images, but I'd love to make that part of my workflow before uploading a site and making it live. Is there anything I can run locally to get the same lossless compression? I currently export images with Export For Web from Photoshop and use a little application called PNGCrusher to reduce the file size of PNGs. I'd love to find a faster way than saving out and replacing the individual images from Page Speed's results.
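
    Page Speed's lossless image savings can usually be reproduced locally with the standard command-line optimizers, which could be scripted over the export directory as part of the build. A hedged sketch (flags vary slightly between tool versions):

        # PNGs: recompress losslessly
        optipng -o5 image.png
        # or: pngcrush -rem alla -reduce image.png image-crushed.png

        # JPEGs: strip metadata and optimize Huffman tables, losslessly
        jpegtran -copy none -optimize -outfile image-opt.jpg image.jpg

    Wrapped in a find loop, this gives the same class of byte savings Page Speed reports, without the save-out-and-replace round trip.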

  • Can a site recover by itself after dropping Google page rank for 404 errors?

    - by Jeff
    I recently redid a website and changed the directory/URL structure. I set up some .htaccess redirects for the main landing pages; however, when reviewing Webmaster Tools I received 404 errors for the rest of the changed URLs and noticed that Google dropped my site from the #1 position to around the 5th page. I corrected all the 404s by providing redirects in the .htaccess, resubmitted the sitemap and tested the Google crawl bot. Will my page regain its rank by itself, or am I going to have to put some time into it like I originally did?
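
    For reference, a hedged sketch of the kind of permanent redirects that let crawlers transfer old URLs to the new structure (the paths are illustrative assumptions; real rules would mirror the actual directory change):

        # .htaccess - map the old structure onto the new one with 301s
        RedirectMatch 301 ^/old-section/(.*)$ /new-section/$1
        Redirect 301 /old-page.html /new/page.html

    With 301s in place for every moved URL, recovery is typically a matter of Google recrawling; a 404 period can cost rank temporarily, but permanent redirects pass the old pages' signals on.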

  • Windows 7 XP Mode disable time sync

    - by Oskar Duveborn
    So I've tried the trick from Virtual PC 2007, adding the following section to the .vmc configuration file:

        <components>
            <host_time_sync>
                <enabled type="boolean">false</enabled>
            </host_time_sync>
        </components>

    Later someone suggested VPC doesn't want the components level, so I added this instead:

        <host_time_sync>
            <enabled type="boolean">false</enabled>
            <frequency type="integer">15</frequency>
            <threshold type="integer">10</threshold>
        </host_time_sync>

    When I start up XP Mode (Microsoft Virtual PC), it completely ignores both of these configuration changes, and if I change the clock it's instantly reset to the host time again. I've also, obviously, disabled the Windows Time service, but as the guest isn't joined to a domain or configured with a time source, it shouldn't be involved anyway. I need to test an application over a few midnight passes and thought the XP Mode machine would be perfect, so I didn't have to mess with my workstation clock. Is there any way to get the VPC guest to not sync time with the host? This is easy in Hyper-V ;p

  • Time Machine doesn't back up some folders/files (that it should)

    - by Eric
    MacBook Pro 17" (Snow Leopard) -- WD 2TB external drive; MacBook Pro 13" (Snow Leopard) -- Seagate 1TB external drive. I find that Time Machine sometimes doesn't back up new folders (and the files in them). This occurs both when I choose "Back Up Now" from the Time Machine icon in the menu bar and in TM's scheduled backups. These are not excluded folders (nor are they in the TM do-not-back-up list); they're perfectly normal folders (at various locations) inside my home folder. The only way to force them to be backed up is to restart the computer (unmounting and mounting the TM external disk does not help). There seems to be a correlation with new folders (i.e., it's more likely that an entire new folder is not backed up), but this may just be observer bias (because those are the folders I go check to see if they've been backed up). It's not computer dependent (it happens on two different computers). It's not external disk dependent (it happens on two different external disks). It's not time dependent (not restarting for several days does not fix the problem). What does a restart change that these other events don't? I'm considering deleting the /.fseventsd folder (without restarting the computer) to see if that helps. I haven't tried logging out and logging in (without restarting the computer).

  • Unix VPS server going down at almost the same time every day

    - by ronnz
    My server load spikes sharply, and the server often goes down at the same time each night (around midnight). I have about 20 cPanel accounts hosted on it and have tried everything I know to find the cause. Some of the things I have tried:

    - combined all site access logs found in /etc/httpd/domlogs - nothing unusual at the time the server goes down
    - checked most other logs in the /var/log directory - nothing indicating the issue at that time
    - checked cron logs - nothing unusual (see below)

    Last night the CPU load spiked to 7.5 at 00:14. What else can I be checking? How can I really monitor this to find the root cause?

        Dec 8 00:05:01 v1 crond[6082]: (root) CMD (/usr/local/cpanel/bin/dcpumon > /dev/null 2>&1)
        Dec 8 00:05:01 v1 crond[6084]: (root) CMD (/usr/local/cpanel/whostmgr/bin/dnsqueue > /dev/null 2>&1)
        Dec 8 00:10:01 v1 crond[6435]: (root) CMD (/usr/lib64/sa/sa1 1 1)
        Dec 8 00:10:01 v1 crond[6436]: (root) CMD (/usr/local/cpanel/bin/dcpumon > /dev/null 2>&1)
        Dec 8 00:15:12 v1 crond[6775]: (root) CMD (/usr/local/cpanel/scripts/autorepair recoverymgmt > /dev/null 2>&1)
        Dec 8 00:15:12 v1 crond[6776]: (root) CMD (/usr/local/cpanel/scripts/recoverymgmt > /dev/null 2>&1)
        Dec 8 00:15:12 v1 crond[6777]: (root) CMD (/usr/local/cpanel/bin/dbindex > /dev/null 2>&1)
        Dec 8 00:15:12 v1 crond[6781]: (root) CMD (/usr/local/cpanel/bin/dcpumon > /dev/null 2>&1)
        Dec 8 00:20:33 v1 crond[7047]: (root) CMD (/usr/lib64/sa/sa1 1 1)
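
    Since sa1 is already running from cron (visible in the log above), sysstat history for the incident window already exists. A hedged sketch of pulling it out; the file name follows the day of month, so the 8th is sa08:

        # run queue and load average just after midnight on the 8th
        sar -q -f /var/log/sa/sa08 -s 00:00:00 -e 00:30:00

        # CPU usage (including %iowait) over the same window
        sar -u -f /var/log/sa/sa08 -s 00:00:00 -e 00:30:00

    If %iowait dominates when the load hits 7.5, the nightly culprit is more likely a disk-heavy job (backups, dbindex) than CPU; correlating the sar timestamps with the cron entries above should narrow it to a specific script.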

  • Error during Time Machine backups on OS X Lion

    - by user92401
    After I turn on my machine, the first couple of Time Machine backups seem to go OK, but after about an hour I get this error: "Unable to complete backup. An error occurred while creating the backup folder. Latest successful backup: 7/31/11 at 12:32 PM." I'm running 10.7. Time Machine is backing up an internal HD to an external USB HD. I've already run Disk Utility to repair the Time Machine partition. It's a relatively new hard drive and didn't have any issues. Here's what I've found in the Console's log, filtered for backupd:

        7/31/11 12:31:21.223 PM com.apple.backupd: Starting standard backup
        7/31/11 12:31:21.447 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 12:31:29.146 PM com.apple.backupd: 983.7 MB required (including padding), 391.90 GB available
        7/31/11 12:32:19.471 PM com.apple.backupd: Copied 3156 files (36.0 MB) from volume Macintosh HD.
        7/31/11 12:32:20.017 PM com.apple.backupd: Copied 3173 files (36.0 MB) from volume LI.
        7/31/11 12:32:20.136 PM com.apple.backupd: 934.8 MB required (including padding), 391.86 GB available
        7/31/11 12:32:54.755 PM com.apple.backupd: Copied 916 files (117.8 MB) from volume Macintosh HD.
        7/31/11 12:32:54.894 PM com.apple.backupd: Copied 933 files (117.8 MB) from volume LI.
        7/31/11 12:32:55.937 PM com.apple.backupd: Starting post-backup thinning
        7/31/11 12:32:55.937 PM com.apple.backupd: No post-back up thinning needed: no expired backups exist
        7/31/11 12:32:55.960 PM com.apple.backupd: Backup completed successfully.
        7/31/11 1:21:28.624 PM com.apple.backupd: Starting standard backup
        7/31/11 1:21:28.631 PM com.apple.backupd: Backing up to: /Volumes/MyMac TM Backup/Backups.backupdb
        7/31/11 1:21:28.682 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:28.683 PM com.apple.backupd: Error: (22) setxattr for key:com.apple.backupd.HostUUID path:/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro size:37
        7/31/11 1:21:38.694 PM com.apple.backupd: Backup failed with error: 2
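
    The failing call is setxattr on the machine directory inside the backup database, so one hedged avenue is to look at that extended attribute directly with the stock xattr tool. This pokes at Time Machine metadata, so treat it as an experiment, not an Apple-documented fix:

        # list extended attributes on the machine directory (path from the log)
        xattr -l "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"

        # if com.apple.backupd.HostUUID looks stale or corrupt, removing it and
        # letting backupd recreate it on the next run is one reported remedy
        # (assumption - note the value first):
        # xattr -d com.apple.backupd.HostUUID "/Volumes/MyMac TM Backup/Backups.backupdb/Will’s Mac Pro"

    Error 22 is EINVAL, which can also point at the destination filesystem; verifying the backup volume is actually HFS+ (not FAT/exFAT) would be worth a check too.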

  • Apple Airport Express, Extreme and Time Capsules, BT Home Hub, Wireless Extenders confusion

    - by Jamie Hartnoll
    I post quite frequently on Stack Overflow but use Super User less often, mainly because I rarely change hardware and rarely have software issues! I live in a small stone cottage and have an office in a separate building across a yard. I have a BT Home Hub located in the cottage and a series of Ethernet cables running across the yard to the office. This is fine for my wired stuff. My main office computers are PCs running Windows 7 Ultimate (and one on Win 7 Home), all working fine. I also have an old laptop on Win XP which works fine wirelessly in the house for those evenings in front of the TV catching up on a bit of work, plus an iPhone and an iPad.

    Recently I have been trying to get WiFi in the office so I can use Adobe Shadow (or whatever it now is!) to improve mobile web development efficiency using my iPhone and iPad, so I bought this: http://www.ebuyer.com/393462-zyxel-wre2205-500mbps-powerline-wireless-n300-range-extender-wre2205-gb0101f - thinking it would be lovely just plugged into the socket by the door in the office, extending the perimeter of the WiFi from my Home Hub. I can't get it to work properly! If I plug a laptop into its Ethernet port, I can get it to connect to the Home Hub, giving me a kind of wired wireless extender. If, however, I plug the Ethernet port into my Home Hub, it seems to extend the network but only my iOS devices work: all my wired stuff stops working, in what seems like an infinite loop where Windows connects to my Home Hub and then, rather than to the Internet, connects back to the extender.

    In the meantime, I took a fatal trip to the Apple Store, where I purchased an AirPort Express, solely to hook my iOS devices up as wireless music players in the house. I knew it had WiFi, but I didn't plan to use it as an extender; I didn't think it would work with a Home Hub, and it doesn't. I now have a second wireless network in the house which, when anything connects to it, has no Internet access - so it works only as a wireless music player. I then borrowed some powerline adaptors from someone and realised this whole thing was getting totally out of control! All the technology is out there, but it's so complicated to assemble the right series of devices. To add to the confusion, I wouldn't mind a network hard drive; I bought one that broke and lost everything, so now I'm looking at the Apple Time Capsule.

    So my question is: IF I buy an Apple Time Capsule, can I:

    1. Hook it up to my Home Hub, leaving the Home Hub connected to the Internet so my hub phones still work, then disable wireless on the Home Hub?
    2. Link the AirPort Express to the Time Capsule properly, so it will connect to the Internet?
    3. Do the same with an Apple TV box, should I buy one in future?
    4. Use the Time Capsule as a network hard drive, storing video and music that can be viewed/listened to from my iOS devices/Apple TV/AirPort Express even with my main PC off (it currently stores all this data)?
    5. Hope that the iOS devices like the Time Capsule's WiFi better than the Home Hub's and work without extension, or buy another AirPort Express to get WiFi in the office?

    Or should I buy an AirPort Extreme and use a USB hard drive for the network drive?

  • Cannot load rJava because a shared library cannot be loaded

    - by Farrel
    I have been struggling to load the rJava package in R. I get the following messages:

        > library(rJava)
        Error in inDL(x, as.logical(local), as.logical(now), ...) :
          unable to load shared library 'C:/PROGRA~1/R/R-210~1.1/library/rJava/libs/rJava.dll':
          LoadLibrary failure: The specified module could not be found.
        Error : .onLoad failed in 'loadNamespace' for 'rJava'
        Error: package/namespace load failed for 'rJava'

    I have tried so many solutions that they are all bamboozled in my head. At one point I even got:

        R Console: Rgui.exe - System Error
        The program can't start because MSVCR71.dll is missing from your computer.
        Try reinstalling the program to fix this problem.

    I made sure everything I could think of was on the PATH:

        C:\Program Files\R\Rtools\bin;C:\Program Files\R\Rtools\perl\bin;
        C:\Program Files\R\Rtools\MinGW\bin;%SystemRoot%\system32;
        %SystemRoot%;%SystemRoot%\System32\Wbem;
        %SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;
        C:\Program Files\QuickTime\QTSystem\;
        C:\Program Files\R\R-2.10.1\library\rJava\libs\;
        C:\Program Files\R;C:\Program Files\Java\jre6\bin\client

    What should I try next? I am running R version 2.10.1 (2009-12-14), and I have also tried R version 2.10.1 Patched (2010-03-03 r51210). It is a Windows machine running Windows 7 Enterprise.
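
    A hedged first step: the "specified module could not be found" failure is usually about rJava.dll's own dependencies (the JVM DLL and its C runtime) rather than rJava.dll itself. Making the JRE location explicit from inside R before loading the package is a cheap test (the path is an assumption; adjust to the installed JRE):

        # point rJava at the JRE explicitly, then load and initialize it
        Sys.setenv(JAVA_HOME = "C:/Program Files/Java/jre6")
        library(rJava)
        .jinit()   # starts the JVM; succeeds only if the right DLLs resolve

    The MSVCR71.dll error points the same way: that runtime ships in the JRE's bin directory, so having jre6\bin (not just jre6\bin\client) on PATH before starting R is the commonly reported workaround.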

  • Biztalk Server 2009 - Failover Clustering and Network Load Balancing (NLB)

    - by FullOfQuestions
    Hi, we are planning a BizTalk 2009 setup with 2 BizTalk application servers and 2 DB servers (the DB servers being in an active/passive cluster). All servers run Windows Server 2008 R2. As part of our application we will have incoming traffic via the MSMQ, FILE and SOAP adapters, and we have a requirement for high availability and load balancing. Let's say I create two different BizTalk hosts, assign the FILE receive handler to the first one and the MSMQ receive handler to the second, and then create a host instance for each host on each of my two physical servers. After reviewing the BizTalk documentation, this is what I know so far:

    - FILE (receive): high availability and load balancing are achieved automatically by BizTalk because I set up a host instance on each of the two servers in the group.
    - MSMQ (receive): requires BizTalk host clustering to ensure high availability (host clustering in turn requires Windows Failover Clustering); no load-balancing option is clear here.
    - SOAP (receive): requires NLB to achieve load balancing and high availability (if one server goes down, NLB directs traffic to the other).

    This is where I'm completely puzzled and desperately need your help: is it possible to have Windows Failover Clustering and NLB set up at the same time on the two application servers? If yes, please tell me how. If no, please explain how anyone achieves high availability and load balancing for MSMQ and SOAP when their underlying prerequisites are mutually exclusive! Your help is greatly appreciated, M

  • AJAX: get data from a large HTML page as it loads

    - by Ed
    Not entirely sure whether this has a name, but basically I have a large HTML page that is generated from results in a DB. Viewing the HTML page (which is a report) in a browser directly does not display all content immediately: it displays what it has, and additional HTML is added as the results are retrieved from the DB. Is there a way I can make an AJAX request to this HTML page and, rather than waiting until the whole page (report) is ready, start processing the response as the HTML report loads? Or is there another way of doing it? At the moment my AJAX request just sits there for a minute or two until the HTML page is complete. If context is of any use: the HTML report is generated by a Java servlet, and the page making the AJAX call is a JSP page. Unfortunately I can't easily break the report up, because it is generated by BIRT (the Eclipse reporting extension). Thanks in advance.
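
    A hedged sketch with a raw XMLHttpRequest: most non-IE browsers of this era fire onreadystatechange repeatedly in readyState 3 as the body streams in, and responseText holds everything received so far, so the new tail can be processed incrementally (the handler name and URL are assumptions):

        var xhr = new XMLHttpRequest();
        var seen = 0;
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 3 || xhr.readyState === 4) {
                var chunk = xhr.responseText.substring(seen); // only the unseen part
                seen = xhr.responseText.length;
                processPartialReport(chunk);
            }
        };
        xhr.open('GET', 'report', true);
        xhr.send(null);

    Two caveats: older IE does not expose responseText before readyState 4, and the servlet must actually flush its output as it writes for anything to arrive early.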

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this: I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attract quite a lot of users if it works out right. While I'm fully capable of developing the program, I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here are a few questions on it (feel free to turn this question into a resource thread as well).

    Databases: At the moment I plan to use the MySQLi features in PHP 5. However, how should I set up the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database, although I've been considering spreading user data to one, actual content to another, and core site content (template masters etc.) to a third. My reasoning is that sending queries to different databases will ease the load on them, as one database = 3 load sources. Would this still be effective if they were all on the same server?

    Caching: I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database, and each time a template is called its cached copy (an HTML document) is fetched. At the moment I have two types of variable in these templates: static and dynamic. Static vars are usually things like page names or the name of the site - things that don't change often; dynamic vars change on each page load. My question: say I have comments on different articles - which is the better solution: store the simple comment template and render the comments (from a DB call) each time the page is loaded, or store a cached copy of the comments page as an HTML page, re-cached each time a comment is added/edited/deleted?

    Finally: does anyone have any tips/pointers for running a high-load site on PHP? I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross
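
    On the comments question, a hedged sketch of the render-and-cache approach: serve a cached HTML fragment, and delete it whenever a comment changes so the next request rebuilds it. Paths and function names are assumptions standing in for the site's own code:

        <?php
        // fetch the rendered comments fragment, rebuilding it only when stale
        function comments_html($articleId) {
            $cacheFile = "/var/cache/site/comments_{$articleId}.html";
            if (is_file($cacheFile)) {
                return file_get_contents($cacheFile);
            }
            $html = render_comments_from_db($articleId); // assumed existing renderer
            file_put_contents($cacheFile, $html, LOCK_EX);
            return $html;
        }

        // call this from the add/edit/delete handlers to invalidate
        function invalidate_comments($articleId) {
            @unlink("/var/cache/site/comments_{$articleId}.html");
        }

    The win is that reads (the overwhelming majority of traffic) cost one file read instead of a DB query plus templating; writes pay a one-off re-render.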

  • OS X home, end, page up, page down movement keys and cursor focus

    - by Gerald Kaszuba
    When using the Home, End, Page Up and Page Down movement keys in a text editor, the cursor stays in the same spot even though the view has scrolled elsewhere. I know you can use the Option modifier to move the cursor along, but is there a way to move the cursor to where the view has scrolled (without using the mouse)? An alternative would be to make these movement keys always behave as if the Option modifier were already pressed. Is this possible?

  • Change form action from an ASPX page to the master page behind it

    - by ferrer
    I have a master page and a child ASPX page connected to each other; the master page contains the form. The child page has checkboxes whose values I would like to pass to another child page that uses the same master page. Can I change action="abc.aspx" and method="post"? How can I send all the checkbox values (checkbox.Text = [email protected]) to the next page? There are lots of these values that need to pass to the next page.
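
    A hedged sketch of ASP.NET cross-page posting, which avoids touching the master page's form at all: set PostBackUrl on a button, then read the source page's controls via PreviousPage on the target page. Control IDs and page names below are illustrative assumptions:

        <%-- on the first child page: post this button to the second page --%>
        <asp:Button ID="btnNext" runat="server" Text="Next"
                    PostBackUrl="~/SecondPage.aspx" />

        // in SecondPage.aspx.cs: pull values from the page that posted here
        protected void Page_Load(object sender, EventArgs e)
        {
            if (PreviousPage != null)
            {
                var cb = (CheckBox)PreviousPage.FindControl("CheckBox1");
                if (cb != null && cb.Checked)
                {
                    string email = cb.Text; // the value to carry forward
                }
            }
        }

    With many checkboxes, looping over a naming container's Controls collection on PreviousPage would avoid one FindControl per box.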

  • jQuery .load() and sub-pages

    - by user354051
    Hi, I am just a newbie with JavaScript; I am developing a simple site to get my hands on web programming. The site's structure is simple: a ul/li/CSS-based navigation menu, with sub-pages loaded into a "div" using jQuery whenever the user clicks the appropriate menu item. Some of the sub-pages use different jQuery-based plugins such as LightWindow and jTip. The jQuery function that loads a sub-page looks like this:

        function loadContent(htmlfile) {
            jQuery("#content").load(htmlfile);
        }

    The menu items fire loadContent like this:

        <li><a href="javascript:void(0)" onclick="loadContent('overview.html');return false">overview</a></li>

    This loads a sub-page named "overview.html" inside the "div". That's it. This works fine, but some of the sub-pages using jQuery-based plugins do not work well when loaded inside the "div"; if you load them individually in the browser, they work fine. Based on the above, I have a few questions:

    1. Most of the plugins are based on jQuery, and the sub-pages are loaded inside index.html via loadContent. Do I have to include <script type="text/javascript" src="js/jquery-1.4.2.min.js"></script> on each and every sub-page?
    2. If a page uses a custom jQuery plugin, where do I include it: in index.html, or on the page where I use it?
    3. I think whatever scripts you include in index.html don't have to be included again in any of the sub-pages. Am I right here?

    Thanks, Prashant
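
    A hedged sketch of the usual fix: scripts included once in index.html stay loaded for the life of the page, but plugins that bind to specific elements have to be initialized again after .load() injects new HTML, which is what the load() callback is for (the init function is an assumption standing in for the real plugin setup calls):

        function loadContent(htmlfile) {
            jQuery("#content").load(htmlfile, function () {
                // the sub-page's HTML is now in the DOM; (re)initialize any
                // plugins the new content needs, e.g. tooltips or lightboxes
                initPluginsFor("#content");
            });
        }

    So: include jQuery and every plugin script once in index.html, keep the sub-pages as plain HTML fragments, and do all plugin initialization in callbacks like this.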

  • PF, load-balanced gateways, and Squid

    - by Santa
    Hi, I have a FreeBSD router running PF and Squid with three network interfaces: two connected to upstream providers (em0 and em1 respectively) and one for the LAN (re0) that we serve. Load balancing is configured in PF: it routes all traffic to ports 1-1024 through one interface (em0) and everything else through the other (em1). A Squid proxy on the same box transparently redirects any HTTP request from the LAN to port 3128 on 127.0.0.1. Since Squid re-issues the request as outbound HTTP, it should follow the load-balancing rule through em0, no? The problem is, when we tested it (by browsing from a computer on the LAN to http://whatismyip.com), it reports the external IP of the em1 interface! When we turn Squid off, the external IP of em0 is reported, as expected. How do I make Squid behave with the load-balancing rule we have set up? Here are the related settings in /etc/pf.conf:

        ext_if1="em1" # DSL
        ext_if2="em0" # T1
        int_if="re0"
        ext_gw1="x.x.x.1"
        ext_gw2="y.y.y.1"
        int_addr="10.0.0.1"
        int_net="10.0.0.0/16"
        dsl_ports = "1024:65535"
        t1_ports = "1:1023"
        ...
        squid=3128
        rdr on $int_if inet proto tcp from $int_net \
            to any port 80 -> 127.0.0.1 port $squid
        pass in quick on $int_if route-to lo0 inet proto tcp \
            from $int_net to 127.0.0.1 port $squid keep state
        ...
        # load balancing
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto tcp from $int_net to any port $dsl_ports keep state
        pass in on $int_if route-to ($ext_if1 $ext_gw1) \
            proto udp from $int_net to any port $dsl_ports
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto tcp from $int_net to any port $t1_ports keep state
        pass in on $int_if route-to ($ext_if2 $ext_gw2) \
            proto udp from $int_net to any port $t1_ports

    Thanks!
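
    A hedged reading of why this happens, plus a sketch: the route-to rules above all match traffic arriving on $int_if, but Squid's outbound fetches originate on the router itself, so they never traverse those rules and simply follow the default route (apparently via em1). PF can match locally originated sockets by owning user, so a rule along these lines (untested; assumes Squid runs as user "squid") would steer its port-80 fetches out em0 like the rest of the low-port traffic:

        # steer Squid's own outbound HTTP through the T1 (em0)
        pass out route-to ($ext_if2 $ext_gw2) proto tcp \
            from any to any port 80 user squid keep state

    An alternative worth mentioning: Squid's tcp_outgoing_address directive can bind outgoing connections to em0's address, which combined with an address-based pass out route-to rule achieves the same thing.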
