Search Results

Search found 100587 results on 4024 pages for 'page load time'.


  • Restoring Time Machine from two Macs onto one (new) Mac

    - by Dan
    My parents used to have two Macs: an "iLamp-style" iMac for my Dad and an iBook G4 for my Mom. A while back, I set up the iMac with an external FireWire hard drive as a Time Machine volume, and backed up both the iMac and the iBook to that drive. Recently the iBook died and the iMac became very slow to work with, so my parents replaced the iBook with an iPad and also purchased a Mac Mini. I need to help them get the data from their two computers (backed up by Time Machine) onto the same machine. Pretty much everything is identical between the two systems (same apps, etc.); however, they each have individual email accounts and photos that they want to retain. Is it possible to do two Time Machine restores onto one computer?

    Read the article

  • Fortigate - Accessing a Virtual Server address from several interfaces

    - by Jeremy G
    I am setting up a new application in its own DMZ on our FortiGate 300C firewalls. I have defined a load-balancing configuration for part of the application, and this works fine for traffic coming in from our internal network. However, I would also like this application to be reachable from other DMZs, for inter-application traffic, and from the SSL VPN interface. I can't seem to define the required policy, and it seems this is because Virtual Servers are bound to the client interface on the FortiGate rather than the server interface (and so my virtual IP is not accessible from any of these other interfaces). Does anyone have an idea how I might go about this? I guess I could create additional virtual IPs for each interface, but that gets complicated to handle, as clients would need to change the address they use depending on how they are connecting. Thanks, Jeremy G

    Read the article

  • Scalability with an nginx, Passenger, Ruby on Rails setup

    - by Dani Cela
    Hey guys, I have a question regarding scalability for my RoR application. We have been optimizing the application over the last few days, and after running blitz.io we noticed that it starts timing out at maybe 1000 hits in 30 seconds; in the one-minute test, apparently 74% of users would have timed out. Look at the performance of my website: http://blitz.io/report/1c8eb2f395a5eadeabd62fd831ada9e5 I'm not saying our website will experience this load right now, but I'd like to design the infrastructure to handle it. What is normally done in this situation? Currently we have one web server and one database server. Would load balancing be the route to go?

    Read the article
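
    A minimal sketch of the load-balanced direction asked about above: nginx as a front proxy spreading requests across two app servers (e.g. Passenger Standalone or any Rack server). The addresses and port are hypothetical placeholders, not taken from the question.

        upstream rails_app {
            least_conn;                 # favour the less busy backend for slow requests
            server 10.0.0.11:3000;      # hypothetical app server 1
            server 10.0.0.12:3000;      # hypothetical app server 2
        }
        server {
            listen 80;
            location / {
                proxy_pass http://rails_app;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The database usually stays a single server (or a primary with read replicas) behind the app tier; the web tier is what gets duplicated first.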

  • Load balancing with different nginx location context and backend server context

    - by robinmag
    Hi, I use nginx and the upstream module for load balancing with the following config:

        upstream lb {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }
        server {
            listen 88;
            server_name localhost;
            location /cas/ {
                proxy_pass http://lb;
                proxy_redirect off;
                proxy_connect_timeout 2;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }

    The problem is that the "location /context/" has to match the context path of the backend server, so when I request localhost/context/index.html, nginx routes it to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html. Is it possible to have a different backend context and nginx location, for example so that with "location /" nginx still routes the request to 127.0.0.1:8080/context/index.html or 127.0.0.1:8081/context/index.html? Thank you.

    Read the article
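
    For the question above, nginx itself can do the prefix mapping: when proxy_pass carries a URI part, the portion of the request URI that matched the location is replaced by that URI before the request is proxied. A minimal sketch, keeping the upstream from the question and assuming the backend context is /context/:

        upstream lb {
            server 127.0.0.1:8080;
            server 127.0.0.1:8081;
        }
        server {
            listen 88;
            server_name localhost;
            location / {
                # "/" is replaced by "/context/", so /index.html is proxied
                # as /context/index.html on whichever backend is chosen
                proxy_pass http://lb/context/;
                proxy_redirect off;
                proxy_set_header Host $host;
            }
        }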

  • Looking for the best EC2 setup for 3 sites totaling 1.5 million hits in monthly traffic

    - by john h.
    I am looking to consolidate our current AWS setup of 2 large Ubuntu EC2 servers and 2 large RDS servers for our 3 websites, which get a total of about 1.5 million hits a month and are growing every month, with the majority of the traffic (1 million) going to one forum site in the group and the rest to an ecommerce site and a small WordPress site. So here is my question: would it be better for us to combine the two large EC2 servers into just one, and likewise the 2 RDS servers, so we run all three sites off one large EC2 instance and one RDS instance? Or should we set up maybe 2-3 smaller EC2 servers, load balanced, with a single RDS instance? Or something completely different? One concern is that if one site crashes it takes the others with it. That happened in the past, but I am pretty sure it was because of the forum software and not the server setup. -john

    Read the article

  • Good Books About Scaling Up Databases/Servers/etc.?

    - by Mehrdad
    I've applied for an internship at a startup company that expects its user base to grow by a large factor in a small amount of time, and so part of their project is to scale everything up so that they're ready: handling more/larger requests efficiently, handling server failures, load balancing, getting more JavaScript to run faster on the client computers, etc. Part of my job will also be figuring out what to do, so it's not obvious what my exact task will be at the moment. I was told that I should start reading up a little more about this so that I would have a little bit of an idea of what to do. What are some good books for me to read on this topic? I have a little bit of experience with the usage of MySQL (and also a little experience with web development), but in no way do I claim any knowledge on the internal workings of databases or distributed systems, so I might need readings more on the introductory side.

    Read the article

  • Using Performance Monitor To Get IIS7 Response Turnaround Time

    - by alphadogg
    I have an MVC2 web application on W2KR2/IIS7 that I'd like to benchmark/monitor. Some XHR requests from a browser-based client are suddenly taking 8-10 seconds when they used to take much less time (as per Chrome Developer Tools timings). The underlying SQL Server queries, using the same params, run in 1.4 s according to the total execution time client statistics in SSMS. I'm assuming there are various counters that can specifically dissect the time taken/waiting/processing between IIS7 itself and the web application? For example, can I check how long it takes to get a response from the IIS7 app and the DB, and how long IIS7 itself takes to serve the response?

    Read the article

  • Can I use nginx to start EC2 instances on demand?

    - by Gabe Hollombe
    TL;DR - Is there a way to make nginx act as an elastic load balancer that will spin up EC2 instances on demand, allowing for the case when periods of no demand mean no instances will be running? Longer explanation - I have an nginx server that proxy_pass'es requests to a server on EC2. This server doesn't get many requests, so I'd like to keep the server spun down during periods of inactivity (I already have a script to do this). Then, when the instance is spun down and nginx gets a request for that instance, it will time out when trying to get a response from it. At this point, can I somehow trigger a shell command on the server to use EC2's command line tools to spin up the instance, then re-try the user's request after it has started?

    Read the article
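
    nginx cannot start instances by itself, but a failed upstream response can be routed (for example via error_page/proxy_intercept_errors) to a small local handler that wakes the backend. A sketch of such a wake-up script using the AWS CLI; the instance ID is hypothetical and the CLI is assumed to be installed and configured:

        #!/bin/bash
        # wake-backend.sh - start the EC2 instance if it is not already running
        INSTANCE_ID="i-0123456789abcdef0"   # hypothetical

        state=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
                --query 'Reservations[0].Instances[0].State.Name' --output text)

        if [ "$state" != "running" ]; then
            aws ec2 start-instances --instance-ids "$INSTANCE_ID"
            aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
        fi

    The original request still has to be retried by the client (or by nginx once the upstream responds again); booting an instance takes on the order of a minute, far longer than a sensible proxy timeout.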

  • Load-balance UDP traffic with session affinity and a way to take servers in and out of rotation

    - by William
    What is the best way to go about load balancing UDP traffic among a whole bunch of servers, while keeping session affinity based on the users' IP? I also need to be able to take servers in and out of rotation for new clients, so that when they join for the first time they get put on a server from a list of available servers, while clients already connected stay connected to their specific server. I have written the software to maintain the list, but I can't seem to find anything that performs this functionality. For context, this is to facilitate game tournaments for Minecraft: Pocket Edition, which uses UDP traffic; I cannot change the protocol. And because tournaments open and close, I need to be able to place players on their proper servers. Performance is also a priority; I have a program to do this, but it is very bloated and slow. Thanks for any help! William

    Read the article
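
    One way to get UDP balancing with per-client-IP affinity is LVS/IPVS persistence. A rough sketch with ipvsadm; the virtual and real IPs are placeholders, and 19132/udp is assumed as the Minecraft: Pocket Edition port:

        VIP=203.0.113.10                      # placeholder virtual IP

        # UDP virtual service, round-robin, 10-minute source-IP persistence
        ipvsadm -A -u $VIP:19132 -s rr -p 600

        # two real servers in NAT (masquerade) mode
        ipvsadm -a -u $VIP:19132 -r 10.0.0.21:19132 -m -w 1
        ipvsadm -a -u $VIP:19132 -r 10.0.0.22:19132 -m -w 1

        # drain a server: weight 0 stops new clients landing on it, while
        # clients with an existing persistence entry keep their mapping
        ipvsadm -e -u $VIP:19132 -r 10.0.0.21:19132 -m -w 0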

  • LVS TCP connection timeouts - lingering connections

    - by Jon Topper
    I'm using keepalived to load-balance connections between a number of TCP servers. I don't expect it matters, but the service in this case is rabbitmq. I'm using NAT type balancing with weighted round-robin. A client connects to the server thus:

        [client] -----a----- [lvs] -----b----- [real server]

    If a client connects to the LVS and remains idle, sending nothing on the socket, this eventually times out, according to timeouts set using ipvsadm --set. At this point, the connection marked 'a' above correctly disappears from the output of netstat -anp on the client, and from the output of ipvsadm -L -n -c on the lvs box. Connection 'b', however, remains ESTABLISHED according to netstat -anp on the real server box. Why is this? Can I force lvs to properly reset the connection to the real server?

    Read the article
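
    When the LVS connection entry expires, the director simply stops forwarding; it does not send a RST to the real server, so the backend keeps its side in ESTABLISHED until its own TCP keepalive (or an application-level heartbeat) notices the peer is gone. A sketch of the two knobs involved; the numbers are illustrative only:

        # on the director: show and set the TCP / TCP-FIN / UDP timeouts
        ipvsadm -L --timeout
        ipvsadm --set 900 120 300

        # on the real server: shorten TCP keepalives so dead peers are reaped
        sysctl -w net.ipv4.tcp_keepalive_time=600
        sysctl -w net.ipv4.tcp_keepalive_intvl=60
        sysctl -w net.ipv4.tcp_keepalive_probes=5

    For RabbitMQ specifically, AMQP heartbeats serve the same purpose at the application layer.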

  • Strategies for very fast delivery of webpages

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9 KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my webpages are nginx-cached and only 10-15% of hits go to the backend proxy, and the cache is bypassed for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place: the default route carries initcwnd 15 ("default via <ip> dev eth0 initcwnd 15") and net.ipv4.tcp_slow_start_after_idle is set to 0. Despite an all-cache setup and a large initcwnd, my pages still take 2.5-3 seconds. I have a YSlow score of … and a Page Speed score of …. Are there strategies that can help deliver webpages even faster than this? Deliver pages in around 1 second for a 10 KB payload? Notes: my servers run out of a fairly good data center, Linode in Fremont.

    Read the article

  • Round-Robin DNS in mobile networks

    - by k7k0
    After reading about load distribution alternatives, and given my limited skills in the area, I'm biased toward a round-robin DNS strategy. From what I understand, one key aspect of DNS round-robin is setting a low TTL value to avoid caching. My main concern is that all my traffic comes from mobile networks; almost 30% of it comes from T-Mobile 3G. Some questions: 1) Is there a chance that almost all clients on the same mobile network will be directed to the same IP within the TTL window? That would kill the distribution technique. 2) If I choose a really low TTL (zero or one), does that directly impact client performance? Does it cause a DNS miss every time, or is it a setting that only affects DNS servers? Any help would be much appreciated. Thanks

    Read the article
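
    Two details worth checking empirically. First, a carrier's shared resolvers cache the whole A-record set for one TTL, but many resolvers rotate or randomize the order of the cached records per response, so clients behind the same resolver are not necessarily all sent to one IP. Second, a very low TTL is paid by the resolver, not by each handset: a cache miss adds one extra lookup round trip before the HTTP request. A quick way to observe both with dig; the domain and name server below are placeholders:

        # full record set and original TTL, straight from the authoritative server
        dig +noall +answer example.com A @ns1.example.com

        # through the local resolver: watch the TTL count down and whether
        # the record order rotates between successive answers
        for i in 1 2 3; do dig +noall +answer example.com A; sleep 1; done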

  • Existing connections on Apache with mod_proxy_balancer don't fail over to the second JBoss node

    - by Jean-Rémy Revy
    I have a JBoss farm, load balanced by Apache HTTP Server + mod_proxy_balancer and mod_proxy_ajp, with the following configuration:

        <VirtualHost *:80>
            ServerName web-gui-acceptance.myorg.com
            ServerAlias web-gui-acceptance
            ProxyRequests Off
            ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=On
            ProxyPassReverse /web-gui http://srvlnx01.myorg.com:8080/web-gui
            ProxyPassReverse /web-gui http://srvlnx02.myorg.com:8080/web-gui
            <Proxy *>
                AuthType Kerberos
                [...]
            </Proxy>
            <Proxy balancer://jbosscluster>
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1
                BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX02_node1
                ProxySet lbmethod=byrequests
            </Proxy>
        </VirtualHost>

    When the first JBoss node fails (the hosting VM is down), my existing connections don't fail over to the second node ... the first route is kept (in the table / .shm?) and that gives me 503 errors. Can someone tell me what I missed?

    Read the article
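
    Two things in the quoted config are worth a close look. With nofailover=On, mod_proxy_balancer deliberately refuses to move a sticky session to another worker when its route is dead, which produces exactly these 503s; and both BalancerMember lines point at srvlnx01, so even byrequests balancing never reaches the second box (this may just be a transcription slip). A sketch of the balancer block with failover allowed, hostnames as in the question:

        ProxyPass /web-gui balancer://jbosscluster/web-gui stickysession=JSESSIONID nofailover=Off
        <Proxy balancer://jbosscluster>
            BalancerMember ajp://srvlnx01.myorg.com:8009 route=SRVLNX01_node1 retry=30
            BalancerMember ajp://srvlnx02.myorg.com:8009 route=SRVLNX02_node1 retry=30
            ProxySet lbmethod=byrequests
        </Proxy>

    Failed-over requests only keep their HTTP session if the JBoss nodes replicate sessions between themselves; otherwise users land on the other node with a fresh session.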

  • ntpstat response fine but server time out of sync

    - by zedoo
    Hi, I found out that the ntpd service I set up a few weeks ago on a CentOS 5 machine doesn't correctly synchronize the server time. I detected an offset of more than 5 minutes (by stopping ntpd and executing ntpdate). After setting up the service I checked it via ntpstat:

        [xxxx@xxx ~]$ ntpstat -q
        synchronised to local net at stratum 11
           time correct to within 10 ms
           polling server every 1024 s

    I repeated this check every day and it always showed this output. Doesn't this output tell me that the server time is sane?

    Read the article
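
    "synchronised to local net at stratum 11" is the giveaway: ntpd has fallen back to the undisciplined local clock driver (127.127.1.0, fudged at stratum 10 in the stock CentOS config), so ntpstat reports success even though nothing is actually steering the clock. A sketch of the relevant /etc/ntp.conf changes; the pool hostnames are the usual CentOS defaults and can be swapped for local servers:

        # comment out the local-clock fallback ...
        # server 127.127.1.0
        # fudge  127.127.1.0 stratum 10

        # ... and point at real upstream servers
        server 0.centos.pool.ntp.org iburst
        server 1.centos.pool.ntp.org iburst
        server 2.centos.pool.ntp.org iburst

    After restarting ntpd, ntpq -p should eventually show an asterisk next to one of the peers, and ntpstat then reports the remote stratum rather than 11.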

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them GIFs. I have no control over the external files, but have to use them (with permission). When I check the site using Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all of these files are external links. I have already set expiration dates for local files.

    Read the article
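
    Expires/Cache-Control headers can only be set by whoever serves the file, so the external GIFs cannot be fixed directly. One workaround (with the owner's permission, as noted above) is to proxy and cache them locally so the pages reference a URL you do control. A rough nginx sketch; the /ext/ prefix and the images.example.org host are hypothetical:

        proxy_cache_path /var/cache/nginx/ext levels=1:2 keys_zone=extcache:10m
                         max_size=500m inactive=30d;

        server {
            listen 80;
            location /ext/ {
                proxy_pass http://images.example.org/;
                proxy_cache extcache;
                proxy_cache_valid 200 30d;
                expires 30d;              # far-future header on what we serve
            }
        }

    Pages then reference /ext/foo.gif instead of the external URL, and PageSpeed scores the caching headers of the local copy.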

  • How to limit disk performance?

    - by DrakeES
    I am load-testing a web application and studying the impact of some config tweaks (related to disk I/O) on overall app performance, i.e. the number of users that can be handled simultaneously. The problem is that I hit 100% CPU before I can see any effect of the disk-related config settings. I am therefore wondering if there is a way to deliberately limit disk performance so that it becomes the bottleneck and the tweaks I am playing with actually start affecting performance. Should I just keep the hard disk busy with something else? What would serve best for this purpose? More details (probably irrelevant, but anyway): PHP/Magento/Apache, studying the impact of apc.stat. Setting it to 0 stops APC from checking PHP scripts for modification, which should increase performance where the disk is the bottleneck. Using JMeter for benchmarking.

    Read the article
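
    One way to make the disk the bottleneck artificially is cgroup blkio throttling, which caps a process group's throughput on a given block device. A sketch for the cgroup v1 layout; paths vary by distro, and 8:0 is assumed to be the major:minor of the disk (check with ls -l /dev/sda):

        # cap reads and writes on /dev/sda to ~1 MB/s for this group
        mkdir -p /sys/fs/cgroup/blkio/slowdisk
        echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowdisk/blkio.throttle.read_bps_device
        echo "8:0 1048576" > /sys/fs/cgroup/blkio/slowdisk/blkio.throttle.write_bps_device

        # move the Apache/PHP workers into the throttled group
        for pid in $(pgrep httpd); do
            echo "$pid" > /sys/fs/cgroup/blkio/slowdisk/tasks
        done

    Note that buffered writes largely bypass the v1 throttle, and anything already in the page cache is not affected; dropping caches between runs (echo 3 > /proc/sys/vm/drop_caches) makes the limit bite.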

  • IIS 7.5 Manager crashes when adding a custom error page

    - by dig412
    I'm running a local IIS 7.5 server on Win 7 Pro, and I'm trying to add a custom error page for 403 responses. When I click OK to add the custom error page for my site, IIS Manager just vanishes. The server is still running, and I can restart IIS Manager, but the new page has not been saved. I've also tried adding it directly to web.config, but that just gives me "The page cannot be displayed because an internal server error has occurred." Does anyone know why this might be happening? Edit: the event log implies that an invalid character in the path caused the crash, but it occurred even when I copied and pasted a path from a valid entry. Application error log:

        IISMANAGER_CRASH
        IIS Manager terminated unexpectedly.
        Exception: System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation.
        ---> System.ArgumentException: Illegal characters in path.
           at System.IO.Path.CheckInvalidPathChars(String path)
           at System.IO.Path.IsPathRooted(String path)
           at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsForm.OnAccept()
           at Microsoft.Web.Management.Client.Win32.TaskForm.OnOKButtonClick(Object sender, EventArgs e)
           at System.Windows.Forms.Control.OnClick(EventArgs e)
           at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
           at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ButtonBase.WndProc(Message& m)
           at System.Windows.Forms.Button.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
           at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
           at System.Windows.Forms.Form.ShowDialog(IWin32Window owner)
           at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.ShowDialogInternal(Form form, IWin32Window parent)
           at Microsoft.Web.Management.Host.UserInterface.ManagementUIService.Microsoft.Web.Management.Client.Win32.IManagementUIService.ShowDialog(DialogForm form)
           at Microsoft.Web.Management.Client.Win32.ModulePage.ShowDialog(DialogForm form)
           at Microsoft.Web.Management.Iis.CustomErrors.CustomErrorsPage.AddCustomError()
           --- End of inner exception stack trace ---
           at System.RuntimeMethodHandle._InvokeMethodFast(Object target, Object[] arguments, SignatureStruct& sig, MethodAttributes methodAttributes, RuntimeTypeHandle typeOwner)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at Microsoft.Web.Management.Client.TaskList.InvokeMethod(String methodName, Object userData)
           at Microsoft.Web.Management.Host.UserInterface.Tasks.MethodTaskItemLine.InvokeMethod()
           at System.Windows.Forms.LinkLabel.OnMouseUp(MouseEventArgs e)
           at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.Label.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
           at System.Windows.Forms.NativeWindow.DebuggableCallback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
           at System.Windows.Forms.UnsafeNativeMethods.DispatchMessageW(MSG& msg)
           at System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(Int32 dwComponentID, Int32 reason, Int32 pvLoopData)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(Int32 reason, ApplicationContext context)
           at System.Windows.Forms.Application.ThreadContext.RunMessageLoop(Int32 reason, ApplicationContext context)
           at Microsoft.Web.Management.Host.Shell.ShellApplication.Execute(Boolean localDevelopmentMode, Boolean resetPreferences, Boolean resetPreferencesNoLaunch)
        Process: InetMgr

    Read the article

  • Disk Response Time in Windows 7 Resource Monitor?

    - by Keith Nicholas
    In Resource Monitor I'm looking at the disk response time. There are a lot of processes where the response time is consistently thousands of milliseconds, and I'm pretty sure this is why my computer is slowing down. I'm not sure what normal response times are, though. I'm running Win 7 64-bit Ultimate on a new computer: an i5 with a terabyte drive, 4 GB of RAM, etc. The disk is still pretty much empty, so it should all be pretty snappy. If it really is going slow, how do I track down what's causing it? I've turned off things like real-time virus protection as an experiment to see if something odd is going on there, but it makes no real difference (other than not contributing to the problem by accessing the disk).

    Read the article

  • Should Site Title be Before or After Page Title?

    - by NickAldwin
    Possible Duplicate: Does the order of keywords matter in a page title? Apologies if this is a dupe. I tried searching, but didn't find anything specifically addressing this concern. When creating a large(ish) site, page titles usually reference both the site name and the current page name. However, it seems there are two main conventions:

        Bob's Awesome Site - Contact Page
        Contact Page - Bob's Awesome Site

    I've looked around, and pages usually use one of the two variants above. Is there any reason to use one over the other? SEO/readability/usability/etc.? I've thought about it, and have only come up with:

        Page first - differentiates the tab when the browser is crowded with lots of tabs
        Site first - immediately shows the "parent" site, so to speak; a more cohesive experience

    Read the article

  • PHP processes owned by PPID 1 after X amount of time

    - by Kristopher Ives
    I have a CentOS server running WHM that uses FastCGI (mod_fcgid) with PHP 5.2.17 on Apache 2.0 with suEXEC. When I start Apache it comes up fine and serves requests. If I run ps in the terminal as root, I see the PHP processes owned by their httpd parent processes. After X amount of time - it varies, but typically not much longer than a few hours - the server begins spawning PHP jobs owned by the init process ID (1). Example of a good listing:

        12918 18254 /usr/bin/php
        12918 18257 /usr/bin/php
        12918 18293 /usr/bin/php
        12918 18545 /usr/bin/php
        12918 18546 /usr/bin/php
        12918 19016 /usr/bin/php
        12918 19948 /usr/bin/php

    Then later something like:

        1  6800 /usr/bin/php
        1  6801 /usr/bin/php
        1  7036 /usr/bin/php
        1  8788 /usr/bin/php
        1 10488 /usr/bin/php
        1 10571 /usr/bin/php
        1 10572 /usr/bin/php

    The PHP processes owned by (1) never get cleaned up. Why would these processes be running? We don't use setsid or anything beyond basic PHP in the code this server runs. Cheers & thanks

    Read the article
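
    When an httpd worker exits or gets recycled, FastCGI children it leaves behind are re-parented to init (PID 1); orphans that then never exit usually mean the fcgid process manager is no longer tracking them. A couple of things that can help while poking at this (directive names as in the mod_fcgid documentation; older releases use unprefixed equivalents):

        # list PHP processes whose parent is init, with their age
        ps -eo ppid,pid,etime,cmd | awk '$1 == 1 && $4 ~ /php/'

        # mod_fcgid settings that bound how long idle/old workers may live,
        # e.g. in the Apache config:
        #   FcgidIdleTimeout     300
        #   FcgidProcessLifeTime 3600
        #   FcgidMaxProcesses    64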

  • DRBD Primary/Primary + iSCSI: does accessing different files avoid split brain?

    - by Eddie C.
    I have a question / curiosity about split brain on a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say /drbd) containing /drbd/file1.data and /drbd/file2.data. If one pool of clients accesses only the host1 share, reading and writing only file1.data, and another pool accesses only the host2 share and file2.data, would this scenario avoid a split-brain situation in case of a node failure, or is that just a conjecture? The final purpose is load balancing between the two nodes under normal conditions, collapsing to one node only in case of failure. Thank you! Eddie

    Read the article

  • Excel Graph: How can I turn the data below into a 'time-based' graph?

    - by Mike
    In my spreadsheet I am collecting the time periods when certain values have been changed. The user is restricted to 4 time periods, and I would like to show the data based on those time periods. I've included a mock-up of the data and the type of graph I would like to create: http://i48.tinypic.com/55lezr.jpg I've been trying to create it for the last hour but am obviously missing something, so I thought I'd ask around. Many thanks for any help. Mike P.S. How do I make this image appear in the message and not as a link?

    Read the article

  • Solution to time shifting requirement in Active Directory

    - by MikeR
    Hi, I currently have an Active Directory with several child domains (consisting of nothing other than a DC and bespoke application servers) set up for testing our CRM software. As some of it is date/time sensitive, these were at some point set to dates in the future, which is causing replication errors. I'm working on getting rid of these child domains, but I still have a requirement for our testers to be able to time shift. Does anyone know of any solutions that would allow our test environments to have their time changed (always forward) without affecting the production Active Directory? Is it as simple as creating a separate forest on the same LAN, or would that interfere with my production forest? Thanks for any advice.

    Read the article

  • Listing the routing table takes a long time to complete

    - by Rafal Rawicki
    When I print the routes defined on my computer using route, it takes about 5 to 20 seconds to complete. Why does it take so much time? With the VPN enabled:

        $ time sudo route
        Kernel IP routing table
        (...)
        real    0m21.423s
        user    0m0.000s
        sys     0m0.012s

    With no VPN it is about 5 seconds - still, a computer can do a lot in that time. I've repeated my measurements a few times, getting very similar results on each try. My machine is Ubuntu with a 3.0.0 kernel, but as far as I know, route on other computers works the same way.

    Read the article
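
    A likely factor, worth ruling out before anything else: route (like netstat -r) resolves every gateway and destination to a hostname by default, and with a VPN up those reverse-DNS lookups can hang until they time out. The numeric variants skip the lookups entirely:

        time route -n          # -n prints addresses numerically, no DNS lookups
        time ip route show     # iproute2 equivalent, also numeric
        time netstat -rn       # same idea for netstat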
