Search Results

Search found 4934 results on 198 pages for 'math round'.


  • Managing multiple Apache proxies simultaneously (mod_proxy_balancer)

    - by Hank
    The frontend of my web application is currently formed by two Apache reverse proxies, using mod_proxy_balancer to distribute traffic over a number of backend application servers. Both frontend reverse proxies, running on separate hosts, are accessible from the internet. DNS round robin distributes traffic over both. In the future, the number of reverse proxies is likely to grow, since the web application is very bandwidth-heavy. My question is: how do I keep the state of both reverse balancers / proxies in sync? For example, for maintenance purposes, I might want to reduce the load on one of the backend appservers. Currently I can do that by accessing the Balancer-Manager web form on each proxy and changing the distribution rules. But I have to do that on each proxy manually and make sure I enter the same stuff. Is it possible to "link" multiple instances of mod_proxy_balancer? Or is there a tool out there that connects to a number of instances, and updates all with the same information? Update: The tool should retrieve the runtime status and make runtime changes, just like the existing Balancer-Manager, only for a number of proxies - not just for one. Modification of configuration files is not what I'm interested in (as there are plenty of tools for that).
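    For illustration, a minimal Python sketch of the kind of tool being asked about: push the same Balancer-Manager change to every front-end proxy over HTTP. The hostnames, form-field names and lack of authentication are assumptions, not taken from the question; the real field names (and the nonce newer Apache releases require) would have to be copied from the Balancer-Manager form of the Apache version in use.

        import requests  # assumed dependency

        PROXIES = ["http://proxy1.example.com", "http://proxy2.example.com"]  # hypothetical hosts

        def set_worker(balancer, worker, params):
            """Apply the same worker settings on every front-end proxy."""
            for proxy in PROXIES:
                # The form keys below stand in for the Balancer-Manager fields;
                # they differ between Apache releases, so treat them as placeholders.
                payload = {"b": balancer, "w": worker, **params}
                r = requests.post(f"{proxy}/balancer-manager", data=payload, timeout=5)
                r.raise_for_status()
                print(f"updated {worker} on {proxy}")

        # e.g. lower one backend's load factor everywhere (field name and value are placeholders)
        set_worker("mycluster", "http://app3.internal:8080", {"lf": "1"})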

    Read the article

  • Firefox crash on first load on Ubuntu Linux on Windows Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. No point in reinstalling Windows 2000, so I installed an Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is booting is that while it has an Ethernet port (that worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox boot since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in).

    Read the article

  • Why can't I route to some sites from my MacBook Pro that I can see from my iPad? [closed]

    - by Robert Atkins
    I am on M1 Cable (residential) broadband in Singapore. I have an intermittent problem routing to some sites from my MacBook Pro—often Google-related sites (arduino.googlecode.com and ajax.googleapis.com right now, but sometimes even gmail.com.) This prevents StackExchange chat from working, for instance. Funny thing is, my iPad can route to those sites and they're on the same wireless network! I can ping the sites, but not traceroute to them which I find odd. That I can get through via the iPad implies the problem is with the MBP. In any case, calling M1 support is... not helpful. I get the same behaviour when I bypass the Airport Express entirely and plug the MBP directly into the cable modem. Can anybody explain a) how this is even possible and b) how to fix it? mella:~ ratkins$ ping ajax.googleapis.com PING googleapis.l.google.com (209.85.132.95): 56 data bytes 64 bytes from 209.85.132.95: icmp_seq=0 ttl=50 time=11.488 ms 64 bytes from 209.85.132.95: icmp_seq=1 ttl=53 time=13.012 ms 64 bytes from 209.85.132.95: icmp_seq=2 ttl=53 time=13.048 ms ^C --- googleapis.l.google.com ping statistics --- 3 packets transmitted, 3 packets received, 0.0% packet loss round-trip min/avg/max/stddev = 11.488/12.516/13.048/0.727 ms mella:~ ratkins$ traceroute ajax.googleapis.com traceroute to googleapis.l.google.com (209.85.132.95), 64 hops max, 52 byte packets traceroute: sendto: No route to host 1 traceroute: wrote googleapis.l.google.com 52 chars, ret=-1 *traceroute: sendto: No route to host traceroute: wrote googleapis.l.google.com 52 chars, ret=-1 ^C mella:~ ratkins$ The traceroute from the iPad goes (and I'm copying this by hand): 10.0.1.1 119.56.34.1 172.20.8.222 172.31.253.11 202.65.245.1 202.65.245.142 209.85.243.156 72.14.233.145 209.85.132.82 From the MBP, I can't traceroute to any of the IPs from 172.20.8.222 onwards. [For extra flavour, not being able to access the above appears to stop me logging in to Server Fault via OpenID and formatting the above traceroutes correctly. Anyone with sufficient rep here to do so, I'd be much obliged.]
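    One way to narrow this down (a hedged suggestion, not from the original post): traceroute on OS X sends UDP probes by default while ping uses ICMP, so comparing the two probe types from the MBP shows whether only one of them is being refused. A small sketch, assuming traceroute's -I (ICMP ECHO) mode, which typically needs root:

        import subprocess

        HOST = "ajax.googleapis.com"

        # Default traceroute uses UDP probes; -I switches to ICMP ECHO.
        # If the ICMP run gets out while the UDP run keeps reporting
        # "No route to host", something local to the MBP (firewall rule,
        # stale route) is rejecting UDP rather than the path being down.
        for label, cmd in [("UDP probes", ["traceroute", "-m", "8", HOST]),
                           ("ICMP probes", ["sudo", "traceroute", "-I", "-m", "8", HOST])]:
            print(f"--- {label} ---")
            out = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
            print(out.stdout or out.stderr)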

    Read the article

  • Why is my DSDT table different from what I found online?

    - by Hao Shen
    I have found a field in the DSDT table that I want to modify, following http://www.ztex.de/misc/c2ctl.e.html Generally, I want to modify the _PSS field for the processor so that I can have more frequency levels available in the CPUfreq driver interface. I tried these commands to disassemble the DSDT table on my desktop (Linux 2.6.29, Intel Core 2): cat /proc/acpi/dsdt > dsdt.aml iasl -d dsdt.aml Then I get a file dsdt.dsl like the following (very long, so I just show the beginning of the file): /* * Intel ACPI Component Architecture * AML Disassembler version 20090123 * * Disassembly of dsdt.aml, Mon May 6 20:41:40 2013 * * * Original Table Header: * Signature "DSDT" * Length 0x00003794 (14228) * Revision 0x01 **** ACPI 1.0, no 64-bit math support * Checksum 0x46 * OEM ID "DELL" * OEM Table ID "dt_ex" * OEM Revision 0x00001000 (4096) * Compiler ID "INTL" * Compiler Version 0x20050624 (537200164) */ DefinitionBlock ("dsdt.aml", "DSDT", 1, "DELL", "dt_ex", 0x00001000) { Method (DBIN, 0, NotSerialized) { Noop } Scope (\) { Device (_SB.VBTN) ................... But I cannot find the _PSS field shown on the website above, and I do not know why. I am sure the current cpufreq driver shows 4 frequency levels available, so at least there should be something in the table showing this... right? Has anybody here played with the DSDT table before? Thanks,
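    A quick way to confirm whether _PSS appears anywhere in the disassembly (a minimal sketch; the filename matches the iasl output above):

        # Scan the disassembled DSDT for _PSS and the related _PCT/_PPC objects.
        # If none are found, the P-state tables most likely live in an SSDT
        # rather than the DSDT, which would explain why the field shown on the
        # website is missing even though cpufreq reports four levels.
        tokens = ("_PSS", "_PCT", "_PPC")
        with open("dsdt.dsl") as f:
            for lineno, line in enumerate(f, 1):
                if any(t in line for t in tokens):
                    print(f"{lineno}: {line.rstrip()}")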

    Read the article

  • How does the Windows 7 DNS client work?

    - by Mark Allison
    I am using a local DHCP and DNS server on my home network on a linux machine. It is running CentOS 6.3 with dnsmasq 2.48. It's all working fine except for local DNS lookups for Windows machines only. I have a mix of Ubuntu, CentOS and Windows machines on the network, some virtual, some physical. I have a machine called boron and the domain is called localdomain If I ping boron from any linux machine, I get [root@lithium lists]# ping -c3 boron PING boron.localdomain (10.0.0.5) 56(84) bytes of data. 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=1 ttl=64 time=0.740 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=2 ttl=64 time=0.478 ms 64 bytes from boron.localdomain (10.0.0.5): icmp_seq=3 ttl=64 time=0.458 ms --- boron.localdomain ping statistics --- 3 packets transmitted, 3 received, 0% packet loss, time 2000ms rtt min/avg/max/mdev = 0.458/0.558/0.740/0.131 ms If I do it from my Windows 7 machine, I get: Ping request could not find host boron. Please check the name and try again. If I try ping boron.localdomain I get: Pinging boron.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=16ms TTL=57 Reply from 67.215.65.132: bytes=32 time=188ms TTL=57 Reply from 67.215.65.132: bytes=32 time=15ms TTL=57 Reply from 67.215.65.132: bytes=32 time=14ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 14ms, Maximum = 188ms, Average = 58ms which is clearly wrong. Why is it going out to the internet? Why can't my windows machine resolve the boron hostname to a FQDN? My Windows machines and linux machines get their network config from DHCP. UPDATE If I do ipconfig /all in Windows, it looks as I would expect: Windows IP Configuration Host Name . . . . . . . . . . . . : lanthanum Primary Dns Suffix . . . . . . . : Node Type . . . . . . . . . . . . : Hybrid IP Routing Enabled. . . . . . . . : No WINS Proxy Enabled. . . . . . . . : No DNS Suffix Search List. . . . . . : .localdomain Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : .localdomain Description . . . . . . . . . . . : Realtek PCIe GBE Family Controller Physical Address. . . . . . . . . : 50-E5-49-38-FC-A2 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IPv4 Address. . . . . . . . . . . : 10.0.0.57(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Lease Obtained. . . . . . . . . . : 23 August 2012 13:58:45 Lease Expires . . . . . . . . . . : 24 August 2012 07:58:48 Default Gateway . . . . . . . . . : 10.0.0.6 DHCP Server . . . . . . . . . . . : 10.0.0.6 DNS Servers . . . . . . . . . . . : 10.0.0.6 208.67.222.222 208.67.220.220 NetBIOS over Tcpip. . . . . . . . 
: Enabled When I do an nslookup I get: Server: carbon.localdomain Address: 10.0.0.6 *** carbon.localdomain can't find boron: Unspecified error However if I do ifconfig -a in Linux I get: [root@nitrogen ~]# ifconfig -a eth0 Link encap:Ethernet HWaddr 00:0C:29:AF:EC:2A inet addr:10.0.0.7 Bcast:10.0.0.255 Mask:255.255.255.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:187687 errors:0 dropped:0 overruns:0 frame:0 TX packets:5857 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:23910700 (22.8 MiB) TX bytes:712964 (696.2 KiB) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:329894 errors:0 dropped:0 overruns:0 frame:0 TX packets:329894 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:67153143 (64.0 MiB) TX bytes:67153143 (64.0 MiB) and nslookup: [root@nitrogen ~]# nslookup boron Server: 10.0.0.6 Address: 10.0.0.6#53 Name: boron Address: 10.0.0.5 Both machines are on the same network using the same DHCP server. UPDATE 2 I thought the issue was resolved but I am getting intermittent DNS resolving issues but only on my Windows 7 machine. All my linux boxes are fine. This is what happens when I ping and nslookup from Windows to a Windows 2008 Server: C:\Users\mark>nslookup magnesium Server: carbon.localdomain Address: 10.0.0.6 Name: magnesium.localdomain Address: 10.0.0.12 C:\Users\mark>ping magnesium Pinging magnesium.localdomain [67.215.65.132] with 32 bytes of data: Reply from 67.215.65.132: bytes=32 time=267ms TTL=57 Reply from 67.215.65.132: bytes=32 time=162ms TTL=57 Reply from 67.215.65.132: bytes=32 time=510ms TTL=57 Reply from 67.215.65.132: bytes=32 time=146ms TTL=57 Ping statistics for 67.215.65.132: Packets: Sent = 4, Received = 4, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 146ms, Maximum = 510ms, Average = 271ms And from Linux: [root@beryllium ~]# ping -c4 magnesium PING magnesium.localdomain (10.0.0.12) 56(84) bytes of data. 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=1 ttl=128 time=0.176 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=2 ttl=128 time=0.634 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=3 ttl=128 time=0.685 ms 64 bytes from magnesium.localdomain (10.0.0.12): icmp_seq=4 ttl=128 time=0.263 ms --- magnesium.localdomain ping statistics --- 4 packets transmitted, 4 received, 0% packet loss, time 3002ms rtt min/avg/max/mdev = 0.176/0.439/0.685/0.223 ms [root@beryllium ~]# nslookup magnesium Server: 10.0.0.6 Address: 10.0.0.6#53 Name: magnesium.localdomain Address: 10.0.0.12 UPDATE 3 I stopped the Windows DNS client on my Windows 7 machine with net stop dnscache and it is now working fine. It would be nice to get DNS working with the DNS client on, but I might be OK without it, what do you think?
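    To see which of the three configured resolvers hands back the bogus 67.215.65.132 answer, a small sketch using dnspython (an assumed dependency; the server addresses are copied from the ipconfig output above) queries each one directly:

        import dns.resolver  # pip install dnspython (assumed dependency)

        SERVERS = ["10.0.0.6", "208.67.222.222", "208.67.220.220"]
        NAME = "boron.localdomain"

        for server in SERVERS:
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            try:
                answer = r.resolve(NAME, "A")   # dnspython >= 2.0; use r.query() on 1.x
                print(server, [a.to_text() for a in answer])
            except Exception as exc:
                print(server, "->", exc)

    If only the 208.67.x.x servers (OpenDNS) return 67.215.65.132, the Windows resolver is likely falling through to them and getting OpenDNS's NXDOMAIN redirect, which would account for the "wrong" ping target.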

    Read the article

  • XP Pro freezes on welcome screen

    - by Peter B
    I have a problem that sounds much the same as the one in http://superuser.com/questions/83101/how-to-diagnose-a-freeze-on-startup-in-windows-xp. About every 2nd or 3rd boot, XP-Pro freezes on the Welcome page (the one showing the user name icons). The mouse moves the cursor OK, but clicking on an icon does nothing, and neither does any keystroke. If you press too many keys there is a beep, and after that the mouse won't move the cursor anymore. The workaround is always to reboot into Safe mode and request a CHKDSK /R. After this, the next boot is fine. There are no related entries in the Event log when the problem occurs. The only two entries are: "Microsoft (R) Windows (R) 5.01. 2600 Service Pack 3 Multiprocessor Free." "The Event log service was started." Update 1: Many thanks for that, but I found no problems with any diagnostics I have run. However, I have managed to locate, and code round, the source of the problem, which is with whatever processing goes on behind the XP Welcome (as opposed to "classic") login screen. Unfortunately, having classic login means you don't get the useful Fast User Switching (FUS) login/switch mode. So, to retain this, my fix is: add a Windows shutdown script (using gpedit.msc) to force "classic" mode for the first logon after the next XP startup, by running: reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v LogonType /t REG_DWORD /d 0 /f. Then add a Scheduled Task to run at user logon that re-enables the Welcome screen (and hence FUS) by running the same command with 1 instead of 0 after the /d flag. The task is run as a privileged user (who can run "reg add").
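    For reference, a minimal Python sketch of the same toggle done with the standard winreg module (same key and value as the reg add commands above; it must run with administrator rights):

        import winreg

        KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

        def set_logon_type(value):
            """0 = classic logon, 1 = Welcome screen (Fast User Switching)."""
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
                winreg.SetValueEx(k, "LogonType", 0, winreg.REG_DWORD, value)

        set_logon_type(0)   # shutdown script: force classic logon for the next boot
        # set_logon_type(1) # logon task: re-enable the Welcome screen / FUS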

    Read the article

  • How many reverse proxies (nginx, haproxy) is too many?

    - by Alysum
    I'm setting up an HA (high availability) cluster using nginx, haproxy & apache. I've been reading great things about nginx and haproxy. People tend to choose one or the other but I like both. Haproxy is more flexible for load balancing than nginx's simple round robin (even with the upstream-fair patch). But I'd like to keep nginx for redirecting non-https to https, among other things, right at the point of entry to the cluster. On the other hand, nginx is a lot faster for serving static content and would reduce the load on the powerful apache, which loves to eat a lot of RAM! Here is my planned setup: Load balancer: nginx listens on port 80/443 and proxy_forwards to haproxy on 8080 on the same server to load balance between the multiple nodes. Nodes: nginx on the node listens for requests coming from haproxy on 8080; if the content is static, it serves it. But if it's a backend script (in my case PHP), it proxy forwards to apache2 on the same node server, listening on a different port number. Technically this setup works, but my concern is whether having the requests go through several proxies is going to slow them down. Most of the requests will be PHP requests, as the backends are services (which means going from nginx - haproxy - nginx - apache). Thoughts? Cheers
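    One way to put a number on the extra hops (a hedged sketch, not a definitive benchmark): time the same PHP URL against an apache backend directly and against the full nginx - haproxy - nginx - apache chain, then compare. The hostnames and the backend port are assumptions.

        import time
        import urllib.request

        # Hypothetical endpoints: the public entry point vs. one node's apache port.
        TARGETS = {
            "full chain (nginx:80)": "http://cluster.example.com/test.php",
            "apache direct (:8081)": "http://node1.example.com:8081/test.php",
        }

        for label, url in TARGETS.items():
            samples = []
            for _ in range(20):
                start = time.perf_counter()
                urllib.request.urlopen(url).read()
                samples.append(time.perf_counter() - start)
            median = sorted(samples)[len(samples) // 2]
            print(f"{label}: median {median * 1000:.1f} ms")

    The gap between the two medians is roughly the per-request cost of the proxy layers; for PHP requests it is usually small compared to the script's own run time.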

    Read the article

  • What is the best way/software to manage multiple short-lived instances of virtual machines?

    - by Newtopian
    Hi, We have a QA department that has to test our software on multiple combinations of OS and DBMS. With Windows spewing out many different versions, the combinatorial math of all this can be daunting. So we decided on virtualizing our setups, but so far it only displaces the problem. The cost of hardware is high and we need many different combinations, far exceeding our server capacity to deliver. Also, these instances are throwaway: once the test is complete we no longer need them; furthermore, to ensure proper test isolation we should start fresh from a new instance. Lastly, we only need a small subset of these systems online at any given time. What I am looking for is a way to manage inventory so that our QA staff can order instances to be put online as required and discarded once used. Instances are spawned from a pool of freshly installed systems with the appropriate combination ready to accept our software. It should also be possible for two or more people to start the same instance at the same time, though we could manage without this if it proves too complex to put in place. Finally, our budget is pretty thin; we can probably make some purchases but ideally expenditures should be kept to a minimum. To summarize, we should be able to: bring instances online on demand (ideally with queue and scheduling management); destroy instances on demand; keep masters in inventory but not online; manage a large inventory of VMs (30-100, maybe more) with a small staff of users (5-10); allow adding, deleting and changing instances from inventory (bring online, make changes and check back in, or create new and check in); allow a few long-lived instances for support tools (normal VM server usage). Thanks for your answers

    Read the article

  • What's the best way to run Drupal and Django sites behind the same Varnish server?

    - by Alexis Bellido
    I have a high-traffic website running with Drupal and Apache, with five web servers behind a Varnish server doing load balancing. Let's say this site is example.com. I'm using five backends and a director like this in my default.vcl: director balancer round-robin { { .backend = web1; } { .backend = web2; } { .backend = web3; } { .backend = web4; } { .backend = web5; } } Now I'm working on a new Django project that will be a new section of this site running on example.com/new-section. After checking the documentation I found I can do something like this: sub vcl_recv { if (req.url ~ "^/new-section/") { set req.backend = newbackend; } else { set req.backend = default; } } That is, using a different backend for a subdirectory /new-section under the same domain. My question is, how do I make something like this work with my director and load balancing setup? I'm probably going to run two or more web servers (backends) with my new Django project, each one with a mix of Gunicorn, Nginx, and a few Python packages, and would like to put all of those in their own Varnish director to load balance. Is it possible to use the above approach to decide which director to use, like this: sub vcl_recv { if (req.url ~ "^/new-section/") { set req.director = newdirector; } else { set req.director = balancer; } } All suggestions welcome. Thanks!

    Read the article

  • Autodiscover service seems to reply with User Principal Name instead of email address

    - by Jeff McJunkin
    After this latest round of Windows updates (on 1/11/11, in fact) my Exchange 2007 server of course rebooted. This may have had the side effect of making any changes I'd inadvertently made take effect. Since then, the Autodiscover service in Exchange 2007 from Outlook 2007 seems to reply with the User Principal Name ([email protected] instead of [email protected]). I'm specifically seeing this from within the "Test Email AutoConfiguration" tool in Outlook (the UPN appears in the first text box labeled "E-mail") and when creating a new profile in Outlook. If I disregard the UPN and instead fill in my email address, Autodiscover works as expected and I can connect without issue. I've confirmed using ADSI Edit that the SMTP email address is properly set for my users. I even went a bit crazy and set the UPN to the email address using ADSI Edit. I've re-installed the Client Access role on the server in question. Exchange server is Server 2008, 64-bit of course. Clients are mostly XP 32-bit, though the issue happens from a Windows 7 machine as well.

    Read the article

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching I downloaded psftp. I used FileZilla earlier, but I cannot install it on my desktop for a few reasons, and psftp (similar to PuTTY) is just an executable for file transfer. After going through this link http://www.math.tamu.edu/~mpilant/math696/psftp.html, I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say get filename, it throws back the error "local: unable to open filename". I tried that with other files too, and I end up getting the same error. The psftp.exe file is on my desktop. The process that I am using is: I double-click the .exe file, open "servername", cd /path/where/files/are, get "filename". And I get this error "local: unable to open filename". Am I making a mistake or is it a problem with this executable?
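    The "local:" prefix in the error points at the local side, so it is worth checking which local directory psftp is writing into (lpwd) and moving it somewhere writable with lcd. For repeat transfers, psftp can also be driven non-interactively with a batch file; a hedged sketch (host, user and paths are placeholders, and authentication is left out):

        import os
        import subprocess
        import tempfile

        # psftp batch commands: lpwd/lcd act on the local side, cd/get on the remote.
        commands = [
            "lpwd",                            # show where downloads will land
            r"lcd C:\Users\me\Downloads",      # pick a writable local directory
            "cd /path/where/files/are",
            "get filename",
            "quit",
        ]

        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            f.write("\n".join(commands) + "\n")
            script = f.name

        # -b runs the batch file; add -l/-pw or Pageant for authentication.
        subprocess.run(["psftp", "-b", script, "user@servername"], check=True)
        os.remove(script)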

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started) and so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only 3 lines in the php.ini file (the session.save_handler and session.save_path): I replaced: session.save_handler = files with: session.save_handler = memcache Then on the master webserver I set the session.save_path to point to localhost: session.save_path="tcp://localhost:11211" and on the slave webserver I set the session.save_path to point to the master: session.save_path="tcp://192.168.0.1:11211" Job done, I tested it and it works. But... Obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes - I'm a little concerned by this, but I am a bit more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load balanced to the slave webserver their sessions will be fetched across the network from the master webserver. I was wondering if I could define two save_paths so the machines look in their own session storage before using the network. For example: Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211" Slave: session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211" Would this successfully share sessions across the servers AND help performance? i.e. save network traffic 50% of the time. Or is this technique only for failovers (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication - more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one and only create a new session if it doesn't find one in all the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP, lol... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
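    For background on how a client usually treats a multi-server save_path, a minimal sketch with the python-memcached client (used here as a stand-in for PHP's memcache extension, whose exact behaviour may differ): a key is hashed to exactly one server in the pool rather than looked up in all of them, which is the crux of the question.

        import memcache  # pip install python-memcached (assumed dependency)

        # Two hypothetical session stores, mirroring the master/slave pair above.
        pool = memcache.Client(["192.168.0.1:11211", "192.168.0.2:11211"])

        session_id = "sess_4f2a9c"
        pool.set(session_id, {"user_id": 42})

        # The client hashes the key to pick ONE server; reads for the same key
        # always go back to that server instead of checking each store, so a
        # server list gives you distribution, not "look locally first".
        server, _ = pool._get_server(session_id)   # internal helper, shown for illustration
        print("session stored on:", server)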

    Read the article

  • What is the difference between disabling hibernation and idling time for a NAS?

    - by Gary M. Mugford
    I have two D-LINK DNS-323 NAS boxes with two Seagate drives in each. The first one is about a year old, the second one about three months. The first two on Monster are each 1.5T drives while the last two on Origami are 2T drives. I have never been overly happy with the Monster drives but, outside of poor throughput on small files, they have been consistently available to all programs after I put a batch file into my startup to do a directory listing of each. I added the two new drives when I added the Origami box. But, watching the DOS box that comes up, I rarely see both listed before the box disappears. Other programs, backups, Belarc, even my file browsers, seem to have a dickens of a time seeing O: and P:. Finally, I decided to go into setup and turn off hibernation. Performance HAS been better since and Belarc, for instance, now sees both drives. At the time of poking around, I noticed an Idle Time feature too. What is the difference between the two settings? And for added points, how much trouble am I in for turning off hibernation? The super bonus round ... anything ELSE I should have done? Thanks in advance, GM

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the name of the default page images, is there any way to have wget ignore them and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages: mkdir candy-canes-on-the-flannel-board-in-preschool cd candy-canes-on-the-flannel-board-in-preschool wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool" rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg cd ../ Now I realize the script is not likely as savvy as it could be, but it is doing what I need at the moment, except that you can see from the rm command that I would just like to prevent wget from downloading the files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page, really the same file with an alternate name, opens up fine. That is something that, if I could fix it, would also help to streamline the process.
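    wget's --reject (-R) option takes a comma-separated list of file names or patterns to skip, which should stop the standard images being fetched at all instead of being deleted afterwards. A hedged sketch that builds the first wget call with a reject list (names abridged from the rm line above):

        import subprocess

        URL = ("http://www.teachpreschool.org/2011/12/"
               "candy-canes-on-the-flannel-board-in-preschool/")

        # Boilerplate images we never want; names taken from the rm list above.
        reject = ["Baby-and-Toddler.jpg", "Childrens-Books.jpg", "Creative-Art.jpg",
                  "Felt-Fun.jpg", "Math.jpg", "Outdoor-Play.jpg", "Signature-2.jpg"]

        cmd = ["wget", "-p", "-nd", "-A", "jpg,html", "-k",
               "-R", ",".join(reject),   # skip these instead of rm-ing them later
               URL]
        subprocess.run(cmd, check=True)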

    Read the article

  • What's the piece of hardware listening on Facebook's or Wikipedia's IP address?

    - by Igor Ostrovsky
    I am trying to understand how massive sites like Facebook or Wikipedia work, for my intellectual curiosity. I read about various techniques for building scalable sites, but I am still puzzled about one particular detail. The part that confuses me is that ultimately, the DNS will map the entire domain to a single IP address, or a handful of IP addresses in the case of round-robin DNS. For example, wikipedia.org has only one type-A DNS record. So, people from all over the world visiting Wikipedia have to send a request to the one IP address specified in DNS. What is the piece of hardware that listens on the IP address for a massive site, and how can it possibly handle all the load coming from the requests for users all over the world? Edit 1: Thanks for all the responses! Anycast seems like a feasible answer... Does anyone know of a way to check whether a particular IP address is anycast-routed, so that I could verify that this really is the trick used in practice by large sites? Edit 2: After more reading on the topic, it appears that anycast is not typically used for dynamic web content. Anycast is usually used for UDP (e.g., DNS lookups), or sometimes for static content. One interesting thing to note is that Facebook uses profile.ak.fbcdn.net to host static content like style sheets and javascript libraries. Each time I ping this name, I get a response from a different IP address. However, I can't tell whether this is anycast in action, or a completely different technique. Back to my original question: as far as I can tell, even a large site will have a single expensive piece of load-balancing hardware listening on its handful of public IP addresses.
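    The rotation behind profile.ak.fbcdn.net can be watched from the resolver rather than with ping; a minimal standard-library sketch that resolves the name a few times and prints the distinct A records returned:

        import socket
        import time

        NAME = "profile.ak.fbcdn.net"   # the CDN hostname mentioned above
        seen = set()

        for _ in range(5):
            # getaddrinfo returns all A records handed back for this lookup.
            addrs = {info[4][0] for info in socket.getaddrinfo(NAME, 80, socket.AF_INET)}
            seen |= addrs
            print(sorted(addrs))
            time.sleep(2)

        print("distinct addresses seen:", sorted(seen))

    Answers that change between lookups point at DNS-level rotation (short-TTL CDN records) rather than anycast; with anycast the same address is advertised from many networks, so the address itself would not need to change.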

    Read the article

  • Why does this loopback device creation malfunction?

    - by user50118
    The stackoverflow people thought this was more appropriate here; I put it there as it is part of a program, but I can see their POV, so here it is: At the bottom of the code you can see it failing. In fact, I'll put it here at the start too, because it is the problem I need to solve: [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks) I don't understand why the device is supposedly too small. I made this partition two days ago with normal fdisk; it was created and formatted with ext4, supplying no options other than the partition (/dev/sdb2) to format. The only explanation I can think of is that ext4 has the size of the partition wrong somehow, but that seems very unlikely. What is wrong with my math? The offset is correct, you can see that with the file command, and the size should be correct too because End - Start comes to the same number of sectors minus 1, just like it should (a disk starting on sector 1 and ending on sector 2 would be 2 - 1 = 1 and have two sectors). # sfdisk -luS /dev/sdb Disk /dev/sdb: 9729 cylinders, 255 heads, 63 sectors/track Units = sectors of 512 bytes, counting from 0 Device Boot Start End #sectors Id System /dev/sdb2 78295040 156296384 78001345 83 Linux # losetup -r -f --show -o $((78295040 * 512)) --sizelimit $((78001345 * 512)) /dev/sdb /dev/loop0 # file -s /dev/loop0 /dev/loop0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files) # mount -o ro -t ext4 /dev/loop0 /mnt mount: wrong fs type, bad option, bad superblock on /dev/loop0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so # dmesg | tail -n 1 [350591.924819] EXT4-fs (loop0): bad geometry: block count 9750806 exceeds size of device (9750168 blocks)
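    The kernel's two numbers can be reproduced from the sfdisk output, which shows where the mismatch actually is (a worked check, assuming the 4 KiB ext4 block size implied by the error message):

        SECTOR = 512
        BLOCK = 4096                      # ext4 block size assumed from the dmesg figures

        part_sectors = 78001345           # #sectors for /dev/sdb2 from sfdisk
        fs_blocks = 9750806               # block count ext4 expects (from dmesg)

        device_blocks = part_sectors * SECTOR // BLOCK
        print("device holds:", device_blocks, "blocks")   # 9750168, matches dmesg
        print("filesystem wants:", fs_blocks, "blocks")
        shortfall = (fs_blocks - device_blocks) * BLOCK // SECTOR
        print("shortfall:", shortfall, "sectors")         # 5104 sectors

    So the losetup arithmetic matches what the kernel sees; the filesystem was simply created on a slightly larger extent than the partition now covers, which is consistent with the partition having been shrunk or recreated after mkfs rather than with an error in the offsets.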

    Read the article

  • Debian: How to convert a filesystem from ISO-8859-1 to UTF-8?

    - by Johan
    I have an old PC that is running Debian stable and is in need of an upgrade. The problem is that it is using latin1 (ISO-8859-1) for everything, and since the rest of the world has moved to UTF-8 I plan to convert this computer as well. For this question I will focus on the files that are served with Samba; some have latin1 characters in the filenames (like åäö). Now my plan is to move all data from this old computer onto a brand new one that is running Debian stable (but with UTF-8). Does anybody have a good idea? Thanks Johan Note: later I plan to use iconv to convert the content of some files with something like this: iconv --from-code=ISO-8859-1 --to-code=UTF-8 iso.txt > utf.txt However, I don't know of a good way to convert the filesystem itself. Note: Normally I just scp from one computer to the next, but then I end up with latin1 characters in the utf-8 filesystem... Update: Did a small test round with a handful of files (with funny chars) in the filenames, and that seemed like it could work: convmv -r -f ISO-8859-1 -t UTF-8 * So it was only a matter of executing it with --notest: convmv -r -f ISO-8859-1 -t UTF-8 --notest * Nothing more to it.
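    convmv (as in the update) is the usual tool; purely for illustration, a minimal Python sketch of the same idea, working on raw filename bytes so nothing gets double-encoded. Run it on a copy first; the path is a placeholder and this is not a replacement for convmv.

        import os

        def convert_tree(root):
            # Walk bottom-up so directory entries are renamed after their contents.
            for dirpath, dirnames, filenames in os.walk(root, topdown=False):
                for name in filenames + dirnames:
                    try:
                        name.decode("utf-8")
                        continue                  # already valid UTF-8, leave it alone
                    except UnicodeDecodeError:
                        pass
                    fixed = name.decode("latin-1").encode("utf-8")
                    os.rename(os.path.join(dirpath, name),
                              os.path.join(dirpath, fixed))

        convert_tree(b"/srv/samba/share")         # hypothetical share path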

    Read the article

  • Insufficient channel capacity of 1GBit

    - by Roman S
    There is a caching server (Varnish): it receives data from Amazon S3 on request, saves it for some time and gives it to the client. We have run into the problem of insufficient channel capacity on a 1Gbit link. Peak load within 4 hours completely chokes the channel. Server performance is sufficient for now. Approximately 4.5TB of data are transmitted per day; more than 100TB accumulate per month. The first thought that comes to mind is simply to add one more 1Gbit port and sleep peacefully until 2Gbit is no longer enough (which may happen quite quickly) or one server is no longer able to handle it. Then we just need to add new caching servers. But then we need a load balancer which will always send requests for one and the same URL to one and the same server (to avoid multiple copies of the same cached objects). Here are the questions: Does a balancer need bandwidth equal to the sum of the bandwidth of all the caching servers? What should we do if there are no ports left on the balancer? Should we add more balancers, or solve the problem by means of round-robin DNS? What are the standard approaches to such problems? Can anyone recommend hosting companies which can solve this problem? We are interested in the American and European markets.
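    The "same URL always to the same server" requirement is normally met by hashing the URL rather than by round robin, and if responses go straight from the caches to the clients (direct routing or redirects), the balancer does not need bandwidth equal to the sum of the caches' links. A toy sketch of the hashing part (the server list is hypothetical):

        import hashlib

        CACHES = ["cache1.example.com", "cache2.example.com", "cache3.example.com"]

        def cache_for(url):
            # Stable hash: the same URL always maps to the same cache node,
            # so each object is stored only once across the pool.
            digest = hashlib.md5(url.encode()).hexdigest()
            return CACHES[int(digest, 16) % len(CACHES)]

        for u in ["/img/a.jpg", "/img/b.jpg", "/video/c.mp4"]:
            print(u, "->", cache_for(u))

    A plain modulo reshuffles most URLs whenever a node is added; consistent hashing avoids that, which starts to matter once the pool grows.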

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. No point in reinstalling Windows 2000, so I installed an Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the mouse (operated by touchpad) freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes; the behavior doesn't change) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is booting is that while it has an Ethernet port (that worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox boot since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in). EDIT: I tried to run the Disk manager tool, not that I cared what it was, just a menu-available application. It started up like Firefox: I get a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

    Read the article

  • New SSD, is the MBR broken? DISK BOOT FAILURE

    - by Shevek
    I've been running Windows 7 on a WD 500gb SATA single drive, single partition setup for some time with no issues. I've just installed a new Kingston V Series 64gb SSD and performed a clean install of Win7 to it, deleting the partitions on the 500gb and using that as a data drive. All was well for a few reboots but then I started to get "DISK BOOT FAILURE, INSERT SYSTEM DISK AND PRESS ENTER" messages. If I put the Win7 install DVD back in the drive it boots fine. Tried a clean install again, after replacing SATA cables and swapping SATA ports, with a complete partition wipe of both drives. Again, rebooted fine a few times then back with the "DISK BOOT FAILURE" error. Looked on the web and found some discussions about it so I then started from scratch again. This time I wiped the MBR on both drives using MBRWork, disconnected the 500gb and reinstalled to the SSD. Removed the install DVD and installed all the drivers which involved many reboots, all with no problem. To make sure I also did a few cold boots as well. Reconnected the 500gb, initialised, partitioned and formatted it. Copied data to it and did some more reboots and shutdowns. All was ok. Then out of the blue comes another "DISK BOOT FAILURE" and again, if the Win7 install DVD is in the drive it boots fine. So, is the SSD a bad'un? TIA UPDATE: It was a BIOS issue! I found a hidden away option for HDD boot order, which was separate from the usual HDD/CDRom/FDD boot order option. The WD was set to boot before the SSD... Swapped them round and all is well. Still don't understand how it worked at first though... Thanks Solaris

    Read the article

  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to Exchange 2010 and have a setup with 2 of my servers running CAS roles - EXCH01, EXCH02. EXCH02 just happens to also have a mailbox role, where a lot of the users sit. EXCH01 is my front-facing CAS server, facing the net with SSL etc., and incoming mail moves through it as a hub transport server as well. As I was trying to lean things out in my VM environment I removed the CAS role from EXCH02, and all hell broke loose. All the mail users that have a mailbox on EXCH02 had their homeMTA set to a deleted items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually fixed these back to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). I must add, as a strange thing on the side, that before I reset these to point at EXCH02 I tried EXCH01 and it failed. I still want to remove the CAS role from EXCH02 as it should really not have it (an error in my install planning), and I would have thought that this would not cause the issues it did; I assumed that since there was another CAS server in the admin group all would be good. Was I wrong in my assumption? And what can I do to complete this successfully the second time round? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?

    Read the article

  • Small TCP Window on WAN between 2 Locations

    - by Brent
    Site A: Denver datacenter. 60MBPS. Site B: Chicago. 100MBPS. ICMP pings: Packets: Sent = 176, Received = 176, Lost = 0 (0% loss), Approximate round trip times in milli-seconds: Minimum = 74ms, Maximum = 94ms, Average = 75ms File transfer between sites that never goes past ~7MBPS: Windows Update download at 60MBPS+: Site to site: IPSec VPN using two Cisco 5520's. CPU at 3-4% and lots of memory to spare. The latency between the two sites is very acceptable, so I can't see why it is performing so slowly when transferring between the two sites. I have found that any type of transfer (FTP, HTTP, Windows file shares) will never go above ~7MBPS. When the WAN was first set up, I was able to get transfers at 50-60MBPS, which is what is expected due to the WAN connection at Site A at 60MBPS. Then a few days later, I was not able to get anything going faster than ~7MBPS. Is there an upstream router between Denver and Chicago causing this? I want to take the blame away from our setup, as downloads from Windows Update go blazing fast, and for the first few days after the site-to-site VPN came up I was transferring VM images at 50-60MBPS. Our stack: HP P2000 MSA - HP C7000 Chassis - HP Flex-10 - Cisco Gigabit switch - Cisco ASA - WAN
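    The figures in the post line up with the classic bandwidth-delay-product limit: with the default 64 KB receive window (no window scaling) and ~75 ms of latency, a single TCP stream tops out at roughly 7 Mbit/s regardless of link size, assuming the MBPS figures above are megabits per second. A quick check:

        window = 64 * 1024          # bytes: TCP receive window without window scaling
        rtt = 0.075                 # seconds: ~75 ms Denver <-> Chicago

        max_throughput = window / rtt                      # bytes per second
        print(f"{max_throughput * 8 / 1e6:.1f} Mbit/s")    # ~7.0 Mbit/s

        # Window needed to fill a 60 Mbit/s link at this latency:
        needed = (60e6 / 8) * rtt
        print(f"{needed / 1024:.0f} KiB window required")  # ~549 KiB

    If something in the path strips or rewrites the window-scale option (TCP normalisation on the ASAs is one commonly reported culprit), sessions get pinned to 64 KB, which would explain fast transfers at first and ~7 Mbit/s afterwards.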

    Read the article

  • How do I upgrade Windows Server 2008 R2 Standard (OEM Key) to Enterprise (MSDN Key) using DISM?

    - by Tom Crane
    (Originally asked as After upgrading to 2008 R2 Enterprise and installing more RAM, Windows can only see 4.00 GB but now I know what the question really is...) My Dell server came preinstalled with 2008 R2 Standard. I upgraded to Enterprise to take advantage of more than 32GB RAM. This server is purely for dev and testing, so I want to use my MSDN product key for the upgrade. I originally tried to upgrade using the MSDN Enterprise key, but it wouldn't have it: dism /online /Set-Edition:ServerEnterprise /ProductKey:[MSDN key] => Error DISM DISM Transmog Provider: PID=5728 Product key is keyed to [], but user requested transmog to [ServerEnterprise] - CTransmogManager::ValidateTransmogrify I tried several things, including changing the current product key to the MSDN one. Eventually I used a KMS generic key, which can be found in several TechNet forum posts. dism /online /Set-Edition:ServerEnterprise /ProductKey:[KMS Generic Key] ... and this appeared to work. I then changed the product key again (using the control panel) to the MSDN key, thinking that was the end of the matter. Only later, when I tried to start up VMs, did I realise I only had 4GB of usable RAM. I didn't make the connection with the licensing changes at this point and went off on a wild goose chase of BIOS settings, memory configurations and the like. Only later when I saw this... http://social.technet.microsoft.com/Forums/en/winserverTS/thread/6debc586-0977-4731-b418-ca1edb34fe8b ...did I make the connection and reapply the KMS Generic key - which gave me all the RAM back. But now I have a system that isn't properly licensed; presumably I won't be able to activate it as it is, so I've got 2 days to enjoy it. With the MSDN key applied, only 4GB of RAM is usable. Is there a way round this without a) rebuilding the server from scratch with the MSDN key from the start or b) buying a retail Enterprise license?

    Read the article

  • General name for Macs' operating system

    - by andy124
    First of all, I hope that my question is fairly suitable for this site. I have a website where I would like to write articles about some operating systems. Therefore, I have created a main category called "Operating systems". Within a subcategory, I would like to write articles about Apple's operating system that runs on Macs. However, I do not know what to name this category. I have always thought the name was just OS X, but come to think of it, the "X" is actually part of the version (10). Therefore I cannot exactly call my category OS X, because what about when OS 11 is released in a few years? And since Apple has gone from Mac OS X to just OS X, I cannot use "Mac OS" either. And if I remove the X from OS X, then I only have "OS" left, which does not seem proper. I am really looking for a meaningful all-round name for Macs' operating system that does not involve the versioning. I was thinking about just calling the category "Mac", but that is not precise either - but perhaps that is the closest I can get?

    Read the article
