Search Results

Search found 13891 results on 556 pages for 'maybe homework'.

  • How to configure mod_proxy_balancer to gracefully fail under high load

    - by bramp
    We have a system with one Apache instance in front of multiple Tomcats. These Tomcats then connect to various databases. We balance the load to the Tomcats with mod_proxy_balancer. Currently we are receiving 100 requests a second. The load on the Apache server is quite low, but due to database-heavy operations on the Tomcats, the load there is roughly 25% (of what I estimate they can handle). In a few weeks there is an event happening and we estimate that our requests will jump significantly, maybe by a factor of 10. I'm doing everything I can to reduce the load on our Tomcats, but I know we are going to run out of capacity, so I would like to fail gracefully. By this I mean that instead of trying to deal with too many connections which all time out, I would like Apache to somehow monitor average response time, and as soon as the response time from Tomcat rises above some threshold, display an error page. That way users who are lucky still get a page rendered quickly, and those who are unlucky get an error page quickly, instead of everyone waiting far too long for their page, eventually everyone timing out, and the database being swamped with queries whose results are never used. Hopefully this makes sense, so I was looking for suggestions on how I could achieve this. Thanks
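
    Not part of the original question, but one way to put a number on that response-time threshold before the event is to load-test a representative URL and watch how latency degrades. A minimal sketch, assuming ApacheBench (ab) is available and the URL is a placeholder:

        # 1000 requests, 50 concurrent, against a representative page (hypothetical URL)
        ab -n 1000 -c 50 http://www.example.com/some/representative/page
        # the "Percentage of the requests served within a certain time" table in the
        # output shows how latency climbs as concurrency approaches Tomcat's capacity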

    Read the article

  • SSL Returning Blank Page, No Catalina Errors

    - by Mr.Peabody
    This is my second, maybe third, time configuring SSL with Tomcat. Earlier I had created a self-signed certificate, which worked, and now using my signed certificate is proving fruitless. I am using Tomcat, running on the Amazon Linux AMI. When using the signed cert/keystore, my server starts normally without errors. However, when trying to navigate to the domain it gives me an "ERR_SSL_VERSION_OR_CIPHER_MISMATCH" error. My server.xml connector looks as follows:

        <Connector port="8443" maxHttpHeaderSize="8192"
                   maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
                   enableLookups="false" disableUploadTimeout="true"
                   acceptCount="100" scheme="https" secure="true" SSLEnabled="true"
                   clientAuth="false" sslProtocol="TLS"
                   keystoreFile="/home/ec2-user/.keystore/starchild.jks"
                   keystorePass="d6b5385812252f180b961aa3630df504" />

    It couldn't hurt to also mention that I'm using a wildcard certificate. Please let me know if anything looks amiss!

    EDIT: After looking more into this, I've determined there may be nothing wrong with the server.xml or the listening ports. This is looking more like an actual certificate error, as a curl request gives me this:

        curl: (35) Unknown SSL protocol error in connection to jira.mywebsite.com:-9824

    Though, I can't seem to figure out what the "-9824" is. When comparing this curl to another similar setup (using the same wildcard certificate) it turns up the full handshake, which is to be expected. I believe this now comes down to the default protocol/cipher set on the JIRA server.
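
    For reference (not in the original post): a quick way to see which protocol versions the connector will actually negotiate is to probe it directly with the openssl CLI, assuming it is installed and port 8443 is reachable:

        # try specific protocol versions against the Tomcat connector
        openssl s_client -connect jira.mywebsite.com:8443 -tls1
        openssl s_client -connect jira.mywebsite.com:8443 -ssl3
        # a successful run prints the certificate chain and the negotiated cipher;
        # a handshake failure here mirrors the browser's cipher-mismatch error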

    Read the article

  • Mail to MailEnable group sometimes bounces back because it cannot find one of the group members

    - by Stanley
    I am using MailEnable Enterprise Edition 6.53. On one of my PostOffices I have a group address set up with 5 members in the group. Under normal circumstances, when someone sends a mail to this group the mail gets successfully forwarded to all 5 members. However, every now and then a sender who mails the group gets a bounce message back from us saying that the mail could not be delivered because one of the users' addresses was not found on the server. This can happen for any one of the 5 users in the group. The error message looks like:

        Message could not be delivered. Error was: The email address is not available on this system
        The following recipient(s) could not be reached:
        [SF:mydomain.com/my_username]

    What could be causing this and how do I fix it? Obviously the mailbox DOES exist as, under normal circumstances, when someone mails the group the mail gets delivered successfully and forwarded to all group members. All the mailboxes most definitely DO exist on the system.

    EDIT: I have done more investigation and I see an error in Windows Event Viewer:

        MailEnable Database Provider error: 0 FAILURE: (SQLDriverConnect), Module: MEIMAPS.exe; Error: [Microsoft][ODBC SQL Server Driver]Timeout expired

    This seems to be related to http://forum.mailenable.com/viewtopic.php?p=53157 and http://www.mailenable.com/kb/content/view.asp?ID=ME020535 but the setups in these links use MySQL as a database whereas we use SQL Server. Maybe this shines some new light on the problem?

    Read the article

  • How to install an SMTP/email server to work with a PHP script?

    - by jiexi
    I have this code:

        $mail->IsSMTP();
        $mail->SMTPAuth = true;
        $mail->SMTPSecure = "ssl";
        $mail->Host = "mail.craze.cc";
        $mail->Port = 465;
        $mail->Username = "username";
        $mail->Password = "pass";
        $mail->SetFrom("[email protected]", "craze.cc");
        $mail->AddReplyTo("[email protected]", "craze.cc");
        $mail->AddAddress($this->email, $this->username);
        $mail->IsHTML(false);
        $mail->Subject = "Activate Your Craze.cc Account";
        $mail->Body = $message;

    How do I configure my postfix/sendmail or whatever server to actually work and send the mail? This has been driving me insane! I've tried numerous times to configure these servers. I just want to be able to send emails via my PHP script... Can someone please link me to a guide to get this all going, or just provide help themselves? Maybe there is an alternative way I can use to send my email in the PHP script? Basically, I need help just getting the emails to send...
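
    Not from the original post, but before touching the PHP side it is worth checking that something is actually accepting SMTPS connections on the host and port the script points at. A minimal sketch, assuming the openssl CLI and a local mail command (e.g. from mailutils) are available; the test address is a placeholder:

        # is anything listening for SMTPS where the script is pointed?
        openssl s_client -connect mail.craze.cc:465 -quiet
        # if a local postfix/sendmail is meant to do the sending instead,
        # a one-line test through the local MTA:
        echo "test body" | mail -s "test subject" someone@example.com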

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is only taken on the first open; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking even more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all). I have a "normal" knowledge of programming, and I think the process of opening the menu for the first time involves some kind of "cache function" that tries to find which apps are present and need to be placed in the menu to be shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it searches for files under "~/.local/share/applications" or anywhere else). Also, I found that hitting "Alt-F2" also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds. So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but I found that most of my icons are already at 25x25. Any other ideas? Maybe loading it in a separate process, or doing it at startup... I don't know. PS: Sorry if this is an awkward question, but I just don't like waiting for things to happen, and I think this process should be smoother than it is now. Also, thanks in advance!

    Read the article

  • Getting old bluetooth headset out of standby on Windows 7

    - by luagh45
    I think this is the problem, and if not, I'm open to suggestions. I have an old Jabra BT200 that I used to use with my phone. When a phone call was coming in it would beep using its own tones (meaning the phone's ring was never heard inside the headset), and then I could push the 'answer/hang up' button and the sound and mic would start working. I have now paired it with Windows 7, and it looks good. Under the playback menu I have 'Bluetooth Hands free Audio / Jabra BT200 (Mono Audio) / Ready', and under the recording menu I have 'Bluetooth Audio Input Device / Jabra BT200 (Mono Audio) / Ready'. However, when I try to test the speakers Windows sends a sound but I never hear it, and when I talk into the mic, Windows never hears me. If I right-click either the Bluetooth mic or speakers there's an option to 'Connect', but it's grayed out and I cannot click it. As a final piece of information, my headset blinks once every 3 seconds when it's in standby, and I can't get that to change. If everything were working it would blink once every second, at which point I think all of my problems would be fixed. Hence my issue: I can't seem to get my headset out of standby. I've tried sending the headset test sounds and then pressing the 'Answer' button, but still nothing. The headset beeps when I press the button, so it works; it just never comes out of standby. Is there maybe some way to trick my headset into thinking it's getting a phone call from my computer?

    Read the article

  • Cannot resolve DNS server's own domain name

    - by sims
    I have a DNS server (mega.dude - 123.123.123.123) running BIND 9.4. When I run:

        dig mega.dude

    I get no answer section. I have "nameserver 123.123.123.123" in /etc/resolv.conf. Here is my zone file:

        $TTL 1W
        @       IN SOA  mega.dude. names.mega.dude. (
                        2009081502      ; serial
                        3H              ; refresh
                        15M             ; retry
                        1W              ; expiry
                        1D )            ; minimum
                NS      ns1
                NS      ns2
                MX      10 mail.mega.dude.
                A       123.123.123.123
        @       A       123.123.123.123
        ns1     A       123.123.123.123
        ns2     A       123.123.123.123
        www     CNAME   @
        mail    A       123.123.123.123

    It didn't use to look like this. I read that it's evil to have an MX record pointing to a CNAME, so I changed that. Then I thought maybe that was also the case for NS, so I changed those too. Still no good. The ports are open. I can't figure it out. Oh, by the way, all the other zones resolve fine, just not the server's own domain. So I know I'm doing something stupid. Thanks for your help, all!
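
    Not part of the original question, but two quick checks usually narrow this down: validate the zone file syntax, then query the server directly with recursion disabled. A sketch, assuming bind9utils is installed; the zone file path is a guess:

        # check the zone file parses the way you think it does (path is hypothetical)
        named-checkzone mega.dude /etc/bind/db.mega.dude

        # ask this server directly, as an authoritative query
        dig @123.123.123.123 mega.dude SOA +norecurse
        dig @123.123.123.123 mega.dude A +norecurse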

    Read the article

  • Firefox crash on first load on Ubuntu Linux on Windows Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (it installed, it rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit, and then everything stops: disk, icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet network. As this is the first Firefox launch since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.)

    Read the article

  • Keep Windows Installer from using largest drive for temporary files

    - by stefan.at.wpf
    By default Windows Installer uses the largest drive for temporary storage, whether or not that's needed (i.e. even when there would be enough space on the system drive). Taken from http://msdn.microsoft.com/en-us/library/aa371372%28VS.85%29.aspx:

        During an administrative installation the installer sets ROOTDRIVE to the first connected network drive it finds that can be written to. If it is not an administrative installation, or if the installer can find no network drives, the installer sets ROOTDRIVE to the local drive that can be written to having the most free space.

    Now my system drive is an SSD, and my largest drive is a RAID array that spins down when it's not in use. Remember the SSD as system drive? Everything is silent now! Until I install something and Windows Installer wakes up my RAID again just to put a small .tmp file on it... How can I prevent Windows Installer from using the largest drive as temporary storage? Can I maybe set some access rights to keep Windows Installer from writing to my RAID drive? Any other ideas? Thank you!
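
    Not in the original question, but since ROOTDRIVE is a public Windows Installer property, it can be overridden per install on the command line; a sketch (the package name is a placeholder, and this only helps for installs you launch yourself):

        msiexec /i somepackage.msi ROOTDRIVE="C:\"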

    Read the article

  • Hardware doesn't work sometimes

    - by Ali
    I have an Asus N51Vf laptop. I bought it not long ago (less than a month) and installed Windows 7 on it. Since it wasn't new when I bought it, I don't know if the problem was occurring before I reinstalled Windows 7. There are two problems that might (or might not) have something to do with each other. The less annoying problem is that the screen flickers ONLY on YouTube, when scrolling and/or pausing/playing a video or moving the cursor over the video. The second, more time-consuming problem is that some of the hardware doesn't work when I turn on the computer. Then I have to reboot the computer and try again. Example: I turn on the computer and the touchpad doesn't work - I then reboot and it works again. This has so far happened to the screen (leading to a black screen), the keyboard and the touchpad. Most frequently the mouse. What have I tried so far? I tried reinstalling the touchpad driver (the Synaptics touchpad driver from the Asus support page) - it didn't help at all. I even tried reinstalling the driver while the non-responding-mouse-on-startup problem was occurring (using the keyboard), but it asked me to reboot and didn't help at all. I also installed (and reinstalled) the graphics driver; it's a GeForce GT 130M. Now, the problem never occurs spontaneously - it only appears when booting my computer. I can even use the computer for heavy tasks (games, multiple programs running, etc.) with no problem at all. I suspect that the problem is in some boot mechanism that fails: maybe the Windows driver loading? A motherboard problem?

    Read the article

  • Windows Server task manager displays much higher memory use than sum of all processes' working sets

    - by Sleepless
    I have a 16 GB Windows Server 2008 x64 machine mostly running SQL Server 2008. The free memory as seen in Task Manager is very low (128 MB at the moment), i.e. about 15.7 GB are used. So far, so good. Now when I try to narrow down the process(es) using the most memory I get confused: none of the processes has more than a 200 MB working set as displayed in the 'Processes' tab of Task Manager. Well, maybe the working set size isn't the relevant counter? To figure that out I used a PowerShell command [1] to sum up each individual property of the process objects in a sort of brute-force approach - surely one of them must add up to the 15.7 GB, right? Turns out none of them does, with the closest being VirtualMemorySize (around 12.7 GB) and PeakVirtualMemorySize (around 14.7 GB). WTF? To put it another way: which of the numerous memory-related process properties is the "correct" one, i.e. the one that counts towards the server's physical memory as displayed in the Task Manager's 'Performance' tab? Thank you all!

    [1]

        $erroractionpreference = "silentlycontinue"
        get-process | gm | where-object {$_.membertype -eq "Property"} |
            foreach-object {$_.name; (get-process | measure-object -sum $_.name).sum / 1MB}

    Read the article

  • Performance monitoring on Linux/Unix

    - by ervingsb
    I run a few Windows servers and some (Debian and Ubuntu) Linux and AIX servers. I would like to continuously monitor performance on these systems in order to easily identify bottlenecks, as well as to have an overview of the general activity on the servers. On Windows, I use Windows Performance Monitor (perfmon) for this. I set up these counters:

    For bottlenecks:
    - Processor utilization: System\Processor Queue Length
    - Memory utilization: Memory\Pages Input/Sec
    - Disk utilization: PhysicalDisk\Current Disk Queue Length\driveletter
    - Network problems: Network Interface\Output Queue Length\nic name

    For general activity:
    - Processor utilization: Processor\% Processor Time_Total
    - Memory utilization: Process\Working Set_Total (or per specific process)
    - Memory utilization: Memory\Available MBytes
    - Disk utilization: PhysicalDisk\Bytes/sec_Total (or per process)
    - Network utilization: Network Interface\Bytes Total/Sec\nic name

    (More information on the choice of these counters at: http://itcookbook.net/blog/windows-perfmon-top-ten-counters )

    This works really well. It allows me to look in one place and identify the most common bottlenecks. So my question is: how can I do something equivalent (or just very similar) on Linux servers? I have looked a bit at nmon (http://www.ibm.com/developerworks/aix/library/au-analyze_aix/), which is a free performance monitoring tool developed for AIX but also available for Linux. However, I am not sure if nmon allows me to set up the above counters. Maybe that is because Linux and AIX do not expose these exact same measures. If so, which ones should I choose and why? If nmon is not the tool to use for this, then what do you recommend?
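
    Not from the original post, but rough command-line equivalents of those counters exist on most Linux systems; a sketch, assuming the sysstat package (iostat/sar) is installed:

        # run queue length and memory paging (the "r" and "si"/"so" columns), sampled every 5 s
        vmstat 5
        # per-disk queue size and utilization
        iostat -dx 5
        # per-interface network throughput
        sar -n DEV 5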

    Read the article

  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded. An ancient FAQ alludes to an iptables -C option which allows one to ask something like, "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this for iptables (but maybe not for ipchains, which it uses in its examples), the -C option seems not to simulate a test packet running through all the rules, but rather to check for the existence of an exactly matching rule. This has little value as a test: I want to assert that the rules have the desired effect, not just that they exist. I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for the effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
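
    Not part of the original question, but one low-tech way to observe a rule's effect without extra VMs is to rely on iptables' per-rule packet counters: zero them, send a single crafted probe, and read them back. A rough sketch, assuming hping3 is installed and the target host name is hypothetical:

        # reset the counters on the FORWARD chain
        iptables -Z FORWARD
        # send one TCP SYN to port 80 of the DMZ webserver (hypothetical host)
        hping3 -c 1 -S -p 80 webserver.dmz.example
        # see which rule's packet/byte counters incremented
        iptables -L FORWARD -v -n -x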

    Read the article

  • Why am I unable to reach local network computers, but able to browse the web?

    - by Igor Zinov'yev
    I have a weird problem. Today, after turning my Ubuntu 9.10 PC on, I can't connect to my local network, but I can use the Internet. We have a single Windows 2003 server machine that acts as the local main DNS server, DHCP server and domain controller. Although it seems to give me my local IP address, I cannot ping it, or any other machine on the net. I have tried all of the below and it didn't help:

    - Rebooting
    - Reconnecting to the network
    - Forcing dhclient to renew the IP address
    - Deleting and creating new connection profiles
    - Plugging my machine into another network outlet

    Maybe it has something to do with routing, because I had tampered with the routing tables the day before, but the tables seem OK to me:

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 vboxnet0
        0.0.0.0         192.168.0.1     0.0.0.0         UG    0      0        0 eth0

    Our LAN uses a D-Link DI-604 router, and it looks to me as if I am connected to the network outside the router. I cannot even access its administration page. Please at least suggest what I can do to solve this.

    P.S. What seems strangest to me is that I can access the PC in question from outside the network by opening a port on the router. I have managed to ssh to it from outside, but I still can't ping anything on the inside.

    P.P.S. Today I tried reinstalling network-manager with the --purge option, but it did no good. After that I created a new DHCP reservation for my PC in order to change my local IP, but that didn't change anything either. My PC is able to get a DHCP offer, but then it's unable to connect to any local computers. I am desperate.
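
    Not from the original post, but given that the default route and DHCP look sane, the next thing worth checking is layer 2: whether the gateway and the Windows server even answer ARP. A short sketch (the server's address is a guess):

        # ping the gateway, then see whether local hosts resolved to MAC addresses
        ping -c 3 192.168.0.1
        arp -n
        # confirm which interface and route a local destination would use (hypothetical server IP)
        ip route get 192.168.0.10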

    Read the article

  • Office 2007 constantly crashes, logged as Event ID 1000

    - by Nori
    I have a user who, despite my best efforts, is having constant Office 2007 crashes. I've tried deleting their profile and setting it up again, repairing Office, uninstalling it completely and then reinstalling, and swapping out memory sticks. One event log error I keep getting is the following (note all the Office errors are event ID 1000):

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: EMSMDB32.DLL, version: 12.0.6539.5000, time stamp: 0x4c1246f8
        Exception code: 0xc0000005
        Fault offset: 0x0005d8e2
        Faulting process id: 0xf6c
        Faulting application start time: 0x01cb6633f33384f3
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\EMSMDB32.DLL
        Report Id: 0d4a2eab-d231-11df-80a0-4061868f5d10

    I also get this:

        Faulting application name: OUTLOOK.EXE, version: 12.0.6539.5000, time stamp: 0x4c12486d
        Faulting module name: olmapi32.dll, version: 12.0.6538.5000, time stamp: 0x4bfc6ad9
        Exception code: 0xc0000005
        Fault offset: 0x002357a9
        Faulting process id: 0x5e4
        Faulting application start time: 0x01cb661f4546aa77
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\OUTLOOK.EXE
        Faulting module path: c:\progra~2\micros~1\office12\olmapi32.dll
        Report Id: a4a90658-d224-11df-80a0-4061868f5d10

    The Excel error is this:

        Faulting application name: EXCEL.EXE, version: 12.0.6535.5002, time stamp: 0x4bd2a7f1
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdbdf
        Exception code: 0xe06d7363
        Fault offset: 0x0000b727
        Faulting process id: 0x14a8
        Faulting application start time: 0x01cb61ab7bc0abab
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\EXCEL.EXE
        Faulting module path: C:\Windows\syswow64\KERNELBASE.dll
        Report Id: ba0c454b-cd9e-11df-80a0-4061868f5d10

    I have also gotten this for PowerPoint:

        Faulting application name: POWERPNT.EXE, version: 12.0.6500.5000, time stamp: 0x49a68f9d
        Faulting module name: COMShim.dll, version: 2010.3.325.110, time stamp: 0x4c51e0b1
        Exception code: 0x40000015
        Fault offset: 0x0001e388
        Faulting process id: 0x1480
        Faulting application start time: 0x01cb5fe9a0660e81
        Faulting application path: C:\Program Files (x86)\Microsoft Office\Office12\POWERPNT.EXE
        Faulting module path: C:\Program Files (x86)\FactSet\COMShim.dll
        Report Id: e03d2a21-cbdc-11df-9bc8-4061868f5d10

    Lastly, I get this error several times a day; I don't think it is related, but maybe it is:

        Failed extract of third-party root list from auto update cab at: http://www.download.windowsupdate.com/msdownload/update/v3/static/trustedr/en/authrootstl.cab with error: A required certificate is not within its validity period when verifying against the current system clock or the timestamp in the signed file.

    Any ideas? This is driving me nuts.

    Read the article

  • Destroyed user account on OS X with dscl; how to restore? [migrated]

    - by Sam Ritchie
    I was trying to create a new user on my OS X Lion machine, and somehow managed to destroy my own user's account. Here are the steps I took; hopefully someone here can recognize what I did, and maybe identify some way around this. First, I ran these commands:

        sudo dscl localhost -create /Local/Default/Users/elasticsearch
        sudo dscl localhost -create /Local/Default/Users/elasticsearch /bin/bash   # mistake!
        sudo dscl localhost -create /Local/Default/Users/elasticsearch UserShell /bin/bash
        sudo dscl localhost -create /Local/Default/Users/elasticsearch RealName "Elastic Search"
        sudo dscl localhost -create /Local/Default/Users/elasticsearch UniqueID 503   # MY uniqueID
        sudo dscl localhost -create /Local/Default/Users/elasticsearch PrimaryGroupID 1000
        sudo dscl localhost -create /Local/Default/Users/elasticsearch NFSHomeDirectory /Local/Users/elasticsearch

    The big mistake I made here was using "503", which was my user's UniqueID. Immediately my shell username changed to "elasticsearch". I fiddled around, tried to change the current user with sudo su -u sritchie, but this didn't work. On restart, only the "Elastic Search" user was available. I logged into the Lion Recovery partition and reset the root password. After logging in as root and checking on the terminal, I made the remarkable discovery that my home folder was totally empty. I deleted the elasticsearch user, but it made no difference. I don't see anything in Deleted Users either. The odd thing is that when I log in now as myself (sritchie) I can see desktop icons with previews. I can even open a few text files from the Downloads folder if I use the dock alias to Downloads. Could this data be hiding somewhere? Any help would be REALLY appreciated! Thanks, Sam
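
    Not in the original post, but before changing anything further it may help to see what the directory now thinks about both records; a couple of read-only dscl queries (run with sudo, usernames as in the post):

        # which records, if any, still claim UniqueID 503?
        sudo dscl . -search /Users UniqueID 503
        # and what is left of the original account record
        sudo dscl . -read /Users/sritchie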

    Read the article

  • Server nearly unusable when doing disk writes

    - by Wikser
    My question closely relates to my last question here on Server Fault. I was copying about 5 GB from a 10-year-old desktop computer to the server. The copy was done in Windows Explorer. In this situation I would have assumed the server to be bored by the data flow, but as usual with this server, it really slowed down. At least I could work with the remote session, even though there was some serious latency. The copy took its time (20 min?). During this time I went to a colleague and he tried to log in to the same server via remote desktop (for some other reason). It took about a minute to get to the login screen, a minute to open Control Panel, a minute to open Performance Monitor... Icons were loading maybe one per second. We saw the following (from memory):

    - CPU: 2%
    - Avg. Queue Length: 50
    - Pages/sec: 115 (?)

    There was no other considerable activity on the server. The server seldom serves some ASP.NET pages, which also became very slow during this time. The relevant configuration is as follows:

    - Windows 2003
    - Seagate ST3500631NS (7200 rpm, 500 GB)
    - LSI MegaRAID based RAID 5, 4 disks, 1 hot spare
    - Write Through, no read-ahead, Direct Cache Mode, hard disk cache off

    Is this normal behaviour for such a configuration? What measurements could give further clues? Is it reasonable to reduce the priority of such copy I/O and favour other processes like the remote desktop? How would you do that? Many thanks!

    Read the article

  • Flowchart for debugging a slow/unresponsive server

    - by davidosomething
    So the server is slow:

    1. Roll back to the previous known working build.
       - Success? Code problem.
       - Fail? Go on.
    2. Ping the IP address.
       - Success? Maybe a DNS problem; go on.
       - Fail? Server or connection problem; go on.
    3. Ping and tracert your domain.com from inside your network.
       - Previous step succeeded but this fails: DNS problem.
       - Success? Go on.
       - Previous step failed and:
         - Fail? Go on; could be you or the network.
         - Success? Go on.
    4. Try it from outside your network (http://centralops.net/co/).
       - Fail? The server's network connection sucks.
       - Success? If it failed inside your network, your network sucks.
    5. Check the server load: CPU/RAM usage. Is it overloaded?
       - Yes: Who's the culprit? Kill some processes/reboot.
       - No? Go on.

    What other steps should I add?

    Read the article

  • apt-get : Size mismatch

    - by Cédric Girard
    I created a private deb repository to distribute a piece of software and its updates to 600 Ubuntu netbooks. Each time the network is connected, my script tries to do an apt-get update. But sometimes (quite often, in fact) I get this:

        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch

    The server is Apache 2.2, HTTPS only. There are no errors in its logs. Here is the script:

        apt-get update
        apt-get dist-upgrade --force-yes --yes

    Here is the complete output of apt-get:

        Ign https://myserver maverick Release.gpg
        Ign https://myserver/ubuntu/ maverick/main Translation-en
        Ign https://myserver maverick Release
        Ign https://myserver maverick/main i386 Packages/DiffIndex
        Ign https://myserver maverick/main i386 Packages
        Ign https://myserver maverick/main i386 Packages
        Hit https://myserver maverick/main i386 Packages
        Reading package lists...
        Reading package lists...
        Building dependency tree...
        Reading state information...
        The following packages will be upgraded:
          majdb utilitaires voosicomat
        3 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
        Need to get 6207kB/6273kB of archives.
        After this operation, 0B of additional disk space will be used.
        WARNING: The following packages cannot be authenticated!
          utilitaires voosicomat majdb
        Get:1 https://myserver/ubuntu/ maverick/main voosicomat all 2.0.1 [4755kB]
        Get:2 https://myserver/ubuntu/ maverick/main majdb all 1.0.17 [1452kB]
        Failed to fetch https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb  Size mismatch
        Fetched 7091kB in 21s (324kB/s)
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?

    Regards,
    Cédric
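
    Not part of the original question, but a size mismatch usually means the byte count advertised by the repository's Packages index differs from what the web server actually delivers; comparing the two on one of the netbooks can show which side is stale. A sketch (the list-file glob is a guess; -k only matters if the certificate is self-signed):

        # what the web server actually serves for the failing URL
        curl -skI https://myserver/ubuntu/dists/maverick/main/binary-i386/voosicomat.deb | grep -i content-length

        # what the downloaded Packages index claims for the same package
        grep -A 20 '^Package: voosicomat' /var/lib/apt/lists/myserver*Packages | grep 'Size:'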

    Read the article

  • How to access Windows Server 2008 R2 file shares from a different subnet

    - by Lloyd Cotten
    We have a couple of servers that used to be Windows Server 2003 that we recently upgraded to Windows Server 2008 R2. A couple of details to set the situation up:

    - We wiped the OS and reinstalled.
    - These servers are on one subnet (172.16.x.x) and we are trying to access some file shares on them from another subnet (10.34.x.x).
    - The firewall is disabled on these servers.
    - We are trying to access them with UNC paths ("\\172.16.x.x\sharename") and with net use \\172.16.x.x.

    However, we're having problems doing this. We are getting "The network path was not found". Here are some of the things we've tried so far, and the results:

    - Tried accessing the share from other (non-2008) servers on the same subnet... Success!
    - Pinged the servers from the different subnet... Success!
    - Telnet connection to port 139 from the different subnet... Success!
    - Took a scan through the Local Security Policies to see if something obvious needed to be enabled / disabled / configured... Fail

    I'm not sure where to look next. I know that the router between the two subnets is locked down pretty well, but this did work for our 2003 servers. Has anything changed in the ports used for UNC / file share access in 2008? Maybe I'm missing some security policy setting? Hoping somebody can take pity on a poor programming guy who can't figure out something really simple. :-) Thanks!
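
    Not from the original post, but modern Windows file sharing runs primarily over direct-hosted SMB on TCP 445 rather than the old NetBIOS session port 139, so it is worth repeating the port test and an explicit-credential connection from the 10.34.x.x side (share name and account are placeholders):

        telnet 172.16.x.x 445
        net use \\172.16.x.x\sharename /user:DOMAIN\someuser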

    Read the article

  • Mac Backup Plan

    - by Chuy77
    I'm reviewing my backup plan and would appreciate any thoughts about what more I should do (if anything) to make sure I'm properly covered in case of all hell breaking loose. :-) I have one machine.

    1. I run a nightly clone with SuperDuper. I alternate the clone drive weekly so I have two clones, one never more than a week old.
    2. I use BackBlaze as a sort of Time Machine in the cloud. It runs all the time and keeps everything on my machine backed up online.
    3. I sync all my 1Password logins, etc. to my iPhone once a week.

    ...And that's it. I feel pretty covered. But I'm always reading stuff like this: http://www.43folders.com/2010/03/15/yes-another-backup-lecture And that doesn't even mention online backup, and seems like a huge pain in the behind. But maybe I'm being naive? Should I have more backups? Thanks for any feedback. I really appreciate it.

    Read the article

  • Why does notepad crash on desktop files in the save-as dialog?

    - by deepc
    Here's a puzzling problem - maybe somebody has an idea, because right now I am out of ideas. On Win7 64-bit, the following crashes Notepad:

    1. On the Desktop, right-click and select "New | Text Document". This creates "New Text Document.txt".
    2. Right-click on that file and select "Edit". This opens Notepad with the empty file.
    3. Select "File | Save as": Notepad crashes and Win7 reports that "Notepad has stopped working".

    Now, move the file to c:\temp and repeat steps 2 and 3: no crash this time, and the save-as dialog appears normally. I can create similar steps for the "open" dialog. Things I have tried:

    - Safe mode - does not work, same problem.
    - Create a new user and try again logged in as that user - no crash.
    - Name the file differently, or create it elsewhere and then move it to the desktop - same problem.
    - Use WordPad instead - same problem.
    - Review shell extensions with ShellExView - nothing extraordinary there.
    - Stare at the event log entries for each of the crashes. Does not enlighten me.
    - At the time of the crash, look at the Process Explorer stack view. It hangs at a function "TaskDialog".
    - sfc.exe /scannow repaired some files, but the problem persists.

    This is what the event log entries look like:

        Log Name:      Application
        Source:        Application Error
        Date:          14.12.2010 00:33:48
        Event ID:      1000
        Task Category: (100)
        Level:         Error
        Keywords:      Classic
        User:          N/A
        Description:
        Faulting application name: NOTEPAD.EXE, version: 6.1.7600.16385, time stamp: 0x4a5bc9b3
        Faulting module name: COMCTL32.dll, version: 6.10.7600.16661, time stamp: 0x4c6f6e4b
        Exception code: 0xc000041d
        Fault offset: 0x00000000000db770
        Faulting process id: 0x198
        Faulting application start time: 0x01cb9b1e140ab92a
        Faulting application path: C:\Windows\system32\NOTEPAD.EXE
        Faulting module path: C:\Windows\WinSxS\amd64_microsoft.windows.common-controls_6595b64144ccf1df_6.0.7600.16661_none_fa62ad231704eab7\COMCTL32.dll

    What else should I try, short of dumping my user and starting over with a new profile? Thanks...

    Read the article

  • Can't log in using second domain controller when first DC is unreachable

    - by rbeier
    Hi, we're a small web development company. Our domain has two DCs: a main one (BEEHIVE, 192.168.3.20) in the datacenter and a second one (SPHERE2, 10.0.66.19) in the office. The office is connected to the datacenter via a VPN. We recently had a brief network outage in the office. During this outage, we weren't able to access the domain from our office machines. I had hoped that they would fail over to the DC in the office, but that didn't happen. So I'm trying to figure out why. I'm not an expert on Active Directory, so maybe I'm missing something obvious. Both domain controllers are running a DNS server. Each office workstation is configured to use the datacenter DC as its primary DNS server and the office DC as its secondary:

        DNS Servers . . . . . . . . . . . : 192.168.3.20
                                            10.0.66.19

    Both DNS servers are working, and both domain controllers are working (at least, I can connect to them both using AD Users + Computers). Here are the SRV records that point to the domain controllers (I've changed the domain name but I've left the rest alone):

        C:\> nslookup
        Default Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        > set type=srv
        > _ldap._tcp.ourcorp.com
        Server:  beehive.ourcorp.com
        Address:  192.168.3.20

        _ldap._tcp.ourcorp.com  SRV service location:
                  priority       = 0
                  weight         = 100
                  port           = 389
                  svr hostname   = beehive.ourcorp.com
        _ldap._tcp.ourcorp.com  SRV service location:
                  priority       = 0
                  weight         = 100
                  port           = 389
                  svr hostname   = sphere2.ourcorp.com
        beehive.ourcorp.com     internet address = 192.168.3.20
        sphere2.ourcorp.com     internet address = 10.0.66.19

    Does anyone have any ideas? Thanks, Richard
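
    Not in the original post, but when this happens again it may be worth asking a workstation which DC the Netlogon service has actually located, and forcing rediscovery. A sketch using nltest (built into Windows 7 / Server 2008; part of the Support Tools on older clients):

        nltest /dsgetdc:ourcorp.com
        nltest /dsgetdc:ourcorp.com /force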

    Read the article

  • How can I override mod-php5's .php mapping to php4-cgi per VirtualHost or Directory?

    - by geocoo
    I am running Debian Linux with apache2 and libapache2-mod-php5 5.3.3-7. I have one VirtualHost which requires PHP 4, so I researched and compiled php4-cgi. However, I cannot seem to:

    - Override mod-php5's mapping of .php in that vhost (or even globally, without disabling PHP completely).
    - Even find where that mapping is made, in the hope of disabling it and enabling mod-php5 or php4-cgi per vhost.

    This is my php4-cgi mapping (inside the one PHP 4 vhost):

        ScriptAlias /php4 /usr/local/php4/bin

        <Directory /usr/local/php4/bin>
            Options +ExecCGI +FollowSymLinks
        </Directory>

        <Directory /www/test>
            AddHandler php4-cgi-script .php
            Action php4-cgi-script /php4/php
            Options +ExecCGI
        </Directory>

    This does not work: mod-php5 still runs all .php files in that vhost/directory. If I change the file extension in the AddHandler above from .php to .php4, then .php4 files do run under php4-cgi as expected, but I can't change all the files in the app to .php4. I thought maybe I could disable mod-php5's mapping in my vhost or directory and then do my CGI config (as above), but many combinations of these in different contexts did not work:

        RemoveHandler .php
        RemoveType .php
        php_flag engine off   (this seems to also disable my php4-cgi, so that won't work)

    The only other place I can find any mapping is in /etc/mime.types, but commenting out the relevant lines and restarting apache2 does not affect mod-php5's .php mapping. I have searched as much as I can; it is now a mystery to me. Any help or direction would be greatly appreciated.
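
    Not part of the original question, but on Debian the places a .php mapping can come from are fairly enumerable, and a quick grep shows which file actually sets it. A sketch (paths are the standard Debian apache2 layout):

        # where is .php wired up to mod_php on this box?
        grep -rn "php" /etc/apache2/mods-enabled/ /etc/apache2/conf.d/ 2>/dev/null
        # confirm which modules are actually loaded
        apache2ctl -M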

    Read the article
