Search Results

Search found 7114 results on 285 pages for 'hope'.

  • Why did my rtorrent build fail?

    - by hugemeow
    I am not root, so I have to build rtorrent from source and install it in my home directory, but the build fails. Why?

        [mirror@hugemeow rtorrent]$ ls
        AUTHORS  autogen.sh  ChangeLog  configure.ac  COPYING  doc  INSTALL
        Makefile.am  NEWS  rak  README  scripts  src  test
        [mirror@hugemeow rtorrent]$ ./autogen.sh
        aclocal...
        aclocal:configure.ac:7: warning: macro `AM_PATH_CPPUNIT' not found in library
        autoheader...
        libtoolize... using libtoolize
        automake...
        configure.ac: installing `./install-sh'
        configure.ac: installing `./missing'
        src/Makefile.am: installing `./depcomp'
        autoconf...
        configure.ac:7: error: possibly undefined macro: AM_PATH_CPPUNIT
              If this token and others are legitimate, please use m4_pattern_allow.
              See the Autoconf documentation.

    Although autogen.sh failed, a configure script was created:

        [mirror@hugemeow rtorrent]$ ls
        aclocal.m4  autogen.sh      ChangeLog     config.h.in   configure     COPYING  doc
        install-sh  Makefile.am     missing       rak           scripts       test
        AUTHORS     autom4te.cache  config.guess  config.sub    configure.ac  depcomp
        INSTALL     ltmain.sh       Makefile.in   NEWS          README        src

    Running configure then fails with a syntax error near the unexpected token `1.9.6'. What is wrong, and what should I do to build rtorrent on my CentOS box?

        [mirror@hugemeow rtorrent]$ ./configure
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... gawk
        checking whether make sets $(MAKE)... yes
        ./configure: line 2016: syntax error near unexpected token `1.9.6'
        ./configure: line 2016: `AM_PATH_CPPUNIT(1.9.6)'
        [mirror@hugemeow rtorrent]$ git branch
        * master
        [mirror@hugemeow rtorrent]$ git branch -a
        * master
          remotes/origin/HEAD -> origin/master
          remotes/origin/c++11
          remotes/origin/master
        [mirror@hugemeow rtorrent]$ git tag
        0.9.0
        0.9.1
        [mirror@hugemeow rtorrent]$ git clean -dfx
        Removing Makefile.in
        Removing aclocal.m4
        Removing autom4te.cache/
        Removing config.guess
        Removing config.h.in
        Removing config.log
        Removing config.sub
        Removing configure
        Removing depcomp
        Removing doc/Makefile.in
        Removing install-sh
        Removing ltmain.sh
        Removing missing
        Removing src/Makefile.in
        Removing src/core/Makefile.in
        Removing src/display/Makefile.in
        Removing src/input/Makefile.in
        Removing src/rpc/Makefile.in
        Removing src/ui/Makefile.in
        Removing src/utils/Makefile.in
        Removing test/Makefile.in

    Edit 1: details about libtoolize and libtool:

        [mirror@hugemeow rtorrent]$ libtoolize --version
        libtoolize (GNU libtool) 1.5.22
        Copyright (C) 2005  Free Software Foundation, Inc.
        This is free software; see the source for copying conditions.  There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
        [mirror@hugemeow rtorrent]$ libtool --version
        ltmain.sh (GNU libtool) 1.5.22 (1.1220.2.365 2005/12/18 22:14:06)
        Copyright (C) 2005  Free Software Foundation, Inc.
        This is free software; see the source for copying conditions.  There is NO
        warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
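
    The error means aclocal cannot find the AM_PATH_CPPUNIT macro, which ships in cppunit's cppunit.m4; without it, the macro call is copied verbatim into configure, where the shell then chokes on `1.9.6'. Below is a minimal non-root sketch: build cppunit under $HOME and point the autotools at it. The download URL and version are assumptions, and rtorrent will also expect libtorrent, which can be installed into the same prefix the same way.

        # Build cppunit into $HOME/local so cppunit.m4 and cppunit-config exist
        wget http://downloads.sourceforge.net/cppunit/cppunit-1.12.1.tar.gz  # hypothetical mirror/version
        tar xzf cppunit-1.12.1.tar.gz && cd cppunit-1.12.1
        ./configure --prefix=$HOME/local && make && make install
        cd ../rtorrent

        # Make the private install visible, then run the autotools by hand so
        # aclocal definitely sees the extra macro directory
        export PATH=$HOME/local/bin:$PATH
        aclocal -I $HOME/local/share/aclocal
        autoheader
        libtoolize --automake
        automake --add-missing
        autoconf

        # Configure and install rtorrent into the home directory as well
        ./configure --prefix=$HOME/local --with-cppunit-prefix=$HOME/local
        make && make install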

  • Skyrim: Heavy Performance Issues after a couple of location changes

    - by Derija
    Okay, I've tried different solutions: ENB Series, removing certain mods, checking my FPS rate, monitoring my resources, .ini tweaks. It all looks fine, and I don't see what I'm missing.

    A couple of days ago, I bought Skyrim. Before I bought the game, I admit I had a pirated copy, because my girlfriend actually wanted to buy me the game as a present, then said she didn't have enough money. Sick of waiting, I decided to buy the game myself. The ridiculous part is, it worked better cracked than it does now uncracked. As the title suggests, after entering and leaving houses a couple of times, my performance drops drastically.

    My build is fine: an Intel i5 quad-core processor and a Gigabyte NVIDIA GTX 560 Ti (factory-overclocked, but manually downclocked to stock settings using the appropriate Gigabyte software, which fixed the crash-to-desktop issues I had before with both Skyrim and BF3), plus 4 GB of RAM. A website about game tweaks suggested that my HDD may be too slow: a screenshot of a Windows Experience Index sample captioned "This is likely to cause issues" showed an HDD score of 5.9, exactly what mine has. So I've been toying with the idea of buying an SSD, loading the games that really need it (like Skyrim) onto it, and hoping it does the trick. Unfortunately, SSDs are expensive compared to "normal" HDDs...

    I'm getting desperate about this. My save is gone, because the patches made it impossible to load saves from the unpatched version, and I had already saved more than 80 times despite being only level 8, just because every time I interact with a door leading to another location I'm scared the performance will drop again. I can't even play for 30 minutes straight any more; it's just no fun at all. I researched for a couple of days before I decided to post my question here. Any help is appreciated; I don't want to regret having bought what is possibly the best game I've ever played. Sincerely.

    P.S.: It probably goes without saying, but of course I'm playing on PC.
    P.P.S.: After monitoring my PC's resources (CPU usage, HDD usage and GPU usage), I don't see any changes, even after the event described above.
    P.P.P.S.: The original question was posted here, where I was advised to ask here instead.

  • Win XP error 0x80041003 using GetObject/winmgmts

    - by John Lewis
    My computer is called "neil" and I want to set some values using WMI in VBScript. I adapted the script below from one supplied by Microsoft. When I run it in my browser I get:

        Error Type: (0x80041003)
        /dressage/30/pdf2.asp, line 8

    I suspect it is some registry/security setting. Any advice? John Lewis

    Full script:

        call Print_HTML_Page("http://neil/dressage/ascii.asp", "ascii")

        Sub SetPDFFile(strPDFFile)
            Const HKEY_LOCAL_MACHINE = &H80000002
            strKeyPath = "SOFTWARE\Dane Prairie Systems\Win2PDF"
            strComputer = "."
            Set objReg = GetObject( _
                "winmgmts:{impersonationLevel=impersonate}!\\" & _
                strComputer & "\root\default:StdRegProv")
            strValueName = "PDFFileName"
            objReg.SetExpandedStringValue HKEY_LOCAL_MACHINE, _
                strKeyPath, strValueName, strPDFFile
        End Sub

        Sub Print_HTML_Page(strPathToPage, strPDFFile)
            SetPDFFile( strPDFFile )
            Set objIE = CreateObject("InternetExplorer.Application")
            'From http://www.tek-tips.com/viewthread.cfm?qid=1092473&page=5
            On Error Resume Next
            strPrintStatus = objIE.QueryStatusWB(6)
            If Err.Number <> 0 Then
                MsgBox "Cannot find a printer. Operation aborted."
                objIE.Quit
                Set objIE = Nothing
                Exit Sub
            End If
            With objIE
                .visible = 0
                .left = 200
                .top = 200
                .height = 400
                .width = 400
                .menubar = 0
                .toolbar = 1
                .statusBar = 0
                .navigate strPathToPage
            End With
            'Wait until IE has finished loading
            Do While objIE.busy
                WScript.Sleep 100
            Loop
            On Error GoTo 0
            objIE.ExecWB 6, 2
            'Wait until IE has finished printing
            WScript.Sleep 2000
            objIE.Quit
            Set objIE = Nothing
        End Sub

    Update: Thanks for your reply. The line breaks seem to have been introduced in the process of pasting into this form. Well spotted: I was using the PDF file name "ascii"; I added a .pdf extension but still get the error. I suspect you're right that it has to do with admin rights.

    Here's more about the setup and what I'm trying to achieve. Win2PDF is a product for writing PDFs that works by simulating a Windows printer: you "print" the page, select Win2PDF in the print dialog, and it asks for a file name. I have it installed on my PC (called "neil") and it works fine in this conventional way. My aim is to write an HTML page to a PDF file with Win2PDF via ASP/VBScript/JavaScript, without manual intervention. The script for doing this was provided by Win2PDF's tech support, but when it did not work, that was the limit of their understanding.

    In the sample script, the file ascii.asp just produces a table of ASCII codes/characters. The URL given is on my own PC, which has IIS set up to run scripts, and it does so fine. The error occurs on about the fourth line executed. I am logged in with full admin rights, I think! But I'm no expert. I hope this helps prompt some more specific suggestions about how to check/fix the admin rights.

  • Losing internet connectivity on server after installing LogMeIn Hamachi (with server set as gateway node)

    - by Kim Jong-Un
    Our domain controller (SBS 2003) completely lost internet and network connectivity yesterday after I remotely installed LogMeIn Hamachi on it and set it to be a gateway node, in an attempt to create a VPN link between the server and a remote site. I had to go in to the office to resolve the problem as, unsurprisingly, my own remote access to the server was also lost. I was only able to restore network connectivity by deleting a virtual network adapter Hamachi created when making the server the gateway node (called the "Hamachi bridge", I believe), then rebooting the server.

    This is a repeatable problem: every time I try to get this to work, it just takes the server offline. Why would this bridge affect regular TCP/IP connectivity on the NIC in this way?

    I have tried a hub-and-spoke configuration between the server and our PC at a remote site (server set as hub, remote site as spoke). This caused no such problems with general internet connectivity, and file transfer worked well between the two computers. However, there was a DNS issue with the VPN between the two sites, resulting in Active Directory not being able to communicate between them (I could not log on with domain user accounts at the remote site unless they were already cached on that machine). I only tried a "gateway" network because LogMeIn support told me:

        If you can get the Active Directory to work it would only be through a
        "Gateway" network type with the server acting as the Gateway Node. You would
        configure the gateway settings on the server in the Hamachi client on that
        machine to push whatever IPs/DNS settings you prefer and at that point AD
        would be able (all things being equal) to communicate to the client node
        when it attaches. We do not have any Active Directory configuration info as
        that's outside the scope of our support. I hope this helps.

    It would be fantastic if I could get Active Directory to work over a Hamachi VPN connection without worrying about the server going offline in this way. Does anyone have any ideas how I should proceed, or any theories as to what is going on when I try to use the "gateway" network type? I want to try to narrow down what is happening here.

  • Where to get laptop replacement parts?

    - by wai
    I am sure some of us have older laptops that still work, except that one or two parts have started failing. Sending one back to the manufacturer for repair might not be worth the money (some of them even charge consultation fees just to have it checked), as these old laptops are usually well beyond warranty, yet replacing those one or two parts could give the laptop a new lease of life.

    For example, I have a Dell Inspiron 640m whose fan recently failed. Everything else still works. I know enough to open up the laptop (after reading the service manual) and put it back together again, so a replacement fan could save it. However, finding the part turned out to be difficult. I gather I wouldn't be able to get it from Dell: I have searched the Dell forums and concluded that to get Dell to even talk to me, I would have to pay (since my warranty has expired). I found this online:

        http://www.parts-people.com/index.php?action=item&id=3725

    However, as I don't live in the United States, shipping alone would cost me three times the price of the part. In addition, if there were a problem, it would be awkward, because I would probably have to mail the part back (costing more money).

    A friend of mine spilled water on her MacBook. She sent it back to Apple, who told her the cost of repairing it exceeded that of buying a new MacBook; they would probably have had to change everything except the LCD screen. I opened it up and found that most of the parts were still working; only the RAM had fried. I replaced it and it's working again. Before discovering that only the RAM had fried, I was toying with the idea of replacing the logic board myself, but it turned out I didn't have to. =) I do know there's a site for used MacBook replacement parts:

        http://www.ifixit.com/

    The question is: does anyone know where to get replacement parts in their own locality? It might benefit the community as a whole. Someone might know a local computer store that sells precisely the things we need (for now, I hope to find a shop that sells just the fan). If you own a laptop for which you had an easy experience finding replacement parts, do post that as well.

    PS: I wish laptop manufacturers had outlets where they sell these replacement parts (or at least let you place an order).

  • Passwordless SSH fails when logging in with username@hostname

    - by Aczire
    I was trying to set up Hadoop and got stuck on passwordless SSH to localhost. I get a password prompt when connecting with the ssh username@hostname format, but there is no problem connecting with plain ssh localhost or ssh hostname.com. I tried ssh-copy-id user@hostname, but it did not work.

    This is CentOS 6.3 as a normal user: I have neither root access nor sudo, so editing files like sshd_config is not possible (I cannot even cat the sshd_config file). I assume login by username should be possible, since I can already log in to localhost without a password, right? Please advise. Here is the ssh debug output:

        [[email protected] ~]$ ssh -v [email protected]
        OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to hostname.com [::1] port 22.
        debug1: Connection established.
        debug1: identity file /home/user/.ssh/identity type -1
        debug1: identity file /home/user/.ssh/id_rsa type -1
        debug1: identity file /home/user/.ssh/id_dsa type 2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
        debug1: match: OpenSSH_5.3 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.3
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'hostname.com' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-keyex,gssapi-with-mic,password
        debug1: Next authentication method: gssapi-keyex
        debug1: No valid Key exchange context
        debug1: Next authentication method: gssapi-with-mic
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Credentials cache file '/tmp/krb5cc_500' not found
        debug1: Unspecified GSS failure.  Minor code may provide more information
        Credentials cache file '/tmp/krb5cc_500' not found
        debug1: Unspecified GSS failure.  Minor code may provide more information
        debug1: Unspecified GSS failure.  Minor code may provide more information
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/user/.ssh/id_dsa
        debug1: Server accepts key: pkalg ssh-dss blen 434
        Agent admitted failure to sign using the key.
        debug1: Trying private key: /home/user/.ssh/identity
        debug1: Trying private key: /home/user/.ssh/id_rsa
        debug1: Next authentication method: password
        [email protected]'s password:
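
    For what it's worth, the debug output above pinpoints the failure: the server accepts the key ("Server accepts key: pkalg ssh-dss"), but the local agent refuses to sign with it ("Agent admitted failure to sign using the key"). A minimal sketch of the usual user-side fixes, none of which require root:

        # Load the DSA key into the running agent (or start one first)
        eval "$(ssh-agent -s)"   # only if no agent is running
        ssh-add ~/.ssh/id_dsa

        # sshd's StrictModes also silently ignores keys when permissions are
        # too open, so tighten them as well
        chmod go-w ~
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_dsa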

  • HAProxy causing delay

    - by user1221444
    I am trying to configure HAProxy to load-balance a custom web server I created. Right now I am noticing a delay through HAProxy that grows with the size of the response. For example, I ran four different tests, with these results:

        15 kB response through HAProxy:   avg. response time 0.34 s,  763 trans/sec, 11.08 MB/sec
        2 kB response through HAProxy:    avg. response time 0.08 s, 1171 trans/sec,  2.51 MB/sec
        15 kB response direct to server:  avg. response time 0.11 s, 1046 trans/sec, 15.20 MB/sec
        2 kB response direct to server:   avg. response time 0.05 s, 1158 trans/sec,  2.48 MB/sec

    All transactions are HTTP requests. As you can see, the gap between proxied and direct response times is much bigger when the response is bigger. I understand there will be a slight delay when using HAProxy. Not sure if it matters, but the tests were run using siege, and during the test there was only one server behind HAProxy (the same one used in the direct-to-server tests). Here is my haproxy.config file:

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 10000
            user haproxy
            group haproxy
            daemon
            #debug

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            option httpclose
            maxconn 10000
            contimeout 10000
            clitimeout 50000
            srvtimeout 50000
            balance roundrobin
            stats enable
            stats uri /stats

        listen lb1 10.1.10.26:80
            maxconn 10000
            server app1 10.1.10.200:8080 maxconn 5000

    I couldn't find much in this file in the way of options that would address my problem. I have heard suggestions that I may have to adjust a few sysctl settings, but I could not find much information on that: most documentation covers sysctl tuning for Linux 2.4 and 2.6, and I am running 3.2 (Ubuntu Server 12.04), which seems to auto-tune, so I have no clue what I should or shouldn't be changing. Most of the changes I tried had no effect, or a negative effect, on performance.

    Just a note: this is a very preliminary test, and my hope is that at deployment time this HAProxy will balance 10k-20k requests/sec across many servers, so any information that helps me reach that goal would be much appreciated. Thank you very much for anything you can provide, and if you need more information from me, please let me know.
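
    Two things often worth trying here, offered as assumptions rather than a diagnosis. First, option httpclose forces a full connection set-up and tear-down per request; if your HAProxy version is 1.4 or later, option http-server-close may behave better under siege. Second, a few classic network sysctls, with illustrative values that are starting points rather than tuned numbers:

        # Let the proxy reuse TIME_WAIT sockets for new outbound connections
        sysctl -w net.ipv4.tcp_tw_reuse=1
        # Deeper accept and device backlogs for bursty benchmark traffic
        sysctl -w net.core.somaxconn=10240
        sysctl -w net.core.netdev_max_backlog=10240
        # More ephemeral ports for the haproxy-to-backend side
        sysctl -w net.ipv4.ip_local_port_range="1024 65535"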

  • Changing the default data directory to an external NAS, or adding an external share

    - by Hagbart Celine
    I have been searching the web for days looking for a solution to my problem, and now my only hope is you guys.

    I have installed ownCloud successfully on my Windows Server 2008 R2 machine. It runs smoothly and I can connect without problems, so the first checks are OK. Next I wanted to move the default data directory from the server to a shared folder on my NAS (Synology DS1813+, DSM 5.0-4493 Update 3). I tried the following.

    Changing the directory in config.php: I changed the path in the config file from "C:\inetpub\wwwroot\myfolder\data" to "\\NASIP\cloud". After doing this, the ownCloud server only shows (translated from the German):

        Data directory (\\192.168.2.4\Cloud\data) is invalid.
        Please make sure the data directory contains a file ".ocdata" in its root.

    I also tried copying the files that had been created in the local data directory to the share on the NAS. No luck. Then I tried mapping a network drive and using that in config.php, but still no luck: I get the same message about the missing .ocdata file.

    Next I tried the "External Storage" app that comes with ownCloud, thinking that at least I could add the share as external storage, but this does not work either. I tried a UNC path and a mapped drive name (Z:), but nothing helped.

    So now I'm turning to you. Does anyone have experience with this kind of setup? Can you tell me how to make it work (default or external storage, I don't care any more)? Setup: NAS (Synology DS1813+, DSM 5.0-4493 Update 3), ownCloud 7, Windows Server 2008 R2, IIS7.

    I got an answer on another forum:

        The second option is how it should be done:
        1. Put OC in maintenance mode
        2. Mount (mapping in the Windows world) your NAS directly to your OS
        3. Copy the local data directory to the NAS mount
        4. Ensure the permission is set up to give the web user access to the NAS mount
        5. Update OC config.php with the new data path
        6. Disable OC maintenance mode

    And this seems like the right way. "Ensure the permission is set up to give the web user access to the NAS mount" is where I am not sure. Which user on my server is actually making the requests to the NAS? If the user is, for example, IUSR, can I just create an account on my Synology NAS and give it full access to my share? (But what is IUSR's password?) I have full root SSH access to my NAS, so if you can tell me what chmod or chown I need to use on my cloud folder...
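
    One hedged sketch for the NAS side, since there is root SSH access. Assuming the share lives under /volume1/Cloud and that a dedicated NAS account (here called "clouduser", a name invented for this example) is the one Windows will authenticate as when mapping the share, the missing .ocdata marker and the ownership could be set up like this:

        # On the Synology, over SSH; paths and the account name are assumptions
        mkdir -p /volume1/Cloud/data
        touch /volume1/Cloud/data/.ocdata

        # Give the account that ownCloud connects as full rights to the tree
        chown -R clouduser:users /volume1/Cloud/data
        chmod -R 770 /volume1/Cloud/data

    On the Windows side, the IIS application pool runs as a specific identity (ApplicationPoolIdentity by default, which has no password you can reuse on the NAS), so the usual approach is to run the app pool as a real account and map the share with the "clouduser" credentials. Treat that as a direction to investigate rather than a recipe.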

  • VPN (PPTP) connection unable to pass through Linux iptables

    - by user221844
    I have set up a Windows VPN server behind a Linux (Ubuntu) box that acts as firewall and proxy server. Now I want people outside to be able to connect to the VPN server, but the connection is never established and the client gets error 619. From what I've read, this looks like a firewall issue. What should I do to let the connection through the firewall?

    Here is the information about my setup:

        Firewall-External-IF-IP: 172.16.1.100
        Firewall-LAN-IF-IP:      192.168.1.1
        VPN-Server-IP:           192.168.1.10

    And below is my iptables file content:

        # Generated by iptables-save v1.4.12 on Thu May 29 12:40:18 2014
        *filter
        :INPUT ACCEPT [162000:140437619]
        :FORWARD ACCEPT [23282:27196133]
        :OUTPUT ACCEPT [185778:143961739]
        :LOGGING - [0:0]
        -A INPUT -p gre -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p tcp -m tcp --sport 1723 -j ACCEPT
        -A INPUT -s 192.168.1.10/32 -p udp -m udp --sport 1723 -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -o EXT_IF -j ACCEPT
        -A FORWARD -s 192.168.1.0/24 -i EXT_IF -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p tcp -m tcp --dport 1723 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p tcp -m tcp --sport 1723 -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -d 192.168.1.10/32 -i EXT_IF -o INT_IF -p gre -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        -A FORWARD -s 192.168.1.10/32 -i INT_IF -o EXT_IF -p gre -m state --state RELATED,ESTABLISHED -j ACCEPT
        -A OUTPUT -p gre -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p tcp -m tcp --dport 1723 -j ACCEPT
        -A OUTPUT -d 192.168.1.10/32 -p udp -m udp --dport 1723 -j ACCEPT
        COMMIT

        *nat
        :PREROUTING ACCEPT [17865:1053739]
        :INPUT ACCEPT [5490:357281]
        :OUTPUT ACCEPT [3723:223677]
        :POSTROUTING ACCEPT [3726:223870]
        -A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p tcp -m tcp --dport 1723 -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -d 172.16.1.100/32 -i EXT_IF -p gre -j DNAT --to-destination 192.168.1.10
        -A PREROUTING -i -h
        -A POSTROUTING -s 192.168.1.0/24 -o EXT_IF -j MASQUERADE
        COMMIT

        *mangle
        :PREROUTING ACCEPT [22695965:17811993005]
        :INPUT ACCEPT [13818180:11522330171]
        :FORWARD ACCEPT [8527694:6271564562]
        :OUTPUT ACCEPT [14748508:11899678536]
        :POSTROUTING ACCEPT [23271280:18170828012]
        COMMIT
        # Completed on Thu May 29 12:40:18 2014

    I hope I can find the solution here...!! :(
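
    One thing worth checking beyond the rules themselves: forwarding PPTP through NAT needs the GRE/PPTP connection-tracking helpers in the kernel, and error 619 is the classic symptom when they are missing. A small sketch (module names differ between kernel generations, so treat these as candidates to try):

        # Make sure the kernel forwards at all
        sysctl -w net.ipv4.ip_forward=1

        # Load the PPTP conntrack/NAT helpers (newer name first, older fallback)
        modprobe nf_conntrack_pptp || modprobe ip_conntrack_pptp
        modprobe nf_nat_pptp       || modprobe ip_nat_pptp

        # Verify they are loaded
        lsmod | grep -i pptp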

  • Is Gmail Being Blocked by my ISP?

    - by james
    EDIT: I thought I had pinpointed the problem. Just now I tried to go to the Firefox add-ons page, which uses HTTPS, and Gmail also uses HTTPS, so I concluded I was unable to load HTTPS pages on this computer. But then I went to a bank site that uses HTTPS and it loads just fine. Sigh...

    I asked this over at Super User but they weren't able to help, so I was hoping the sysadmins here would be able to advise as to what's wrong. Although the issue here is with a PC and not a server, it still deals with networking, so I hope it's not too irrelevant.

    The issue: I have a desktop on which I cannot access Gmail, and also YouTube sign-in (I believe that since YouTube is owned by Google, they both use the same sign-in system). On other computers that use the same connection via a wireless router, I can access both Gmail and YouTube sign-in just fine. On this computer, which doesn't have a wireless card (so I have to connect via Ethernet cable, through a USB converter, since the Ethernet port doesn't work any more), I can access all sites and services, including things like AOL and Hotmail. But when it comes to Gmail, I get complete and utter throttling. I even turned off my AV and firewall momentarily; no luck. The Gmail login page starts to load and at about the midpoint it just stays there, loading and loading and loading... it never ends.

    I've tried everything. I reset the modem and router multiple times. I reinstalled my operating system, going from Vista to Windows 7, hoping a complete reinstall would solve the issue, but no luck. And yes, I am going to call my ISP, but to cancel them rather than to solve this issue; I want to upgrade from DSL to cable anyway. I didn't mention my ISP's name because I'm not sure whether that is within the rules (if it's OK, someone let me know and I will).

    P.S.: This all happened in a single day; before that, Gmail was perfectly accessible on this computer. I can't remember anything special happening that day. The only thing I can think of is that my ISP, or Google itself, is blocking this computer based on its MAC address, but I don't know if that's even done.

    Additional info:
    PC: Windows 7 Ultimate 32-bit
    Connection type: DSL
    Connecting medium: Ethernet cable via USB converter

    I should mention that I can access Gmail and YouTube just fine through an IP proxy service.

  • How to get more information from a system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas for how to confirm any diagnosis.

    Some background: the servers are DL160-class machines with hardware RAID across two disks. They run a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 most CPU-hungry "main" processes are bound to one core each via CPU affinity; other random background scripts are not pinned anywhere. The filesystem writes ~1.5k blocks/s continuously (over 2k blocks/s at peak). Normal CPU usage for these servers is ~60% on 7 cores, with minimal usage on the last one (whatever is running on shells, usually).

    What actually happens is that at some point the "main" services start using 100% CPU, stuck mainly in kernel time. After a couple of seconds the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel report a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks hang on the kernel's gettsc instead, which would indicate some generally weird hardware behaviour. The box also runs MySQL (almost read-only, 99% cache hit rate), which seems to spawn many more threads during the system problems. During the day it does ~200k queries/s (selects) and ~10 queries/s (writes).

    The host never runs out of memory or swaps, and no OOM reports are seen. We've got many boxes with similar or identical hardware and they all seem to behave this way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't report anything wrong while they're running, and I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?
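
    Since the box dies before anything can be collected, the usual next step is to arrange for the kernel itself to report state even when userspace is gone. A sketch of three complementary options (the interface name, IPs and MAC below are placeholders for your network):

        # 1. Magic SysRq: with a console or iLO attached, dump task state even
        #    at load average 400
        echo 1 > /proc/sys/kernel/sysrq
        # then Alt+SysRq+w (blocked tasks), +t (all tasks), +l (CPU backtraces,
        # on kernels that support it)

        # 2. netconsole: stream kernel messages to another host over UDP, so
        #    the hung-task traces survive the lockup
        modprobe netconsole netconsole=6665@10.0.0.2/eth0,6666@10.0.0.1/00:11:22:33:44:55
        # on the receiving host: nc -u -l 6666 | tee kernel.log

        # 3. kdump: leaves a vmcore to analyse after the reboot, if the
        #    packages are available for your Debian release
        apt-get install kdump-tools crash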

  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and then played back, the audio stutters

    - by stakx
    I have several DVDs with short documentaries on them. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back those short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem, namely stuttering audio.

    Problem description: when I play back these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine: smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol), I get stuttering audio: playback sounds like a machine gun for about a third of a second, approximately every 8 seconds.

    I have tried converting (i.e. ripping) the VOB files from the DVD to another format, but that resulted in a noticeable downgrade in picture quality, so I thought it best to keep the files in their original format if possible. Still, I suspect that the stuttering audio is due to some (de)muxing problem and that changing the file format might help. (After all, video playback is fine, so I don't think the hardware is too slow for playback.) The only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. I hope someone can help me, or give me further pointers on things I could try to get HDD playback working without the problem described.

    Some things I've tried so far, without any success:

    - VOB2MPG, to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content; no re-encoding takes place at all.
    - Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all the possible settings combinations.
    - Disabling all plug-ins, equalizers, etc. in Windows Media Player.
    - Disabling all hardware acceleration on the audio playback device.

    Further info on the VOB files I'm trying to play back: the video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second; the sound stream is uncompressed PCM, 16-bit stereo @ 48 kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as MP3? If so, how would I do this without changing the video stream? See the sketch below.)

    P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer, by the way, but the video playback quality was surprisingly bad.)
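
    That last parenthetical question is exactly what stream copying is for: re-encode only the audio track and pass the video through untouched, so picture quality is unchanged. A sketch with ffmpeg (my suggestion, not a tool from the question; file names are placeholders):

        # Keep the MPEG-2 video as-is; convert the ~1.5 Mbit/s LPCM track to AC-3
        ffmpeg -i VTS_01_1.VOB -c:v copy -c:a ac3 -b:a 448k documentary.vob

    If WMP's MPEG-2/AC-3 codecs are present (they are needed for DVD playback anyway), the resulting file should demux with far less effort, because the audio stream drops from roughly 1.5 Mbit/s uncompressed to 448 kbit/s.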

  • Problems with Windows XP Plug and Play devices, maybe relating to MSVCR71.dll

    - by Richard
    I believe this question is unanswered as of now, so I apologize if I've overlooked it. I have been having trouble with some external devices on Windows recently, and I'm trying my darnedest to get to the bottom of it.

    At first, my Zero Tension USB mouse would stop working: the laser in the bottom of it would be on and would register movement, but the pointer on the screen wouldn't budge an inch. Initially this would happen randomly and then correct itself. As time went on, it became more and more frequent. At some point, the computer would make the "doo doo" sound of plugging or unplugging a USB device whenever the mouse stopped or started working. I dealt with it for a while, and usually if I rebooted the machine the mouse would work again for a day or two. Eventually the computer stopped recognizing the mouse AT ALL. I have another mouse that works with this computer just fine and cannot figure out why the Zero Tension mouse has failed. I tried plugging it into my Mac and, lo and behold, it works without hesitation and never stops on me... Needless to say, I am stumped.

    I figured that since I had another mouse I could live without the fancy one for now... until my speakers stopped being recognized. I have a set of Logitech speakers plugged into my sound card. Again, every now and again the audio devices would cease to be recognized, but a reboot would fix the problem. Now my speakers do not work with my computer at all, and I feel it's time to ask for help. My computer seems to be having a neural shutdown: I can plug in devices and the computer doesn't seem to notice anything wrong, but none of the devices work. I hope this doesn't get any worse! Please help!

    Also, on a potentially (un)related note, when I start up my machine I get the message "This application has failed to start because Msvcp71.dll was not found. Re-installing the application may fix this problem." in reference to qbupdate.exe. I don't know if that DLL being messed up has anything to do with my mouse or speakers, but I figure it might. Anyway, thanks in advance for an answer, and let me know if I need to clarify anything.

    Let me sum up:

    - Zero Tension mouse gradually stopped working
    - Logitech speakers gradually stopped working
    - MSVCP71.dll seems to be missing (qbupdate.exe complains at startup)

    I don't know if any of these are related, but any help would be much appreciated.

  • How should I convert a physical drive to a VHD for use with VirtualPC?

    - by RBerteig
    I have the hard disks from a PC that was happily running Windows Me until it suffered an unknown hardware failure. The drives are intact and can be mounted and read on other PCs. We have data backups, but there is licensed software installed that may not be possible to migrate to newer versions on a more modern platform, which makes the idea of just booting a virtual image attractive.

    Is it possible to make VHDs from the drives such that I can boot them in VirtualPC? If not VirtualPC, would it be possible in any other virtualization tool?

    Edit: some more details... The system was running Windows Me, upgraded from Windows 95 (or possibly 98). It can't have been more than a Pentium II, but I will have to look at the motherboard to confirm that. There were no "exotic" devices installed, and nothing beyond the usual legacy stuff that would need to survive into a virtual machine. The licensed software did not have a dongle, so I won't need to worry about virtualizing a physical dongle of some kind; licenses were probably tied to the disk serial number. There were two IDE hard disks: the boot disk is about 6 GB, and the spare data disk is 12 GB, but nearly empty.

    I have a small bias in favor of VirtualPC, just because it's free and I've used it successfully in the past, but this is a good excuse to revisit the state of the art. I do know from direct experience that it is possible to install and boot DOS 5.0 and Win95 in VirtualPC, though the VM extensions weren't available, so the experience isn't as seamless as I would have liked. A very old DirectX game that failed miserably under XP SP2 runs really nicely in that VM, and actually plays better in a lot of ways than it did on period hardware, so that gives me hope that this is possible.

    Edit 2: Well, I'm closer than I was when I asked... so thanks to all for the helpful suggestions and hints about what I should be trying. I used WinImage to copy the disks, and VirtualPC 2007 to attempt to boot. So far, I have it booting in safe mode, but hanging with a black screen otherwise. I strongly suspect the culprit is the still-installed copy of Artisoft LANtastic 8.0 (anyone else remember them?), used for networking with even older PCs that mostly don't exist any more. In my infinite free time, I will try to resolve the differences between a safe-mode boot and a normal boot, and I feel it is likely to yield to pressure. I'd accept more than one answer if I could... this isn't as black-and-white a question as the one-accepted-answer convention assumes.
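
    For the record, one other route besides WinImage, offered as a sketch rather than a definitive procedure: since the disks can be mounted on another PC, a raw image can be taken with dd (e.g. from a Linux live CD) and converted with qemu-img, whose "vpc" output format is the VHD container VirtualPC uses. The device and file names below are placeholders:

        # Image the old IDE disk sector by sector, tolerating read errors
        dd if=/dev/sdb of=winme.img bs=1M conv=noerror,sync

        # Convert the raw image to a VHD that VirtualPC can attach
        qemu-img convert -f raw -O vpc winme.img winme.vhd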

  • Lag spikes at full CPU usage, maybe video card

    - by Roberts
    I am posting this in a hurry, so a few things may be missed (I will update tomorrow). My PC specs:

        Motherboard:  Gigabyte GA-945PL-S3
        CPU:          Dual-core Intel Core 2 Duo E4300, 1800 MHz (9 x 200)
        OS:           Microsoft Windows 7 Ultimate, 32-bit, version 6.1.7601

    I bought a new video card one month ago, a GeForce 210, and I didn't have any problems with it. Then I wanted to overclock it, in other words "play with it". So I installed Gigabyte EasyBoost from the CD and overclocked the GPU by +110 MHz over its 590 MHz clock, and the memory from 800 MHz to its maximum of 960 MHz. Benchmarks showed a slightly higher score. Then I overclocked the shader clock from 1405 MHz to... (I don't really remember). I was playing Modern Warfare 2 when, all of a sudden, the computer froze as I was about to select a team (I had been AFK just before that). I had to reset the CMOS. After that I had problems with Skype: unread messages and no sound. Then I figured out that whenever I open EasyBoost, Skype starts to glitch again, so now I use EVGA Precision X instead.

    Now, a month later, I cleaned the computer and closed the case (it had been open the whole time). I started overclocking the GPU clock only, and just a bit, because no problems stopped me. Sometimes, under heavy CPU load, the graphics start to lag; dragging a window is painful to watch. Sometimes the screen freezes for 5 to 10 seconds (during which I can see that hard disk activity is at its maximum). You might say the CPU is at fault, but sometimes the lag spikes start randomly while CPU load is at maximum. All three benchmark programs (PerformanceTest, NovaBench and MSI Kombustor) show that the performance of my video card has dropped by about 25%. BUT! The CPU score is lower, too. I ignored these problems, but when I refreshed the Windows Experience Index I was shocked. (The screenshots I had here showed the index from a month before, in Latvian but not hard to understand, then after a RAM upgrade, and then what happened when I tried to capture Minecraft with Fraps with the GPU underclocked to 580 MHz from the default 590 MHz.)

    All drivers are up to date. Average CPU temperature runs from 55°C to 75°C (the lag spikes sometimes start at around 70°C). The video card's temperatures run from 45°C to 60°C (it is very hard to reach 60°C). So my hope is that the video card is fine, because it is very new, and I want to upgrade the CPU anyway. Apologies for my vocabulary mistakes; I am trying to type this as fast as I can.

  • UAE and the mysteries of unreachable websites

    - by 0plus1
    I write here because I'm really lost. Please bear with me, because this is not easy to explain.

    A company asked me to set up a private server. I'm a programmer, so I got a solution with technical support and cPanel, which helped me set everything up, and it runs smoothly. I'm by no means a professional sysadmin, but I have fair knowledge of server configuration. This problem, however, is way over my knowledge, and apparently way over the knowledge of most sysadmins, so I really hope that here I'll find someone with enough experience to help me, or at least give me more insight.

    The company I'm consulting for operates in the UAE (United Arab Emirates), and from there the server is almost unreachable. It started with the nameservers not registering in the UAE. After a week that sorted itself out, and now the site is indeed reachable, but it takes almost two minutes to load a web page containing one line of text, and emails time out. The domain currently parked there was bought expressly for tests; the main domain that was supposed to go there was transferred, after a catastrophic week, to a shared hosting solution in the UK, and from there it works like a charm.

    After doing some research I discovered that I'm not alone in this: there are several reports of webmasters discovering that their websites are unreachable inside the UAE. Mind you, this has nothing to do with the state-wide block of questionable sites, because in that case an error message appears. It seems instead to be related to the UAE's infrastructure, which apparently reroutes everything through its own "fake" internet. Apparently new servers with their own IPs are not (yet?) recognized by the UAE infrastructure, while shared hosting solutions, which serve tons of other websites, are more likely to be treated as part of the UAE-visible network.

    Now my questions (a test sketch follows the list):

    1) Does someone have a real explanation for this? The only thing I can think of is that the server is on a new IP that is not yet recognized by the UAE, but that doesn't explain why it loads at all (even if it takes two minutes). I have no help from within the UAE, as the only local "experts" are questionable companies that simply try to sell their own services.

    2) If there really is some kind of block on new servers, is it possible to know in advance whether a server will be reachable from within the UAE? Currently this is not a nameserver problem, as even accessing the server by its IP results in the two-minute wait.

    3) Could the problem lie somewhere else? Are there tests I can perform? I'm not physically in the UAE, but I can ask the people there, or use TeamViewer. Could it be some misconfiguration on the server? (Mind that the site works EVERYWHERE else in the world.)

    Thank you for ANY kind of help.
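
    On question 3, two quick tests someone inside the UAE could run (from any machine there with these tools installed, or over TeamViewer) would separate a routing problem from a server problem. The domain below is a placeholder for the parked test domain:

        # Where along the path do latency or packet loss begin?
        mtr --report --report-cycles 50 example.com

        # Which phase is slow: DNS, TCP connect, or first byte from the server?
        curl -o /dev/null -s -w 'dns=%{time_namelookup}s connect=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n' http://example.com/

    If the connect time is already huge, the delay is in the network path (consistent with the rerouting theory); if only the time to first byte is slow, the server itself, or something like its reverse DNS lookups, deserves a closer look.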

  • VBA + Polymorphism: Override worksheet functions from 3rd party

    - by phi
    My company makes extensive use of a data provider through a (closed-source) VBA plugin. In principle, every query follows a certain structure: fill one cell with a formula whose arguments specify the query; the range of that formula is then extended (it is not an array formula!) and the cells below/right of it are filled with data. For this to work, however, a user has to have a terminal program installed on the machine, as well as a COM plugin referenced in VBA/Excel.

    My problem: these Excel sheets are used and extended by multiple users, and not all of them have access to the data provider. While they can open the sheet, it will recalculate and the data will be gone, yet frequent recalculation is required. I would like every user to be able to use the sheets, with only a very specific set of formulas excluded from execution.

    Attempts so far:

    - Remove the reference on those computers where I do not have terminal access. This produces a NAME error in the cell containing the query (acceptable), but the query overwrites parts of the data (not acceptable). If you allow the program to refresh, all data is gone after a failed query.
    - Replace all formulas with their plain-text results in the respective cells (press a button and loop over every cell...). This obviously destroys the refresh capability the queries offer for all subsequent users, so that's pretty bad too.
    - A theoretical idea, which I'm not sure how to implement: replace the functions offered by the plugin with something that is either called first (and relays the query through to the original function, if that is available) or called instead of the original function (by deploying the replacement only on non-terminal machines), and which just returns the original cell value.

    More specifically, if my query function is used like this:

        =GETALLDATA(Startdate, Enddate, Stockticker, etc)

    I would like to transparently swap the function behind the call. Do you see any hope, or am I lost? I appreciate your help.

    PS: Of course I'm talking about Bloomberg...

    Some additional points to clarify issues raised by Frank:

    - The formulas in the sheets may not be changed. This is mission-critical software, and it is far too complex for any sane person to try to touch it.
    - Only Excel and VBA may be used (which is the reason for the previous point...).
    - It would be sufficient to prevent execution of these few specific formulas/functions on a specific machine, for all Excel sheets to come.

    This looks more and more like a problem for Stack Overflow ;-)

  • Problem with =?UTF-8?B??= subjects in emails sent via PHP mail()

    - by Camran
    I have a website, and in the "Contact" section there is a form which users may fill in to contact me. It is a simple form whose action is a PHP page. The PHP code:

        $to = "[email protected]";
        $name = $_POST['name'];        // sender name
        $email = $_POST['email'];      // sender email
        $tel = $_POST['tel'];          // sender telephone number
        $subject = $_POST['subject'];  // subject, CHOSEN FROM A DROP-LIST, ALL TESTED
        $text = $_POST['text'];        // message from the sender

        $text .= "\n\nTel:" . $tel;    // appended so the sender's number shows at the bottom of the message

        $headers = "MIME-Version: 1.0" . "\n";
        $headers .= "Content-type: text/plain; charset=UTF-8" . "\n";
        $headers .= "From: $name <$email>" . "\n";

        mail($to, '=?UTF-8?B?' . base64_encode($subject) . '?=', $text, $headers, '[email protected]');

    Could somebody please tell me why this works most of the time, but sometimes I receive an email with no text and the subject line showing =?UTF-8?B??= ?

    I use Outlook Express, and I have read http://stackoverflow.com/questions/454833/system-net-mail-and-utf-8bxxxxx-headers, but it didn't help. The problem is not in Outlook, because when I log in to the actual webmail program where I fetch the POP3 messages, the email looks the same. When I right-click in Outlook and choose "message source", there is no "From" information. For example, a good message looks like this:

        Subject: =?UTF-8?B?w5Z2cmlndA==?=
        MIME-Version: 1.0
        Content-type: text/plain; charset=UTF-8
        From: John Doe

    However, the problematic ones look like this:

        Subject: =?UTF-8?B??=
        MIME-Version: 1.0
        Content-type: text/plain; charset=UTF-8
        From:

    As if the information had been lost somewhere. You should also know that I have a VPS, which I manage myself, and that I use Postfix as the mail server, if that has anything to do with it. But then again, why does it work sometimes?

    Another thing I have noticed is that sometimes special characters are not shown correctly (by both Outlook and the webmail). For instance, the Swedish name "Björkman" is sometimes shown as "BjÃ¶rkman", but again, only sometimes.

    I hope somebody knows something about this problem, because it is very hard for me to track down. If you need more input, let me know.

  • 10.6.4 Apple Wiki: newly created users can do nothing?

    - by beefon
    Hello,

    After updating to 10.6.4 there is an issue: any new users that I create in Server Preferences/Workgroup Manager can't post to their blogs, comment, or create wiki pages. They can't do anything! Here is the log of the wiki errors (when the user DURAK tries to create a new blog entry):

        [HTTPChannel,5,127.0.0.1] Traceback (most recent call last):
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 126, in process
        [HTTPChannel,5,127.0.0.1]     self.render(resrc)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/web/server.py", line 133, in render
        [HTTPChannel,5,127.0.0.1]     body = resrc.render(self)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_xmlrpc_server/WebAppServer.py", line 90, in render
        [HTTPChannel,5,127.0.0.1]     d = defer.maybeDeferred(function, *args)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/caldavd/lib/python/twisted/internet/defer.py", line 104, in maybeDeferred
        [HTTPChannel,5,127.0.0.1]     result = f(*args, **kw)
        [HTTPChannel,5,127.0.0.1] --- <exception caught here> ---
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_xmlrpc_server/ContentServiceBase.py", line 121, in xmlrpc_addEntry
        [HTTPChannel,5,127.0.0.1]     aPage = ContentEntry.newBundleBasedContentEntry (path = path, content = content, author = author, title = title, uid = uid, type = kind, versioned = self.versioned, templateName = template)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 794, in newBundleBasedContentEntry
        [HTTPChannel,5,127.0.0.1]     aPage.save('First created', 'created')
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_wlt/ContentEntry.py", line 445, in save
        [HTTPChannel,5,127.0.0.1]     revisions.addRevision(self.serializeEntry(revisionAttributes), inComment = comment, inAuthor = updateAuthor, inChangeType = editType)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 36, in _func
        [HTTPChannel,5,127.0.0.1]     result = f(self, *args, **kwargs)
        [HTTPChannel,5,127.0.0.1]   File "/usr/share/wikid/lib/python/apple_utilities/sqlitersion.py", line 49, in addRevision
        [HTTPChannel,5,127.0.0.1]     contentPlistStr = plistlib.writePlistToString(inContentDict).decode("utf-8")
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 110, in writePlistToString
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 94, in writePlist
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 251, in writeValue
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 280, in writeDict
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 238, in writeValue
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 171, in simpleElement
        [HTTPChannel,5,127.0.0.1]   File "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/plistlib.py", line 221, in _escapeAndEncode
        [HTTPChannel,5,127.0.0.1] exceptions.UnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)
        [HTTPChannel,5,127.0.0.1] 'Unparseable html in page, removing whatever was already written.'
        [HTTPChannel,5,127.0.0.1] Removing /Library/Collaboration/Users/durak/weblog/27133.page

    I notice the traceback dies on a UnicodeDecodeError for byte 0xd0, which looks like non-ASCII (e.g. Cyrillic) text somewhere in the new page or user record that plistlib refuses to handle; could that be related?

    Any "old" user CAN create, modify, comment, etc. What can you recommend to fix this issue? Hoping for your help...

  • Networked storage for a research group, 10-100 TB

    - by Marc
    This is related to this post: http://serverfault.com/questions/80854/scalable-24-tb-nas-for-research-department, but perhaps a little more general.

    Background: we're a research lab of around 10 people who do a lot of experiments that involve taking pictures at one of several lab setups and then analyzing them on one of several lab computers. Each experiment may produce 2 or 3 GB of data, and we are generating data at a rate of about 10 TB/year. Right now we store the data on a 6-bay Netgear ReadyNAS Pro, but even with 2 TB drives this only gives us 10 TB of storage. Also, right now we are not backing up at all. Our short-term backup plan is to get a second ReadyNAS, put it in a different building, and mirror one onto the other. Obviously, this is somewhat non-ideal.

    Our options:

    1) We can pay our university $400/TB/year for "backed up" online storage. We trust them more than we trust ourselves, but not a whole lot.

    2) We can continue to buy small NASes and mirror them between offices. One limit, although a stupid one, is that we don't have an unlimited number of Ethernet jacks.

    3) We can try to implement our own data storage solution, which is why I'm asking you guys.

    One thing to consider is that we're a very transient population and none of us are network administration experts. I will probably be here only another year or so, and graduate students, who stay the longest, have a 5-6 year time scale. So nothing can require expert oversight. Our data transfer rates are low: most of the data will just sit on the server waiting for someone to look at it once or twice, so we don't need a really high-speed system.

    Given these constraints, can someone recommend a fairly low-cost, scalable, more or less turnkey shared data storage system with backup in a separate physical location? Does such a thing exist, or should we just pay the university to take care of it for us?

    As a second question: our professor just got tenure and is putting together a budget. Here the goal is to ask for as much as you can and hope you get a fraction of it. So, the same question, minus the low-cost constraint: without budget constraints, can you recommend a scalable, turnkey, backed-up storage system? Thanks.

  • Nginx giving a lot of 502 errors

    - by Loki
    A while ago I installed nginx and everything seemed to be working fine. Recently I found out that about 20% of the time users get 502 errors. This is also noticeable when Google crawls my site in Webmaster Tools (out of roughly 10,000 posts, approx. 2,000 give a 502 error). At first I was thinking of disabling nginx, but I'd really like to keep using it.

    I'm running it on a server with 2 GB RAM and 4 reserved CPU cores, with WHM/cPanel installed and mod_ruid2 enabled, plus DSO as the PHP handler with APC caching installed. Is there anything I can change in the config to fix this? I have installed Nginx Admin in WHM, and here is what's in the configuration editor screen:

        user nobody;
        worker_processes 4;
        error_log /var/log/nginx/error.log info;
        worker_rlimit_nofile 20480;

        events {
            worker_connections 10240; # increase for busier servers
            use epoll;                # you should use epoll here for Linux kernels 2.6.x
        }

        http {
            server_name_in_redirect off;
            server_names_hash_max_size 10240;
            server_names_hash_bucket_size 1024;
            include mime.types;
            default_type application/octet-stream;
            server_tokens off;
            disable_symlinks if_not_owner;
            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 5;
            gzip on;
            gzip_vary on;
            gzip_disable "MSIE [1-6]\.";
            gzip_proxied any;
            gzip_http_version 1.1;
            gzip_min_length 1000;
            gzip_comp_level 6;
            gzip_buffers 16 8k;
            gzip_types text/plain text/xml text/css application/x-javascript application/xml image/png image/x-icon image/gif image/jpeg application/xml+rss text/javascript application/atom+xml;
            ignore_invalid_headers on;
            client_header_timeout 3m;
            client_body_timeout 3m;
            send_timeout 3m;
            reset_timedout_connection on;
            connection_pool_size 256;
            client_header_buffer_size 256k;
            large_client_header_buffers 4 256k;
            client_max_body_size 200M;
            client_body_buffer_size 128k;
            request_pool_size 32k;
            output_buffers 4 32k;
            postpone_output 1460;
            proxy_temp_path /tmp/nginx_proxy/;
            client_body_in_file_only on;
            log_format bytes_log "$msec $bytes_sent .";
            include "/etc/nginx/vhosts/*";
        }

    I hope someone can help me out. Thanks in advance!
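
    Since a 502 means nginx could not get a good answer from the Apache backend, the quickest way to narrow things down is to correlate the 502s in the logs while watching for upstream errors. A small sketch; the log paths are typical WHM/cPanel locations and may need adjusting:

        # How many 502s, and on which URLs? ($9 is the status field in the
        # combined log format)
        awk '$9 == 502 {print $7}' /usr/local/apache/domlogs/yourdomain.com \
            | sort | uniq -c | sort -rn | head

        # Watch nginx's view of the backend failures live
        tail -f /var/log/nginx/error.log | grep -i upstream

    If the error log shows "connect() failed" or "upstream timed out", the fix belongs on the Apache/PHP side (MaxClients, APC, mod_ruid2 interplay) rather than in the nginx config above.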

    Read the article

  • OS X server large scale storage and backup

    - by user135217
I really hope this question doesn't come across as trolling or asking for buying advice. It's not intended.

I've just started working for a small ad agency (40 employees). I actually quit being a system administrator a few years ago (too stressful!), but the company we're currently outsourcing our IT stuff to is doing such a bad job that I've felt compelled to get involved and do what I can to improve things.

At the moment, all the company's data is stored on an 8TB external FireWire drive attached to a Mac Mini running OS X Server 10.6, which provides file sharing (using AFP) for the whole company. There is a single backup drive, which is actually a caddy containing two 3TB hard drives arranged in RAID 0 (arrggghhhh!), which someone brings in as and when, and copies over all the data using Carbon Copy Cloner. That's the entirety of the infrastructure, and the whole backup and restore strategy. I've been having sleepless nights.

I've just started augmenting the backup process with FreeBSD, ZFS, sparse bundles and snapshot sends to get everything offsite (see the sketch below). I think this is a workable behind-the-scenes solution, but for people's day-to-day use I'm struggling.

Given the quantity and importance of the data, I think we should really be looking towards enterprise-level storage solutions, high availability and so on, but the whole company is all Mac all the time, and I cannot find equipment that will do what we need. No more Xserve; no rack storage; no large-scale storage at all apart from that Pegasus R6 that doesn't seem all that great; the Mac Pro has Fibre Channel, but it's not a real server and it's ludicrously expensive; Xsan looks like it's on the way out; things like heartbeatd and failoverd have apparently been removed from Lion Server; the new Mac Mini only has Thunderbolt, which severely limits our choices; the list goes on and on.

I'm really, really not trying to troll here. I love Macs, but I just genuinely don't know where I'm supposed to look for server stuff. I have considered Linux or FreeBSD and netatalk for serving files with all the server-y goodness those OSes bring, but some of the things I've read make me wonder if it's really the way to go. Also, in my own (admittedly quite cursory) experiments with it, I've struggled to get decent transfer speeds. I guess there's also the possibility of switching everyone off AFP and making them use SMB or NFS, but I understand that this can cause big problems with resource forks and file locks.

I figure there must be plenty of all-Mac companies out there. If you're the sysadmin at one, what do you use? Any suggestions very gratefully received.
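Since the poster already mentions ZFS snapshot sends from FreeBSD, here is a minimal sketch of that offsite step. The pool, dataset, and host names are hypothetical placeholders, and it assumes ZFS on both ends plus passwordless SSH; it is an illustration of the technique, not the poster's actual setup.

```python
#!/usr/bin/env python3
"""Snapshot a ZFS dataset and stream it to an offsite host.

Dataset, host, and target names are hypothetical; assumes zfs on
both machines and SSH keys already configured."""
import datetime
import subprocess

DATASET = "tank/company-data"
REMOTE = "offsite.example.com"
TARGET = "tank/backup"

def snapshot_and_send(previous_snap=None):
    snap = f"{DATASET}@{datetime.date.today():%Y%m%d}"
    subprocess.run(["zfs", "snapshot", snap], check=True)

    if previous_snap:
        # Incremental send: only blocks changed since the last snapshot.
        send_cmd = ["zfs", "send", "-i", previous_snap, snap]
    else:
        send_cmd = ["zfs", "send", snap]

    # Pipe the replication stream over SSH into zfs receive.
    sender = subprocess.Popen(send_cmd, stdout=subprocess.PIPE)
    subprocess.run(
        ["ssh", REMOTE, "zfs", "receive", "-F", TARGET],
        stdin=sender.stdout, check=True,
    )
    sender.stdout.close()
    sender.wait()

if __name__ == "__main__":
    snapshot_and_send()
```

Because each snapshot is immutable, this gives point-in-time restores as well as an offsite copy, which is exactly what the RAID 0 caddy does not.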

    Read the article

  • How much did it cost our competitor to DDoS us at 50 Gbps for two weeks?

    - by MiniQuark
I know that this question may sound like an invalid Server Fault question, but I believe that it's quite valid: the amount of time and effort that a sysadmin should spend on DDoS protection is a direct function of typical DDoS prices. Let me rephrase this: protecting a web site against small attacks is one thing, but resisting 50 Gbps of UDP flood is another and requires time & money. Deciding whether or not to spend that time & money depends on whether such an attack is likely or not, and this in turn depends on how cheap and simple such an attack is for the attacker.

So here's the full story: our company has been the victim of a massive DDoS attack (over 50 Gbps of UDP traffic, nonstop for two weeks). We are pretty sure that it's one of our competitors, and we actually know which one, because we were the only two remaining competitors on a very big request for proposal, and the DDoS attack magically stopped the day we won (double hurray, by the way)! These people have proved in the past that they are very dishonest, but we know that they are not technical at all, so we believe that they simply paid for some botnet DDoS service.

I would like to know how much these services typically cost for such a large-scale attack. Please do not give any link to such services; I would really hate to give these people any publicity. I understand that a hacker could very well do this for free, but what's a typical price for such an attack if our competitors paid for it through some kind of botnet service? It is really starting to scare me (if we're talking thousands of dollars here, then I am really going to freak out: who knows, they might just hire a hit-man one day?).

Of course we filed a complaint, but the police say that they cannot do much about it (DDoS attacks are virtually untraceable, or so they say), and our suspicions are not enough to justify them raiding our competitor's offices to search for proof.

For your information, we have now changed our infrastructure to be able to sustain such attacks: we now use a major CDN service so that our servers are not directly affected by DDoS attacks. Requests for dynamic pages do get proxied to our servers, but for low-level attacks (UDP floods or SYN floods, for example) we only receive legitimate traffic, so we're fine. If they decide to launch higher-level attacks (an HTTP flood or slowloris attack, for example), most of the load should be handled by the CDN... at least I hope so!

Thank you very much for your help.
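Whatever the going rate turns out to be, the arithmetic is simply rate times duration, so the answer scales linearly with how long the attack ran. The sketch below frames that calculation; the hourly rate is a loudly hypothetical placeholder, not market data, and should be replaced with a figure from a current security-industry report.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope cost of a rented DDoS attack.

The hourly rate is a HYPOTHETICAL placeholder for illustration
only; substitute a figure from a published botnet-pricing report."""

attack_days = 14                  # two weeks, as in the attack described above
assumed_rate_per_hour = 50.0      # ASSUMED USD/hour for a 50 Gbps-class service

hours = attack_days * 24
cost = hours * assumed_rate_per_hour
print(f"{hours} hours at ${assumed_rate_per_hour:.2f}/h = ${cost:,.2f}")
```

Even at modest placeholder rates, a two-week sustained attack adds up quickly, which is part of why the poster suspects a motivated, funded adversary rather than a casual one.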

    Read the article

  • Loss of wireless network connectivity when playing video via HDMI cable

    - by Jeff Fohl
Hi folks - new to Super User, so I hope this question fits in with the guidelines. Very strange problem I am having, and I am at a loss as to how to continue troubleshooting this one.

The basic problem is that when I attempt to watch streamed video on a particular display device (an Optoma HD180 projector), my network connectivity drops like a stone to barely measurable levels.

This is my setup: I have a Dell H2C 730x running Windows 7 64-bit. This particular computer has two ATI Radeon HD 4800 video cards. I have two Samsung 22" monitors connected to one card, and an Optoma HD180 digital projector connected to the other card via an HDMI cable. My internet connection is normally a reliable 6 Mbps.

The problem occurs when I stream video (or even just browse the web) on the Optoma projector. When I do this, my internet connection drops to practically zero (just a few kilobits per second). When I move the browser away from the projector and over to one of my Samsung monitors, the internet connection comes right back. Note that the Optoma projector is on and enabled as a third monitor all this time; I can move the mouse around on the projector without triggering the problem.

I tried pinging my router while playing a movie on one of the monitors, and I get a 1 millisecond response. However, when I have the movie playing on the Optoma projector, pinging the router gives me response times in the hundreds of milliseconds, or it times out completely. So it clearly is something local to my machine, and not some sort of throttling occurring down the line.

I would think that it is possibly something to do with the HDMI driver conflicting somehow with my network driver (which is a USB-based wireless connection). This one has me really stumped. Anyone have any ideas?

EDIT: I am now leaning towards the possibility that the HDMI cable is somehow interfering with the wireless network when large amounts of data are being pushed through the cable. Is this possible?
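One way to put numbers on the ping test described above is to log round-trip times continuously while moving the video between displays, so the latency spikes can be lined up with the moment playback starts on the projector. A minimal sketch follows; the router address is a placeholder, and the timing is the wall-clock time of the whole ping command, which is crude but enough to show order-of-magnitude spikes.

```python
#!/usr/bin/env python3
"""Log a rough round-trip time to the router once per second so
latency spikes can be matched against what is playing on the
projector. Router address is a placeholder for your own network."""
import platform
import subprocess
import time

ROUTER = "192.168.1.1"  # placeholder - substitute your router's address
COUNT_FLAG = "-n" if platform.system() == "Windows" else "-c"

while True:
    start = time.time()
    result = subprocess.run(
        ["ping", COUNT_FLAG, "1", ROUTER],
        capture_output=True, text=True,
    )
    # Elapsed time covers the whole ping process, so it overstates the
    # true RTT slightly; fine for spotting 1 ms vs. hundreds of ms.
    elapsed_ms = (time.time() - start) * 1000
    status = "ok" if result.returncode == 0 else "TIMEOUT"
    print(f"{time.strftime('%H:%M:%S')}  {status}  ~{elapsed_ms:.0f} ms")
    time.sleep(1)
```

If the log shows the spikes tracking HDMI playback exactly, that supports the interference theory: HDMI cabling and transmitters can radiate noise in the 2.4 GHz band that degrades nearby Wi-Fi, especially with unshielded cables or a USB wireless adapter close to the cable run.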

    Read the article
