Search Results

Search found 16032 results on 642 pages for 'everyday problems'.


  • Dante (SOCKS server) not working

    - by gregmac
    I'm trying to set up a SOCKS proxy using Dante for testing purposes. However, I can't even get it to work with a web browser, despite following several tutorials. I've tried both IE and Firefox; in each case I use "Manual proxy configuration", leave everything blank except the SOCKS host, and enter the IP of my proxy and the port number (1080). I just get "Server not found" / "Problems loading this page", and I don't see anything in danted, even running in debug mode. If I do a "telnet 10.0.0.40 1080" I do see the connection open in the danted debug output, so I know that much is working. Here's my config:

        logoutput: stdout /var/log/danted/danted.log
        internal: eth0 port = 1080
        external: eth0
        method: username none #rfc931
        user.privileged: proxy
        user.notprivileged: nobody
        user.libwrap: nobody
        connecttimeout: 30 # on a lan, this should be enough if method is "none".
        client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client pass { from: 127.0.0.0/8 port 1-65535 to: 0.0.0.0/0 }
        client block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }
        block { from: 0.0.0.0/0 to: 127.0.0.0/8 log: connect error }
        pass { from: 10.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        pass { from: 127.0.0.0/8 to: 0.0.0.0/0 protocol: tcp udp }
        block { from: 0.0.0.0/0 to: 0.0.0.0/0 log: connect error }

    I'm sure I'm probably missing something simple, but I'm lost. I haven't even thought about SOCKS since the late '90s.
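
    To take the browsers out of the equation, a quick scripted SOCKS test can confirm whether danted is actually proxying at all. Below is a minimal sketch in Python, assuming the third-party PySocks package (pip install PySocks) and the proxy address from the question. If this works while the browsers fail, the client configuration is the place to look; if it fails the same way, the danted rules are.

    ```python
    # Minimal SOCKS5 smoke test against the Dante proxy from the question.
    # Assumes the third-party PySocks package (pip install PySocks).
    import socks  # PySocks

    s = socks.socksocket()                        # drop-in socket.socket replacement
    s.set_proxy(socks.SOCKS5, "10.0.0.40", 1080)
    s.settimeout(10)

    # danted should log this connect in its debug output.
    s.connect(("example.com", 80))
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(256).decode(errors="replace"))
    s.close()
    ```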


  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mailing lists by handling all the bounces we get from our websites. I have one website with over 8,000 users and another with over 1,500; they are emailed various notifications constantly (job alerts, email alerts, etc.). I am using POP connections with Exim on an Apache server, and most of the emails are generated on the fly by PHP scripts.

    Problems I have:

    1. Some users registered a long time ago, and by now quite a few have bouncing email addresses.
    2. Some users register with dummy emails like [email protected] that never existed but are syntactically valid addresses. Is there any chance of stopping this without making them log in to the email account and click a confirmation link (which often doesn't work and is too annoying for the end user)?
    3. The server is sending unnecessary emails that could be avoided if I knew the addresses don't exist.

    Solutions I need:

    1. Is there a way to download the bounce list somewhere (WHM/cPanel)? I know Exim has it, but it's not readable. I need a file like a CSV or similar that I can scan, so I can write a PHP script to delete those users from the database.
    2. Is there any function in PHP that can check the existence of an email address on the fly, so I can make the send function in the mailer class check before it sends out?
    3. On the server, will bouncing emails eat up a lot of resources (memory/CPU) to process, or is the overhead minimal enough that we don't have to worry about it at all?
    4. Is there maybe an open-source or Linux tool to capture bounces, view them as reports, and clean them up?

    I am not a Linux expert or server admin, but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they involve Linux commands. Thank you!
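
    On the "check existence on the fly" question, the closest cheap check is verifying that the address's domain actually publishes MX records before sending; it won't catch a dead mailbox at a real domain, but it filters out invented domains. PHP has checkdnsrr($domain, "MX") for this; here is a sketch of the same idea in Python, assuming the third-party dnspython 2.x package:

    ```python
    # Cheap pre-send filter: does the address's domain publish MX records?
    # Assumes the third-party dnspython 2.x package (pip install dnspython).
    import dns.exception
    import dns.resolver

    def domain_accepts_mail(address: str) -> bool:
        if "@" not in address:
            return False
        domain = address.rsplit("@", 1)[1]
        try:
            return len(dns.resolver.resolve(domain, "MX")) > 0
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            return False

    print(domain_accepts_mail("user@gmail.com"))               # likely True
    print(domain_accepts_mail("user@no-such-domain.invalid"))  # False
    ```

    A full SMTP-level probe (connecting to the MX and issuing RCPT TO) exists as a technique but is unreliable in practice: many servers accept everything and report failures only asynchronously, which is exactly what bounce processing is for.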


  • It takes a long time until Windows XP recognizes a connected USB drive

    - by Pavol G
    I have a problem with my new USB disk. When I connect it to my laptop with Windows XP SP2, it takes about 4-5 minutes until Windows recognizes it and shows it as a new disk. I can also see (the disk's LED is blinking) that something is scanning the disk when I connect it; when this is done, Windows immediately recognizes it. Also, when I'm copying data to this disk the speed is about 3.5 MB/sec. It's connected using USB 2.0. I tried checking for spyware (using Spybot) and also tried running Windows in safe mode, but I still have the same problems. Do you have any idea what could help solve this problem? On Windows Vista (another laptop) everything is OK: the disk loads in about 15 seconds and the speed is about 20-30 MB/sec.

    Edit: I tried updating to SP3 - no change.

    Edit 2: When this "strange" scanning occurs, I can see that the DPCs process is taking about 50% of CPU. When the scan ends (after 5 minutes), this process drops to 0% again.

    Edit 3: About the scan time: currently it takes about 5 minutes, but the time grows as I add more data to the disk. Currently there is about 40 GB on it, and I don't want to find out how long it will take with 1000 GB.

    Thanks a lot for any advice!


  • Debugging "clogged" TCP connections

    - by Nikratio
    I'm having trouble with an internet connection that seems to randomly "freeze" arbitrary TCP connections. The connections stay established, but no data comes through. When this happens, netstat still shows the connection status as ESTABLISHED on both the local computer:

        Proto Recv-Q Send-Q Local Address        Foreign Address       State        PID/Program name  Timer
        tcp        0     53 192.168.0.10:41129   173.255.235.238:143   ESTABLISHED  8219/gnutls-cli    on (79.31/13/0)

    ...and the remote server:

        Proto Recv-Q Send-Q Local Address        Foreign Address       State        PID/Program name  Timer
        tcp        0      0 173.255.235.238:143  68.5.174.98:41129     ESTABLISHED  5303/imapd         off (0.00/0/0)

    However, it seems that no data at all is transferred. If I run strace on the local and remote processes, both just show a repeating sequence of select calls (with different fds, of course), e.g.:

        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)
        select(6, [0 5], NULL, NULL, {0, 50000}) = 0 (Timeout)

    The internet connection overall does not seem affected; I can still establish new connections to the same service on the same server without any problems. However, the affected local applications seem to be unaware of the problem and just hang. When I look at a packet capture of this connection on the client side, the last thing that happens is that the client transmits some data, then nothing happens for about 1100 seconds, and then several TCP retransmissions go out, with intervals increasing from 4 seconds to 130 seconds. No activity is captured after that. After about 10 minutes, the connection on the remote end disappears from netstat (I wasn't able to catch any intermediate state), but it still stays ESTABLISHED on the local end. Finally, after some more minutes, the local application aborts with a timeout and disappears from the local netstat output as well. Does anyone have a suggestion for how I could debug this further, to find out where the problem lies and how to fix it? Additionally, and/or as a temporary workaround: is there some way to globally reduce the timeout on the client and/or server, to reduce the time before the local application aborts?
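
    On the workaround half of the question, Linux exposes per-socket knobs that bound how long a connection may sit in exactly this state. Here is a sketch of the relevant options in Python (which exposes socket.TCP_USER_TIMEOUT on Linux from 3.6 on); the timeout values are illustrative, and TCP_USER_TIMEOUT needs kernel 2.6.37 or later:

    ```python
    # Make a TCP socket give up quickly on a "clogged" peer (Linux-specific).
    # Values are illustrative; TCP_USER_TIMEOUT needs kernel 2.6.37+.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Abort if transmitted data stays unacknowledged for 30 s (milliseconds).
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_USER_TIMEOUT, 30000)

    # Keepalives cover the idle direction: probe after 60 s idle,
    # every 10 s, give up after 5 failed probes.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

    s.connect(("173.255.235.238", 143))  # the IMAP server from the question
    ```

    For processes you can't modify, the global equivalent is the net.ipv4.tcp_retries2 sysctl, which shrinks how long the kernel retransmits before declaring a connection dead.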


  • Why are SMART error rates going down?

    - by Jeff Shattock
    I have a hard drive that's part of a Linux software RAID 5 array. SMART has reported that its multi_zone_error_rate was 0, then 1, then 3. So I figured I had better start backing up more frequently and prepare to replace the drive. Now, today, the multi_zone_error_rate of that very same drive is back down to 1. It seems that 2 errors unhappened while I wasn't looking. I've also seen similar behaviour by inspecting the syslog on the server:

        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 7 21:01:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 200 to 100
        Jun 8 02:31:18 FS1 smartd[25593]: Device: /dev/sdg, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sdc, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200
        Jun 8 03:01:17 FS1 smartd[25593]: Device: /dev/sde, SMART Usage Attribute: 7 Seek_Error_Rate changed from 100 to 200

    These are raw values, not the human-useful values that smartctl -a produces, but the behaviour is similar: error rates changing, then undoing the change. None of these is the drive that had the multi_zone weirdness. I haven't seen any problems from the RAID; its most recent scrub (less than 24 hours ago) came back totally clean. The only thing I can think of is that the SMART reporting circuitry on the drive isn't working properly all the time. The cables are in tight on the drive and board. What's going on here?
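
    One way to make sense of numbers like these is to log the raw attribute values yourself on a schedule, rather than rely on smartd's normalized-value change messages. A minimal sketch, assuming smartmontools is installed and using one of the devices from the syslog above:

    ```python
    # Snapshot raw SMART attribute values so changes can be tracked over time.
    # Assumes smartmontools is installed; device name is from the question.
    import re
    import subprocess
    import time

    def smart_raw_values(device):
        out = subprocess.run(["smartctl", "-A", device],
                             capture_output=True, text=True, check=True).stdout
        values = {}
        for line in out.splitlines():
            # Attribute rows look like:
            #   7 Seek_Error_Rate  0x000f  200  200  051  Pre-fail ...  0
            m = re.match(r"\s*(\d+)\s+(\S+)\s+.*\s(\S+)$", line)
            if m:
                values[m.group(2)] = m.group(3)  # attribute name -> raw value
        return values

    print(time.ctime(), smart_raw_values("/dev/sdc").get("Seek_Error_Rate"))
    ```

    Run from cron, this builds a history of the raw counters, which move independently of the vendor-normalized 100/200 figures smartd is reporting in those log lines.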


  • Updating a backup image (.wim and/or Acronis .tib)

    - by Backdraft
    Anyways, I've got a Windows 7 installation that I want to make a generalized backup image of, so I can use it for future installs not only on the desktop the image is derived from, but also on other systems with dissimilar hardware. I've therefore arrived at two options: using sysprep/imagex from the WAIK (guide here), or the simpler Acronis True Image with their Universal Restore addon. Of course, they create distinct image file types, .wim and .tib respectively. What I'd like to do is periodically update this image, say with Windows Updates, by booting it either on a physical partition or under virtualization (VirtualBox/VMware), performing the updates, and saving the updated .wim or .tib image file again. What's the simplest way I could do this? Another question: I created this generalized backup image on a 500 GB Seagate 7200 RPM HDD. Say I get an SSD as an OS drive in the future; can I just deploy this backup image to the SSD normally, or are there any potential problems to be aware of or avoid (i.e. is it best to completely reinstall the OS on the SSD from scratch, or can I use the image created on the normal HDD with no issue)? Thanks and happy holidays.
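
    For the .wim half, there is a route that avoids booting the image at all: DISM (in the Windows 7-era WAIK) can mount a .wim offline, inject update packages, and commit the result. Below is a sketch of that flow driven from Python; all paths are placeholders, and the KB package name is hypothetical:

    ```python
    # Offline-service a .wim: mount, add an update package, commit.
    # Run elevated. DISM switches are the Windows 7-era ones, and all
    # paths (including the KB package name) are placeholders.
    import subprocess

    WIM = r"C:\images\win7.wim"
    MOUNT = r"C:\mount"
    UPDATE = r"C:\updates\Windows6.1-KB0000000-x64.cab"  # hypothetical package

    def dism(*args):
        subprocess.run(["dism", *args], check=True)

    dism("/Mount-Wim", "/WimFile:" + WIM, "/Index:1", "/MountDir:" + MOUNT)
    try:
        dism("/Image:" + MOUNT, "/Add-Package", "/PackagePath:" + UPDATE)
    finally:
        dism("/Unmount-Wim", "/MountDir:" + MOUNT, "/Commit")
    ```

    Booting the image in a VM, updating, and recapturing with imagex remains the fallback for anything DISM can't service offline.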


  • Router to WiFi Client to Router (new solution for distance when a repeater doesn't help)

    - by Kangarooo
    Ethernet goes into a TL-WR340G with WiFi enabled. With a TL-WA500 I tried repeater mode, which was not good enough and had password problems (I could not connect using either the ASCII or the normal form of the password entered one way, while in repeater mode it worked the other way), and it also could not repeat WPA/WPA2 security. Since this repeater can also run in client mode, I set it up as a client and used another router (TL-WR740N) to take the wired connection from that client, and everything worked for a little while. Every machine is set to automatic DHCP. When first setting up client mode, I found it worked after doing a reset. Then, after some tens of minutes, the internet stopped working. When I removed the WiFi client, everything went back to normal. Where is the problem, and how do I make this work?

        Ethernet -> TL-WR340G (auto DHCP) -> WiFi -> TL-WA500
        TL-WA500 in WiFi client mode (auto DHCP) -> wire -> TL-WR740N
        TL-WR740N in router mode (auto DHCP) -> my computer

    In other words: TL-WR340G ) ) ) ) TL-WA500 ===== TL-WR740N ==== PC1, where ) ) is WiFi and === is wire.


  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay using bundled Tomcat, running with ClusterLink and shared documents. Let's say the name of the public community is fubar and the friendly URL used is fubar.lipsum.com, and the port listening on each server is 8080. If I go to either server1:8080 or server2:8080, I get the default page for Liferay. How can I test fubar.lipsum.com on each node by going to the backend server directly, so I can verify each server? If I test it normally, it just goes to the load balancer; I wish there were a way to address the backend connection directly to bring it up. I can add the friendly URL to my local machine's hosts file, and this seems to kinda work, but then once something is called in the application, it tries to go out again from the backend server, and then it uses SSL, and then we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
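
    One way to exercise a specific node without touching hosts files or the balancer is to connect straight to the backend port and present the friendly URL's hostname in the Host header, which is what the virtual-host lookup keys on. A sketch using Python's third-party requests library, with the hostname and port from the question (the node names are placeholders):

    ```python
    # Hit each Liferay node directly while presenting the balanced hostname.
    # Assumes the third-party requests library; node names are placeholders.
    import requests

    VIRTUAL_HOST = "fubar.lipsum.com"
    NODES = ["http://server1:8080/", "http://server2:8080/"]

    for node in NODES:
        r = requests.get(node, headers={"Host": VIRTUAL_HOST},
                         allow_redirects=False, timeout=10)
        print(node, r.status_code, len(r.content), "bytes")
    ```

    The caveat from the question still applies: any absolute URLs or SSL redirects the application generates will point back at the balancer, so this verifies that a node serves the community, not a full browsing session.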


  • Adding a Second Wireless Router to an Existing Wired Network

    - by KVCrawford
    I apologize ahead of time; I know this has been asked before, but I'm still having problems, and maybe you guys can help. I started out with the basic instructions from the highest-voted answer at http://serverfault.com/questions/41572/adding-a-second-wireless-router-to-my-network

    The new wireless router in question is a Linksys Wireless-N Gigabit Router, model # WRT310N. Here are the steps I've taken in setting it up:

    1. Plug my laptop into LAN port #2 on the new router. Nothing else is connected at this point.
    2. Configure the new router to be 192.168.1.200 (the original router is 192.168.1.1, and its DHCP clients are 192.168.1.100-192.168.1.199).
    3. Set the internet connection on the new router to "DHCP Client".
    4. Turn off the DHCP server and NAT routing on the new router.
    5. Plug a LAN cable from the original router into LAN port #1 on the new router (NOT the WAN port; nothing is plugged in there).
    6. Reset the new router.

    Afterwards, I try to ping 192.168.1.1 from the laptop plugged into LAN port #2 on the new router, with no response. 192.168.1.200 garners no response either. Typing "ipconfig" tells me:

        Autoconfiguration IP Address: 169.254.198.113
        Subnet Mask: 255.255.0.0
        Default Gateway: 169.254.198.113

    What's going wrong? I appreciate any help!


  • MediaWiki migrated from Tiger to Snow Leopard throwing an exception

    - by Matt S
    I had an old laptop running Mac OS X 10.4 with MacPorts for web development: Apache 2, PHP 5.3.2, MySQL 5, etc. I got a new laptop running Mac OS X 10.6 and installed MacPorts, then installed the same web development apps: Apache 2, PHP 5.3.2, MySQL 5, etc., all the same versions as on my old laptop. A MediaWiki site (version 1.15) was copied over from my old system (via the Migration Assistant). Having a fresh MySQL setup, I dumped my old database and imported it on the new system. When I try to browse to MediaWiki's "Special" pages, the following exception is thrown:

        Invalid language code requested
        Backtrace:
        #0 /languages/Language.php(2539): Language::loadLocalisation(NULL)
        #1 /includes/MessageCache.php(846): Language::getFallbackFor(NULL)
        #2 /includes/MessageCache.php(821): MessageCache->processMessagesArray(Array, NULL)
        #3 /includes/GlobalFunctions.php(2901): MessageCache->loadMessagesFile('/Users/matt/Sit...', false)
        #4 /extensions/OpenID/OpenID.setup.php(181): wfLoadExtensionMessages('OpenID')
        #5 [internal function]: OpenIDLocalizedPageName(Array, 'en')
        #6 /includes/Hooks.php(117): call_user_func_array('OpenIDLocalized...', Array)
        #7 /languages/Language.php(1851): wfRunHooks('LanguageGetSpec...', Array)
        #8 /includes/SpecialPage.php(240): Language->getSpecialPageAliases()
        #9 /includes/SpecialPage.php(262): SpecialPage::initAliasList()
        #10 /includes/SpecialPage.php(406): SpecialPage::resolveAlias('UserLogin')
        #11 /includes/SpecialPage.php(507): SpecialPage::getPageByAlias('UserLogin')
        #12 /includes/Wiki.php(229): SpecialPage::executePath(Object(Title))
        #13 /includes/Wiki.php(59): MediaWiki->initializeSpecialCases(Object(Title), Object(OutputPage), Object(WebRequest))
        #14 /index.php(116): MediaWiki->initialize(Object(Title), NULL, Object(OutputPage), Object(User), Object(WebRequest))
        #15 {main}

    I tried to step through MediaWiki's code, but it's a mess; there are global variables everywhere. If I change the code slightly to get around the exception, the page comes up blank and there are no errors (implying there are multiple problems). Has anyone else gotten MediaWiki 1.15 working on OS X 10.6 with MacPorts? Is there anything in the migration from Tiger that could cause a problem? Any clues where to look for answers?


  • hosts file seems to be ignored

    - by z4y4ts
    I have an almost fresh Ubuntu desktop box. The OS was installed two weeks ago and updated from the karmic repositories. Last week I had no problems with DNS, but this week something changed. I'm not sure what or when, and I'm not sure whether I changed any configs. So now I have a really weird situation. According to the configs, name resolution should work normally:

        /etc/hosts:
        127.0.0.1 localhost test
        127.0.1.1 desktop

        /etc/host.conf:
        order hosts,bind
        multi on

        /etc/resolv.conf:
        # Generated by NetworkManager
        search
        # servers obtained via DHCP
        nameserver 192.168.0.3

        /etc/nsswitch.conf:
        passwd: compat
        group: compat
        shadow: compat
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4
        networks: files
        protocols: db files
        services: db files
        ethers: db files
        rpc: db files
        netgroup: nis

    But in fact it is not. Pinging is OK and uses /etc/hosts:

        user@test ~$ ping test
        PING localhost (127.0.0.1) 56(84) bytes of data.
        [skip]

    But host does not:

        user@test ~$ host test
        test.mydomain.com has address xx.xxx.161.201

    I suspect that NetworkManager might be causing this misbehavior, but I don't know where to start checking. Any thoughts or suggestions?
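
    One detail that explains part of this: ping resolves names through the NSS stack (so the hosts: line in /etc/nsswitch.conf, and therefore /etc/hosts, applies), while host and dig query the DNS servers from /etc/resolv.conf directly and never read /etc/hosts, so those two outputs are expected to disagree. The split is easy to demonstrate; the second half of this sketch assumes the third-party dnspython package:

    ```python
    # NSS-based lookup: honors /etc/hosts per /etc/nsswitch.conf.
    import socket
    print("NSS:", socket.gethostbyname("test"))   # expect 127.0.0.1

    # Direct DNS query: bypasses /etc/hosts entirely.
    # Assumes the third-party dnspython package (pip install dnspython).
    import dns.resolver
    print("DNS:", dns.resolver.resolve("test.mydomain.com", "A")[0])
    ```

    If socket.gethostbyname("test") does not return 127.0.0.1, then NSS really is ignoring /etc/hosts and the nsswitch/NetworkManager angle is worth pursuing; if it does return 127.0.0.1, the tools that "ignore" the hosts file are simply the ones that go straight to DNS.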


  • Fix MBR from installed Windows Vista

    - by Danilo
    Hi guys, I have a quite strange problem. I had a system with Vista and Ubuntu installed. We always use Vista; Ubuntu was something we really did not need. But to boot, GRUB was used (I guess GRUB 2). Then, while in Vista, I deleted the Ubuntu partition, and with it also GRUB. Now the system does not boot anymore. I tried to reinstall Ubuntu, but I had some problems with the CD. At the moment, when the system boots I get dropped into the GRUB shell. From there, I am able to boot Windows Vista with some commands like these:

        grub> title windows
        grub> rootnoverify (hd0,msdos3)
        grub> chainloader +1
        grub> boot

    Now the question is: if I am able to boot into Windows Vista with this trick, is it possible to fix the MBR from inside the installed Windows Vista, with some command/tool of Vista itself? I should probably mention that we are not interested in dual boot at the moment; we only want Vista to start. I can sum up the question like this: is there a way to fix the MBR from the installed version of Windows Vista, considering that GRUB is installed at the moment? I hope I was clear enough. Thanks for your help.


  • Buffer Overflow errors when reading ConfigDelay and Manufacturer info from registry

    - by peter
    Hi All, This is a strange driver error which doesn't make a lot of sense to me. I am running an application developed in C# .NET which our company develops. I was monitoring the application using Process Monitor and noticed that it accesses the registry a lot. The output in Process Monitor looks like this:

        Operation      Result           Path
        RegQueryValue  Success          HKLM\System\CurrentControlSet\Enum\SWMUXBUS\SW_MODEM\7&6c4af30&0&5&0004\Driver
        RegQueryValue  Success          HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Properties
        RegQueryValue  Success          HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Default
        RegQueryValue  Success          HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\InactivityScale
        RegQueryValue  Name Not Found   HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\PowerDelay
        RegQueryValue  Name Not Found   HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\ConfigDelay
        RegQueryValue  Buffer Overflow  HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Manufacturer
        RegQueryValue  Buffer Overflow  HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Model
        RegQueryValue  Name Not Found   HKLM\System\CurrentControlSet\Control\Class\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000\Version

    The app is reading this stuff from the registry every 5 seconds, so I would ask a few questions:

    1. What is this stuff?
    2. Why is the app reading it?
    3. Why is it saying "Buffer Overflow"?
    4. Could this cause performance problems for my app?

    From what I can see, the app does not explicitly read this stuff, so I think it relates to a driver on the machine (which is a netbook).
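
    On question 3: in Process Monitor, "BUFFER OVERFLOW" is usually not a fault at all; it is the ERROR_MORE_DATA status that RegQueryValueEx returns when a caller passes a too-small buffer to learn the required size, after which it queries again with a big enough one. A sketch of that two-call pattern via ctypes (Windows only; the key path and value name are taken from the log above):

    ```python
    # Reproduce Process Monitor's "Buffer Overflow" result: it is the normal
    # ERROR_MORE_DATA half of the two-call RegQueryValueEx pattern.
    # Windows only; key path and value name come from the question's log.
    import ctypes
    import ctypes.wintypes as wt
    import winreg

    advapi32 = ctypes.windll.advapi32
    ERROR_MORE_DATA = 234

    key = winreg.OpenKey(
        winreg.HKEY_LOCAL_MACHINE,
        r"System\CurrentControlSet\Control\Class"
        r"\{4D36E96D-E325-11CE-BFC1-08002BE10318}\0000",
    )
    hkey = wt.HKEY(key.handle)

    small = ctypes.create_string_buffer(2)        # deliberately too small
    size = wt.DWORD(ctypes.sizeof(small))
    rc = advapi32.RegQueryValueExW(hkey, "Manufacturer", None, None,
                                   small, ctypes.byref(size))
    print(rc == ERROR_MORE_DATA)  # True -> procmon shows "Buffer Overflow"

    big = ctypes.create_string_buffer(size.value)  # size was filled in above
    rc = advapi32.RegQueryValueExW(hkey, "Manufacturer", None, None,
                                   big, ctypes.byref(size))
    print(rc)  # 0 (ERROR_SUCCESS)
    ```

    So the repeated Buffer Overflow rows most likely come from a polling driver or telephony component sizing its reads, and the cost of a registry read every 5 seconds is negligible.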


  • Chrome and Internet Explorer won't connect after infection; Firefox works

    - by Zack
    Hi guys, please help; I am pretty new here. I'm having problems: I cannot connect with Chrome or Internet Explorer, while Firefox works fine. It seems this happened when I was infected by a "Trojan Horse Generic 17.BWIK" and a "Trojan Horse SHeur.UHL" while replying to a post in a thread I had started. I have removed the threat and got Firefox working ("so I think"), but Chrome and IE still cannot connect. I do not want to lose my Chrome history, so resetting would be my last option, and uninstalling and reinstalling is out of the question. Is there a way around this? I am using XP Pro on a desktop with a DSL connection. Also beware of "Fake_Antispyware.FAH", which I had on my computer; I just found this out while writing, according to my AVG anti-virus security. Please can you direct me to a cure? Thank you in advance for your sincere willingness to contribute.


  • Where is the TFS database?

    - by Blanthor
    I've been using TFS 2010 with no problems. I tried adding a user and got the following error message:

        TF30063: You are not authorized to access <serverName>\DefaultCollection. - The remote server returned an error: (401) Unauthorized.

    I remoted into the server, <serverName>, and opened the TFS console. The logs mentioned a connection string:

        ConnectionString: Data Source=<serverName>\SS2008;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True

    While remoted in, I opened SQL Server 2008 Management Studio and connected to the (local) server with Windows authentication. It shows the connection as (local) (SQL Server 9.04.03 - <serverName>\Admin), and there is no Tfs_DefaultCollection database. Can someone tell me what is going on? Was I wrong to connect to this instance of the database (i.e., is the log file the wrong place to find the connection string)? Is the database so corrupted that Management Studio cannot see it anymore, even though TFS could? Should I be logging into Management Studio as user SS2008? (btw, I don't know of any such credentials.)


  • Very slow connection to Xserve via AFP or SMB

    - by Mhoffman13
    Help. File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server and runs 10.4.11. The problem seems to happen only on brand new iMacs running 10.6.3. When connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 have normal connection speeds. To try to rule out OS incompatibility, I connected a new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was fast, as normal. So the problem seems to lie with the Xserve (maybe). The AFP logs, both access and error, don't show anything unusual. One thing that did look different: when the new iMac was connected to the Xserve, the user had its ID listed as its IP address, while the other machines connected had the ID broadcasthost. I also noticed that when connecting from the new iMac I can only see one of the mirrors; when any other computer connects, both mirrors are shown. I tried restarting the Xserve, but the problem persists. Thanks in advance for any advice.


  • Ubuntu: unattended-upgrades from a local package archive

    - by Novelocrat
    I have a local apt archive with a bunch of packages I built in it. The Packages and Release files are generated by apt-ftparchive. The Release file looks like:

        Date: Thu, 06 May 2010 23:04:33 UTC
        Label: PPL
        Origin: PPL
        Suite: ppl
        MD5Sum:
         ebec3527ebc8351468b2ef8796c19855 37325 Packages
         d41d8cd98f00b204e9800998ecf8427e 0 Release
        SHA1:
         a0593b663d77fde88ee35b56ae1f3c17801cfe99 37325 Packages
         da39a3ee5e6b4b0d3255bfef95601890afd80709 0 Release
        SHA256:
         dd73a02846aee111cac58a869c6bf650886632ba82c2172ffddd81aa4429981c 37325 Packages
         e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 0 Release

    I'm using unattended-upgrades to keep the machines in the lab up to date on security and bug fixes, but I'm finding that it doesn't pull from my local archive. Its configuration file looks like:

        // Automatically upgrade packages from these (origin, archive) pairs
        Unattended-Upgrade::Allowed-Origins {
            "Ubuntu hardy-security";
            "Ubuntu hardy-updates";
            "PPL ppl";
        };

        // List of packages to not update
        Unattended-Upgrade::Package-Blacklist {
        //  "vim";
        //  "libc6";
        //  "libc6-dev";
        //  "libc6-i686";
        };

        // Send email to this address for problems or package upgrades.
        // If empty or unset then no email is sent; make sure that you
        // have a working mail setup on your system. The package 'mailx'
        // must be installed, or anything that provides /usr/bin/mail.
        //Unattended-Upgrade::Mail "root@localhost";

    Yet when I run sudo unattended-upgrade on one of these machines, newer package versions don't get installed. Can anyone point out what I'm getting wrong?


  • How can I serve a second domain without redirecting when the IP is already used for another website? [duplicate]

    - by SSpoke
    This question already has an answer here: Hosting multiple distinct folders for distinct domains

    I bought a VPS host that gave me only one IP address, which I used for my first domain name; that works without any problems. For my second domain name I can't use the same IP address, since it points to the first domain. So I figured my only option was a GoDaddy-hosted iframe redirection, which redirects to a subfolder on my first domain; that worked so far. Now I'm trying to load PayPal from <?php header() ?> and I get a permission error because of that iframe:

        Refused to display 'https://www.paypal.com/cgi-bin/webscr?notify_url=&cmd=_cart&upload=1&business=removed&address_override=1' in a frame because it set 'X-Frame-Options' to 'SAMEORIGIN'.

    How do I avoid the iframe solution for my second domain without messing up my first domain? Somebody once told me that it doesn't matter if you have one IP address, you can host multiple websites on it; how is that possible? DNS doesn't seem to work off ports, as far as I know. Yes, I could host multiple websites in different folders, but that's not what I call hosting a real website; it has to be pointed at by a domain name, so that this iframe issue doesn't happen. My server configuration is httpd (Apache) on the CentOS 6 (Linux) operating system.
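
    On the "multiple websites on one IP" point: that is exactly what HTTP/1.1 name-based virtual hosting does. The browser sends the requested domain in the Host header, and Apache picks the matching site, so both domains' DNS records can point at the same VPS IP with no iframes involved. A minimal sketch for the Apache 2.2 that ships with CentOS 6; the domain names and paths are placeholders:

    ```apache
    # Name-based virtual hosts: both domains' A records point at the one
    # VPS IP, and Apache routes requests by Host header.
    # Domains and DocumentRoot paths are placeholders.
    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName firstdomain.example
        DocumentRoot /var/www/firstdomain
    </VirtualHost>

    <VirtualHost *:80>
        ServerName seconddomain.example
        DocumentRoot /var/www/seconddomain
    </VirtualHost>
    ```

    With this in place, the second domain is a first-class site rather than an iframe, and the PayPal X-Frame-Options error goes away because nothing is framed anymore.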


  • SSH traffic over an OpenVPN connection freezes when I cat a file

    - by user42055
    I have an OpenVPN (version 2.1_rc15 at both ends) connection set up between two Gentoo boxes using shared keys. It works fine for the most part: I use MySQL, HTTP, FTP, and scp over the VPN with no problems. But when I ssh from the client to the server over the VPN, weird things happen. I can log in and execute some commands, but if I try to run an ncurses application like top, or try to cat a file, the connection stalls and I have to sever the ssh session. I can, for example, execute "echo blah; echo .; echo blah" and it will output the three lines of text over the ssh session fine. But if I execute "cat /etc/motd", the session freezes the moment I press Enter. I compiled OpenVPN 2.1.1 on my Mac and copied over my config directory from my Gentoo client; the Mac connected, and ssh sessions worked fine without freezing. I then compiled it on my older Gentoo box (2.6.26 kernel), which I am retiring due to a dying hard drive, and ssh over it also works perfectly. Why does it fail on my brand new Gentoo box? I've tried compiling three different kernels in case it was that, but other than the kernel there should be no difference between my older and newer Gentoo boxes that I can think of. Any suggestions on what's wrong?
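
    For what it's worth, this symptom pattern (small interactive writes pass; full-screen redraws and file dumps stall) is the classic signature of an MTU/fragmentation problem in the tunnel path. If the tunnel runs over UDP, one low-risk experiment is to cap and fragment packets in both OpenVPN configs; fragment and mssfix are standard OpenVPN 2.x directives, though 1400 here is just a starting guess to tune:

    ```
    # Both ends of the tunnel; values are a starting point to tune.
    fragment 1400
    mssfix 1400
    ```

    If ssh suddenly behaves with these in place, the underlying path is dropping large or fragmented packets, and the value can be raised until the problem reappears.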


  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server 2012 Enterprise with the Server+CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the second CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server edition). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though the load would have been manageable had the tasks been distributed evenly across the 20 cores.

    The Windows Server was running as a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems.

    Upgrading the product edition to SQL Server 2012 Enterprise Core not only allowed SQL Server to utilize the 12 previously unused cores on the second processor, but also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, then came down to around 33% once it was caught up. Performance improved dramatically as soon as we failed over to the updated version, and the performance issues went away.

    I was wondering if anyone knows what might cause SQL Server to distribute the load this unevenly, relying almost exclusively on the first 4 cores of the second processor (which had 12 cores idle) and allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have distributed the load more evenly across the 20 usable cores without the product edition upgrade? The flip side of that question: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores it recognized? Thanks for any insight and/or links that might help me make sense of what was happening.
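
    A useful way to see a core cap like this directly is the sys.dm_os_schedulers DMV, which lists each CPU's scheduler as VISIBLE ONLINE or VISIBLE OFFLINE; under the 20-core limit, the offline rows show exactly which cores SQL Server is refusing to use. A sketch querying it from Python via the third-party pyodbc package; the connection details are placeholders, and the same query can of course be run in Management Studio:

    ```python
    # List which CPU schedulers SQL Server considers usable (VISIBLE ONLINE)
    # versus excluded by licensing or affinity (VISIBLE OFFLINE).
    # Assumes the third-party pyodbc package; connection string is a placeholder.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=sqlserver.example;Trusted_Connection=yes;"
    )
    rows = conn.execute(
        "SELECT cpu_id, status, is_online"
        " FROM sys.dm_os_schedulers"
        " WHERE status LIKE 'VISIBLE%'"
        " ORDER BY cpu_id"
    ).fetchall()
    for cpu_id, status, is_online in rows:
        print(f"cpu {cpu_id:3d}: {status} (online={bool(is_online)})")
    ```

    On a setup like the one described, this would be expected to show 20 schedulers online and 12 offline before the edition upgrade, and all 32 online after it.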


  • Snow Leopard takes a long time to connect to Windows/Samba server

    - by hood
    We run a very heterogeneous network here: there are XP, Vista, 7, Leopard, and Snow Leopard clients, and Windows 2003 (one remaining legacy app), 2008, and Linux servers. The main file server runs Ubuntu Linux, has been added to the Windows domain, and has been in use for many years; SBS 2008 is the PDC (the 2003 and 2008 machines are on the domain also). Under Leopard there were no problems at all authenticating to the file servers. We've upgraded one of the Leopard iMacs to Snow Leopard, and the same problem occurs on a new MBP that came with the newer OS, as well as on a clean install on another iMac. It does not matter whether the machine is connected through wired or wireless. In the Finder, when clicking on the server (whether on first boot or after it has been connected), it will display "Connecting..." for up to a few minutes before either generally working (if the username/password is in the keychain) or displaying "Connection Failed"; at that point, clicking "Connect As" and typing in the username/password will take some more time and eventually work. Sometimes it will display "Connecting..." indefinitely (I've left it as long as 15 minutes before trying something else). Accessing shares on both the 2003 and SBS servers has the problem, so I don't think it's a Samba server issue. The Server 2008 Standard box connects instantly at the moment. Accessing the share through an alias/Stacks doesn't have this problem. Leopard and Windows clients still have no problem. I've searched Google, but it hasn't yielded any working result. How do I get rid of this delay?


  • How can I fix video tearing and pausing on Windows XP Flash videos?

    - by xvs
    I have what should be a reasonably fast PC: a quad-core Intel 6600 at 2.4 GHz, 4 GB of RAM, an ATI 3800-series video card, and an LG L246WP monitor, which I selected particularly because it was supposed to work well with video, with no trails or other artifacts. So I should be able to play video with no problems, and I can, as long as that video isn't Flash video. With Flash, what I see is tearing, especially during pans, and pausing: every few seconds the video pauses for about 300 ms while the sound stays continuous. I tried going into the video card setup and changing vertical sync, pulldown detection, Windows Media video acceleration, deinterlacing, and triple buffering, but no combination of settings I've tried has changed or corrected the problem in any way. I've also tried enabling and disabling hardware acceleration in the Flash settings, to no avail. This problem happens whether the video is streaming or has fully streamed in before playing. So, what can I do? Is this just a Flash issue, or is there a way to get it to work?


  • Blu-ray BD-R: Would you physically store it in a CaseLogic Wallet pocket?

    - by Rob
    I keep several backup copies of my material and files. For my DVDs, one set of copies is kept in a CaseLogic wallet folder pack, so that I can easily move it around when visiting friends or family, or for business. This is highly convenient. The other sets are kept in their jewel cases in hard plastic see-through storage boxes. Although CaseLogic wallet material is designed to be abrasion-free, their caveat is that external dust will be the cause of any blemishes. If hard dust gets into these pockets, which is inevitable, it will occasionally cause light, hair-like scratches on the disc surface as the discs are removed and returned when accessing their contents. For DVDs this is of no consequence, as the laser and error correction can more than cope with it. I'm aware that the Blu-ray spec requires anti-scratch measures on disc surfaces, but I was wondering: given the smaller pits, would dust and the light scratches from wallet storage cause more problems with Blu-rays than they would with DVDs? I'm using Blu-ray BD-R and BD-R DL write-once media.


  • After updating to Windows 8.1, brightness isn't changing

    - by Bibo
    I just updated my Windows 8 to Windows 8.1 through the Windows Store, and I have some problems. My notebook is an Acer Aspire TimelineX 3820TG; I know it's a little old for Windows 8, but I installed it and it worked fine (I also upgraded my HDD to an SSD). Now that I've updated Windows, I have a problem with changing brightness on my dedicated graphics card (ATI HD 5650). I can change the brightness with the Fn keys, but it only changes the level in the OS; nothing actually changes on screen. I tried reinstalling the drivers, and installing Acer's drivers for the card with Windows 7 compatibility, but there was no change. When I switch to the integrated card, changing brightness works. I think the problem is with the drivers, but I don't know how to get it working. Thanks for the help.

    Bonus question (not so important): does anyone know what msietxghh.exe does? Every time my system boots (after the update), I get a message that this program stopped working; I just cancel it, and everything else seems to work fine.


  • Are there cloud network drives that let users lock files or mark them as "in use"?

    - by Brandon Craig Rhodes
    Having spent several hours reading about the features and limitations of services like Dropbox and Jungle Disk and the hundreds of competitors they seem to have (as though everyone with an AWS account these days goes ahead and writes a file sharing application just for fun), I have yet to find one that would let a team of people at a small business collaborate without stepping all over each other's toes. At a small business there are often many small documents per project (estimates, contracts, project plans, budgets), and team members frequently have to open and edit them, with all sorts of problems happening if two people edit a file at once. Even if a sharing service is smart enough to keep both versions of a concurrently edited file, most small-business software (like word processors, spreadsheets, estimating software, or billing systems) has no way to compare, much less merge, the changes in two rival versions of a file that two people edited at the same time without each other's knowledge. So, my question: are there cloud-based file sharing solutions that not only provide a virtual network drive that people can access, but also let users lock files (even if it's not a real lock, just a flag or indicator) that could prevent remote workers from editing the same file at once? Having one person wait for another to finish editing is a very small inconvenience compared to the hour or more it can take to compare two estimates by hand until you find and resolve the rival changes. Given this, I am surprised that almost none of the popular file sharing solutions seem to recognize this problem and provide a solution! Does anyone know of a service that does?

