Search Results

Search found 16560 results on 663 pages for 'far'.

  • Broke NetBeans file associations in Windows XP: how do I get them back?

    - by Serhiy
    Hello... I broke my file associations in XP. Does anyone have any clue how to fix them? When I right-click and select Open With..., the application I want to use (NetBeans) is not on the list, and when I browse for it, it won't let me select it (well, it does, but then it won't add it to the list). The way I broke it was by installing 6.7 and then uninstalling 6.5; since then my file associations have all been broken. I even tried uninstalling NetBeans and reinstalling it again, with no luck. I even went as far as adding my own action called "OpenIt" to the file types I wanted, and that works, but only if the path of the file/folder doesn't contain any spaces; otherwise NetBeans throws a "...does not exist, or is not a plain file" error. Thus nothing off the desktop can be opened. Does anyone know how I can fix this problem? Thanks.

  • Can I disable this Windows (XP) Security Warning?

    - by FumbleFingers
    I recently reformatted my hard drive and reinstalled Windows XP (I know I'll have to take the plunge and commit to Win8 "real soon now", but I'm just not quite ready for the upheaval yet! :) I used to use WinRAR (and later, when I got fed up with the "nag" messages, 7-Zip), but I haven't installed either of them in my new configuration, so I must be using the built-in XP facility when I open *.zip files. For years, I've been opening downloaded *.zip archives and using drag & drop to copy their contents to a File Explorer window open on the folder where I want the files to end up (usually My Documents\Downloads). But now I find that when I "drop" the file(s), I get a pop-up Windows Security Warning saying: "Are you sure you want to copy or move files to this folder? You should only move or copy files from locations that you trust." Can anyone explain why I'm getting this message, and is there any (reasonably easy, please! :) way to suppress it? Since I've already put the *.zip file on my computer, it seems a bit late to ask if I trust it. (Thus far, the files in question have always been plain text, so it's not a matter of dodgy programs, etc.) Apologies for the low quality image; I don't have the appropriate tools or knowledge to do any better, and it doesn't help that my "PrtScr" screen capture has included what would have been on my second monitor (TV) if it had been turned on. If you can't read it, trust me, I have copied the text verbatim.

  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted here: http://php.net/manual/en/function.flush.php but no success so far. I got a small script from php.net to test it:

        <?php
        ob_start();
        for($i=0;$i<70;$i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>

    This should print "printing..." to the browser 70 times, one line every 0.3 seconds. It works fine in my other testing environment, which is Windows-based (still Apache, the XAMPP package), but on my Linux server it doesn't: it waits for the script to finish before giving anything to the browser, basically ignoring the whole flush command. If anyone has experienced this before or knows of anything that could help (be it server configuration or adjustment to code), it would be greatly appreciated!
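
    If it helps to narrow this down, buffering or compression further down the stack is a common culprit on Linux. A rough checklist, assuming a Debian-style Apache 2 with mod_deflate (paths and module names may differ on your distribution):

        # check PHP's own buffering/compression settings
        php -i | grep -E 'output_buffering|zlib.output_compression|implicit_flush'

        # if mod_deflate is holding the response to compress it, try disabling it temporarily
        sudo a2dismod deflate
        sudo /etc/init.d/apache2 restart

    If output_buffering or zlib.output_compression is on in php.ini, the Windows/XAMPP box probably has them off, which would explain the difference between the two environments.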

  • System freezes during boot process

    - by slugster
    Hi everyone, I have a machine running Win7 Ultimate. It was running fine, then it just froze: all the stuff I was doing was still on the screen, but mouse and keyboard input was ignored, any animation on the screen stopped, the machine literally just froze. So I rebooted (power button). From then on the machine will reboot, but it ultimately freezes again. The point at which this happens varies: I have made it as far as the Windows login screen, but mostly it will do the POST, then give me the option to press F1 to continue or Del to enter BIOS settings (but of course pressing a key has no effect; it's frozen!). I have disconnected everything not necessary for the boot process; the only peripheral that remains attached is the keyboard (even the network cable is disconnected). Prior to this the machine was operating fine. The install of Win7 is only 2 days old, and it was a fresh reinstall (i.e. not an upgrade or repair). Can anyone give me an indication of what may be wrong here? I'm not sure if this question should be here or on SuperUser; please migrate it if I have chosen the wrong board.

  • SSO "Portal"

    - by Clinton Blackmore
    Pursuant to my question on alleviating the password explosion, I've contacted some of the services we pay to access their websites, to ask if we could authenticate our own users, and some of them said yes and sent me specs on how to do so. (One of the sites called such a system a "portal"; I've never heard the term used in quite that way.) It is simple enough that I am tempted to roll my own. The largest complication is that one site wants us to store a key for every user in our database (and I think the LDAP database makes sense) after their initial login. So, non-trivial, but doable. The nature of these sorts of tasks, I expect, is that if they start out small and simple, they don't end that way. There must be some software that addresses this that is readily extended, surely. In my searching, I've come across: SimpleSAMLphp, JOSSO, RubyCAS-Server, Shibboleth, Pubcookie, and OpenID. [Wow, gee. I'd missed some of those in my previous searches! The Wikipedia page on Central Authentication Services is useful, and the section on alternatives to OpenID makes it look like there is a lot of choice.] Can anyone recommend any of these, or suggest ones to avoid? Internally, we are authenticating using Apple's Open Directory [ == OpenLDAP + Kerberos + Password Server ]. As far as extending/tweaking/advanced configuration of a system goes, I am able to program in Python and C++, can do some basic PHP, and may be able to remember some Java. Looks like I need to pick up Ruby at some point. Addendum: I would also like users to be able to change their passwords over the web (and for certain users to change the passwords of other users).
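
    Whichever package wins, it will end up binding against the same directory, so it may be worth confirming plain LDAP authentication against Open Directory before layering an SSO server on top. A rough sketch with ldapsearch, where the host and base DN are placeholders for your own:

        # anonymous search to confirm the server and base DN are reachable
        ldapsearch -x -H ldap://od-server.example.com -b "dc=example,dc=com" "uid=testuser"

        # simple bind as the user to confirm the password checks out
        ldapsearch -x -H ldap://od-server.example.com \
            -D "uid=testuser,cn=users,dc=example,dc=com" -W \
            -b "dc=example,dc=com" "uid=testuser"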

  • MAMP Pro and xip.io: fixing URLs with .htaccess

    - by user3540018
    I've got all my websites set up with MAMP Pro. For instance, when I go to example.com, the browser displays the website that's set up on my iMac. Now I want to get MAMP Pro working so I can view my website on my other computers/devices (which are all hooked up on the same network). So far all I had to do is check the checkbox "via Xip.io (LAN only)", and now I can view my website on my other computers/devices within my LAN by simply going to example.com.10.0.1.13.xip.io. Problem is, whenever I'm on one of these other computers/devices and click a link, I get a 404 error. I.e. whenever I go to example.com/news, I get the 404, but when I go to example.com.10.0.1.13.xip.io/news, THEN I get the right page. So in order to solve my problem I need to rewrite the URLs: whenever someone clicks on a link, i.e. goes straight to example.com/news, he'll go to example.com.10.0.1.13.xip.io/news. I don't want to change all the links in my MySQL file, but I believe I can do it simply with the .htaccess file. I've opened the .htaccess file and added the last two lines, but it just doesn't work:

        <IfModule mod_rewrite.c>
        RewriteEngine On

        # Send would-be 404 requests to Craft
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
        RewriteRule (.+) index.php?p=$1 [QSA,L]

        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://www.example.com.10.0.1.13.xip.io/$1 [R=permanent,L]
        </IfModule>

    Or perhaps I don't need to change the .htaccess file; is there something that I could be missing in the MAMP Pro settings, or perhaps a MAMP extension that I need?
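
    For what it's worth, one thing that would keep those two added lines from ever firing: the Craft catch-all above them ends with [L], so any request that isn't a real file or directory is rewritten and rule processing stops before the host check is reached. A sketch of the same file with the redirect moved first (the host and IP are of course specific to this setup, and this only helps if requests for example.com actually reach this vhost):

        <IfModule mod_rewrite.c>
        RewriteEngine On

        # redirect bare-domain requests to the xip.io name first,
        # before the catch-all below stops rewrite processing
        RewriteCond %{HTTP_HOST} ^example\.com
        RewriteRule ^(.*)$ http://example.com.10.0.1.13.xip.io/$1 [R=permanent,L]

        # Send would-be 404 requests to Craft
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_URI} !^/(favicon\.ico|apple-touch-icon.*\.png)$ [NC]
        RewriteRule (.+) index.php?p=$1 [QSA,L]
        </IfModule>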

  • Utility to LOGICALLY compare two XML files?

    - by Matthew
    Right now we are attempting to build golden configurations for our environment. One piece of software that we use relies on large XML files to contain the bulk of its configuration. We want to take our lab environment, catalog it as our "golden configuration" and then be able to audit against that configuration in the future. Since diff does a bytewise comparison and NOT a logical comparison, we can't use it to compare files in this case (the XML elements can appear in any order, so it won't work). What I am looking for is something that can parse the two XML files and compare them element by element. So far we have yet to find any utilities that can do this. OS doesn't matter; I can do it on anything where it will work. The preference is something off the shelf. Any ideas? Edit: One issue we have run into is that one vendor's config files will occasionally mention the same element several times, each time with different attributes. Whatever diff utility we use would need to be able to identify either the set of attributes or identify them all as part of one element. Tall order :)
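
    Not a full answer, but one partial off-the-shelf approach is to canonicalize both files with xmllint (part of libxml2) before diffing. C14N normalizes attribute order, whitespace and namespace noise, though it deliberately preserves element order, so reordered elements will still show up as differences:

        xmllint --c14n golden.xml > golden.c14n
        xmllint --c14n current.xml > current.c14n
        diff -u golden.c14n current.c14n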

  • Collect temperature and fan speed with munin from Windows 7 PC?

    - by mfn
    Hi, I'm quite fond of munin and use it at home to monitor my PCs, too. What was super-duper easy under Linux seems pretty much unsolvable for me under Windows: I'd like to monitor CPU and motherboard temperatures as well as fan speed. On Linux I'm using lm-sensors, and the plugin for munin was basically already there. I already access some information from my Windows machine via SNMP (disk space, CPU usage, memory usage); the graphs are as simple as the information exposed via SNMP, but they do their job. But when it comes to temperature and fan speed I'm running against a wall. My research so far suggests that Windows does not provide an out-of-the-box way to retrieve temperature/fan speed data; third-party applications are necessary which know how to communicate with the motherboard chips. The best I came up with is that SpeedFan exposes a shared memory interface, and there exists a library which hooks into the Windows SNMP facility and bridges over to SpeedFan's shared memory interface; it's called SFSNMP (site currently down). Unfortunately the library doesn't work; there's a bug report about it open at SpeedFan, but it's currently not moving (although the SFSNMP author is active there). So, unless that's going to work anytime soon, are there any alternatives? I'm not fond of buying software to get this feature, given that I take it for granted that my system should expose the information needed to monitor it properly, but don't let that stop you from answering.
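
    When testing whether SFSNMP actually exposes anything, it can help to walk the SNMP tree from the munin host instead of trusting the Windows side; something along these lines, where the address and community string are placeholders:

        # walk the private/enterprise subtree and look for SpeedFan-provided values
        snmpwalk -v 1 -c public 192.168.0.10 .1.3.6.1.4.1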

  • Rsync push files from Linux to Windows: ssh issue, connection refused

    - by piyush c
    For some reason I want to run a script to move files from a Linux machine to Windows. I have installed cwRsync on my Windows machine and am able to connect to the Linux machine. When I execute the following command:

        rsync -e "ssh -l piyush" -Wgovz --timeout 120 --delay-updates \
            --remove-sent-files /usr/local/src/piyush/sync/* "[email protected]:/cygdrive/d/temp"

    where 10.0.0.60 is my Windows machine and I am running the above command on Linux (CentOS 5.5), I get the following error message:

        ssh: connect to host 10.0.0.60 port 22: Connection refused
        rsync: connection unexpectedly closed (0 bytes received so far) [sender]
        rsync error: error in rsync protocol data stream (code 12) at io.c(463) [sender=2.6.8]
        [root@localhost sync]# ssh [email protected]
        ssh: connect to host 10.0.0.60 port 22: Connection refused

    I have modified my firewall settings on Windows to allow all ports. I think this issue is due to an SSH daemon not being present on my Windows machine, so I tried installing OpenSSH and running ssh-agent, but that didn't help. I tried a similar command on my Windows machine to pull files from Linux and it works fine, but I want the command to run on the Linux machine so that I can embed it in a shell script. Can you suggest what I am missing?
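
    In case a Windows sshd stays painful: rsync can also run without ssh entirely, in daemon mode on the Windows side (the plain rsync binary that cwRsync ships supports --daemon). A rough sketch, with the module name made up for illustration:

        # rsyncd.conf on the Windows machine:
        #   [temp]
        #   path = /cygdrive/d/temp
        #   read only = false
        # then start it there with: rsync --daemon --config=rsyncd.conf

        # from the Linux side, no ssh involved (plain TCP, port 873):
        rsync -Wgovz --timeout 120 --delay-updates --remove-sent-files \
            /usr/local/src/piyush/sync/* rsync://10.0.0.60/temp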

  • Yahoo marked my mail as spam and says DomainKeys fails

    - by mGreet
    Hi, Yahoo is marking our mail as spam. We are using the PHP Zend Framework to send the mail, and the mail header says that the DomainKeys check failed:

        Authentication-Results: mta160.mail.in.yahoo.com from=mydomain.com;
          domainkeys=fail (bad sig); from=mydomain.com; dkim=pass (ok)

    We configured our SMTP server (the same server used to send mail from the Zend Framework) in Outlook and sent mail to Yahoo. This time Yahoo says DomainKeys passes:

        Authentication-Results: mta185.mail.in.yahoo.com from=speedgreet.com;
          domainkeys=pass (ok); from=speedgreet.com; dkim=pass (ok)

    The DomainKey signature is added to the mail header on our server, which is used by both the Outlook client and the PHP client. Yahoo accepts the mail sent from Outlook but not the mail sent from the PHP client. As far as I know, signing the email is done on the server side with the help of the domain key, and PHP and Outlook use the same server to sign the mail. So why does Yahoo handle them differently? What am I missing here? Any ideas? Can anyone help me?
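
    One thing worth ruling out is the published key itself. The selector is the s= tag in the DomainKey-Signature header of a failing message; you can then check that the record in DNS, which Yahoo fetches to verify the signature, matches the key your server signs with. A sketch, with "selector" standing in for the real one:

        # public key record Yahoo uses to verify the DomainKeys signature
        dig +short selector._domainkey.mydomain.com TXT

        # DomainKeys policy record, if any
        dig +short _domainkey.mydomain.com TXT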

  • Black screen on logon (Windows 7 Home Premium)

    - by Blacknight334
    I have been having some trouble with a Dell XPS 15 laptop that I recently purchased. It is under a month old, and a problem has occurred: upon logging on just after start up, the computer will just sit on a black logon screen (with the mouse still visible and active) for a few minutes. It is extremely annoying, especially when I'm in a rush. So far, I have tried updating the drivers and installing all Windows updates, and still nothing. Also, it doesn't seem to do it when I log into safe mode, or if it does, it does it for less than 10 seconds and then loads the desktop (in a normal boot, it usually takes a few minutes). I have also run a number of the built-in diagnostics but found no errors. I want to avoid having to do a system restore for as long as I can. Does anyone know anything that can help? (The laptop is running a 500 GB SSD, a 2 GB Nvidia 640M, 8 GB RAM, and a 3rd gen quad core i7 with 8 threads.) Thanks.

  • Is the exhaust fan necessary?

    - by Borek
    On my new PC, the component making the most noise is the rear exhaust fan on my case (it is the only exhaust fan in my PC). I tried disconnecting it and watched temperatures in SpeedFan: the CPU was usually at about 35C, peaking at about 50C when the system was under load, which doesn't look too bad. So I'm considering leaving the exhaust fan disconnected permanently, after which the computer is very quiet; the only noise-making components are the Arctic Cooling Freezer 7 Pro Rev. 2 (CPU fan) and the PSU fan (Enermax Pro 82+), both quiet enough as far as I can tell. (My GPU has a passive cooler.) Also, those two components are moving parts, so they will provide some airflow in the case, and, even better, the PSU fan sucks air out of the case, so it kind of is an exhaust fan in itself. Does anyone run with the exhaust fan disconnected? You don't have to tell me that it's always better to have more airflow than less; I know that, but noise is also a consideration for me, and temperatures around 40C should be fine, shouldn't they? (I might also consider getting a quieter case fan, but I'm specifically interested in your opinion on the no-exhaust-fan scenario.)

  • fcgiwrap listening on a Unix socket file: how to change file permissions

    - by user36520
    I have a web server (nginx) and a CGI application (gitweb) that is run with fcgiwrap to enable FastCGI access to it. I want the FastCGI protocol to take place over a Unix socket file. To start the fcgiwrap daemon, I run:

        setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    (this is a daemontools service). The problem is that my web server runs as the user www-data, not the user git, and fcgiwrap creates the socket fastcgi.sock with user git, group git, and read-only for everyone else. Thus nginx, running as www-data, can't access the socket. Apparently fcgiwrap is not able to set the permissions of Unix socket files, which is quite annoying. Moreover, if I manage to have the socket file exist before I run fcgiwrap (which is quite difficult, given that I did not find any shell command to create a socket file), it quits with the following error:

        Failed to bind: Address already in use

    The only solution I found is to start the server the following way:

        rm -f fastcgi.sock # Ensure that the socket doesn't already exist
        (sleep 5; chgrp www-data fastcgi.sock; chmod g+w fastcgi.sock) &
        exec setuidgid git fcgiwrap -s "unix:$PWD/fastcgi.sock"

    which is far from the most elegant solution. Can you think of anything better? Thanks
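
    In case it's useful, one way to avoid the sleep-and-chmod race is to let spawn-fcgi create the socket with the right owner, group and mode and then exec fcgiwrap. A sketch of a daemontools run script, assuming spawn-fcgi is installed and the service starts as root so it can set the socket's group:

        #!/bin/sh
        # spawn-fcgi sets up the socket (-U/-G/-M), drops to the git user (-u/-g),
        # and stays in the foreground (-n) for daemontools
        exec spawn-fcgi -n -s "$PWD/fastcgi.sock" \
            -M 0660 -u git -g git -U git -G www-data \
            -- /usr/sbin/fcgiwrap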

  • Trouble cloning a MacBook Pro hard drive

    - by Mirko Froehlich
    I am trying to upgrade the 250 GB hard drive in my MacBook Pro (early 2008 model) to a 750 GB drive. I have connected the new drive via an external USB enclosure. The drive is recognized fine; I can format it, etc. However, every time I try to clone the drive, I get input/output errors. Before the clone operation, I verify both the internal and the external drive using Disk Utility, and they both check out fine. After the clone operation, the external drive shows multiple "Invalid node structure" errors. I have tried two approaches for cloning the drive: using Disk Utility, by starting from the OS X install DVD, and using Carbon Copy Cloner. The outcome is the same in both cases. The Carbon Copy Cloner logs show a handful of the following types of errors:

        rsync: mkstemp "<... an external filename ...>" failed: Input/output error (5)
        rsync: stat "<... an external filename ...>" failed: Input/output error (5)

    The actual files affected seem to differ across runs of the application. Before the last run, I used Disk Utility to (once more) reformat the external drive and explicitly overwrite it with zeros, but this made no difference. I also tried running a surface scan in Tech Tool Pro overnight. It got about 2/3 of the way through before I had to disconnect the drive (I had to take my MacBook Pro to work), but so far it hasn't reported any bad blocks. Assuming it scans the drive in the same order in which blocks would be allocated during actual use, it seems like if bad blocks were to blame for the clone failures, they should have been found already (given that the source drive is only 250 GB). As a last attempt, I may try SuperDuper as well, although my understanding is that it uses the same underlying rsync approach as Carbon Copy Cloner, so it's unlikely to perform any better. Are there any other things I should try before I send the drive in for a replacement? Could these problems be caused by my internal drive, even though it works fine and checks out fine in Disk Utility?
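
    Given that reads keep failing mid-copy while both volumes verify clean, it may be worth checking the source drive's SMART state before blaming the new disk; Disk Utility's verify pass doesn't read every block. A sketch using smartmontools (installable via MacPorts; note that many USB enclosures don't pass SMART through, so run it against the internal drive):

        # list disk identifiers, then pull the health report for the internal drive
        diskutil list
        smartctl -a /dev/disk0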

  • Using virtualization infrastructure for J2EE application distribution: a viable alternative?

    - by Dan
    Our company builds custom J2EE web solutions. At the moment, we use standard J2EE distribution mechanisms (ear/war archives). Application servers are generally administered by our clients' IT departments, and since we do not have complete control over the environment, a lot of entropy can be introduced into the solution. For example:

        - the latest app server patch is not applied
        - conflicting third-party libraries sit inside the app server root
        - server runtime and tuning parameters are not configured (for example, the number of connections in the database pool)

    We are looking into using virtualization infrastructure for J2EE application distribution. Instead of sending the ear/war archive, we'd send an image with an application server node and our application preinstalled. Some of the benefits are the same as with virtualization infrastructure in general, namely better use of hardware resources. For us, it reduces the entropy of the hosting infrastructure: a distributed VM should be less affected by the hosting environment. So far, the downside I see is in application server licenses; clients will have to use dedicated servers for our solution, but this is generally already done that way. Also, there is the added complexity of maintaining virtualization infrastructure, but this is often something IT departments have more experience with than administering and fine-tuning J2EE solutions. Does anyone have experience with this model? What are the downsides? Will we not just replace one type of complexity with another?

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL, etc. I've found a few different options, but I'm not sure which would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install, to keep the VM complexity down. The options I've found so far are:

        - syslogd
        - syslog-ng
        - rsyslog
        - syslogd/syslog-ng/rsyslog forwarding to logstash/ElasticSearch
        - a logstash agent in each log "client", sending to Redis/logstash/ElasticSearch

    and all sorts of permutations of the above. What's the most resilient and lightweight from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. Also, I would still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
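
    On the resilience point: rsyslog's disk-assisted action queues cover exactly the "don't hang when the server is down" case, and since forwarding is just an extra action, local files and logrotate stay untouched. A minimal client-side sketch (legacy directive syntax; loghost is a placeholder):

        # /etc/rsyslog.d/50-forward.conf
        $ActionQueueType LinkedList      # in-memory queue...
        $ActionQueueFileName fwdq        # ...spilling to disk when needed
        $ActionQueueMaxDiskSpace 100m
        $ActionQueueSaveOnShutdown on
        $ActionResumeRetryCount -1       # retry forever instead of blocking
        *.* @@loghost:514                # @@ is TCP; a single @ would be UDP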

  • Understanding NFS4 exports and pseudofilesystem

    - by Trevor Harrison
    I think I understand how pre-NFSv4 exports work, specifically the namespace of the exported point (i.e. export /mnt/blah on the server, then mount server:/mnt/blah /my/mnt/point on the client). However, I'm having a hard time wrapping my head around NFSv4 exports. What I've been able to gather so far is that you export a 'root' by marking it with fsid=0, which you then import on the client side by referring to it as '/' (i.e. exportfs -o fsid=0 /mnt/blah on the server, mount server:/ on the client). However, after that, it gets a little weird. From my playing around, it seems I can't export anything else that's not under /mnt/blah. For example, exportfs /home/user1 fails when trying to mount from the client unless /mnt/blah/home/user1 exists on the server. If this is the case, what is the difference between exportfs /mnt/blah/subdir1 on the server plus mount server:/subdir1 on the client, versus just skipping the exportfs and mounting whatever subdir of /mnt/blah you want? Why would you need to export anything other than the root? It's all in the same namespace anyway.
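
    That matches how v4 is meant to work: everything hangs off the single pseudo-root. The usual trick for exporting directories that live elsewhere is to bind-mount them under the root and export them with nohide; a sketch, assuming /export as the root:

        mkdir -p /export/home/user1
        mount --bind /home/user1 /export/home/user1

        # /etc/exports
        # /export              *(ro,fsid=0)
        # /export/home/user1   *(rw,nohide)

        # client side: paths are relative to the fsid=0 root
        mount -t nfs4 server:/home/user1 /my/mnt/point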

  • What parts of a motherboard age, and how can I choose one with the longest possible life?

    - by Robert Harvey
    I have a home-built computer that's probably about four years old. I realize this probably seems ancient to some folks, but computers have few moving parts (except the fans), so theoretically they should last a long time, as long as I still have software to run on them. A few weeks ago, it began blue-screening and freezing up with various error messages, almost always about five minutes after startup. I assumed that the video card was overheating, since the cheap little fan on its heatsink had died, so I replaced it. Long story short, after upgrading the video drivers a couple of times and performing some other troubleshooting, I remembered that the last time this happened, I took out the memory modules and cleaned the contacts with a gum eraser, so I did that again (noting that the SATA cables were very close to the chips on the modules). I re-routed the cables and reinstalled the modules. So far, so good; the machine has been trouble-free since. But blue screens are distressing; I never know what bits are being chewed up in my OS installation when something like this happens. So I'm wondering if I'm choosing my components properly. If it matters, it's an Intel D915GAG motherboard and Corsair memory. What I'm wondering is: should I be looking for certain characteristics when I choose these parts for my next computer, so that I can avoid this problem in my next build?

  • Ubuntu second static IP, ifconfig, /etc/network/interfaces

    - by Schmoove
    I would like to add a second static IP to my local Ubuntu 11.10 desktop machine and have it automatically available after rebooting. So far I can successfully use ifconfig to temporarily set up an alias for my primary network interface:

        # ifconfig eth1:0 192.168.178.3 up
        # ifconfig
        eth1      Link encap:Ethernet  HWaddr c8:60:00:ef:a3:d9
                  inet addr:192.168.178.2  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::ca60:ff:feef:a3d9/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:61929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:64034 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:45330863 (45.3 MB)  TX bytes:28175192 (28.1 MB)
                  Interrupt:42 Base address:0x4000

        eth1:0    Link encap:Ethernet  HWaddr c8:60:00:ef:a3:d9
                  inet addr:192.168.178.3  Bcast:192.168.178.255  Mask:255.255.255.0
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  Interrupt:42 Base address:0x4000

    However, when I add the following to /etc/network/interfaces, the alias is not up and running as expected after a reboot:

        # vi /etc/network/interfaces
        auto eth1:0
        iface eth1:0 inet static
            address 192.168.178.3
            netmask 255.255.255.0

    I would like to know what to configure to get this to work. As a side note, I am running GNOME Shell.
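
    For comparison, a stanza that reliably brings the alias up at boot usually declares the parent interface too; and if eth1 itself is managed by NetworkManager (likely on a GNOME desktop), ifupdown may skip the alias entirely. A sketch, with the gateway address assumed:

        auto eth1
        iface eth1 inet static
            address 192.168.178.2
            netmask 255.255.255.0
            gateway 192.168.178.1   # assumed; use your router's address

        auto eth1:0
        iface eth1:0 inet static
            address 192.168.178.3
            netmask 255.255.255.0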

  • What can cause PowerShell execution policy not to be taken into account?

    - by Stephane
    We have in our infrastructure a number of PowerShell scripts used for various tasks, ranging from user logon scripts to tools that let a support technician simulate a user context. These scripts are centralized on our file server (through DFS) for easier management. Some of them run at logon; some run through published Citrix applications. We have applied a policy for the whole domain and all users that sets the PowerShell execution policy to "unrestricted" so that the scripts can run from the file server. This works perfectly fine for logon scripts (at least, so far), but for scripts that run later (usually through a published application, though the same applies when using Terminal Services and a full desktop), the results are inconsistent: some users can run the scripts fine, and some are always prompted in the PowerShell console to allow the scripts to run. I cannot find anything that could cause this behavior, and it's really inconsistent: if I start PowerShell manually and run get-executionpolicy, I am told that the current policy is unrestricted. Yet, if from the same session I try to run a script through a program that calls powershell <script file name> <parameters>, I get prompted before the script can run. What could cause such behavior?

  • Cannot establish XMPP server-to-server connection to gmail

    - by v_2e
    My Jabber server fails to connect to gmail.com, giving the error:

        outgoing s2s stream myserver.com.ua->bot.talk.google.com closed: undefined-condition
        (myserver.com.ua is a Google Apps Domain with Talk service enabled.)

    I am using the Prosody XMPP server. It works just fine with the other Jabber servers I have tested so far (e.g. jabber.ru). However, when one of my clients tries to add a gmail contact to his contact list, the subscription request lasts forever, and Prosody gives the following sequence of messages in its log:

        Oct 21 22:57:16 s2sout95897f8 info Beginning new connection attempt to gmail.com ([173.194.70.125]:5269)
        Oct 21 22:57:16 s2sout95897f8 info sent dialback key on outgoing s2s stream
        Oct 21 22:57:16 s2sout95897f8 info Session closed by remote with error: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info outgoing s2s stream myserver.com.ua->gmail.com closed: undefined-condition (myserver.com.ua is a Google Apps Domain with Talk service enabled.)
        Oct 21 22:57:16 s2sout95897f8 info sending error replies for 2 queued stanzas because of failed outgoing connection to gmail.com

    Here I use myserver.com.ua for the domain name of my server. I found a similar problem described in this thread, but there is no detailed description of the solution there. As for the Google services, I did have a Google account where I added the domain name in question to the Webmaster Tools page. However, I deleted that account long ago, so it is now unclear how any of the Google services can relate to my domain name. So my question is: what is the real cause of this problem (my Jabber server configuration, the long-gone Google account, or something else), and how can I make my Prosody server connect to the gmail.com Jabber service?
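
    Since the error names Google Apps, it may be worth double-checking what DNS tells both sides; Google also refuses dialback when the SRV records for a domain don't line up with where the connection actually comes from. A sketch of the two lookups:

        # where your server will try to reach Google's s2s port
        dig +short _xmpp-server._tcp.gmail.com SRV

        # what Google sees when it dials back to your domain
        dig +short _xmpp-server._tcp.myserver.com.ua SRV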

  • WEIRD netstat behavior on Windows XP re: www.partypoker.com

    - by tbone
    I really don't know if this is the right place to ask this, but I would really appreciate it if someone more savvy about Windows XP (Professional) could help me out. For background, I have been a programmer for 10+ years, so I'm not a total idiot, but I am far from an expert on TCP/IP etc., and this has me totally confused. When I run netstat (on Windows XP), I always seem to see a huge number of connections to www.partypoker.com, and I can't figure out where they are coming from. netstat -o shows me that some are coming from PID xxx, which is Firefox, but if I kill it, the connections remain. Some are coming from PID 0, which makes no sense to me. SECOND PROBLEM: One would think you could edit the C:\WINDOWS\system32\drivers\etc\hosts file to block this, but my machine seems to be ignoring the hosts file! (I have tried with the DNS Client service both enabled and disabled, same result.) So I just rebooted, killed all my normal programs, and now I can't seem to reproduce the problem. If I were a paranoid person, I would think there was some sort of intelligent trojan running. I am running Windows XP Pro, Kaspersky Antivirus, CCleaner, and am fully up to date on Windows Update. What gives? So, I guess my questions are:

        1. Is anyone else seeing these weird connections to partypoker.com?
        2. Why isn't my hosts file working?
        3. Is there some utility I can run to find out what's happening?

    I've tried autoruns.exe from Sysinternals but don't see anything interesting. Am I the only one with this problem? If there are any additional things you need me to run, let me know.

  • How to change libcurl SSL backend from gnutls to openssl on Ubuntu server

    - by Jayesh
    I am getting gnutls-specific errors in my Tornado web server while processing Google OpenID SSL responses. One of the suggestions I got from the Tornado mailing list is to try the OpenSSL backend instead of GnuTLS, but that doesn't seem to be straightforward on Ubuntu Server (11.10). On Ubuntu Server, the GnuTLS flavour of curl is provided by the libcurl3-gnutls package, and OpenSSL curl support is provided by the libcurl4-openssl-dev package. (I don't know why the latter is named 4 and dev, but I couldn't find any other openssl+curl package in apt-cache search.) I had libcurl3-gnutls installed by default, but not libcurl4-openssl-dev, so I installed the latter and restarted the Tornado instances. But that didn't seem to work; I still got the same gnutls errors. I found old discussions on the curl mailing lists about the problems of supporting different SSL backends in libcurl, but didn't find exactly how it is done today. So far my guess is that OpenSSL support is built into libcurl and GnuTLS is provided through a separate package (which would explain why there is no libcurl3-openssl). But how do I make libcurl pick up the OpenSSL backend and not GnuTLS? Is there some option in the libcurl/pycurl API to do this? I tried uninstalling libcurl3-gnutls, but apt-get prompted that it would also remove python-pycurl along with it, so that won't do.
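
    The backend gets baked in when pycurl is compiled against a particular libcurl, so after installing the OpenSSL flavour of the headers, pycurl itself has to be rebuilt; a sketch of the rebuild and the check:

        sudo apt-get install libcurl4-openssl-dev
        sudo pip uninstall pycurl
        sudo pip install pycurl       # rebuilds against the new headers

        # the version string names the SSL backend, e.g. "... OpenSSL/1.0.0e"
        python -c "import pycurl; print(pycurl.version)"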

  • Per-connection bandwidth limit

    - by Kyr
    Apparently, our server box running Windows Server 2008 R2 has a per-connection bandwidth limit of 0.2 MB/s. Meaning, while one TCP connection can pull at most 0.2 MB/s, 60 parallel connections can pull 12 MB/s. We first noticed this when trying to check out a large SVN repository from this server. I used a simple Java application to test it, transferring data from the server to a workstation using a variable number of threads (one connection per thread). The server part of the application simply writes a 1 MB memory buffer to the socket 100 times, so there is no disk involvement. Each connection topped out at 0.2 MB/s, and the per-connection limit was the same for one connection as for 60 parallel connections. The problem is that I have no idea where this limit comes from. I have very little experience administering Windows Server, so I was mostly trying to find something by googling. I have checked the following:

        - Local Computer Policy > QoS Packet Scheduler > Limit reservable bandwidth: it's Not configured
        - Group Policy Management Console: we have two GPOs, but neither has any policy-based QoS defined
        - there isn't any bandwidth limiter program installed, as far as I can tell; we're using the standard Windows Firewall

    I can update this question with any additional information if needed.

  • Cannot find network path for computer in workgroup of home Windows XP PCs

    - by John Galt
    VMware Workstation 6.5 is running as an app on a Windows Vista 64-bit PC host. Thanks to Workstation we have 2 guest machines running: TerriVM and MattVM (both run Windows XP SP2). We are attempting to get virtual networking configured so we can access the files of both of these VM guests from other real PCs connected to this home network. We think we are close, but we can't quite get it right... Here is what we've done so far:

        - In VMware Workstation, we set "Host Virtual Network Mapping" to use VMnet0 with the setting "Bridge to an automatically chosen adapter".
        - On each VM guest (using Windows Explorer on XP), we right-click the C disk, click the "Sharing" tab, set the share name to "C_Disk" and check both boxes labeled "Share this folder on the network" and "Allow network users to change my files".

    Symptoms: On the "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\TerriVM\C_Disk into the Folder textbox and assign drive letter T. We see all the folders on this shared drive and can open files on them, so that is good. On the same "JohnsRealXP" PC, we go to Windows Explorer, My Computer, Map Network Drive, type \\MattVM\C_Disk into the Folder textbox and assign drive letter M. We get a message box: "The network path \\mattvm\C_Disk could not be found". Alternatively, we type just \\mattvm\ into the Folder box and click "Browse", and get a dialog box where we drill down from "Entire Network" to "Microsoft Windows Network" to "Workgroup", where both TerriVM and MattVM are listed as computers on the network. Clicking the + sign next to MattVM gives an hourglass, never enables the OK button, and I have to cancel. In summary, I think we've attempted to share both of these virtual machines using the same techniques and connect to them in similar fashion, but one connects properly and the other can be seen, yet none of its shared resources can be accessed. Can anyone suggest something we've possibly overlooked or something to try? Thanks so much in advance.
