Search Results

Search found 54098 results on 2164 pages for 'something broken'.


  • AWS Linux EC2: yum won't run with plugins

    - by Patrick
    Short Version: yum commands on my Amazon Linux EC2 AMI only work with --noplugins. Long Version: A couple of days ago, I ran yum update at the behest of the SSH login MOTD telling me I had updates to install. About midway through the update (specifically while updating the kernel), the update abruptly ended (79 of 138 items completed). The website I host on EC2 got weird for a few minutes, but eventually seemed to stabilize (maybe EC2 restarted itself?), and I didn't have further issues (other than MySQL starting to run out of memory, but I think that's probably unrelated to this). Today, I went to install gcc-c++ (with yum install gcc-c++). When I did, I got the following message:

        Loaded plugins: priorities, security, update-motd, upgrade-helper
        Config error: Command "updateinfo" already defined

    and I get that for any command I can think to run using yum. However, if I throw in the --noplugins flag, then magically it seems to work. To be clear, when I installed a different package a week ago, it worked totally correctly, so the yum update is the only thing I can think of that changed. I could find nothing on Google with regard to "updateinfo" already defined (with and without quotes). I tried running yum update --noplugins, which spat out a message telling me that I should have run yum-complete-transaction instead, but proceeded to try to update something on its own. When that completed, I tried yum-complete-transaction, but that gave me a message about the transactions not lining up correctly, so it removed the old transaction (probably since I should have completed the first transaction before trying to update again, had I known). Based on the SF question "Linux EC2 Broken Yum", I've also tried yum clean all --noplugins (it fails the same way with plugins), which just gives me:

        Cleaning repos: amzn-main amzn-updates rpmforge
        Cleaning up everything

    I also tried package-cleanup --problems:

        Loaded plugins: priorities, update-motd, upgrade-helper
        No Problems Found

    and package-cleanup --dupes gives a lot of dupes, so I pasted them here: http://pastebin.com/VVFQEkTT instead of inline. At this point, I'm not sure what else there even is to check.
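
    A minimal recovery sketch, assuming the duplicate "updateinfo" command comes from the security plugin colliding with a newer yum that provides updateinfo natively (paths and package names may differ on Amazon Linux):

        # Find which plugin files define the colliding command:
        grep -rl updateinfo /usr/lib/yum-plugins/
        # If the security plugin turns up, newer yum provides "updateinfo" itself,
        # so removing the plugin may clear the collision (assumption):
        yum --noplugins remove yum-plugin-security
        # Then clear the interrupted kernel-update transaction:
        yum-complete-transaction --cleanup-only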

    Read the article

  • Adding add-ins to Excel - strange messages

    - by Jacob
    I am using Excel 2010 and 2013. I would like to add an Excel add-in from the page http://xlloop.sourceforge.net/ . There is a file named xlloop-0.3.2 with the extension Microsoft Excel XLL Add-In. I added this file from the menu: File - Options - Add-Ins - in the Manage combobox I chose Excel Add-Ins - Go... - Browse, and I chose my file. I see the following message: "C:\...\xlloop-0.3.2.xll" is not a valid add-in. So I made a second attempt: I went to the menu File - Open and chose my file. I see the message: The file you are trying to open, "xlloop-0.3.2.xll", is in a different format than specified by the file extension. Verify that the file is not corrupted and is from a trusted source before opening the file. Do you want to open the file now? After I clicked Yes, I see a lot of strange characters (something like Chinese :)). My last attempt was double-clicking the file. I see: The file format and extension of "xlloop-0.3.2.xll" don't match. The file could be corrupted or unsafe. Unless you trust its source, don't open it. Do you want to open it anyway? After clicking Yes, I see something like the second attempt. I am really very confused, because some of my friends have the same version of Excel and they don't get these messages. Do you have any idea where the problem in my Excel is? I really need this add-in to work with Java. I will be very grateful for your help! Thanks in advance!
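
    For what it's worth, the "Chinese" characters are expected: an .xll is a compiled DLL, not a document, so File - Open just renders binary data as text. "Not a valid add-in" is often a 32/64-bit mismatch between Excel and the XLL; a hedged way to check, from Cygwin or Git Bash (output wording varies):

        file xlloop-0.3.2.xll
        # "PE32 executable (DLL)"  -> 32-bit XLL, needs 32-bit Excel
        # "PE32+ executable (DLL)" -> 64-bit XLL, needs 64-bit Excel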

    Read the article

  • Issues installing Apache on Debian

    - by Belgin Fish
    I'm having issues installing apache2, and pretty much everything in general. I'm using Debian. I run sudo apt-get install apache2 and it returns:

        root@debian:~# apt-get install apache2
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         apache2 : Depends: apache2-mpm-worker (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-prefork (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-event (= 2.2.16-6+squeeze7) but it is not going to be installed or
                            apache2-mpm-itk (= 2.2.16-6+squeeze7) but it is not going to be installed
                   Depends: apache2.2-common (= 2.2.16-6+squeeze7) but it is not going to be installed
        E: Broken packages

    Not really sure what's up :S It seems like it can't find any of the required packages for anything. Anyone know what I'm doing wrong?
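
    A hedged diagnostic sketch, assuming the usual cause (a mirror serving a partially updated squeeze tree, or squeeze having moved to the archive):

        apt-get update
        # compare where each dependency would come from:
        apt-cache policy apache2 apache2.2-common apache2-mpm-worker
        # squeeze has been archived; if the mirrors 404, point sources.list at
        # archive.debian.org (an assumption about this box's mirrors) and retry:
        apt-get install apache2.2-common apache2-mpm-worker apache2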

    Read the article

  • Router recommendation to virtualize 800 IPs

    - by delerious010
    I've recently been looking at getting some new load balancers for our environment, as we are expecting to double our client base in the next 12 months. Currently we have 400 public IPs serving 800 clusters (2 clusters/IP due to ports) on Coyote Point balancers, distributing connections to 3 web servers serving about 6 GB outgoing, 2 GB incoming per day. If we double, this would be about 800 IPs, possibly 1600 clusters, and about 6 servers per cluster (for a total of 9600 so-called "real servers", using Barracuda's lingo). Due to the number of clusters, most vendors I've looked at (Coyote, Barracuda, Loadbalancer.org) seem unsure whether they'll be able to handle our planned growth, mostly due to the health checks performed on the servers... which makes total sense when you think of it. So the fine folk at loadbalancer.org recommended that we may be better off offloading the 400-800 public IPs, which we require for SSL eCommerce solutions, to a forward-facing router. From that point on, the router could do some mangling to route EXT_IP:443 to INT_IP:INT_PORT, which would then allow us to reduce the load balancer configuration to 1 or 2 clusters, thus resolving the health check problem. Does this idea make sense to y'all? Or would you have other recommendations to make? Secondly, what router would you recommend for such an undertaking? I'd be looking at something that has some form of failover mechanism built in. On a totally unrelated note, I've got to admit that I'm extremely pleased with the responses I got from loadbalancer.org. Their responses to my inquiries were surprisingly helpful (i.e. I didn't feel as if I was talking to a sales guy trying to push something). (No, I don't work for them, and sadly nor are they sending me free gear.)
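
    For illustration only, the EXT_IP:443 -> INT_IP:INT_PORT "mangling" described above amounts to one DNAT rule per public IP on a Linux router; the addresses and ports below are hypothetical:

        # one rule per public IP/cluster pair (generated from a table in practice):
        iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 443 \
                 -j DNAT --to-destination 10.0.0.5:8443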

    Read the article

  • Windows 7 64-bit installation from alternative media (no DVD/USB Flash drive)

    - by Niels Willems
    Greetings! I currently have Windows 7 x86 installed on my computer and want to install Windows 7 x64 on a different partition. However, there is a little issue: I cannot run the x64 installer from within the Windows 7 x86 I currently have. I was planning to install Windows 7 x64 on another partition, then boot from that partition to install it on the partition I actually want my OS on. Once that was complete, I could just format the partition holding the Windows 7 x64 I no longer needed. But the installer will not run from the x86 version of Windows 7, even though I do not want to upgrade that Windows directly. The reason I'm doing this in such a weird way is that my optical drive is broken, and I'm really not into buying a new one since I would use it like once every year. I also don't have a USB flash drive big enough to hold the installation files. As far as I'm aware, I cannot use an external hard drive such as this one, which I do have. Are there any alternative ways to install Windows 7 x64, or am I forced into buying a USB flash drive or a new optical drive? Thank you in advance for your replies. Edit: This picture shows the current partitions on my laptop. I want to get Windows 7 x64 on the C partition, but I have to install it first on the F partition, then boot up the F partition Windows to format C and install x64 on that one. My external drive is J. Edit 2: No alternative computer with a DVD drive; the install files are located on an ISO from MA3D. To install my 32-bit version I mounted the ISO in Daemon Tools to replace my Windows Vista, but since I cannot run the 64-bit installer from my 32-bit OS, this doesn't work.

    Read the article

  • How to recover a Linksys WRT54GL router that has a blinking green power LED and no response from the web interface

    - by Peter Mounce
    I was flashing the router with the Tomato firmware, but something went wrong; I'm not sure what. Now the router responds to ping at 192.168.1.1 (my Mac's on a static IP, 192.168.1.21), but the web interface doesn't come up. I have read in a couple of places that this situation is recoverable, but I haven't been having much success, and so I wondered whether anyone could help. From my Mac (OS X 10.5) I have tried to tftp a new vanilla Linksys firmware to the router and reboot; according to the trace, this sends it, but the router behaves no differently after a reboot. I've read that if boot_wait is turned on, I'll have an easier time, but I haven't been able to find any instructions that tell me how I can tell whether I turned it on or not (I don't think I have, but I might have when I tinkered the first time months ago - the router has worked since then, though). I have found a couple of references to something called JTAG, which seems to be some kind of homebrew diagnostic cable, but that's a little beyond my ken. Happy to try it, with muppet-level instructions, though (I do software, not hardware!). So I'm at a bit of a loss, really, and wondered whether anyone could provide me with the route (ha. ha.) out of this mess? Hm, I can't post all the links I wanted to until I have some more reputation.
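
    For reference, the usual boot_wait-style rescue is to push a vanilla firmware image over TFTP in the first seconds after power-on; a sketch from the Mac's Terminal (the firmware filename is hypothetical):

        tftp 192.168.1.1
        tftp> binary
        tftp> rexmt 1
        tftp> timeout 60
        tftp> put WRT54GL_v4.30.11_fw.bin
        # power-cycle the router just before issuing "put" so the transfer
        # lands inside the short boot-loader window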

    Read the article

  • Recovering a damaged microSDHC

    - by djechelon
    I just bought from eBay a Kingston 32GB microSDHC that was advertised as defective. The seller said that there could be formatting problems or problems transferring large files. Unfortunately, when I got it, it was a total mess:

    - My Nikon camera doesn't read it at all (OK, maybe it doesn't support 32GB)
    - My Linux laptop doesn't mount it: can't read superblock
    - The same laptop refuses to mkfs.msdos because it failed whilst writing the reserved sector
    - The same laptop, under Windows, doesn't read or format the card
    - My HTC HD2 mounts the MMC and lets me write to it via USB, but it is unable to open the just-written files

    OK, folks, now you would say I should go through a Paypal complaint... that's not that easy. I consciously bought a half-price card that was known to show some defects, and Paypal complaints take time. Obviously, I can't accept that somebody sold me a completely useless computer decoration, so I'll keep that as a last option. My question is: do you know a way, under either Linux or Windows, to thoroughly scan, test and possibly repair memory cards, even if I have to lose some percentage of space to bad sectors? If I can keep at least half of the card intact, it would certainly be fine. I used to do bad sector marking with hard disks in the past. I almost forgot:

        MONSTR:/home/djechelon # fsck /dev/mmcblk0p1
        fsck from util-linux-ng 2.17.2
        dosfsck 3.0.9, 31 Jan 2010, FAT32, LFN
        Read 512 bytes at 0:Input/output error
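
    One hedged Linux approach is a destructive surface scan followed by building a FAT filesystem around the recorded bad areas (this erases the card; verify the badblocks block size against what mkdosfs expects before relying on it):

        # destructive scan; -o records the bad-block list:
        badblocks -wsv -o /tmp/mmc-bad.txt /dev/mmcblk0p1
        # build FAT32 around the recorded bad blocks:
        mkdosfs -F 32 -l /tmp/mmc-bad.txt /dev/mmcblk0p1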

    Read the article

  • Combining URL mapping and Access-Control-Allow-Origin: *

    - by ksangers
    I am in the process of migrating an old banner system to a new one, and in doing so I want to rewrite the old banner system's URLs to the new one. I load my banners via an AJAX request, and therefore I require Access-Control-Allow-Origin to be set to *. I have the following VirtualHost configuration:

        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName banner.studenten.net

            # we want to allow XMLHTTPRequests
            Header set Access-Control-Allow-Origin "*"

            RewriteEngine on
            RewriteMap bannersOldToNew txt:/home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map
            # check whether a zoneid exists in the query string
            RewriteCond %{QUERY_STRING} ^(.*)zoneid=([1-9][0-9]*)(.*)
            # make sure the requested banner has been mapped
            RewriteCond ${bannersOldToNew:%2|NOTFOUND} !=NOTFOUND
            # rewrite to ads.all4students.nl
            RewriteRule ^/ads/.* http://ads.all4students.nl/delivery/ajs.php?%1zoneid=${bannersOldToNew:%2}%3 [R]
            # else 404 or something

            ErrorLog ${APACHE_LOG_DIR}/banner.studenten.net-error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/banner.studenten.net-access.log combined
        </VirtualHost>

    My map file, /home/user/banner.studenten.net/banner-studenten-net-to-ads-all4students-nl-map, contains something like:

        # oldId newId
        140 11
        141 12
        142 13

    Based on the above configuration I was expecting the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Access-Control-Allow-Origin: *
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    But instead I get the following:

        GET /ads/ajs.php?zoneid=140 HTTP/1.1
        Host: banner.studenten.net

        HTTP/1.1 302 Found
        ...
        Location: http://ads.all4students.nl/delivery/ajs.php?zoneid=11

    Note the missing Access-Control-Allow-Origin header; this means the XMLHttpRequest is denied and the banner is not displayed. Any suggestions on how to fix this in Apache?
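
    A likely culprit, hedged: by default mod_headers' Header set applies only to successful (2xx) responses, and the 302 generated by the RewriteRule's R flag is not one of them. The "always" table also covers non-2xx responses:

        Header always set Access-Control-Allow-Origin "*"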

    Read the article

  • Is there a way to stop the Wacom Control Panel from failing to show in certain cases?

    - by S.gfx
    This is the problem: suddenly, you double-click the Wacom tablet settings icon on the desktop, and it won't show the dialog. It appears to be loaded, as it's shown down in the Windows taskbar. I suspect it's caused by a change of resolution or some setting; maybe it suddenly sets the origin of the panel dialog at some 3000 pixels to the right or something. I have dug through that wacom_tablet.dat file to see if I could fix it by changing some value there, but it looks more like a log, a history, than an ini for settings... and anyway that did not solve it. My workaround is always having a very complete settings file backed up to restore (with the Wacom utility for this, which in previous driver versions did not exist) when this happens, but it is counterproductive: you change the settings per project (and per software). I have seen it happen with a Cintiq 12", an Intuos4 A6, Graphires, an Intuos 1. Is it just me, doing something weird every time? I doubt it; this is normal use. I am amazed that no one else seems to have had this problem (or nobody asked), since it happens often with typical use. Maybe it's because I'm setting a shortcut on the desktop? Weird, as it works perfectly until some random moment... (I have dug through Wacom's forums and FAQs, then here, but found nothing related to it... neither in "related questions".) This happens in Windows XP, 7, etc. It has done so for years in my experience, and several times at work, which is worse.

    Read the article

  • PhpMyAdmin Missing parameter:

    - by Ali
    Everything was working fine until this moment. I want to create a new database and I'm receiving this error:

        db_create.php: Missing parameter: new_db (FAQ 2.8)

    Also, when I try to export my database I receive the following errors:

        export.php: Missing parameter: what (FAQ 2.8)
        export.php: Missing parameter: export_type (FAQ 2.8)

    When I looked at FAQ 2.8 from the suggested link in phpMyAdmin ("2.8 I get 'Missing parameters' errors, what can I do?"), here are the points it says to check:

    - In config.inc.php, try to leave the $cfg['PmaAbsoluteUri'] directive empty. See also FAQ 4.7.
    - Maybe you have a broken PHP installation or you need to upgrade your Zend Optimizer. See http://bugs.php.net/bug.php?id=31134.
    - If you are using Hardened PHP with the ini directive varfilter.max_request_variables set to the default (200) or another low value, you could get this error if your table has a high number of columns. Adjust this setting accordingly. (Thanks to Klaus Dorninger for the hint.)
    - In the php.ini directive arg_separator.input, a value of ";" will cause this error. Replace it with "&;".
    - If you are using Hardened-PHP, you might want to increase request limits.
    - The directory specified in the php.ini directive session.save_path does not exist or is read-only.

    I did check php.ini to make sure that I have session.save_path = "/tmp". I'm using a Mac Xserve and running with MAMP. I tried restarting my server and nothing helped. Please, can anyone give me some suggestions? Apologies if I posted in the wrong place.
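
    A quick hedged sanity check of the last two points (note that with MAMP the php.ini the web server reads may not be the one the command line reads, so confirm via phpinfo() as well):

        php -i | grep -E 'session.save_path|arg_separator.input'
        ls -ld /tmp     # must exist and be writable, e.g. drwxrwxrwt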

    Read the article

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

        root@server1:/# do-release-upgrade
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it? UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release, it has this info for Jaunty:

        Dist: jaunty
        Name: Jaunty Jackalope
        Version: 9.04
        Date: Thu, 23 Apr 2009 12:00:00 UTC
        Supported: 0
        Description: This is the 9.04 release
        Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
        ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
        UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
        UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    Those links starting with archive.ubuntu.com are broken, since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it make me run into worse problems?
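
    A sketch of that exact idea, with a hypothetical local path (whether do-release-upgrade accepts a file:// URI is an assumption to verify; any web server you control works too):

        wget http://changelogs.ubuntu.com/meta-release -O /root/meta-release
        sed -i 's|archive.ubuntu.com|old-releases.ubuntu.com|g' /root/meta-release
        # then in /etc/update-manager/meta-release:
        # URI = file:///root/meta-release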

    Read the article

  • Procmail Postfix issue

    - by Blucreation
    Our server uses CentOS with Postfix:

        Nov 1 11:31:52 webserver postfix/smtpd[30424]: 822A91872F: client=unknown[5.133.168.42], sasl_method=PLAIN, [email protected]
        Nov 1 11:31:52 webserver postfix/cleanup[30427]: 822A91872F: message-id=<[email protected]>
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: from=<[email protected]>, size=620, nrcpt=1 (queue active)
        Nov 1 11:31:52 webserver postfix/virtual[30505]: 822A91872F: to=<[email protected]>, relay=virtual, delay=0.12, delays=0.12/0/0/0, dsn=2.0.0, status=sent (delivered to maildir)
        Nov 1 11:31:52 webserver postfix/qmgr[1067]: 822A91872F: removed
        Nov 1 11:31:52 webserver postfix/smtpd[30424]: disconnect from unknown[5.133.168.42]

    I have this in my /etc/postfix/main.cf:

        mailbox_command = /usr/bin/procmail -a "$EXTENSION"

    My /etc/procmailrc contains:

        PATH="/usr/bin"
        SHELL="/bin/bash"
        LOGFILE="/var/log/procmail.log"
        VERBOSE="YES"
        LOG="#TEST#"

    I don't think procmail is picking up my procmailrc, as nothing ever gets logged for normal emails. If I type this:

        procmail DEFAULT=/dev/null VERBOSE=yes LOGFILE=/var/log/procmail.log /dev/null </dev/null

    I get entries in my log file, so I know procmail itself is working. Am I doing something wrong? Am I missing something? I eventually want my rule to call a PHP script only if the subject contains "SUPPORT TICKET" and the recipient is "[email protected]", but that's for once I have this issue solved.
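
    One clue is in the log itself: the message was delivered by relay=virtual, and mailbox_command is honored only by the local(8) delivery agent, so mail to virtual mailboxes never reaches procmail. A hedged sketch of the usual workaround via a pipe transport (the service name, user and flags below are assumptions to adapt):

        # /etc/postfix/master.cf
        procmail  unix  -   n   n   -   -   pipe
          flags=R user=vmail argv=/usr/bin/procmail -m /etc/procmailrc ${sender} ${recipient}

        # /etc/postfix/main.cf - route virtual mailbox delivery over it:
        # virtual_transport = procmail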

    Read the article

  • Question about malware-sites: How can I be in danger?

    - by juanmaflyer
    Hi people! My English is horrible, but I will do my best to be as accurate as possible. I have been reading here (on Super User) some questions about the necessity of antivirus software on Windows, and some doubts arise. As far as I know (and imagine), a virus can only be harmful if I download some type of infected executable file and then I RUN IT. I mean that if I have the infected executable on my desktop but leave it there for years without clicking it, I won't be in danger... My question is: how can I be in danger browsing the so-called "malware pages or sites"? If I am just browsing an "infected site", how could I be affected by a virus? At no moment does the browser ask me for permission to download "something", so how could it happen? Even though I don't give the browser permission to download 'something', is data being downloaded to my computer? Is it some kind of cookie? Let me ask in another way: what is the level of risk if I get infected by a malware site, compared with the level of risk from an executable virus? Thanks a lot!

    Read the article

  • How to move the Windows 7 bootloader to the Windows 7 partition?

    - by pauldoo
    I recently installed Windows 7 in a triple-boot setup alongside XP and Linux. When I was finished and was in the process of restoring the bootloader for Linux, I discovered something strange about what Windows 7 had done. I discovered that Windows 7 had not installed a bootloader to its own partition; instead it had set up a bootloader on the pre-existing XP partition that offers a choice between 7 and XP. This behaviour has been noticed by others. Now my booting is slightly odd. I have GRUB on the MBR, which lets me choose between Linux and Windows. When I select Windows, GRUB boots the XP partition, where I get the second choice between 7 and XP. Why doesn't the Windows 7 installer put the Windows 7 bootloader on the Windows 7 partition like all previous MS OSes? This is now going to be a real problem for me, as I now want to wipe the XP partition and install something else there (probably another non-MS OS). How can I move the bootloader for Windows 7 onto the Windows 7 partition, thus making it bootable and allowing me to safely wipe the XP partition?
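
    A hedged sketch of one way to do this from an elevated command prompt inside Windows 7, assuming C: is the Windows 7 partition (the disk and partition numbers below are hypothetical; check with diskpart's list commands first):

        bcdboot C:\Windows /s C:
        diskpart
        DISKPART> select disk 0
        DISKPART> list partition
        DISKPART> select partition 2
        DISKPART> active

    After a reboot (possibly with one pass of Startup Repair from the install media), the Windows 7 partition should boot on its own and the XP partition can be wiped.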

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot (up to 60 TB) of big files (usually 30 to 40 GB each) to tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, twice for tar'ing) is more or less a necessity to achieve a very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I need some way to read a file, feeding a checksumming tool on one side, and building a tar to tape on the other side, something along the lines of:

        tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax, and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of any existing solution to this before I start code-churning?
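
    One hedged shell-only idea: tar each file separately, tee the stream into a checksum of the extracted contents, and let the per-file archives concatenate; each file is read from disk once, and the resulting stream is readable with tar --ignore-zeros. Whether this keeps an LTO-4 drive streaming is an open question (an mbuffer stage may be needed):

        FILES=( /data/file1 /data/file2 )        # hypothetical file list
        for f in "${FILES[@]}"; do
            # the tee branch re-parses the in-memory stream, not the disk:
            tar cf - "$f" | tee >(tar xOf - | md5sum | awk -v f="$f" '{print $1"  "f}' >> /tmp/checksums.md5)
        done > /dev/st0                          # or > backup.tar for testing
        # read back with: tar --ignore-zeros -tf backup.tar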

    Read the article

  • DoS/flood lag even though port is not saturated

    - by Asad Moeen
    My game servers had been under some UDP floods, during which they generated output back to the attacker, which gave the game servers some huge lag. Thanks to friends at Server Fault, after different kinds of testing I was able to successfully block the attack. My question is actually about something else, but it is important to know how the game servers reacted to the attack and whether the machine stayed stable: 300 KB/s input would cause a game server to generate 2 MB/s output. So as the input rate kept increasing, the output rate would reach so high that it would no longer be possible for the game server to control it, and hence it would lag badly until the attack stopped. Usually the game server starts to lag when it sends out something greater than 5 MB/s, and below that it is controllable. Theoretically, I was able to get a 60 MB/s output from my game server by inputting 10 MB/s. That's just the way the game server works if not protected. Now, on some of my machines, only the game server under attack lagged; although that server was generating 60 MB/s output, the rest of the game servers on other ports of the same machine would run fine without lag. But there was another machine, also on a 100 Mbps network port, where even 1 Mbps of input (and ZERO output, because the attack was blocked), even on an unused port, would give a constant yellow line (on the lag-o-meter) to all the clients on all game servers, indicating lag, because that line is actually blue under normal conditions. It would remain the same at 50 Mbps or 900 Mbps of input. I tried contacting the host about it, because I believe it's the way their network is bridged, but they can't help me with it. Does anyone else know about such issues? If 900 Mbps of input does not saturate the port, how can 1 Mbps of input lag the servers, although the port is not saturated and enough bandwidth is available?

    Read the article

  • Installing Ruby 1.9.3 on OS X 10.7.4 breaks after altering PATH

    - by R V
    I was having trouble installing Ruby 1.9.3-p194 over Ruby 1.8.7 on my Mac running OS X 10.7.4. I was trying to fix my Homebrew after running "brew doctor" and got the message:

        /usr/bin occurs before /usr/local/bin
        This means that system-provided programs will be used instead of
        those provided by Homebrew. The following tools exist at both paths:
        c++-4.2 cpp-4.2 erb g++-4.2 gcc-4.2 gcov-4.2 gem
        i686-apple-darwin11-cpp-4.2.1 i686-apple-darwin11-g++-4.2.1
        i686-apple-darwin11-gcc-4.2.1 irb rake rdoc ri ruby testrb

    I fixed it by entering the following, which I found in another Stack Overflow answer:

        export PATH="/usr/local/bin:/usr/local/sbin:~/bin$PATH"

    Lo and behold! When I type that, Ruby updates to 1.9.3-p194, and Ruby files seem to compile and run just fine. However, afterward my navigation around the terminal is messed up severely. For instance, I can't do the command "open example_file.html" and have the file pop up in Chrome; instead I get the error "-bash: open: command not found". Also, when I change directory I get an error: inputting "$ cd desktop" yields the output "-bash: dirname: command not found", but the directory does then change... strange. When I exit a terminal window, all this resets: I'm back to Ruby 1.8.7, I have to use the PATH command again to get 1.9.3, and terminal navigation gets broken again. Any guidance on how to remedy this so I can use 1.9.3-p194 and also have normal terminal navigation would be greatly appreciated.
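
    The posted export line is very likely the culprit: it is missing a colon before $PATH, and ~ is not expanded inside double quotes, so the first element of the old PATH (/usr/bin) gets fused into a bogus "~/bin/usr/bin" entry. That would explain open and dirname disappearing. A corrected line (put it in ~/.bash_profile to persist across terminal windows):

        # note the added colon and $HOME instead of ~:
        export PATH="/usr/local/bin:/usr/local/sbin:$HOME/bin:$PATH"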

    Read the article

  • Chrome browser completely messing with network?

    - by kiasecto
    I have a bizarre problem with Google Chrome on an Intel Core i5 running Windows 7 32-bit. Whenever Chrome is installed, access to other computers in the homegroup becomes really slow, such as opening shares. It becomes really slow to resolve Windows names. Something goes haywire with the local network: pinging local machines, which usually gives 0 ms pings, I get random timeouts and random successes. Whenever I try to load a local address inside Chrome (including localhost, 192.168.0.1, etc.), it always says something in the status bar about resolving the proxy and times out after about 5 seconds, then seems to work fine. If I go to settings inside Chrome, it just brings up the Internet Explorer connection settings, where I have not set any proxy settings. Once I uninstall Chrome, all these problems go away: network shares and name resolution work instantly, and pings to any machine never have a problem. localhost and other network IP addresses work fine in all other browsers. Has anyone heard of this problem before and know what it might be? I even tried re-installing Windows 7, and the problem came straight back when Chrome was loaded on again.

    Read the article

  • All websites migrated from server running IIS6 to IIS7

    - by Leah
    Hi, I hope someone will be able to help me with this. We have recently migrated all of our clients' sites to a new server running IIS7 - all the sites were originally running on a server running IIS6. Ever since the migration, lots of our clients are reporting error messages. There seem to be quite a number of issues related to sending emails, and also we have had the following error message reported by several different clients:

        Server Error in '/' Application.
        --------------------------------------------------------------------------------
        Validation of viewstate MAC failed. If this application is hosted by a Web Farm
        or cluster, ensure that <machineKey> configuration specifies the same
        validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

        Description: An unhandled exception occurred during the execution of the current
        web request. Please review the stack trace for more information about the error
        and where it originated in the code.

        Exception Details: System.Web.HttpException: Validation of viewstate MAC failed.
        If this application is hosted by a Web Farm or cluster, ensure that <machineKey>
        configuration specifies the same validationKey and validation algorithm.
        AutoGenerate cannot be used in a cluster.

    I have read elsewhere that this error can appear if a button is clicked before the whole page has finished loading. But as this error has now appeared on multiple sites, and only since the server migration, it seems to me that it must be something else. I was wondering if someone could tell me whether there is something specific which needs to be changed for .NET sites when they are moved from a server running IIS6 to a server running IIS7? I don't deal with the actual servers very much, so I'm afraid this is very much a grey area for me. Any help would be very much appreciated.
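
    The common fix after this kind of migration is to stop relying on auto-generated keys and pin an explicit <machineKey> in each affected site's web.config (the values below are placeholders, not working keys; IIS7's Manager has a "Machine Key" feature that can generate real ones):

        <system.web>
          <machineKey validationKey="[generated hex key]"
                      decryptionKey="[generated hex key]"
                      validation="SHA1" decryption="AES" />
        </system.web>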

    Read the article

  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen-3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to: The problem is that your server seems to be configured to use IPs that have not been appointed to you. Your server responds to ARP requests for the IPs 81.171.111.219 and 81.171.111.218. But you are not allowed to use those. Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of: Address HWtype HWaddress Flags Mask Iface 81.171.111.1 ether 00:0C:DB:E3:80:00 C eth0 Entries: 1 Skipped: 0 Found: 1 What is it listening to? The possibilities seem to be: It's not my fault: my VPS providers have overlooked something. What might that be? 81.171.111.1 means I'm happy listening in on ARP requests that I shouldn't be: how do I change this? In any case, what does this mean? I'm looking in completely the wrong place for information on what my image is doing. Where should I be looking?

    Read the article

  • Distributed File Systems.

    - by GruffTech
    So, I've been reading several articles around ServerFault as well as google. (For Example, this link) My Requirements are very similar to the link above, however i'd like to also have dynamic or at least resizeable file volumes, so if necessary i can add 4-5 servers to the pool, and then expand the volume. Any Distributed File systems that support that, to save me some time? Thanks! LustreFS will be my next test cluster to build. GlusterFS I've build a 3-machine test GlusterFS cluster, However i quickly became aware of several of its limitations that it doesn't seem to make clearly public. One, i can't seem to resize a volume. Once a volume is created, its done. Which seems retarded, why have a fully scalable file system if i can't scale a volume? So maybe i'm doing something wrong. I'm not sure. AmazonS3 while gives the cheapest startup adds too much cost when broken down to per client per month, so its out. Building my own system when prorated over several years with no bandwidth costs makes it significantly cheaper. MogileFS isn't an option as we'd like this server to be a SAN-Replacement, for storing tons of media from a multitude of systems, which for us means it needs to be POSIX compliant so it can be remotely mounted via NFS or CIFS.

    Read the article

  • CentOS 6.0 yum audacity dependency

    - by Kaemic
    I'm trying to install Audacity on CentOS using yum, and I can't force yum to resolve all the dependencies. Here's what I got - can someone help me with it? (When I download the rpm file and click-install it, I get the dependency problem too.)

        # yum --disablerepo=c6-media install audacity-1.2.3-2.2.el4.rf.i386.rpm
        Loaded plugins: fastestmirror, refresh-packagekit
        Loading mirror speeds from cached hostfile
         * base: mirror.karneval.cz
         * centosplus: mirror.karneval.cz
         * extras: centos.vieth-server.de
         * rpmforge: fr2.rpmfind.net
         * updates: mirror.karneval.cz
        Setting up Install Process
        Examining audacity-1.2.3-2.2.el4.rf.i386.rpm: audacity-1.2.3-2.2.el4.rf.i386
        Marking audacity-1.2.3-2.2.el4.rf.i386.rpm to be installed
        Resolving Dependencies
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: wxGTK >= 2.4.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        ---> Package wxGTK.i686 0:2.8.12-1.el6.rf set to be updated
        --> Finished Dependency Resolution
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
                   Requires: libwx_gtk-2.4.so.0
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
                   Requires: libwx_gtk-2.4.so.0(WXGTK_2.4)
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest
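
    The root cause, hedged: this is an EL4 rpmforge package, and it needs wxGTK 2.4, which does not exist for EL6 (rpmforge offers 2.8, as the transcript shows). An EL6 build of Audacity is the cleaner path, e.g. from EPEL, assuming epel-release is available in your extras repo:

        yum install epel-release
        yum install audacity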

    Read the article

  • Apache2.2 not responding on Windows 7 desktop

    - by Adam
    Afternoon! I'm having some trouble with Apache 2.2 on Windows 7. For over a year it has been running with no problem, but all of a sudden requests have just stopped responding. They don't ever time out; the browser just keeps on waiting for a response, which makes me think it's something blocking communication with Apache. Interestingly, though, if I stop Apache the requests fail immediately. The Apache service is running, and using netstat I can see it listening on port 80 as configured:

        TCP    127.0.0.1:80    0.0.0.0:0    LISTENING

    If I stop the Apache service, that line disappears. I have an entry within my hosts file for each VHost I'm trying, all pointing to 127.0.0.1. Each VHost is configured to *:80. Nothing, however, is getting recorded in the access or error (at debug level) log files. I've verified the file paths are correct, even though they were never changed. Neither is anything getting recorded within Windows' Event Log. The problem showed up when I added a new VHost and restarted; however, I hadn't been using Apache for a couple of days prior, so I don't believe it's the config change. I have performed a syntax check to be sure, and when starting from the command prompt no errors are reported there. I do have Windows Firewall running, but I've verified the Apache rule is correct and tried turning it off to ensure that wasn't the problem. I've reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've also tried using a different port. I'm completely lost for ideas now. Can anybody help? Cheers, Adam
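
    Two hedged checks from an elevated command prompt, to rule out another process answering on the port and to take the browser out of the picture:

        rem which PID/binary owns port 80:
        netstat -abno | findstr :80
        rem talk to the listener directly, then type "GET / HTTP/1.0" + Enter twice:
        telnet 127.0.0.1 80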

    Read the article

  • Cron won't use msmtp to send emails in case of a failed cronjob

    - by Glister
    I'm trying to configure a machine so that it will send me an email if one of the cronjobs outputs something in case of an error. I'm using Debian Wheezy. Cron is working normally (without the email functionality). msmtp is installed and configured. I have already symlinked /usr/{bin,sbin}/sendmail to /usr/bin/msmtp. I can send email by using:

        echo "test" | mail -s "subject" [email protected]

    or by executing:

        echo "test" | /usr/sbin/sendmail

    Without the symlink (/usr/sbin/sendmail), cron tells me:

        (CRON) info (No MTA installed, discarding output)

    With the symlinks I get:

        (root) MAIL (mailed 1 byte of output; but got status 0x004e, #012)

    Can you suggest how to configure the cron/msmtp pair? Thanks! EDIT: Note: I wrote "msmtpd" by mistake. It's not a daemon but rather an SMTP client named just "msmtp" (without the "d" ending). It is executed on demand and is not running in the background all the time. When I try to send an email using msmtp like this, it works:

        echo "test" | msmtp [email protected]

    On the far side, in the logs of the SMTP server, I read:

        Nov 2 09:26:10 S01 postfix/smtpd[12728]: connect from unknown[CLIENT_IP]
        Nov 2 09:26:12 S01 postfix/smtpd[12728]: 532301C318: client=unknown[CLIENT_IP], sasl_method=CRAM-MD5, [email protected]
        Nov 2 09:26:12 S01 postfix/cleanup[12733]: 532301C318: message-id=<>
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: from=<[email protected]>, size=191, nrcpt=1 (queue active)
        Nov 2 09:26:12 S01 postfix/local[12734]: 532301C318: to=<[email protected]>, orig_to=<[email protected]>, relay=local, delay=0.62, delays=0.59/0.01/0/0.03, dsn=2.0.0, status=sent (delivered to command: IFS=' ' && exec /usr/bin/procmail -f- || exit 75 #1001)
        Nov 2 09:26:12 S01 postfix/qmgr[2404]: 532301C318: removed
        Nov 2 09:26:13 S01 postfix/smtpd[12728]: disconnect from unknown[CLIENT_IP]

    And the email is delivered to the target user. So it looks like the msmtp client is working properly. It has to be something in the cron/msmtp integration, but I have no clue what that might be. Can you help me?
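
    A hedged reading of that status: 0x4e is 78, which is EX_CONFIG in sysexits.h and what msmtp returns when it cannot load a usable configuration; under cron there is no login environment, so a per-user ~/.msmtprc is often not found. A system-wide config usually cures it (all values below are placeholders):

        # /etc/msmtprc
        defaults
        auth on
        tls on
        tls_trust_file /etc/ssl/certs/ca-certificates.crt
        account mail
        host smtp.example.com
        port 587
        from cron@example.com
        user cron@example.com
        password CHANGEME
        account default : mail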

    Read the article

  • How to tell which process is hogging my CPU when they don't add up to 100%?

    - by endolith
    Ubuntu's System Monitor applet shows 100% CPU usage continuously. If I click it, the Resources tab shows it at 100% continuously too. If I go to Processes, though, to find out which process is the culprit, there is nothing above 10%. If I run top there is nothing above 10%. The individual processes do not add up to 100%. I tried killing lots of processes, but the overall usage continued to be 100%. How can I find out what's hogging the CPU? This is an unusual situation on a computer I use daily, which is never anywhere near 100% CPU unless I'm doing something that requires it (like loading 32 Firefox tabs), after which it goes back to a normal idle level. It's not a new install or anything. There is no reason the processor should be maxed out. I'm not sure when it started or whether I changed something that caused it to happen. Normally I would use top or System Monitor and find the process that had gone out of control, but I can't find anything with those tools this time. It persists after reboots and everything. And the processor is obviously hot, so it's not an erroneous reading. Update: I tried killing every process, one at a time, until the problem went away, and killing vino-server finally fixed it, even though that process never went above 5%. I had enabled Remote Desktop a few days ago (and have obviously now disabled it). But the question remains: how did a single process manage to use 100% CPU while top only showed that process at 5%? How do I identify culprits like this in the future? It looks like I'm not the only one who's had this problem: still a problem in both Jaunty and Karmic. Interestingly, both System Monitor and htop do not show the sum of individual processes being anywhere near 100% CPU.
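
    For the record, some hedged ways to surface CPU use that the per-process list can hide (kernel, IRQ or I/O-wait time, or load spread over many threads):

        top -H                                          # per-thread view
        ps -eo pcpu,pid,user,comm --sort=-pcpu | head   # snapshot sorted by CPU
        vmstat 1        # high "sy" or "wa" means the time isn't in user processes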

    Read the article
