Search Results

Search found 10005 results on 401 pages for 'regex trouble'.

  • MSE updating fails, no warning or error message.

    - by WebDevHobo
    I'm running Windows 7 Ultimate, 32-bit. For the last couple of days, MSE has failed to update, remaining stuck at version 1.75.119. I presume an error log or event log is created somewhere, but I don't know where to find it. The updater just says "connection failed". I've tried at home, at work and at friends' places, but it never works. I've restarted the computer a lot of times now and checked for Microsoft Updates in general, but nothing shows up.

    EDIT: I've opened a bounty for this, because I really don't know what to do anymore. The oldest answer here (the long post) did not work. Besides this problem, I'm having trouble using MSI installers too. I've had to add the SYSTEM group to a lot of folders and give it full control, but shouldn't SYSTEM already be there? Also, I had to remove the "read-only" attribute from the ProgramData and Users folders, add the SYSTEM group there too and give it full control. Only then will an MSI install work, and even then it says I don't have the rights to create a shortcut on the desktop. I don't know what I need to modify, or where, for that. I mention this because I don't know how MSE updates, but if it uses MSI files to do so, that might explain things. The SYSTEM group stays added, but every time I take away the read-only attribute, click OK and check the settings again, read-only is still active... That's all I know. Screenshot, all those updates were manual:
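    A hedged footnote on the permissions symptoms, not from the original post (the icacls and attrib syntax is standard Windows 7; that these folders are the actual blocker is an assumption). Note also that Explorer always shows folders with the read-only box filled, because for folders it signals customization rather than write protection, so it "coming back" may be a red herring:

        REM grant SYSTEM inheritable full control on the affected trees
        icacls "C:\ProgramData" /grant "SYSTEM:(OI)(CI)F"
        icacls "C:\Users" /grant "SYSTEM:(OI)(CI)F"
        REM clear read-only on the files beneath (the folders themselves rarely need it)
        attrib -R "C:\ProgramData\*" /S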

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works out of the box on Apache. However, I am having a lot of trouble running it on nginx. This is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;
            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage, but the inner pages display an "Access denied". Possibly the rewrite is not working; in effect, I think nginx is trying to serve or execute the files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help in advance.
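    A hedged adjustment, assuming concrete5 routes its pretty URLs through /index.php: the fallback in try_files above has no leading slash, so nginx treats it as a relative internal URI; sending everything unmatched to /index.php with the query string preserved is the usual shape of the fix:

        location / {
            try_files $uri $uri/ /index.php?$query_string;
        }

    If pretty URLs are disabled in concrete5, inner pages arrive as /index.php/path instead, which the \.php($|/) location and fastcgi_split_path_info above are already set up to handle.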

  • Dynamic forwarding with SOCKS5 proxy [on hold]

    - by bh3244
    I'm building my own SOCKS5 client and HTTP library and am having trouble figuring out how things work with dynamic port forwarding. So far I can connect successfully with my SOCKS5 client, but from there on I am stuck. I am using the ssh -D command. Given my local machine "home" and my server "server", and wanting to use "server" as the proxy for all connections, I understand I would type ssh -D "localport" "serverhostname" on my local machine "home". This command, I understand, has ssh accept connections using the SOCKS5 protocol. So now if I want to connect to google.com (74.125.224.72:80) and issue a GET for the front page, I assume I would send the SOCKS5 connection request, the server would respond with a 0x00 "succeeded" reply, and from then on I am connected and can send the HTTP GET request, with the server relaying the data back accordingly. Now if I want to navigate to a different website, must I issue another SOCKS5 connection request for that site's IP/hostname? I'm confused whether this is the way it is done, or whether there is a program listening on the local port of "server" and handling outgoing and incoming data. To reiterate: do SOCKS5 proxies work by sending repeated SOCKS5 connection requests for different addresses, or is there just one connection to a local port on "server", with another program on "server" handling the outgoing connection to the internet and using that local port to send and receive data to/from "home"?
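    A hedged way to watch the behaviour from the outside (the port number is arbitrary; curl's SOCKS flags are standard): every new destination is a fresh TCP connection to the local SOCKS port carrying its own CONNECT request, and ssh multiplexes those over the single SSH session to "server" - nothing extra listens on "server" beyond sshd itself, which opens the outbound connections:

        # on "home": open the dynamic tunnel
        ssh -D 1080 user@serverhostname

        # each fetch opens its own connection to 127.0.0.1:1080 and sends
        # its own SOCKS5 CONNECT for that destination
        curl --socks5-hostname 127.0.0.1:1080 http://www.google.com/
        curl --socks5-hostname 127.0.0.1:1080 http://example.com/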

  • Installing/enabling PHP Pecl Intl extension on CentOs 5

    - by Marijn Huizendveld
    Original question: I'm having trouble installing the PHP PECL Intl extension on my CentOS 5 machine. After installing both icu and libicu with the following commands:

        $ yum install icu
        $ yum install libicu

    I tried to install the Intl extension like so:

        $ /usr/bin/pecl install intl

    I selected the default location for the ICU libraries and header files. It ends up crashing like this:

        checking whether to enable internationalization support... yes, shared
        checking for icu-config... no
        checking for location of ICU headers and libraries... not found
        configure: error: Unable to detect ICU prefix or no failed.
        Please verify ICU install prefix and make sure icu-config works.
        ERROR: `/tmp/pear/temp/intl/configure --with-icu-dir=DEFAULT' failed

    Update: after successfully installing the development package of icu as suggested by RusAlex (thanks, RusAlex), like so:

        $ yum install libicu-devel

    I ran into a new problem, which I had also encountered locally. The following command:

        $ /usr/bin/pecl install intl

    now produces this error:

        /private/tmp/pear/temp/intl/collator/collator_class.c:92: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:96: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:101: error: duplicate 'static'
        /private/tmp/pear/temp/intl/collator/collator_class.c:107: error: duplicate 'static'
        make: *** [collator/collator_class.lo] Error 1
        ERROR: `make' failed

    It appears to have something to do with PHP 5.3 already bundling Intl. But how can I enable that bundled extension? If I look in my PHP info, I cannot find any reference to it...
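    A hedged check before fighting PECL further (the grep targets are assumptions about what phpinfo reports, not CentOS specifics): if the running PHP was compiled with the bundled intl, it shows up as a module; if not, intl has to be switched on at build time or loaded as a shared extension, rather than built from PECL's separate copy:

        $ php -m | grep -i intl    # listed here means it's already compiled in
        $ php -i | grep -i icu     # shows the ICU version PHP was built against
        # for a source build, intl is enabled at configure time:
        $ ./configure --enable-intl ...
        # for a shared build, add "extension=intl.so" to php.ini instead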

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu Server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001.

    So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes, around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know:

    - Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble?
    - Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this?

    Here is the /etc/hdparm.conf configuration:

        /dev/sda {
            apm = 127
            spindown_time = 120
        }

    and the most relevant parts of smartctl --attributes /dev/sda:

        smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen

        === START OF READ SMART DATA SECTION ===
        SMART Attributes Data Structure revision number: 16
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME        FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate   0x002f 200   200   051    Pre-fail Always  -           0
          4 Start_Stop_Count      0x0032 001   001   000    Old_age  Always  -           185875
          9 Power_On_Hours        0x0032 090   090   000    Old_age  Always  -           7820
         12 Power_Cycle_Count     0x0032 100   100   000    Old_age  Always  -           109
        193 Load_Cycle_Count      0x0032 118   118   000    Old_age  Always  -           246833
        194 Temperature_Celsius   0x0022 107   098   000    Old_age  Always  -           36

    As I generally prefer my drives to last more than a year, any advice is appreciated.
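    A hedged first step, not from the original post (the APM values are hdparm's standard range; that this particular drive honours them is an assumption): apm = 127 permits aggressive power management, and together with the Load_Cycle_Count above it suggests the OS drive is constantly parking its heads and spinning down. Testing at runtime before touching hdparm.conf keeps the change reversible:

        # raise APM at runtime: 254 = minimal power management, 255 = off entirely
        sudo hdparm -B 254 /dev/sda

        # then watch whether the counters keep climbing
        sudo smartctl --attributes /dev/sda | grep -E 'Start_Stop|Load_Cycle'

    If the counters level off, making it permanent is just apm = 254 in the existing /dev/sda block of /etc/hdparm.conf.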

  • Abysmal transfer speeds on gigabit network

    - by Vegard Larsen
    I am having trouble getting my gigabit network to work properly between my desktop computer and my Windows Home Server. When copying files to my server (connected through my switch), I am seeing file transfer speeds below 10MB/s, sometimes even below 1MB/s. The machine configurations are:

    Desktop:
    - Intel Core 2 Quad Q6600
    - Windows 7 Ultimate x64
    - 2x WD Green 1TB drives in striped RAID
    - 4GB RAM
    - AB9 QuadGT motherboard
    - Realtek RTL8810SC network adapter

    Windows Home Server:
    - AMD Athlon 64 X2
    - 4GB RAM
    - 6x WD Green 1.5TB drives in storage pool
    - Gigabyte GA-MA78GM-S2H motherboard
    - Realtek 8111C network adapter

    Switch:
    - D-Link Green DGS-1008D 8-port

    Both machines report being connected at 1Gbps, and the switch lights up green for those two ports, also indicating 1Gbps. When connecting the machines through the switch, I see insanely low speeds from the WHS to the desktop as measured with iperf: 10Kbits/sec (WHS running iperf -c, desktop running iperf -s). Using iperf the other way around (WHS iperf -s, desktop iperf -c), speeds are also bad (~20Mbits/sec). Connecting the machines directly with a patch cable, I see much higher speeds from desktop to WHS (~300Mbits/sec), but still around 10Kbits/sec from WHS to the desktop. File transfer speeds are also much quicker (in both directions).

    Log from the desktop for the iperf connection from the WHS (through the switch):

        C:\temp>iperf -s
        ------------------------------------------------------------
        Server listening on TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [248] local 192.168.1.32 port 5001 connected with 192.168.1.20 port 3227
        [ ID]   Interval       Transfer     Bandwidth
        [248]   0.0-18.5 sec   24.0 KBytes  10.6 Kbits/sec

    Log from the desktop for the iperf connection to the WHS (through the switch):

        C:\temp>iperf -c 192.168.1.20
        ------------------------------------------------------------
        Client connecting to 192.168.1.20, TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [148] local 192.168.1.32 port 57012 connected with 192.168.1.20 port 5001
        [ ID]   Interval       Transfer     Bandwidth
        [148]   0.0-10.3 sec   28.5 MBytes  23.3 Mbits/sec

    What is going on here? Unfortunately I don't have any other gigabit-capable devices to try with.
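    A hedged follow-up test (iperf's -w and -P switches are standard; the particular values are arbitrary): both logs show iperf's default 8 KByte TCP window, which caps throughput at roughly window size divided by round-trip time regardless of link speed, and could by itself explain the ~300Mbits/sec direct-cable ceiling. Retesting with a larger window and parallel streams separates a TCP-tuning limit from a driver, duplex or cabling fault - the 10Kbits/sec direction in particular smells like the latter:

        REM on the receiving side
        iperf -s -w 256k

        REM on the sending side: larger window, 4 parallel streams, 30 seconds
        iperf -c 192.168.1.20 -w 256k -P 4 -t 30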

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which tests whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path rooted at the DocumentRoot instead.

    Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration:

        RewriteEngine on
        RewriteLog /tmp/rewrite.log
        RewriteLogLevel 5

        # push virtually everything through our dispatcher script
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    Diagnostics attempted: that rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it fires even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php.

    Rewrite log trace: here's what I see in rewrite.log:

        init rewrite engine with requested uri /test.txt
        applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt'
        RewriteCond: input='/test.txt' pattern='!-f' => matched
        RewriteCond: input='/test.txt' pattern='!-d' => matched
        rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m='
        split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m=
        local path result: /dispatch.php
        prefixed with document_root to /path/to/my/public_html/dispatch.php
        go-ahead with /path/to/my/public_html/dispatch.php [OK]

    So it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than from the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
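    A hedged fix along the lines the log already suggests (this is the standard workaround for rules in virtual-host context, though untested against this exact config): per-server rewrite rules run before the URI has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URI at that stage; prefixing the tests with %{DOCUMENT_ROOT} makes them check a real path:

        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]

    The alternative with the same effect is to move the rules into a <Directory> block or .htaccess, where they run after filename mapping and %{REQUEST_FILENAME} is genuinely absolute.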

  • Postfix + procmail - delivery fails because "can't create user output file" - on CentOS 6.2

    - by jshin47
    I verified that my postfix installation / relaying setup worked. Now I am having trouble with procmail. I have it wired to postfix with the following setting:

        mailbox_command = /usr/bin/procmail -f -a "$USER"

    I have nothing in my procmail config but the following:

        LOGFILE=/var/procmailrc/log

    And I send an email to a recipient that previously worked (before I attached procmail). Now it fails with this error:

        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: from=<[email protected]>, size=938, nrcpt=1 (queue active)
        Apr 6 14:07:05 localhost postfix/local[1953]: D0C3DFF6E1: to=<[email protected]>, orig_to=<postmaster>,
            relay=local, delay=0.05, delays=0.02/0.01/0/0.02, dsn=5.2.0, status=bounced
            (can't create user output file. Command output: procmail: Couldn't create "/var/spool/mail/nobody"
            procmail: Couldn't read "//root" )
        Apr 6 14:07:05 localhost postfix/bounce[1955]: warning: D0C3DFF6E1: undeliverable postmaster notification discarded
        Apr 6 14:07:05 localhost postfix/qmgr[15194]: D0C3DFF6E1: removed

    It seems like there is some sort of permissions issue, but I do not know what the problem is, nor do I understand how I would go about diagnosing it further. The logfile that I specified is empty, by the way. How can I make procmail and postfix work together?
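    A hedged reading of the bounce (the procmailrc below is a minimal sketch; paths assumed): procmail is trying to write the mailbox of user "nobody" while reading a HOME of "//root", which is what it looks like when local delivery for an aliased address (note orig_to=<postmaster>) runs under an account with no usable home or mail spool. Pinning explicit defaults system-wide at least makes procmail's side deterministic while the alias chain is sorted out:

        # /etc/procmailrc - minimal sketch
        SHELL=/bin/sh
        LOGFILE=/var/log/procmail.log
        DEFAULT=/var/spool/mail/$LOGNAME
        ORGMAIL=/var/spool/mail/$LOGNAME

    It would also be worth checking where /etc/aliases points postmaster and root, and running newaliases after any change.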

  • Verizon 4G LTE vs. a LAN

    - by n8wrl
    I have been having quite a bit of trouble getting my new Verizon 4G LTE service running on a Windows 7 desktop. My desktop is on a LAN here at home with two other PCs. We all share printers, files, media, etc. Until yesterday, we also shared a Verizon 3G modem via a NetGear 3G broadband WAP. That isn't compatible with the 4G service, so now I am just trying to get the 4G modem working directly connected to one of the desktops. After some USB wrangling, it seemed to work - except that every 7-10 minutes the connection would drop. After some time on the phone with a very nice Verizon technician, it seems to be staying up - it's been up for 20 minutes now. He told me that my LAN was causing the 4G to drop: traffic on my LAN, even though it is not destined for the internet (ICS isn't working yet), was causing the cell tower to detect an "IP change" and a "security violation" in my modem and drop my connection. Is this Verizon's way of forbidding more than one computer to share a modem? I have my computer running now without a LAN connection and the 4G is still up. But this isn't practical. Has anyone heard of this?

  • Overheating Toshiba Satellite L300

    - by ldigas
    A colleague of mine is having trouble with his new Toshiba Satellite L300... I took a look at it, and indeed it's hot as hell. I couldn't hold my hand on it for too long. He says it also has a tendency to turn itself off (WinXP 32-bit running) with no forewarning. That hasn't happened while I was using it, but that wasn't for long anyway. The first guess was that it was too dirty... problem is, it's new; it came out of its packaging a quarter of a year ago, has been kept in a clean environment (an office), and looks clean. No dust in sight. The second guess was that the fan wasn't working properly, because it does run in intervals, working and then not working. But when I listen to it, it sounds like normal usage. I took a SpeedFan measurement, and it reports temperatures up to 85 Celsius... which is definitely too high. Does anyone know what else I could do with it? It is under warranty and it will go in for service, but I thought I'd ask whether there is something we can do ourselves, so as to avoid carrying it there and being without it for a week...

  • Changes to grub in ubuntu 10

    - by jdege
    I've been running CentOS 5 for some years. I've decided to upgrade to Ubuntu, and with 10.04 just out, this seemed like a good time. I'm a tad paranoid, so I started off with a new set of drives - one to install on, one to back up to, and one as a spare. I removed my existing CentOS 5 drives and did an install, and had no problems. I installed the server version and used the default full-disk LVM installation. Next, I copied my backup scripts over, edited them to work with the new configuration, and did a test backup. That worked fine as well.

    Then comes the real test: could I do an install of the backup onto the spare drive? (I won't put anything of importance on a system that doesn't have a reliable backup, and if I've never done a restore, it's not reliable.)

    I booted from a System Rescue CD (ver 1.5.3), with the spare drive as /dev/sda and the backup drive as /dev/sdb. I had no trouble partitioning, configuring LVM, formatting, making swap, or restoring the file systems. But when I got to restoring grub to the MBR, I ran into problems. My restore instructions from CentOS 5 said to run grub, then enter two commands:

        root (hd0,0)
        setup (hd0)

    The first command exits with an error: "Checking if /boot/grub/stage1 exists ... no". I did some googling around and found that the Grub2 included in recent Ubuntus is very different from the Grub 0.97 included in CentOS 5. One site suggested I use:

        grub-install --root-dir=/mnt/restore /dev/sda

    That appeared to work, but when I booted from the drive, I ended up at a grub prompt. Any ideas as to what I need to do? It seems like a simple problem, but my attempts at searching out answers on the web are being swamped by references to the old version of Grub. Help would be appreciated.
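    A hedged sequence for the Grub2 side (device and mount-point names assumed to match the restore layout above): ending up at a bare grub prompt usually means the core image went into the MBR but /boot/grub/grub.cfg is missing or stale - and grub.cfg is generated, not restored. Chrooting into the restored system and reinstalling from inside rebuilds both:

        # with the default LVM install, activate the volume group first
        sudo vgchange -ay
        sudo mount /dev/mapper/<vg>-root /mnt/restore    # the restored root LV
        sudo mount /dev/sda1 /mnt/restore/boot           # if /boot is a separate partition
        sudo mount --bind /dev  /mnt/restore/dev
        sudo mount --bind /proc /mnt/restore/proc
        sudo mount --bind /sys  /mnt/restore/sys
        sudo chroot /mnt/restore
        grub-install /dev/sda
        update-grub
        exit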

  • How do I create a "here document" within a shell function?

    - by BenU
    I'm working my way through William Shotts Jr.'s great The Linux Command Line on my Mac OS X 10.7.5 system. 90% of the Linux that Shotts covers is close enough to Darwin that I can figure out, or GTEM to figure out, what's going on. I've made it to chapter 27 on "Writing Shell Scripts" and am getting hung up creating "here documents" within a function. I get a syntax error: unexpected end of file error when I include the following function:

        report_uptime () {
            cat <<- _EOF_
                <H2>System Uptime</H2>
                <PRE>$(uptime)</PRE>
            _EOF_
            return
        }

    The error goes away if I use the following function placeholder:

        report_uptime () {
            return
        }

    Also, elsewhere in the script, outside of a function, I use the cat << _EOF_ format to create a here document with no trouble:

        cat << _EOF_
        <HTML>
            <HEAD>
                <TITLE>$TITLE</TITLE>
            </HEAD>
            <BODY>
                <H1>$TITLE</H1>
                <P>$TIME_STAMP</P>
                $(report_uptime)
                $(report_disk_space)
                $(report_home_space)
            </BODY>
        </HTML>
        _EOF_

    If anyone has any idea what I'm doing wrong I would be grateful!
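    A hedged diagnosis (the book's listing uses tabs, but what actually ended up in this script can't be verified here): with <<- the shell strips leading tab characters only, so if the function body is indented with spaces, the line holding _EOF_ never matches the delimiter, the here document swallows the rest of the script, and bash reports "unexpected end of file". Either indent the here document (including the closing _EOF_) with real tabs, or drop the dash and leave the delimiter unindented:

        report_uptime () {
            # plain << : the _EOF_ line must start in column one
            cat << _EOF_
        <H2>System Uptime</H2>
        <PRE>$(uptime)</PRE>
        _EOF_
            return
        }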

  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well, it's quick and almost 100% complete - however it's acting pretty strange with a few files (note: the company name has been changed in the paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL

        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%

        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\, but I altered the script a bit to test the problem. The following files in the source are not copied to the destination:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace these files were even attempted to be copied, as there are no errors being output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
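    A hedged next step rather than an answer (the flags are standard robocopy switches; that they will expose the cause is the assumption): the script's /NFL /NDL suppress per-file and per-directory logging, so a silent skip leaves no trace by design. Re-running once with verbose options shows how robocopy actually classifies those three files - skipped, "same", or extra:

        SET options=/R:0 /W:0 /LOG+:%log_fname% /V /X /TS /FP

    /V lists skipped files, /X reports extra files, and /TS and /FP add source timestamps and full pathnames to the log, which makes it easier to spot an 8.3 short-name or timestamp oddity on the SONICP~*.DOC files.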

  • PDU management interface has low availability - product flaw or isolated issue

    - by DeanB
    Our colocation provider has supplied us with APC AP7932 switched 0U PDUs as part of several cabinets they provide us. We have had a lot of trouble with the network management aspect of these PDUs, which I'll describe below. We are moving to cage space in the same datacenter, and plan to provide our own PDUs, so I'd like to determine which enterprise-grade PDUs have been reliable performers from a remote management perspective. Our colo-provided PDUs are configured to support management via an SSL web UI and via telnet. We updated the firmware on all of them to the current version as of NOV2011. They respond to pings reliably, and we have no reason to suspect a network layer issue. However, we experience frequent hangs, timeouts, disconnects, and general unavailability from the embedded management host in all of the PDUs. We occasionally have to restart the microcontroller on the PDU to recover from what appears to be an occasional hard fault. The outlets stay powered (thankfully), but the management aspect is so unreliable that it has become an ops liability - we can't be confident that we could get into the PDU to power cycle a host if we needed to. We have 3 PDUs that all exhibit identical behavior. There are many manufacturers of enterprise-grade 0U switched PDUs, all with comparable features. If I looked at the datasheet for our current PDUs, they would appear to be a good fit -- only with the benefit of suffering through using them do we know to avoid them. I'd like to avoid picking a PDU that looks fine on paper, but has similar reliability issues. What has been others' experience with switched PDUs? Is this level of flakiness normal?

  • Default Browser hangs (IE, Chrome)

    - by Craig Hinrichs
    IE was my default browser about three months ago when I started experiencing this issue. Intermittent hangs would occur when I opened a new main page or a new tab to a site I knew was up. What I mean by a hang: the browser would open, say "Waiting for site " and do nothing more. If I closed the window and reopened it, it would immediately connect. Over time I would have to close and reopen the window to get to the page. This would happen with any page, including Google. I finally got sick of it and started using Chrome, and I will never go back. I recently upgraded my antivirus, and now I am experiencing the same issue with Chrome. I use AVG for my antivirus. Empirically, it seems that if I don't make Chrome my default browser, I don't experience the issue. I tested this theory for over two hours yesterday. Possible causes I have found, but not confirmed yet:

    1. MTU settings are not correct.
    2. I am infected, but my antivirus has not caught it (unlikely but possible).

    I would like to think this is related to my antivirus, but I am unsure how to verify that, and I don't like the idea of killing my antivirus if #2 is a possibility. I am looking for tips on how to troubleshoot the possible issues. I am on Windows XP SP3. Thanks in advance.

  • Change Laptop Wireless Card?

    - by Craig
    I have just bought a new Samsung RV511 laptop (the i5 version). It comes with a Broadcom Wi-Fi card installed; I have a spare Atheros mini Wi-Fi card that I would like to swap in for the Broadcom one. I use Atheros for security auditing purposes in Linux, which the Broadcom one is not good for. Trouble is, I have never gone down to the bare bones of a laptop; although I am a custom PC builder, laptops are generally outside my experience. Take a look at this picture; I took one back panel off to reveal some of the components. I believe the wireless card is where I marked the arrow pointing to. That little white square thing appears to be a sticker on the Wi-Fi card; it says WLAN. How can I access it to swap it with my Atheros one? Do I need to remove all the outermost screws? There are screws located to the right of where I scribbled in yellow and next to the HDD - do I just remove those? I don't want to destroy the machine; I only just got it today. I'd appreciate some help, thanks.

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all. You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all. It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
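    One hedged candidate to evaluate (the package is real and open source; whether its output meets the bar here is the untested part): wkhtmltopdf drives the WebKit rendering engine from the command line, so its CSS support is that of an actual browser engine - essentially the browser-engine approach suggested above, minus the image-conversion glue:

        # render a local file or a URL straight to PDF, headlessly
        wkhtmltopdf page.html page.pdf
        wkhtmltopdf --page-size A4 http://example.com/report.html report.pdf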

  • Changing default openVPN IP in linux server

    - by Lamboo
    The problem is that we have a public OpenVPN service. Pay €9.95 and you get an OpenVPN account on currently half a dozen servers for a month. This means there are always, and will always be, some people who create a certain amount of abuse or trouble. In the long run, the external IP every OpenVPN user gets assigned ends up prohibited from editing Wikipedia, and it might be banned by e-gold, some popular web forums, one-click hosters, etc. Not a pleasant experience for the 97% of our customers who use our service responsibly and legitimately to regain their privacy. So if I could change the assigned external IP every few months - e.g. from 216.xx.xx.164 to 216.xx.xx.170 - it would help us a lot in combating this abuse and would provide our paying clients with "fresh" IP addresses that aren't yet banned or restricted on popular Internet sites and services.

    Does anybody know how to change the first IP address assigned to the public interface in CentOS? So that, e.g., OpenVPN in future doesn't give our OpenVPN clients the external IP 123.xx.xx.164 but rather 123.xx.xx.170?
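    A hedged sketch (the interface name and the documentation-range placeholder address are assumptions; the elided real addresses stay as they are): rather than renumbering the primary interface, add the new address as a secondary and source-NAT the VPN clients' traffic out through it. That changes only the clients' egress IP and leaves sshd, the OpenVPN listener and everything else bound to the old address untouched:

        NEW_IP=203.0.113.70    # placeholder - substitute the newly assigned address
        ip addr add "$NEW_IP"/24 dev eth0

        # assuming clients sit on OpenVPN's default 10.8.0.0/24 subnet
        iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j SNAT --to-source "$NEW_IP"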

  • How to stop my VPS from picking up ARP reqs it is not supposed to?

    - by Charles Stewart
    Machine: Xen 3.0 image running stable Debian Linux 2.6.18, pretty vanilla. My VPS provider asks me to deal with some trouble my image is causing, namely handling IP addresses it is not supposed to:

        The problem is that your server seems to be configured to use IPs that have
        not been appointed to you. Your server responds to ARP requests for the IPs
        81.171.111.219 and 81.171.111.218. But you are not allowed to use those.

    Not explicitly, as far as I can tell! At least, nothing under /etc or /var/tmp mentions these IP addresses. But arp -v says something I can't make sense of:

        Address        HWtype  HWaddress          Flags Mask  Iface
        81.171.111.1   ether   00:0C:DB:E3:80:00  C           eth0
        Entries: 1  Skipped: 0  Found: 1

    What is it listening to? The possibilities seem to be:

    1. It's not my fault: my VPS provider has overlooked something. What might that be?
    2. The 81.171.111.1 entry means I'm happily listening in on ARP requests that I shouldn't be: how do I change this?
    3. In any case, what does this mean? I'm looking in completely the wrong place for information on what my image is doing. Where should I be looking?
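    A couple of hedged checks (these are standard Linux knobs; whether the provider's Xen host adds its own layer is unknown): the single arp -v entry above is just the gateway sitting in the ARP cache and is harmless - it is not what answers ARP requests. Answering ARP for foreign addresses is usually either an extra address actually configured on an interface, or proxy ARP being switched on:

        ip addr show                                # every address really bound, on all interfaces
        cat /proc/sys/net/ipv4/conf/*/proxy_arp     # any 1 here means proxy ARP is enabled
        sysctl -w net.ipv4.conf.all.proxy_arp=0     # and this turns it off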

  • How to diagnose occasional sudden resets?

    - by steve314
    I have a Windows XP system, and have recently upgraded by adding 2 1GB sticks of RAM to the 2x0.5GB already present. Since then, about once per day (the system is used 8+ hours per day), the system has suddenly and unexpectedly reset. On a couple of occasions, the system has frozen completely, only responding to the power button being held in for several seconds to force power off. Nothing at all ever appears in the system event log that might indicate a possible cause - everything seems to suggest business as usual. Sounds like faulty memory - but memtest86+ says otherwise. A full test, taking over an hour, found no issues. The next likely suspicion, then, is that I've knocked something while installing the RAM. Trouble is, everything I can think of to test seems fine. I've opened up the case and prodded a few things around, hoping to get better contact on connections etc, but there's no sign yet as to whether that has made a difference or not. I thought about a malware-related timing fluke, but again, so far as I can tell I'm all clear. All I can think of to add to my checklist (mainly of things that I can't easily check) is... The power supply is (1) only 350W, (2) not necessarily the best quality, and (3) powering a Prescott P4 640 3.2GHz. Could that be borderline overloaded or about to die? How do I check? Is it possible that the CPU isn't getting cooled properly? I haven't had the fan past normal tickover even doing video encoding, and the only sane temperature reading from SpeedFan is pretty steady at 36 celcius, so probably not. Any thoughts? Is there a standard procedure for diagnosing this kind of fault?

  • Blinking power button

    - by Mike Ramsey
    A friend asked me to look at his Gateway DX4640 desktop. When he presses the power button, power goes to the mobo (NVIDIA nForce 630i MCP73PV, GeForce 7100 chipset) and the CPU fan starts spinning. The power button slowly blinks on and off (blue), the screen briefly says "no signal" and then goes black. And nothing else; no POST-code beeps. My initial two conjectures were:

    1. Vista was stuck in sleep/hibernation mode, or
    2. an unclean power-off had left the mobo in a bad state.

    The fix for both is to:

    a) unplug the AC power cord, and
    b) hold the power button for 30 seconds to fully discharge the mobo.

    It didn't help. I left the system unplugged from AC power for an hour. No change. I am out of ideas. Has anybody seen anything like this before? What does a blinking blue power button mean? How can I get more data points to guide troubleshooting? --Thank you, --Mike

  • Some HTTPS connections via NAT fail, but work on firewall itself.

    - by hnxn
    Hi, I am having trouble establishing some HTTPS connections from internal machines, even though these same connections work if initiated on the firewall itself. The firewall machine is running Ubuntu 10.04.1 and shorewall 4.4.6. The internet connection is Bell PPPoE DSL (in Canada). I have tried various MTU settings; it doesn't seem to make any difference. Other protocols (HTTP, FTP, etc.) generally work. The problem seems to be limited to certain sites; this one never works from an internal machine, but always works from the firewall itself.

    From an internal machine:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:51:31--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        ^C

    From the firewall:

        $ wget https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        --2011-01-13 20:58:28--  https://images.fedex.com/images/ascend/shared/headers/nxgen/corp_logo.gif
        Resolving images.fedex.com... 184.24.96.69
        Connecting to images.fedex.com|184.24.96.69|:443... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 840 [image/gif]
        Saving to: `corp_logo.gif'

        2011-01-13 20:58:28 (149 MB/s) - `corp_logo.gif' saved [840/840]

    This URL always works from both an internal machine and the firewall: https://encrypted.google.com/images/logos/ssl_logo_lg.gif

    Any troubleshooting tips would be greatly appreciated!
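    A hedged first suspect (CLAMPMSS is a standard shorewall.conf option; that it is currently off here is the assumption): PPPoE costs 8 bytes of MTU, and a large TLS handshake record from the server is exactly the kind of full-size packet that disappears when a remote site filters ICMP fragmentation-needed - which fits the symptom of hanging right after "connected". The firewall itself is unaffected because locally originated traffic sees the PPPoE interface MTU directly, while forwarded LAN traffic does not. Clamping the MSS on forwarded connections is the usual cure:

        # shorewall: in /etc/shorewall/shorewall.conf, then restart shorewall
        CLAMPMSS=Yes

        # or the equivalent raw iptables rule
        iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
                 -j TCPMSS --clamp-mss-to-pmtu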

  • Debootstrap Ubuntu over NFS leads to mknod I/O error

    - by Aaron B. Russell
    Hi everyone, I'm trying to prepare an Ubuntu environment for a diskless machine that will PXE boot and mount an NFS share as its root. I've currently got another Ubuntu machine mounting the NFS share, and I'm trying to debootstrap into it, but it has trouble creating devices over NFS:

        root@kimiko:~# mount | grep Seiuchi
        192.168.0.203:/mnt/user/Seiuchi on /mnt type nfs (rw,addr=192.168.0.203)
        root@kimiko:~# debootstrap --arch i386 maverick /mnt http://gb.archive.ubuntu.com/ubuntu/
        mknod: `/mnt/test-dev-null': Input/output error
        E: Cannot install into target '/mnt' mounted with noexec or nodev

    My NFS rule on the unRAID server is 192.168.0.201/32(rw,no_root_squash,sync). I don't have the noexec or nodev options set. I've not got much experience with NFS, so I'm probably missing something basic in the way I'm sharing this, but my attempts at googling for an answer aren't really turning anything useful up. Does anyone have suggestions on what I might have missed, or maybe relevant docs?

    Edit: creating normal files (and directories) works just fine; I just can't create devices...

        root@kimiko:/mnt# mkdir foo
        root@kimiko:/mnt# cd foo
        root@kimiko:/mnt/foo# touch bar
        root@kimiko:/mnt/foo# mknod quux c 4 64
        mknod: `quux': Input/output error
        root@kimiko:/mnt/foo# ls
        bar
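    Two hedged elimination steps (the mount options are what debootstrap's message guesses at; the server-side filesystem is the other suspect): first state dev and exec explicitly on the client mount and retry the mknod; if it still returns I/O errors, the export itself probably cannot store device nodes - unRAID "user" shares are FUSE-based, so exporting one of the underlying disk shares (e.g. /mnt/disk1/...) instead would test that theory:

        sudo mount -o remount,dev,exec /mnt
        sudo mknod /mnt/test-null c 1 3    # /dev/null's device numbers, as a neutral probe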

  • Apache2.2 not responding on Windows 7 desktop

    - by Adam
    Afternoon! I'm having some trouble with Apache 2.2 on Windows 7. For over a year it's been running with no problem, but all of a sudden requests have just stopped responding. They don't ever time out; the browser just keeps on waiting for a response, which makes me think something is blocking communication with Apache. Interestingly though, if I stop Apache the requests fail immediately. The Apache service is running, and using netstat I can see it listening on port 80 as configured:

        TCP    127.0.0.1:80    0.0.0.0:0    LISTENING

    If I stop the Apache service, that line disappears. I have an entry in my hosts file for each VHost I'm trying, all pointing to 127.0.0.1. Each VHost is configured on *:80. Nothing, however, is getting recorded in the access or error (at debug level) log files. I've verified the file paths are correct, even though they were never changed. Neither is anything getting recorded in Windows' Event Log. The problem showed up when I added a new VHost and restarted; however, I hadn't been using Apache for a couple of days prior, so I don't believe it's the config change. I have performed a syntax check to be sure, and when starting from the command prompt no errors are reported there. I do have Windows Firewall running, but I've verified the Apache rule is correct and tried turning it off to ensure that wasn't the problem. I've reinstalled Apache, in the hope it might magically fix something using the default config, but still no joy. I've also tried using a different port. I'm completely lost for ideas now. Can anybody help? Cheers, Adam
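    Two hedged checks (standard Windows commands; that a layered service provider is the culprit is an assumption): a hang with nothing at all in Apache's access log suggests connections never reach Apache, and on Windows that often points at an LSP - frequently antivirus web filtering - sitting between Winsock and the listener:

        :: does a raw connection hang the same way the browser does?
        telnet 127.0.0.1 80

        :: list third-party entries in the Winsock catalog, then reset it (reboot afterwards)
        netsh winsock show catalog
        netsh winsock reset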

  • Error code 1005 (errno: 121) upon create table while restoring MySQL database from a dump

    - by Jonathan
    I have a Linux production machine and a Win7 64-bit dev machine. My workflow includes dumping the production MySQL database on the Linux machine and restoring it into my local MySQL database on the Windows machine (using SQLyog). This worked fine for a long time. Following some trouble, I formatted and reinstalled my Windows dev machine. Since then I'm unable to restore the db on it. I keep receiving the following error:

        Query: CREATE TABLE `auth_group` (
          `id` int(11) NOT NULL auto_increment,
          `name` varchar(80) collate utf8_unicode_ci NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `name` (`name`)
        ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

        Error occured at: 2010-06-26 17:16:14
        Line no.: 30
        Error Code: 1005 - Can't create table 'ap_site.auth_group' (errno: 121)

    Notice that this is the first CREATE TABLE statement in the SQL dump file. This error occurs on both MySQL Community Server 5.1.41 and 5.1.48, and with both SQLyog Community 8.0.4 and 8.5.1. I really don't know what's different in my configuration between before the reinstall and now, or why it has this effect. Restoring from an SQL dump is something I need to keep on doing, so I need a permanent fix and not a tailored workaround.
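    A hedged decoding of the error (perror ships with MySQL; the stale-dictionary theory is an assumption): errno 121 is InnoDB's "duplicate key on write or update", and raised by the very first CREATE TABLE it usually means the InnoDB data dictionary already holds a conflicting entry - for example an ibdata1 file or an ap_site directory surviving from a previous attempt or carried across the reinstall:

        perror 121
        # MySQL error code 121: Duplicate key on write or update

        # dropping and recreating the target schema clears table-level leftovers;
        # dictionary-level conflicts show up in SHOW ENGINE INNODB STATUS after a failure
        mysql -u root -p -e "DROP DATABASE IF EXISTS ap_site; CREATE DATABASE ap_site;"
        mysql -u root -p -e "SHOW ENGINE INNODB STATUS\G"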
