Search Results



  • Invalid configuration `noarch-redhat-linux-gnu': machine `noarch-redhat' not recognized

    - by Spacedust
    When I try to build an RPM from the src rpm (Apache 2.4.1), I get this error:

      rpmbuild -tb httpd-2.4.1.tar.bz2 --ba httpd.spec
      + ./configure --build=x86_64-redhat-linux-gnu --host=x86_64-redhat-linux-gnu --target=noarch-redhat-linux-gnu --program-prefix= --prefix=/usr --exec-prefix=/usr --bindir=/usr/bin --sbindir=/usr/sbin --sysconfdir=/etc --datadir=/usr/share --includedir=/usr/include --libdir=/usr/lib64 --libexecdir=/usr/libexec --localstatedir=/var --sharedstatedir=/usr/com --mandir=/usr/share/man --infodir=/usr/share/info --enable-layout=RPM --libdir=/usr/lib64 --sysconfdir=/etc/httpd/conf --includedir=/usr/include/httpd --libexecdir=/usr/lib64/httpd/modules --datadir=/var/www --with-installbuilddir=/usr/lib64/httpd/build --enable-mpms-shared=all --with-apr=/usr --with-apr-util=/usr --enable-suexec --with-suexec --with-suexec-caller=apache --with-suexec-docroot=/var/www --with-suexec-logfile=/var/log/httpd/suexec.log --with-suexec-bin=/usr/sbin/suexec --with-suexec-uidmin=500 --with-suexec-gidmin=100 --enable-pie --with-pcre --enable-mods-shared=all --enable-ssl --with-ssl --enable-socache-dc --enable-bucketeer --enable-case-filter --enable-case-filter-in --disable-imagemap
      checking for chosen layout... RPM
      checking for working mkdir -p... yes
      checking for grep that handles long lines and -e... /bin/grep
      checking for egrep... /bin/grep -E
      checking build system type... x86_64-redhat-linux-gnu
      checking host system type... x86_64-redhat-linux-gnu
      checking target system type... Invalid configuration `noarch-redhat-linux-gnu': machine `noarch-redhat' not recognized
      configure: error: /bin/sh build/config.sub noarch-redhat-linux-gnu failed
      error: Bad exit status from /var/tmp/rpm-tmp.48153 (%build)
      RPM build errors:
          Bad exit status from /var/tmp/rpm-tmp.48153 (%build)
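
    The configure failure is config.sub rejecting the noarch-redhat-linux-gnu triple that ends up being passed as --target. A hedged thing to try, assuming the bundled httpd.spec honours the standard rpmbuild target macros rather than hard-coding noarch, is forcing a concrete machine triple on the command line:

      # Force a real target architecture so build/config.sub accepts the triple
      rpmbuild -tb httpd-2.4.1.tar.bz2 --target x86_64

      # If the spec still injects noarch, extract it and adjust its --target handling before building
      tar xjf httpd-2.4.1.tar.bz2 httpd-2.4.1/httpd.spec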


  • net.tcp Listener Adapter and net.tcp Port Sharing Service not starting on reboot

    - by Peter K.
    I am using the net.tcp protocol for various web services. When I reboot my Windows 7 Ultimate (64-bit) MacBook Pro, the services never restart automatically, even though that is how they are set. The only relevant events I can see are in the System event log:

      Error 6/9/2011 19:47 Service Control Manager 7001 None
        The Net.Tcp Listener Adapter service depends on the Net.Tcp Port Sharing Service service which failed to start because of the following error: "The service did not respond to the start or control request in a timely fashion."
      Error 6/9/2011 19:47 Service Control Manager 7000 None
        The Net.Tcp Port Sharing Service service failed to start due to the following error: "The service did not respond to the start or control request in a timely fashion."
      Error 6/9/2011 19:47 Service Control Manager 7009 None
        A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.

    This post suggests that it's something else blocking the port (in the post it's the SCCM 2007 R3 client, which I don't use). What else could be the problem? If it's something else blocking the port, how do I figure out what? When I manually start the services, they start correctly. The dependency order is: Net.Tcp Port Sharing Service, then Net.Tcp Listener Adapter.

    Update: still no luck, but I think the problem might be that my network connection takes too long to come up. I put a custom view on the event log, and the first entry in the series says: A timeout was reached (30000 milliseconds) while waiting for the Net.Tcp Port Sharing Service service to connect.
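
    If the root cause really is the network stack coming up after the services try to start, one low-risk experiment is delayed automatic start plus a longer Service Control Manager timeout; the short service names below (NetTcpPortSharing, NetTcpActivator) are the usual ones but worth confirming with sc query first:

      :: Delayed auto-start gives the network time to come up before these services start
      sc config NetTcpPortSharing start= delayed-auto
      sc config NetTcpActivator start= delayed-auto

      :: Optionally raise the SCM start timeout from 30s to 60s (milliseconds, reboot required)
      reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f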


  • Generic/Text Printer on Windows 7 not prompting for file name

    - by FantaFan
    Guys & gals, hope someone can shed some light on this. I am downloading reports from an AIX-based system by directing them to a TT printer which the terminal emulator (MultiView 2000) intercepts and directs to the default printer on the local system. This local printer is configured as a vanilla Generic/Text printer attached to a FILE port. When I print from AIX, the output is spooled down and the local printer prompts for a file name into which to save the file... but not under Windows 7. This has worked fine for many years, on both Win2K and WinXP. However, on Windows 7 the output gets spooled as a file into spool\PRINTERS (and looks as expected) but the print job then hangs with a status of "Error - Printing" and never prompts for a file name. I have to cancel the job.

    The Generic/Text printer works as expected with other applications. I have tried setting the printer to print directly rather than spooling, but this only serves to hang the terminal session too. I've also tried running the emulator in Windows 2000 compatibility mode and as Administrator in case it was something like that, but with no luck. As you might expect, it does work fine in XP Mode (as long as I print to a printer defined therein and not the host's printer), but operationally this isn't going to be an option. Obviously this emulation software is a decade old (at least) and I could just cross/upgrade all the users (at a cost) but, before I do so, has anyone seen this sort of behaviour before and found some sort of fix?

      Remote OS: AIX 5
      Client OS: Windows 7 Pro (32-bit)
      Printer: Generic/Text on a FILE port
      TE Software: MultiView 2000 (32-bit)

    Thanks in advance.


  • pfSense: How to route traffic out the WAN port?

    - by Ian Boyd
    Expert version: I want to create a route in pfSense that will send traffic out the physical WAN port, not the PPPoE WAN port. I want to talk to the web server on my DSL modem, but it doesn't see packets wrapped in a PPPoE header.

    Long version: My pfSense router is responsible for setting up the PPPoE connection over DSL to my ISP. When a machine on the LAN wants to send packets to the internet, the default route sends packets out over the PPPoE connection. Those packets, wrapped in a PPPoE header, are sent on the ethernet cable to my DSL modem. From there they are sent to the ISP, and the internet at large.

    I want a way to send a packet out the WAN port itself, not the PPPoE WAN port. My modem is sitting out there, with an HTTP interface where I can monitor:

      - connection speed
      - signal-to-noise ratio
      - bandwidth
      - connection time

    Whenever I try to set a route for a destination of 192.168.2.1 (the IP that the modem listens on for HTTP requests) to go out the WAN port, the packets instead end up going out the PPPoE port. The difference is that they're wrapped in a PPPoE protocol packet, so the modem isn't being sent the packet; it's being delivered to the ISP. Given that pfSense has no ability to direct traffic out the physical WAN port: how can I direct traffic out the physical WAN port on pfSense?
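
    The usual workaround (the GUI equivalent is giving the PPPoE parent interface its own static address) is to put an address from the modem's subnet directly on the physical NIC, so 192.168.2.0/24 becomes a connected route that never enters the PPPoE tunnel. A sketch from a shell on the router, where the parent NIC name em0 is an assumption:

      # Give the physical WAN NIC (assumed em0) an alias address on the modem's subnet
      ifconfig em0 inet 192.168.2.2/24 alias

      # Confirm the modem is now reached via the connected route, not the PPPoE default
      route get 192.168.2.1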


  • HA Proxy Stick-table and tcp-connection configuration

    - by Vladimir
    I am using HA Proxy (HA-Proxy version 1.4.18 2011/09/16). I am trying to insert the following into the /etc/init.d/haproxy.cfg file:

      # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
      # Monitors the number of requests sent by an IP over a period of 10 seconds
      stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
      tcp-request connection track-sc1 src
      tcp-request connection reject if { src_get_gpc0 gt 0 }

      # Table definition
      stick-table type ip size 100k expire 30s store conn_cur(3s)

      # Allow clean known IPs to bypass the filter
      tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }

      # Shut the new connection as long as the client has already 10 opened
      tcp-request connection reject if { src_conn_cur ge 10 }
      tcp-request connection track-sc1 src

    I get the following errors:

      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:36] : stick-table: unknown argument 'store'.
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:37] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:38] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:41] : stick-table: unknown argument 'store'.
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:43] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:45] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:46] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
      [ALERT] 256/113143 (4627) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
      [WARNING] 256/113143 (4627) : Proxy 'http_proxy': in multi-process mode, stats will be limited to process assigned to the current request.
      [ALERT] 256/113143 (4627) : Fatal errors found in configuration.
      [fail]

    Could you please tell me what is wrong with the code? Thanks!
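
    The parser rejecting 'store' and 'tcp-request connection' suggests the installed 1.4.18 binary simply does not know these keywords; they are documented for the 1.5.x configuration language, so it is worth confirming exactly what the running build supports before restructuring the config (that version attribution is an assumption to verify against the manual for your build):

      # Show the exact version and build options of the binary actually in use
      haproxy -vv

      # Re-validate the config without touching the running service
      haproxy -c -f /etc/haproxy/haproxy.cfg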


  • How to shrink a Windows 7 boot partition with unmovable files?

    - by Alex Che
    I have just bought an HP laptop with Windows 7 (64-bit). It has a 500 GB HDD with three partitions: a small hidden system partition, a 12 GiB HP recovery partition, and a 450 GiB C: boot partition. I would like to split this large C: partition into two partitions, leaving only 100 GiB for the system and giving the rest to a new data partition. Although the Windows built-in Disk Management utility has an option to shrink the bootable partition, it only allows me to shrink it roughly by half, even though only 20 GiB of the partition is used. As far as I understand, unmovable system files lie in the middle of the partition, preventing the Disk Management utility from doing what I want. And since new HP laptops don't come with OS installation disks (they only allow you to create recovery disks yourself), I can't just repartition the HDD and then reinstall the OS.

    So, is there any way to shrink the C: bootable partition and keep Windows 7 working?

    P.S.: I have tried the 3rd-party GParted utility, and after shrinking the partition Windows 7 stopped booting with a BSOD. System recovery didn't work, and I had to do a factory recovery. Since this is a long process, I would like to avoid doing it again :) So please suggest only proven solutions.
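
    The unmovable files in the middle of the volume are typically the hibernation file, the pagefile, and System Restore/shadow-copy storage. A sketch of the usual pre-shrink cleanup, run from an elevated prompt and reverted afterwards; treat it as a general recipe rather than a guaranteed fix:

      :: Remove hiberfil.sys
      powercfg -h off

      :: Cap the shadow-copy storage that pins blocks near the middle of the volume
      vssadmin resize shadowstorage /for=C: /on=C: /maxsize=1GB

      :: Temporarily disable the pagefile via System Properties > Advanced > Performance >
      :: Virtual memory, reboot, shrink in Disk Management, then re-enable everything.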


  • OpenLDAP mirror mode replication failing with TLS behind a load balancer

    - by Lynn Owens
    I have two OpenLDAP servers that are both running TLS. They are:

      ldap1.mydomain.com
      ldap2.mydomain.com

    I also have a load balancer cluster with a DNS name of its own: ldap.mydomain.com. The SSL certificate has a CN of ldap.mydomain.com, with SANs of ldap1.mydomain.com and ldap2.mydomain.com. Everything works... except mirror mode replication. My mirror mode replication is set up like this:

      ldap.conf:
        TLS_REQCERT allow

      cn=config.ldif:
        olcServerID: 1 ldap://ldap1.mydomain.com
        olcServerID: 2 ldap://ldap2.mydomain.com

      On ldap1, olcDatabase{1}hdb.ldif:
        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap2.mydomain.com bindmethod=simple bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

      On ldap2, olcDatabase{1}hdb.ldif:
        olcMirrorMode: TRUE
        olcSyncrepl: {0}rid=001 provider=ldap://ldap1.mydomain.com bindmethod=simple bindmethod=simple binddn="cn=me,dc=mydomain,dc=com" credentials="REDACTED" starttls=yes searchbase="dc=mydomain,dc=com" schemachecking=on type=refreshAndPersist retry="60 +"

    Here are the errors I'm getting in syslog:

      Dec 1 21:05:01 ldap1 slapd[6800]: slap_client_connect: URI=ldap://ldap2.mydomain.com DN="cn=me,dc=mydomain,dc=com" ldap_sasl_bind_s failed (-1)
      Dec 1 21:05:01 ldap1 slapd[6800]: do_syncrepl: rid=001 rc -1 retrying
      Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 ACCEPT from IP=ldap.mydomain.com:2295 (IP=ldap1.mydomain.com:636)
      Dec 1 21:05:08 ldap1 slapd[6800]: conn=1111 fd=20 closed (TLS negotiation failure)

    Any ideas? I've been working on OpenLDAP for way too long now.
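
    A quick way to separate a bind problem from a TLS problem is to reproduce the consumer's connection by hand from each box, using the same URI and StartTLS behaviour as the olcSyncrepl entry, and to look at which certificate the peer actually presents (bind DN and base below are copied from the config; adjust as needed):

      # From ldap1, mimic the syncrepl consumer's StartTLS bind to ldap2
      ldapsearch -H ldap://ldap2.mydomain.com -ZZ -x \
          -D "cn=me,dc=mydomain,dc=com" -W -b "dc=mydomain,dc=com" -s base dn

      # Inspect the certificate presented on the LDAPS port the errors mention
      openssl s_client -connect ldap1.mydomain.com:636 -showcerts </dev/null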


  • What exactly is a Mobile mouse? + Mouse Recommendation

    - by chobo2
    I am really disappointed with Logitech. My first cordless wireless mouse was from them and it lasted like 5 years. So I decided to get another one from them: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10099373&catid=

    And this mouse sucks bad. After 6 months it broke. I returned it under warranty and got a new one; now 4-6 months later it is on the verge of breaking again. You pay like $50 for this mouse and it lasts like 6 months, which is sad. I just lost faith in Logitech mice, as I remember my bro also had a Logitech mouse and it broke after like 6 months. He then bought another Logitech mouse (different model) that has been working for maybe 2 years (and no signs of breaking), but I am not crazy about that mouse (I don't like the 2 buttons by the wheel) and I am not sure if they even sell it any more (maybe they got rid of it because it lasts too long): http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1578495&CatId=1285

    So I am looking at a Microsoft mouse: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10125565&catid=

    I am looking at this one, but I am not sure what they mean by "mobile mouse". I think that is what MS calls notebook mice, so I am not sure if this would be a good mouse to get for a desktop. I see it uses a micro USB receiver, but I am not sure if it is smaller than a standard mouse. Almost all the mice I looked at at futureshop.ca or Staples are labeled notebook mice or mobile mice, so I am not sure which mice would be right for me. I don't want a corded one though. I really liked the LX6 design a lot, but it can't last more than 6 months. Thanks


  • Zscaler. Certs, cookies, and port 80 traffic

    - by 54's_lol
    So I work at HQ for a large company that shall remain nameless. We use Zscaler, and I had to roll out a 2048-bit cert per Zscaler's request. People around me at work don't understand the technology and think that the certs are what is allowing internet connectivity. From my understanding (and please chime in), it is the cookie located at C:\Users\$$$$$$4$$\AppData\Roaming\Macromedia\Flash Player#SharedObjects\Q3JQJQJV\gateway.zscaler.net\zscaler.swf that gets created when you provide your creds the first time you use the browser. The certs are simply a way of inspecting the SSL traffic, as Zscaler had no way of doing this before without them. They are essentially using the classic MITM attack to parse your SSL traffic. Gmail is smart enough to recognize this, as you get a warning.

    My question is this: is there a product or service that I can use to verify that my web browser, when at home (i.e. off the company network), isn't still getting routed to Zscaler's cloud? If I do a tracert, that will work fine; it's the port 80 and 443 web traffic that Zscaler and my company are after. I would like to verify that when I'm off their premises my web traffic is using only my ISP and the path to whatever content I'm searching for. Do the certs I'm pushing and the browser authentication do something behind the curtain that forces web traffic to get routed to Zscaler? I searched quite a bit and would very much like to know if I'm ever off company scrutiny. I do know Zscaler offers the service to force the scenario I'm asking about. Can I prove how my web traffic is getting routed? Thanks for any insight. I've been a fan for a long time and your guys' kung fu is very strong :-)
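
    One rough way to check from home is to look at what actually answers the HTTP requests rather than at the route: a forwarding proxy like Zscaler tends to reveal itself in response headers and in the public IP the requests exit from. The header names and the ip.zscaler.com check page are assumptions worth confirming against Zscaler's documentation:

      # Look for proxy fingerprints in the response headers of an arbitrary site
      curl -sI http://example.com | grep -i -E "zscaler|via|x-forwarded"

      # Compare the exit IP of your web traffic with the address your ISP assigned you
      curl -s http://ip.zscaler.com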


  • Apache Bad Request "Size of a request header field exceeds server limit" with Kerberos SSO

    - by Aurelin
    I'm setting up an SSO for Active Directory users through a website that runs on an Apache (Apache2 on SLES 11.1), and when testing with Firefox it all works fine. But when I try to open the website in Internet Explorer 8 (Windows 7), all I get is:

      Bad Request
      Your browser sent a request that this server could not understand.
      Size of a request header field exceeds server limit.
      Authorization: Negotiate [ultra long string]

    My vhost.cfg looks like this:

      <VirtualHost hostname:443>
        LimitRequestFieldSize 32760
        LimitRequestLine 32760
        LogLevel debug
        <Directory "/data/pwtool/sec-data/adbauth">
          AuthName "Please login with your AD-credentials (Windows Account)"
          AuthType Kerberos
          KrbMethodNegotiate on
          KrbAuthRealms REALM.TLD
          KrbServiceName HTTP/hostname
          Krb5Keytab /data/pwtool/conf/http_hostname.krb5.keytab
          KrbMethodK5Passwd on
          KrbLocalUserMapping on
          Order allow,deny
          Allow from all
        </Directory>
        <Directory "/data/pwtool/sec-data/adbauth">
          Require valid-user
        </Directory>
        SSLEngine on
        SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
        SSLCertificateFile /etc/apache2/ssl.crt/hostname-server.crt
        SSLCertificateKeyFile /etc/apache2/ssl.key/hostname-server.key
      </VirtualHost>

    I also made sure that the cookies are deleted and tried several smaller values for LimitRequestFieldSize and LimitRequestLine. Another thing that seems weird to me is that even with LogLevel debug I won't get any logs about this. The log's last line is:

      ssl_engine_kernel.c(1879): OpenSSL: Write: SSL negotiation finished successfully

    Does anyone have an idea about that?
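
    Two hedged checks rather than a confirmed diagnosis: first, reproduce the request outside IE to see how large the Negotiate token really is (it grows with the user's AD group memberships); second, try duplicating the LimitRequestFieldSize/LimitRequestLine lines in the global server context and restarting, to rule out the possibility that the initial header parse is not using this vhost's limits:

      # Reproduce the SSO request outside IE; needs a curl built with GSS-API/SPNEGO support
      curl -vk --negotiate -u : https://hostname/ 2>&1 | grep -i "authorization: negotiate" | wc -c

      # On the Windows client, confirm Kerberos tickets are actually issued (vs. NTLM fallback)
      klist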


  • What kind of “sysadmin stuff” should I show to students during a talk?

    - by Gregory Eric Sanderaon
    A teacher asked me if I could talk about my job as a Linux sysadmin in his class. The course is called "Introduction to Operating Systems" and I've been given 45 minutes to talk. The students are beginning their second year, so they've had a bit of experience with programming in different languages. What I'd like to do is show a series of hands-on examples of the kinds of things I do on a regular basis. I've already got a few ideas jotted down, but I'm afraid that they might be either too advanced or too simple for the students to appreciate. Another concern is that a topic might take too long to explain and use too much time overall. Here are a few ideas:

      - Program deployment using version control (git in my case)
      - Filtering Apache logs using grep, awk, uniq, tail (see the sketch below)
      - A couple of bash scripts that I've made for various stuff on servers
      - Live monitoring (htop, iotop, iptraf)
      - Creating databases and assigning roles in MySQL/PostgreSQL

    So, are these ideas any good? Do you have better ideas? Are the ideas too simple, and should I go for more "advanced" stuff?
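
    For the log-filtering item, a pipeline like the following tends to land well with second-year students; the log path and the combined log format are assumptions:

      # Top 10 client IPs in an Apache access log
      awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head -10

      # Watch server errors arrive live (status code is field 9 in the combined format)
      tail -f /var/log/apache2/access.log | awk '$9 >= 500'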


  • CopSSH SFTP -- limit users access to their home directory only

    - by bradvido
    Let me preface this by saying I've read and followed the instructions in the FAQ many times: http://www.itefix.no/i2/node/37. It does not do what the title claims: it allows every user access to every other user's home directory, as well as access to all subfolders below the CopSSH installation path. I'm only using this for SFTP access and I need my users to be sandboxed into only their home directory. If you know a fool-proof way to lock users down so they can see only their home directory and its subfolders, stop reading now and reply with the solution.

    The details: here is exactly what I tried as I followed the FAQ. My CopSSH installation directory is C:\Program Files\CopSSH.

      :: Create a user group to hold all my SFTP users
      net localgroup sftp_users /ADD

      :: For that group, deny access at the top level and all levels below
      cacls c:\ /c /e /t /d sftp_users

      :: Allow my user group access to the CopSSH installation directory and its subdirectories
      cacls "C:\Program Files\CopSSH" /c /e /t /r sftp_users

    For each SFTP user, I create a new Windows user account, then add the user to the group I've created:

      net localgroup sftp_users sftp_user_1 /add

    Then I open the Activate User wizard for CopSSH, choosing the user and "/bin/sftponly", with the following left checked:

      - Remove copssh home directory if it exists
      - Create keys for public key authentication
      - Create link to user's real home directory

    This works; however, every user has access to every other user's home directory as well as the CopSSH root directory. So I tried denying access for all users to the user home directory:

      :: Deny access for users to the user home directory
      cacls "C:\Program Files\CopSSH\home" /c /e /t /d sftp_users

    Then I tried adding permissions on a user-by-user basis for each user's home\username folder. However, these permissions were not allowed by Windows, because the deny rule I created at the home directory was being inherited and overriding my allow rule. The next step for me would be to remove the deny rule at the home directory and, for each user folder, add a deny rule for every user it doesn't belong to plus an allow rule for the one user it does belong to. However, as my user list gets long, this will become very cumbersome. Thanks for the help!
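
    If it does come down to per-user rules, a single loop keeps that manageable. The sketch below is an assumption-heavy starting point: it swaps cacls for icacls so inheritance can be cut on each folder, which avoids deny rules entirely (a deny on the sftp_users group would otherwise out-rank a per-user allow), and it assumes every folder under home is named exactly after its user. Run it from a .cmd file (use single % variables interactively) and test on one account first:

      for /d %%U in ("C:\Program Files\CopSSH\home\*") do icacls "%%U" /inheritance:r /t /grant:r "%%~nxU:(OI)(CI)F" "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F"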


  • How can I recover data from a dmg that cannot mount?

    - by Benjamin Lee
    I backed up a hard drive into a dmg, then reformatted the hard drive. (I had deleted the EFI partition before, which was preventing me from reinstalling the operating system.) When I tried to use the restore function in Disk Utility it gave an input/output error. I get this error with anything I do to the image, including mounting, converting, attaching, verifying, scanning, and getting info through hdiutil imageinfo. I have run all of these with hdiutil and the -noverify, -nomount, and -ignorebadchecksums flags. When I copy the image onto another disk/partition I get a different error: something like "No filesystem". I cannot repair the image with Disk Utility or asr, which both throw the I/O error.

    When I put the -verbose flag on the command I actually get a different error: "hdiutil: attach failed - No child processes". I have output from both the -verbose and -debug flags, but it is fairly long so I had to attach it as a link to avoid the 3000 character limit. No recovery system can get at the data because the image is both compressed and unmountable. How can I get the data back, and what has gone wrong? -debug -verbose
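
    Since every hdiutil operation dies with an I/O error, the first thing to establish is whether the dmg file itself can be read end to end; if the disk holding it has bad sectors, no image-level tool will get any further. A minimal read test, with the paths as placeholders:

      # Read the entire image and discard it; any unreadable region will be reported
      dd if=/path/to/backup.dmg of=/dev/null bs=1m

      # If reads fail, salvage what is readable into a copy and point hdiutil at that instead
      dd if=/path/to/backup.dmg of=/Volumes/Other/backup-salvaged.dmg bs=1m conv=noerror,sync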


  • Gitlab and Nginx not loading gitlab

    - by paperids
    I have just installed gitlab and nginx on Ubuntu LTS 12.04 using this guide: http://blog.compunet.co.za/gitlab-installation-on-ubuntu-server-12-04/

    I installed this on another server last night and had absolutely no problems with it (sort of a test run to see how long it would take to get going). I am not getting any errors when restarting gitlab or nginx with /etc/init.d, and my error logs are empty. The only thing I know of to go on is the vhost config:

      upstream gitlab {
        server unix:/home/gitlab/gitlab/tmp/sockets/gitlab.sock$
      }

      server {
        listen localhost:80;
        server_name gitlab.bluringdev.com;
        root /home/gitlab/gitlab/public;

        # individual nginx logs for this gitlab vhost
        access_log /var/log/nginx/gitlab_access.log;
        error_log /var/log/nginx/gitlab_error.log;

        location / {
          # serve static files from defined root folder;.
          # @gitlab is a named location for the upstream fallback$
          try_files $uri $uri/index.html $uri.html @gitlab;
        }

        # if a file, which is not found in the root folder is r$
        # then the proxy pass the request to the upsteam (gitla$
        location @gitlab {
          proxy_redirect off;
          # you need to change this to "https", if you set "ssl" $
          proxy_set_header X-FORWARDED_PROTO http;
          proxy_set_header Host gitlab.bluringdev.com:80;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_pass http://gitlab;
        }
      }

    If there's any other information that would be helpful, just let me know and I'll get it up asap.
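
    With both error logs empty, the useful next checks are whether the unix socket in the upstream block actually exists and which server block answers for this host name; note too that "listen localhost:80" binds only to 127.0.0.1, which is worth comparing against the server that works. A quick sketch using the paths from the config above:

      # Is the GitLab application socket present?
      ls -l /home/gitlab/gitlab/tmp/sockets/gitlab.sock

      # Does the config parse, and what happens for this Host header locally?
      sudo nginx -t
      curl -sv -o /dev/null -H "Host: gitlab.bluringdev.com" http://127.0.0.1/

      # Watch the vhost-specific logs while repeating the request
      sudo tail -f /var/log/nginx/gitlab_access.log /var/log/nginx/gitlab_error.log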


  • NFS "Permission Denied" getting cached on NetApp Filer

    - by Christopher Karel
    We have a bunch of Linux boxes mounting NFS shares off a NetApp filer. From time to time, I will flub some part of the export configuration. Typo on one of the allowed hosts, incorrect IP address, etc, etc. No worries, this is usually done on a test system, or with brand new exports that aren't yet in production. However, I've found that once I've been denied permission to mount something from a Linux machine, the failure gets cached for as long as a day. I will correct the problem that was blocking the mount, re-export on the NetApp, and still not be able to mount the share. I'm pretty sure this caching is done at the NetApp side. It normally ages out after a day or so, but it really sucks having to wait until tomorrow to mount a share. I've tried exportfs -f on the NetApp, as well as dns flush. (I found both suggestions via Google) However, neither one works. I would sell my soul if someone could help out with a command/pagan ritual that would clear up this cache issue. --Christopher Karel
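
    For what it's worth, on 7-mode filers the access cache can usually be flushed per export rather than globally, and the filer can be asked how it evaluates a specific client; the commands below are from memory of the na_exportfs man page and should be treated as assumptions to verify on your ONTAP release:

      filer> exportfs -f /vol/testvol               # flush the access cache for just this export
      filer> exportfs -c 10.1.2.3 /vol/testvol rw   # ask how the filer evaluates this client's rw access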


  • How can I forward an application with X11 in grayscale

    - by ??????? ???????????
    I am trying to run a graphical application at home and display it on a laptop which is located about six routing hops away. The problem is that the connection is so slow (or rather there is so much GOOEY being transferred) that the mouse is unresponsive and it takes a "long time" to redraw the window, even at a resolution of 800x600 pixels. The connection speeds are 10 Mbit up at home and about 1 Mbit down on the laptop, which I think should be sufficient for looking at some GUI in (almost) real time.

    Since this traffic is sent over a secure shell, I have enabled Compression with the highest CompressionLevel, along with Ciphers set to blowfish-cbc. This has substantially improved the responsiveness of the application, making it nearly usable. However, my goal is to improve the performance even further by sacrificing colors and even frame rate. The application to be displayed is a QEMU SDL window with a graphically-oriented OS in it. This is not strictly relevant, but perhaps there are options to tweak the SDL output which I am not aware of.

    A possible workaround would be to run the application in a "hidden" X server and enable TigerVNC on that X server. This would automatically give me the benefits of an optimized VNC viewport, but the goal is to do without that (reduce complexity). The question I'm asking is: what are my options for reducing the data rate generated on the server in order to make the graphical application more usable on the client? As mentioned, colors are not important and I could probably work with 5-16 fps. Both machines are running Gentoo, with the software in question being:

      Workstation:
        X.Org X Server 1.10.4
        OpenSSH_5.8p1-hpn13v10, OpenSSL 1.0.0e
        QEMU emulator version 0.15.1 (qemu-kvm-0.15.1)

      Laptop:
        X.Org X Server 1.12.2
        OpenSSH_5.8p1-hpn13v10lpk, OpenSSL 1.0.0j
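
    Since the forwarded window is QEMU's SDL output, one option that avoids both the X11 round trips and a hidden X server is letting QEMU export the guest display itself and tunnelling that over the existing compressed SSH session; whether its frame-rate and colour handling beat compressed X11 here is exactly what would need testing, and the options below are a sketch against qemu-kvm 0.15:

      # On the workstation: serve the guest display on VNC display :1 (TCP 5901), no SDL window
      qemu-system-x86_64 -vnc 127.0.0.1:1 [existing guest options...]

      # On the laptop: tunnel the display through the already-tuned SSH connection
      ssh -C -c blowfish-cbc -L 5901:127.0.0.1:5901 user@workstation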


  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to get a domain from shared hosting to my new VPS. Everything was working just 100% fine, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site, http://www.onlythebible.com/, only the home page works. All the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped working. Some of the pictures, like this URL which doesn't use a rewrite, return a 404: http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg

    I'm almost certain it has nothing to do with the nginx configuration. I've got suspicions that it could be something to do with php5-fpm? The funny thing is, all of a sudden it started working again. And then an hour or so later it broke again and has now gone back to only displaying the home page, and all of the links (and some of the pictures) are just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    Edit: here's a line from the error log which might shed some light on the problem:

      2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/ etc. Oh, and for some reason it's just started working again... for how long I don't know. Another thing that was a bit weird is that the pages on my website are pulled from a database, but when I edited the database, the pages didn't change. It's almost like they've been cached or something.
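
    The logged error is the main clue: the request was served out of /usr/local/nginx/html, which is the compiled-in default root, so it was handled by a server block (or the default server) other than the one carrying your /var/www root. A rough way to confirm which block answers, with the config paths as assumptions:

      # List every server_name / root pairing nginx knows about
      grep -RnE "server_name|root" /etc/nginx/ /usr/local/nginx/conf/ 2>/dev/null

      # Request the same file with and without the www host header and compare
      curl -sI -H "Host: www.onlythebible.com" http://127.0.0.1/bgsPreview/Matthew-8.10.jpg
      curl -sI -H "Host: onlythebible.com" http://127.0.0.1/bgsPreview/Matthew-8.10.jpg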


  • svchost consuming more than 50% CPU all the time in windows 7

    - by claws
    Hello, I'm using Windows 7 Ultimate. An svchost instance containing the DCOM Server Process Launcher, Plug and Play, and Power services is consuming more than 50% of the CPU most of the time. I found this blog post: http://blog.hansmelis.be/2007/06/17/windows-vista-long-delay-when-switching-songs-in-media-player/

      That process is associated with two services: DCOM Server Process Launcher and Plug and Play. For the Vulcans among us, all logic stops there for a second. What do those two services have to do with WMP? The answer is provided by Vista's new audio engine. The new engine supports several audio "enhancements". But for the enhancements to work, the engine needs to determine if your hardware is up to the task. And when does it check that? Each time a sound output device is accessed. That's pretty nice if you can do a hot swap of sound hardware, but I don't see me doing that anytime soon. Anyways, it does provide us with the link to the correct service because checking hardware is done by the "Plug and Play" service. One might think that deactivating each enhancement would solve the problem, but that's wishful thinking. The configuration of the enhancements is located in the properties of the sound hardware. When opening the tab, I found out that no enhancements were active. Hmmm... so why does it check the hardware? Well, it does that in case you actually enable an enhancement. To completely stop the hardware checking, you have to tick the box labelled Disable all enhancements. As soon as you do that, Vista finally understands you don't want to use them.

    But that's for Vista. Is it the same case with Windows 7 too? I couldn't find any "Disable all enhancements" option in my Control Panel sounds applet (mmsys.cpl). Where can I find this option in Windows 7? How do I solve this?
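
    Before hunting for the checkbox, it helps to confirm which of the three services in that svchost is actually burning the CPU; giving one of them its own process makes that visible in Task Manager. A sketch from an elevated prompt, using the usual short service names (PlugPlay, DcomLaunch, Power) as assumptions:

      :: Map each svchost PID to the services it hosts
      tasklist /svc /fi "imagename eq svchost.exe"

      :: Temporarily run Plug and Play in its own process so its CPU use shows separately
      sc config PlugPlay type= own
      :: (revert later with: sc config PlugPlay type= share)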


  • Scripting an automated SQLServer 2008 DR move

    - by ItsAMystery
    Hi all. We use the built-in log shipping in SQL Server to log ship to our DR site, but once a month we do a DR test, which requires us to move back and forth between our live and backup servers. We run multiple (30) databases on the system, so manually backing up the final logs and disabling the jobs is too much work and takes too long. I thought "no problem, I will script it", but have run into trouble: it always complains that the final log ship is too early to apply, even though I don't export the final log until putting the database into norecovery mode.

    Firstly, does anyone know a simple and reliable way of doing this? I have looked at some 3rd-party software (Red Gate SQL Backup, I think it was) but that didn't make it easy in this situation either. What I want to be able to do is basically run a script (a series of stored procedures) to get me to DR and run another to get me back, with no data loss.

    My scripts are very simplistic at the moment, but here they are. There are 2 servers: primary PARIS, secondary PARIST. StartAgentJobAndWait is a script written by someone else (ta) that just checks the jobs have finished, or quits if it never ends. At the moment I am just using a test database called BOB2, but if I can get it working I will pass in the database and job names.

    From PARIS:

      /* Disable backup job */
      exec msdb..sp_update_job @job_name = 'LSBackup_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSCopy_PARIS_BOB2', @enabled = 0
      exec PARIST.msdb..sp_update_job @job_name = 'LSRestore_PARIS_BOB2', @enabled = 0
      exec PARIST.master.dbo.DRStage2

    On PARIST, DRStage2:

      DECLARE @RetValue varchar (10)
      EXEC @RetValue = StartAgentJobAndWait LSCopy_PARIS_BOB2 , 2
      SELECT ReturnValue=@RetValue
      if @RetValue = 1
      begin
          print 'The Copy Task completed Succesffuly'
      END
      ELSE
          print 'The Copy task failed, This may or may not be a problem, check restore state of database'
      SELECT @RetValue = 0
      EXEC @RetValue = StartAgentJobAndWait LSRestore_PARIS_BOB2 , 2
      SELECT ReturnValue=@RetValue
      if @RetValue = 1
      begin
          print 'The Restore Task completed Succesffuly'
      END
      ELSE
          print 'The Copy task failed, This may or may not be a problem, check restore state of database'
      exec PARIS.master.dbo.DRStage3

    On PARIS, DRStage3:

      /* Do the last logship and move it to Trumpington */
      BACKUP log "BOB2" to disk='c:\drlogshipping\BOB2.bak' with compression, norecovery
      EXEC xp_cmdshell 'copy c:\drlogshipping \\192.168.7.11\drlogshipping'
      EXEC PARIST.master.dbo.DRTransferFinish

    On PARIST, DRTransferFinish:

      AS BEGIN
      restore database "BOB2" from disk='c:\drlogshipping\bob2.bak' with recovery
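
    For 30 databases it can also help to drive whichever entry procedures this ends up using from a small wrapper, so each step's output is captured and the run stops at the first failure; the sketch below uses sqlcmd, and the procedure name and parameter are placeholders rather than the procedures above:

      :: DRMoveToDR and @db are placeholders for the eventual PARIS-side entry point
      sqlcmd -S PARIS -E -b -Q "EXEC master.dbo.DRMoveToDR @db = N'BOB2'" -o drmove_BOB2.log || goto :failed
      echo Move completed for BOB2
      exit /b 0
      :failed
      echo DR move failed - check drmove_BOB2.log before continuing.
      exit /b 1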


  • Benchmarking a file server

    - by Joel Coel
    I'm working on building a new file server: a simple Windows Server box with a few terabytes of disk space to share on the LAN. Pain of current hard drive prices aside :( I would like to get some benchmarks for this device under load compared to our old server. The old server was installed in 2005 and had 5 136GB 10K disks in RAID 5. The new server has 8 1TB disks in two RAID 10 volumes (plus a hot spare for each volume), but they're only 7.2K rpm, and of course with a much larger cache size.

    I'd like to get an idea of the performance expectations of the new server relative to the old. Where do I get started? I'd like to know both the raw potential under different kinds of load for each server, as well as an idea of what our real-world load looks like and how it will translate. Will disk load even matter, or will performance be more driven by the network connection? I could probably fumble through some disk I/O and wait counters in Performance Monitor, but I don't really know what to look for, which counters to watch, or for how long and when.

    FWIW, I'm expecting a nice improvement because of the benefits of having two different volumes and the better RAID 10 performance vs RAID 5, in spite of using slower disks, but I'd like to get an idea of how much.
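
    A workable recipe is to record a few disk and network counters on the old server under its normal daily load first (that answers the real-world-load question), then run the same synthetic load against both machines. The counter names below are standard; the diskspd flags are from memory and worth checking against its help output:

      :: Capture a real-world baseline on the old server during normal use (5s samples to CSV)
      typeperf "\PhysicalDisk(_Total)\Avg. Disk sec/Read" "\PhysicalDisk(_Total)\Avg. Disk sec/Write" ^
        "\PhysicalDisk(_Total)\Current Disk Queue Length" "\Network Interface(*)\Bytes Total/sec" ^
        -si 5 -o old_server_baseline.csv

      :: Then run the same synthetic load against both boxes, e.g. with diskspd
      :: (intended flags: 64K blocks, 60s run, 8 outstanding IOs, 4 threads, random, 30% writes,
      ::  caching disabled, 20GB test file; confirm against diskspd's own help text)
      diskspd -b64K -d60 -o8 -t4 -r -w30 -Sh -c20G D:\testfile.dat

    If Bytes Total/sec sits near the NIC's line rate while Avg. Disk sec/Read stays in the low single-digit milliseconds, the network rather than the disks is the ceiling.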


  • New user profile creation error - Windows cannot open *.exe

    - by Jake
    I have a Windows 7 laptop with a user "mydomain\boy" that cannot log in to the laptop. The error message is something like "User Profile Service cannot log in the user boy". I then logged in with the domain admin account "mydomain\admin" and deleted "mydomain\boy" from My Computer > System Properties > Advanced system settings > User Profiles > Settings. I also made sure that the user is deleted from Control Panel > User Accounts, and deleted the user folder C:\Users\boy. I also checked the registry at HKLM\software\microsoft\windows nt\currentversion\profilelist\ and made sure there is no entry for boy. I followed http://support.microsoft.com/kb/947215 using method 3 ("Fix it for me"), but it does not seem to do anything (or I don't know how to use it).

    AFTER EVERYTHING DONE ABOVE... Every time I log in with a new user, be it boy, girl or any other domain account (other than the admin account already created when I first logged in to begin the fix/break), it takes a long time, and when "Preparing desktop" goes away, it starts throwing "cannot open *.exe" errors (e.g. for regsvr.exe etc.), i.e. a file association problem with .exe files.

    QUESTION (phew, finally): please tell me how to fix it? Thanks!
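
    Given that brand-new profiles come up with broken .exe handling, the Default profile that new profiles are copied from is the usual suspect, along with the machine-wide .exe association. Two cheap checks from an elevated command prompt, offered as a starting point rather than a fix:

      :: Inspect the machine-wide .exe association
      assoc .exe
      ftype exefile
      :: Expected values are roughly:  .exe=exefile   and   exefile="%1" %*
      :: Restore them if they differ:
      assoc .exe=exefile
      ftype exefile="%1" %*

    If those are intact, compare C:\Users\Default on this laptop against a known-good Windows 7 machine, since whatever is wrong there gets stamped into every new profile.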


  • Alter charset and collation in all columns in all tables in MySQL

    - by The Disintegrator
    I need to execute these statements in all tables for all columns:

      alter table table_name charset=utf8;
      alter table table_name alter column column_name charset=utf8;

    Is it possible to automate this in any way inside MySQL? I would prefer to avoid mysqldump.

    Update: Richard Bronosky showed me the way :-)

    The query I needed to execute in every table:

      alter table DBname.DBfield CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;

    Crazy query to generate all other queries:

      SELECT distinct
        CONCAT('alter table ', TABLE_SCHEMA, '.', TABLE_NAME,
               ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;')
      FROM information_schema.COLUMNS
      WHERE TABLE_SCHEMA = 'DBname';

    I only wanted to execute it in one database. It was taking too long to execute all in one pass. It turned out that it was generating one query per field per table, and only one query per table was necessary (distinct to the rescue). Getting the output on a file was how I realized it.

    How to generate the output to a file:

      mysql -B -N --user=user --password=secret -e "SELECT distinct CONCAT( 'alter table ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;' ) FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" > alter.sql

    And finally to execute all the queries:

      mysql --user=user --password=secret < alter.sql

    Thanks Richard. You're the man!
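
    Once the generated statements have been sanity-checked, the two manual steps can also be collapsed into a single pipe that feeds the generator's output straight into a second client; credentials and the database name are placeholders:

      mysql -B -N -u user -psecret -e "SELECT DISTINCT CONCAT('ALTER TABLE ', TABLE_SCHEMA, '.', TABLE_NAME, ' CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;') FROM information_schema.COLUMNS WHERE TABLE_SCHEMA = 'DBname';" | mysql -u user -psecret DBname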


  • lighttpd: Backend is overloaded + fcgi-server re-enabled + all handlers are down

    - by AbuZubair
    We have a standard lighttpd deployment with PHP-CGI, and our error logs are flooding with the following. This is causing a huge problem because we keep returning 500s to our clients:

      2012-10-14 14:28:38: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 36
      2012-10-14 14:28:38: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
      2012-10-14 14:28:39: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
      2012-10-14 14:28:40: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 37
      2012-10-14 14:28:40: (mod_fastcgi.c.2764) fcgi-server re-enabled: 0 /tmp/php-7735.socket
      2012-10-14 14:28:41: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 57
      2012-10-14 14:28:41: (mod_fastcgi.c.3001) backend is overloaded; we'll disable it for 1 seconds and send the request to another backend instead: reconnects: 0 load: 57
      2012-10-14 14:28:42: (mod_fastcgi.c.3597) all handlers for /index.php? on .php are down.

    Does anyone have any clue as to what is going on? We restarted all PHP and lighttpd related processes and that didn't fix the problem. We ended up rebooting the whole box and now it has gone away, although we fear it may come back later... In general our deployment has been doing fine for a long time and this is the first time this has happened.
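
    "backend is overloaded" from mod_fastcgi generally means every php-cgi child was busy and the socket's backlog was full, so the useful questions are how many PHP children are configured versus actually running, and what was pinning them all at once (a slow database, locked sessions, a traffic spike). A quick sketch, with the config path as an assumption:

      # How many PHP FastCGI workers are alive right now?
      ps --no-headers -C php-cgi | wc -l

      # What does the fastcgi.server block allow? (look for max-procs and PHP_FCGI_CHILDREN)
      grep -n -A10 "fastcgi.server" /etc/lighttpd/lighttpd.conf

      # Are requests piling up on the socket at the moment of trouble?
      ss -x | grep php-7735.socket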


  • Proliant RAID 1 Rebuild Questions

    - by Nicholas
    I have an HP ProLiant ML350 G5 server that experienced a power supply failure overnight. The power supply was replaced, but unfortunately the server got restarted with only one disk of the RAID 1 set plugged in (the RAID controller is the built-in E200i). The RAID BIOS then said on start-up that it had entered Interim Recovery Mode; however, I would have expected it to still start up with only the one drive. Instead the BIOS says that it cannot find a C: drive and enters a reboot loop polling the other boot devices. My first question is: is this normal behaviour, not to start up on one disk?

    The second drive was then plugged in (all drives are OK) and the RAID BIOS started an automatic rebuild on that disk. This appears to be a background process, as there is no progress shown; however, based on the light flashing it looks like it is working. My second question is: how long will this rebuild take (36GB 15K SAS drive)?

    I cannot see any error messages and it looks like it is rebuilding the drive OK, but the computer still will not start up. It still says during the boot-up process that the C: drive is not found. If I wait for the rebuild to finish, is it likely to fix itself and find the C: drive? Or is there some other problem here?
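
    The rebuild progress can be checked from the controller rather than the OS: the E200i reports array status at POST (and in ORCA/ACU), and once any OS is up the hpacucli tool shows a percentage. A sketch, with the slot number as an assumption; for a 36GB 15K SAS mirror the rebuild itself typically finishes within an hour or so unless the controller is also busy serving I/O:

      # Show logical/physical drive state, including "Rebuilding: nn% complete"
      hpacucli ctrl all show config detail

      # Or target the E200i's slot directly
      hpacucli ctrl slot=0 logicaldrive all show detail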


  • First Request to IIS Express Fails with 503 Service Unavailable, Second Succeeds

    - by Chris Moschini
    Each time I start my ASP.NET MVC 3 app from Visual Studio 2010, IIS Express launches and IE sits spinning. The request fails with HTTP 503 Service Unavailable. I hit Refresh in IE, and the request succeeds. All subsequent requests succeed until I stop debugging. The next time I go to start debugging, the first request fails again. Has anyone else experienced this?

    In IISExpress\applicationhost.config I have:

      <site name="ProjectName" id="6">
        <application path="/" applicationPool="Clr4IntegratedAppPool">
          <virtualDirectory path="/" physicalPath="c:\users\chris\dropbox\code\2010\SolutionName\ProjectName" />
        </application>
        <bindings>
          <binding protocol="http" bindingInformation="*:80:laptop" />
        </bindings>
      </site>

    I have this in my hosts file:

      127.0.0.1 laptop

    And my project is set to start with IIS Express, with Project Url set to http://laptop

    It's very strange that only the first request fails, perhaps as though Visual Studio isn't waiting long enough for IIS Express to start? Is there some way to make it wait? Stopping debugging, making a change, and then starting again is one of the most common tasks I do, so adding another step to get there is pretty annoying.
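
    One way to take the race out of the first request is to have IIS Express already running before the debugger launches the browser; started once from a command prompt against the same config and site, it stays up across debug sessions. The paths below are the usual defaults and are assumptions, and since the binding uses port 80 with a custom host name, the http.sys reservation is worth confirming too:

      "C:\Program Files (x86)\IIS Express\iisexpress.exe" /config:"%USERPROFILE%\Documents\IISExpress\config\applicationhost.config" /site:ProjectName

      :: Port 80 with a custom host name generally also needs a URL reservation
      netsh http add urlacl url=http://laptop:80/ user=Everyone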

