Search Results

Search found 1388 results on 56 pages for 'michelle jun lee'.


  • apc.stat causes 500 internal server error

    - by Legit
    When I turn off apc.stat it causes a 500 internal server error. I checked the Apache error_log and it shows:

      [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Warning: require(): Filename cannot be empty in /var/www/site1/public/index.php on line 17
      [Tue Jun 26 10:02:59 2012] [error] [client 127.0.0.1] PHP Fatal error: require(): Failed opening required '' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/site1/public/index.php on line 17

    I checked that line and here's what it contains:

      require('./wp-blog-header.php');

    I don't see anything wrong with it. My current setup: APC version 3.1.10, PHP version 5.4.4. How do I resolve this error when I disable apc.stat?
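    A hedged guess at a workaround: with apc.stat off, APC stops re-checking files on disk, and relative include paths like the one above are a reported trouble spot for cached scripts (APC on early PHP 5.4 had a number of apc.stat-related bug reports, so a newer APC build may help too). A minimal sketch of a fix, pinning the require to an absolute path; the path is the one quoted in the question:

      <?php
      // Hypothetical replacement for line 17 of index.php: resolve the
      // include relative to this file instead of the working directory,
      // so the cached script always sees a stable, absolute path.
      require(dirname(__FILE__) . '/wp-blog-header.php');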

    Read the article

  • OSS App Hackathon @ National Information Society Agency

    - by Edward J. Yoon
    Yesterday there was an OSS App Hackathon arranged by the NIA (National Information Society Agency) in Seoul. I attended as a member of the judging panel with Prof. Lee of NHN NEXT. A lot of people were there. You can read more details (Korean news) here: http://news.naver.com/main/read.nhn?mode=LSD&mid=sec&sid1=105&oid=138&aid=0001997038

    Read the article

  • Suppress log messages about 3ware disk temperature changes on CentOS?

    - by Stefan Lasiewski
    I have a number of CentOS 5 servers which use 3ware RAID controllers. These servers are bugging my team with messages about minor temperature changes, like this:

      Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_01], SMART Usage Attribute: 194 Temperature_Celsius changed from 119 to 118
      Jun 8 12:32:39 HOST smartd[1231]: Device: /dev/twa0 [3ware_disk_03], SMART Usage Attribute: 194 Temperature_Celsius changed from 122 to 121

    How can I suppress these messages? According to man smartd.conf: "To disable any of the 3 reports, set the corresponding limit to 0. Trailing zero arguments may be omitted. By default, all temperature reports are disabled ('-W 0')." On my systems, smartd is reporting temperature changes by default. I tried a manual approach. In /etc/smartd.conf, I have the following:

      /dev/twa0 -d 3ware,1 -a -W 0
      /dev/twa0 -d 3ware,3 -a -W 0

    But this still does not suppress the messages. Since these messages show up in /var/log/messages, LogWatch is sending unnecessary emails every night.
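    A hedged guess at why the explicit entries are being ignored: smartd.conf(5) documents that once a DEVICESCAN line is read, smartd ignores all remaining lines in the file, and the stock CentOS file ships with one. A minimal sketch of the file, assuming the default DEVICESCAN line is still present:

      # /etc/smartd.conf
      # DEVICESCAN -H -m root   <- comment this out, or the lines below are never read
      /dev/twa0 -d 3ware,1 -a -W 0
      /dev/twa0 -d 3ware,3 -a -W 0

    followed by a restart of the smartd service to pick up the change.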

    Read the article

  • How to find source of 301/302 redirect loop? Heroku GoDaddy Zerigo

    - by user179288
    This should be a relatively simple problem, but I'm having trouble. I hope this is the right forum to post on, as I've seen people get booted off Stack Overflow for this sort of thing. I've set up a web app on Heroku (Cedar stack) at my-web-app.herokuapp.com and I'm trying to direct my-domain.com and www.my-domain.com to it. As per the Heroku documentation, I've set my-domain.com to redirect (forwarding) to www.my-domain.com and then set a CNAME from www.my-domain.com to my-web-app.herokuapp.com. But the CNAME doesn't seem to be working right and is redirecting back to my-domain.com, causing a loop, and I can't work out why. I first configured these settings at GoDaddy.com, where I registered the domain, but then tried to avoid the problem by using Heroku's Zerigo DNS add-on, setting the nameservers at GoDaddy to the ones given for Zerigo. However, the problem remains. Here is the output from dig for my-domain.com ("drop-circles.com"):

      ; <<>> DiG 9.3.2 <<>> any drop-circles.com
      ;; global options: printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 671
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 5
      ;; QUESTION SECTION:
      ;drop-circles.com. IN ANY
      ;; ANSWER SECTION:
      drop-circles.com. 433 IN NS b.ns.zerigo.net.
      drop-circles.com. 433 IN NS d.ns.zerigo.net.
      drop-circles.com. 433 IN NS e.ns.zerigo.net.
      drop-circles.com. 433 IN NS a.ns.zerigo.net.
      drop-circles.com. 433 IN NS c.ns.zerigo.net.
      drop-circles.com. 433 IN SOA a.ns.zerigo.net. hostmaster.zerigo.com. 1372250760 10800 3600 604800 900
      drop-circles.com. 433 IN A 64.27.57.29
      drop-circles.com. 433 IN A 64.27.57.24
      ;; ADDITIONAL SECTION:
      d.ns.zerigo.net. 68935 IN A 174.36.24.250
      e.ns.zerigo.net. 69015 IN A 72.26.219.150
      a.ns.zerigo.net. 72602 IN A 64.27.57.11
      c.ns.zerigo.net. 69204 IN A 109.74.192.232
      b.ns.zerigo.net. 70549 IN A 174.37.229.229
      ;; Query time: 15 msec
      ;; SERVER: 194.168.4.100#53(194.168.4.100)
      ;; WHEN: Wed Jun 26 14:29:07 2013
      ;; MSG SIZE rcvd: 293

    Here is the output from dig for www.my-domain.com ("www.drop-circles.com"):

      ; <<>> DiG 9.3.2 <<>> any www.drop-circles.com
      ;; global options: printcmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 1608
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;www.drop-circles.com. IN ANY
      ;; ANSWER SECTION:
      www.drop-circles.com. 407 IN CNAME drop-circles-website.herokuapp.com.
      ;; Query time: 19 msec
      ;; SERVER: 194.168.4.100#53(194.168.4.100)
      ;; WHEN: Wed Jun 26 14:29:15 2013
      ;; MSG SIZE rcvd: 83

    And from Fiddler, if I use the inspector when I try either address, I get a series of requests, with my-domain.com ("drop-circles.com") looking like this:

      Request:
      GET http://drop-circles.com/ HTTP/1.1
      Accept: text/html, application/xhtml+xml, */*
      Accept-Language: en-gb
      User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
      Accept-Encoding: gzip, deflate
      Connection: Keep-Alive
      Host: drop-circles.com

      Response:
      HTTP/1.1 302 Found
      Server: nginx/0.8.54
      Date: Wed, 26 Jun 2013 13:26:55 GMT
      Content-Type: text/html;charset=utf-8
      Connection: keep-alive
      Status: 302 Found
      Location: http://www.drop-circles.com/
      Content-Length: 113

      <html><body>Redirecting to <a href="http://www.drop-circles.com/">http://www.drop-circles.com/</a></body></html>

    And www.my-domain.com ("www.drop-circles.com") looking like this:

      Request:
      GET http://www.drop-circles.com/ HTTP/1.1
      Accept: text/html, application/xhtml+xml, */*
      Accept-Language: en-gb
      User-Agent: Opera/9.80 (Windows NT 5.1; U; Edition IBIS; Trident/5.0)
      Accept-Encoding: gzip, deflate
      Connection: Keep-Alive
      Host: www.drop-circles.com

      Response:
      HTTP/1.1 301 Moved Permanently
      Content-Type: text/html
      Date: Wed, 26 Jun 2013 13:26:56 GMT
      Location: http://drop-circles.com/
      Vary: Accept
      X-Powered-By: Express
      Content-Length: 104
      Connection: keep-alive

      <p>Moved Permanently. Redirecting to <a href="http://drop-circles.com/">http://drop-circles.com/</a></p>

    Any and all help would be greatly appreciated. If it is not at all obvious from these readouts what the problem might be, could someone at least tell me which company (GoDaddy, Zerigo or Heroku) I should go to for support, since I don't really know enough to say where the problem lies. Thank you.
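    One detail worth flagging in the captures above: the 302 from the bare domain is served by nginx (the forwarding service at the Zerigo A records), but the 301 from www carries "X-Powered-By: Express", i.e. it comes from the Heroku app itself. That suggests an app-level canonical-host redirect (www to apex) is fighting the DNS-level forwarding (apex to www), so the Express app's middleware is a reasonable place to look. A hedged way to re-check both hops without a browser, using standard curl flags:

      curl -sI http://drop-circles.com/     | grep -i '^Location'
      curl -sI http://www.drop-circles.com/ | grep -i '^Location'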

    Read the article

  • Why doesn't Firefox cache my images and CSS

    - by Richard A
    I am using IIS 7 and have already set up the following, but when I run Firefox it seems not to cache any of my images, even with "remember history" set:

      <?xml version="1.0" encoding="UTF-8"?>
      <configuration>
        <system.webServer>
          <staticContent>
            <clientCache cacheControlCustom="public" cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
          </staticContent>
        </system.webServer>
      </configuration>

    However, when I use Firebug it still points to Firefox not caching images and CSS. Response headers:

      Cache-Control public,max-age=604800
      Content-Type text/css
      Content-Encoding gzip
      Last-Modified Mon, 27 Jun 2011 03:53:22 GMT
      Accept-Ranges bytes
      Etag "507968c27d34cc1:0"
      Vary Accept-Encoding
      Server Microsoft-IIS/7.5
      X-Powered-By ASP.NET
      Date Mon, 27 Jun 2011 13:06:41 GMT
      Content-Length 5067

    Request headers:

      Host www.xx.com
      User-Agent Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1
      Accept text/css,*/*;q=0.1
      Accept-Language en-us,en;q=0.5
      Accept-Encoding gzip, deflate
      Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7
      Keep-Alive 115
      Connection keep-alive
      Referer http://www.xx.com/
      Cookie __utma=62996397.135679654.1309106351.1309159743.1309164158.8; __utmz=62996397.1309106351.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); __utmc=62996397
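    A hedged sanity check, since the paste above only shows a CSS response: confirm that the image responses also carry the max-age header the clientCache element was meant to add, and remember that pressing Reload (or leaving Firebug's cache-disabling option on) makes Firefox revalidate regardless of max-age. The image URL below is a placeholder:

      curl -sI http://www.xx.com/images/example.png | grep -i 'cache-control'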

    Read the article

  • How does this main domain have a CNAME record?

    - by TRiG
    I was under the impression that only subdomains could have CNAME records: main domains need to define all their own records. However, apt-get.com seems to have only a CNAME record. How can this work?

      $ dig apt-get.com
      ; <<>> DiG 9.8.1-P1 <<>> apt-get.com
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45743
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;apt-get.com. IN A
      ;; ANSWER SECTION:
      apt-get.com. 86336 IN CNAME thie5ku9.dsgeneration.com.
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.242
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.246
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.166
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.232
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.161
      thie5ku9.dsgeneration.com. 60 IN A 208.73.210.233
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.186
      thie5ku9.dsgeneration.com. 60 IN A 208.73.211.188
      ;; Query time: 59 msec
      ;; SERVER: 127.0.0.1#53(127.0.0.1)
      ;; WHEN: Tue Jun 10 15:05:48 2014
      ;; MSG SIZE rcvd: 193

      $ dig apt-get.com ns
      ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43831
      ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0
      ;; QUESTION SECTION:
      ;apt-get.com. IN NS
      ;; Query time: 26 msec
      ;; SERVER: 127.0.0.1#53(127.0.0.1)
      ;; WHEN: Tue Jun 10 15:12:37 2014
      ;; MSG SIZE rcvd: 29

      $ dig apt-get.com ns @b.gtld-servers.net
      ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns @b.gtld-servers.net
      ;; global options: +cmd
      ;; Got answer:
      ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38228
      ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2
      ;; WARNING: recursion requested but not available
      ;; QUESTION SECTION:
      ;apt-get.com. IN NS
      ;; AUTHORITY SECTION:
      apt-get.com. 172800 IN NS ns1.domainrecover.com.
      apt-get.com. 172800 IN NS ns2.domainrecover.com.
      ;; ADDITIONAL SECTION:
      ns1.domainrecover.com. 172800 IN A 66.45.232.66
      ns2.domainrecover.com. 172800 IN A 65.23.159.179
      ;; Query time: 70 msec
      ;; SERVER: 192.33.14.30#53(192.33.14.30)
      ;; WHEN: Tue Jun 10 15:07:05 2014
      ;; MSG SIZE rcvd: 111

    The domain does resolve. I get the following headers:

      GET / HTTP/1.1
      User-Agent: Testing_Sniffer/4.15
      Host: apt-get.com
      Accept: */*

      HTTP/1.0 200 (OK)
      Cache-Control: private, no-cache, must-revalidate
      Connection: Keep-Alive
      Pragma: no-cache
      Server: Oversee Turing v1.0.0
      Content-Length: 1347
      Content-Type: text/html
      Expires: Mon, 26 Jul 1997 05:00:00 GMT
      Keep-Alive: timeout=3, max=96
      P3P: policyref="http://www.dsparking.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
      Set-Cookie: parkinglot=1; domain=.apt-get.com; path=/; expires=Wed, 11-Jun-2014 14:10:37 GMT

      <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
      <!-- turing_cluster_prod -->
      <html>
      <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
        <title>apt-get.com</title>
        <meta name="keywords" content="apt-get.com" />
        <meta name="description" content="apt-get.com" />
        <meta name="robots" content="index, follow" />
        <meta name="revisit-after" content="10" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
        <script type="text/javascript"> document.cookie = "jsc=1"; </script>
      </head>
      <frameset rows="100%,*" frameborder="no" border="0" framespacing="0">
        <frame src="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A" name="apt-get.com">
      </frameset>
      <noframes>
        <body><a href="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A">Click here to go to apt-get.com</a>.</body>
      </noframes>
      </html>
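    A hedged reading of the output above: nothing mechanically prevents a zone operator from putting a CNAME at the apex. It violates RFC 1034's rule that a CNAME must not coexist with other data (and the apex always carries SOA and NS records), but many authoritative servers, and parking services in particular, serve it anyway, and most resolvers will still follow it for A lookups. The SERVFAIL on the direct NS query is a typical symptom of exactly that conflict, and the "Oversee Turing" server plus the parkinglot cookie in the HTTP response mark this as a parked domain. Querying the delegated nameservers directly shows what they hand out (standard dig usage):

      dig apt-get.com a @ns1.domainrecover.com
      dig apt-get.com soa @ns1.domainrecover.com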

    Read the article

  • Sendmail problem

    - by trobrock
    I am trying to get my server to be able to send email from PHP. Currently it is using sendmail, but whenever I try to send mail to a Gmail address I get this sort of response:

      --o54Mqd5s008981.1275691959/ServerName
      Content-Type: message/delivery-status
      Reporting-MTA: dns; ServerName
      Received-From-MTA: DNS; localhost
      Arrival-Date: Fri, 4 Jun 2010 22:52:38 GMT
      Final-Recipient: RFC822; [email protected]
      Action: failed
      Status: 5.7.1
      Remote-MTA: DNS; gmail-smtp-in.l.google.com
      Diagnostic-Code: SMTP; 550-5.7.1 [xxx.xxx.xxx.xxx] The IP you're using to send mail is not authorized
      Last-Attempt-Date: Fri, 4 Jun 2010 22:52:39 GMT

    How can I set this up to relay through a Google account that I own? Is sendmail the best thing to use, or should I switch to Postfix or something? This is on Ubuntu Server 9.10.
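    A hedged sketch of the Postfix route, since relaying through a Gmail account is more straightforward to configure there than in sendmail. These are standard Postfix parameters; the account details are placeholders:

      # /etc/postfix/main.cf
      relayhost = [smtp.gmail.com]:587
      smtp_sasl_auth_enable = yes
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
      smtp_sasl_security_options = noanonymous
      smtp_tls_security_level = encrypt

      # /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
      [smtp.gmail.com]:587 yourname@gmail.com:yourpassword

    One caveat: Gmail rewrites the envelope sender to the authenticated account, so all relayed mail will appear to come from that address.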

    Read the article

  • Oddities in how Linux extended ACLs interact with 'regular' permissions

    - by abbot
    I've got some legacy code which checks that some file is read-only and readable only by its owner, i.e. permissions set to 0400. I also need to give read-only access to this file to some other user on the system. I'm trying to set extended ACLs, but this also changes the 'regular' permission bits in a strange way:

      $ ls -l hostkey.pem
      -r-------- 1 root root 0 Jun 7 23:34 hostkey.pem
      $ setfacl -m user:apache:r hostkey.pem
      $ getfacl hostkey.pem
      # file: hostkey.pem
      # owner: root
      # group: root
      user::r--
      user:apache:r--
      group::---
      mask::r--
      other::---
      $ ls -l hostkey.pem
      -r--r-----+ 1 root root 0 Jun 7 23:34 hostkey.pem

    After this the legacy code starts complaining that the file is group-readable (while it is actually not!). Is it possible to set the extended ACLs in such a way that some other user will also have read-only access, while the file appears to have only 0400 'regular' permissions?
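    For what it's worth, the behaviour above is by design: once a file carries an extended ACL, the group triad reported by ls(1) and stat(2) holds the ACL mask rather than the owning group's entry, and the mask must include read for user:apache:r to be effective. So a mode that looks like 0400 with a working named-user entry is not achievable. A hedged demonstration of what the legacy check actually sees (stat -c is GNU coreutils):

      $ stat -c '%a' hostkey.pem
      440

    The check would have to be taught to treat the group bits as the mask whenever a '+' is present, or the second user given access another way (for example, via group membership on a dedicated group).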

    Read the article

  • mini-dinstall chmod 0600 changes file: Operation not permitted

    - by V. Reileno
    I'm getting "Operation not permitted" in mini-dinstall.log every time a new Debian package is uploaded to the custom Debian repository using dput. The deb file is installed successfully, but the changes file remains in the incoming folder. I cannot use a post-install script while the changes file cannot be processed. How can I fix this problem?

      Traceback (most recent call last):
        File "/usr/bin/mini-dinstall", line 780, in install
          retval = self._install_run_scripts(changefilename, changefile)
        File "/usr/bin/mini-dinstall", line 826, in _install_run_scripts
          do_chmod(changefilename, 0600)
        File "/usr/bin/mini-dinstall", line 193, in do_chmod
          do_and_log('Changing mode of "%s" to %o' % (name, mode), os.chmod, name, mode)
        File "/usr/bin/mini-dinstall", line 176, in do_and_log
          function(*args)
      OSError: [Errno 1] Operation not permitted: '/srv/debian-repository/mini-dinstall/incoming/debian-repository_1.3_amd64.changes'

    The mini-dinstall permissions:

      $ ls -lad incoming/
      drwxrws--- 2 mini-dinstall debian-repository-uploader 4096 Jun 6 11:45 incoming/
      $ ls -la incoming/debian-repository_1.3_amd64.changes
      -rw-rw---- 1 uploader-user debian-repository-uploader 1322 Jun 6 11:43 incoming/debian-repository_1.3_amd64.changes
      $ groups uploader-user
      uploader-user : uploader-user adm users debian-repository debian-repository-uploader puppet-client-updater
      $ groups mini-dinstall
      mini-dinstall : mini-dinstall debian-repository-uploader

    Cheers and thanks, V.
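    A hedged reading of the listings above: the .changes file is owned by uploader-user, while mini-dinstall runs as the mini-dinstall user, and chmod(2) only succeeds for the file's owner (or root); group write permission does not help. The rule is easy to reproduce in isolation:

      $ touch /tmp/f && sudo chown root /tmp/f && sudo chmod 660 /tmp/f
      $ chmod 600 /tmp/f
      chmod: changing permissions of '/tmp/f': Operation not permitted

    So any arrangement that leaves incoming files owned by the mini-dinstall user, such as uploading as that user, should clear the error.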

    Read the article

  • Linux: Can I link multiple destinations via softlinks?

    - by kds1398
    I'm attempting to end up with something similar to this:

      $ ls -l
      lrwxrwxrwx 1 user group 4 Jun 28 2010 foo -> /home/bar
      lrwxrwxrwx 1 user group 4 Jun 29 2010 foo -> /etc/bar

    The intention is to be able to move a file to foo and have it go to both destination directories for now. The goal is to eventually unlink the /home/bar link after confirming there are no issues with moving the files to /etc/bar. I am restricted in that I am unable to change or add to the process that moves the files.
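    A single symlink can only have one target, so the listing above is not achievable as shown. A hedged alternative under the stated restriction: point foo at the new primary directory and mirror arrivals into the old one until the cutover is confirmed (inotifywait is from the inotify-tools package; the loop is a sketch, not production code):

      ln -s /etc/bar foo
      inotifywait -m -e create -e moved_to --format '%f' /etc/bar |
      while read f; do cp -p "/etc/bar/$f" /home/bar/; done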

    Read the article

  • Errors in the Apache error log

    - by Ryan Murphy
    I am trying to use Zend Framework 2. I followed these instructions on CentOS 6 via SSH: http://framework.zend.com/manual/2.0/en/user-guide/skeleton-application.html When I try to start my website up, it gives an error, and in the error log I get this:

      [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] SoftException in Application.cpp:357: UID of script "/home/mydomain/public_html/public/index.php" is smaller than min_uid
      [Sun Jun 30 16:02:17 2013] [error] [client 109.217.190.75] Premature end of script headers: index.php

    What do these mean, and how do I fix them?
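    A hedged reading: "SoftException in Application.cpp ... smaller than min_uid" is suPHP's complaint that the script is owned by a UID below its configured minimum, which typically happens when files are unpacked or created as root inside a user account's home. Re-owning the tree to the account user is the usual fix (the user and group names below are placeholders for the real account):

      chown -R myuser:myuser /home/mydomain/public_html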

    Read the article

  • FTP Error 550 when trying to access a folder via symbolic link

    - by OrangeTux
    I'm configuring vsftpd on a Linux machine. At the moment local users can log in via FTP and they will see their home dir listed, with write access to it. Now I want the users to write in the /var/www dir. Therefore I created a new group, apache, added users to the group, and gave the group write access to /var/www. Via the terminal all users can write to /var/www. I created a link in the home directory to /var/www via:

      ln -s /var/www/ /home/user/www

    ls gives:

      drwxr-xr-x 2 orangetux orangetux 4096 Jun 23 15:06 ftp
      lrwxrwxrwx 1 orangetux orangetux 21 Jun 23 15:00 www -> /var/www/

    But when I use FTP I see the link but I cannot follow it: error 550, which means file not found or bad access. How can I solve this, so that the users have access to /var/www via their home dir?
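    A hedged explanation for the 550: if vsftpd is chrooting local users into their home directories (chroot_local_user=YES), a symlink pointing outside the chroot cannot be followed, because the target path does not exist inside it. A bind mount looks like an ordinary directory to the server and is the usual workaround:

      mkdir -p /home/user/www
      mount --bind /var/www /home/user/www
      # to make it permanent, an /etc/fstab line (standard bind-mount syntax):
      # /var/www  /home/user/www  none  bind  0  0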

    Read the article

  • Raid 5 with 4 disks on Debian automatically creates a spare drive

    - by Razer
    I'm trying to create a RAID 5 with 4x 2TB disks on Debian 6. I followed the instructions from: http://zackreed.me/articles/38-software-raid-5-in-debian-with-mdadm I created the RAID with the following command:

      sudo mdadm --create --verbose /dev/md0 --auto=yes --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    After creating the RAID, mdadm --detail /dev/md0 shows me:

      /dev/md0:
              Version : 1.2
        Creation Time : Mon Jun 11 18:14:26 2012
           Raid Level : raid5
           Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
        Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Mon Jun 11 18:14:26 2012
                State : clean, degraded
       Active Devices : 3
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 1
               Layout : left-symmetric
           Chunk Size : 512K
                 Name : rsserver:0 (local to host rsserver)
                 UUID : a68c3c99:1ef865e9:5a8a7bdc:64710ed8
               Events : 0

          Number   Major   Minor   RaidDevice   State
             0       8       17        0        active sync /dev/sdb1
             1       8       33        1        active sync /dev/sdc1
             2       8       49        2        active sync /dev/sdd1
             3       0        0        3        removed
             4       8       65        -        spare /dev/sde1

    Why is there a spare drive? I didn't create one, and I don't want to use a spare drive.
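    This matches mdadm's documented behaviour rather than a mistake: a new RAID-5 is deliberately assembled degraded, with the last member attached as a spare, because recovering onto that spare is faster than a full parity resync. Once the initial recovery finishes, the fourth disk should flip to active sync on its own. Progress can be watched with:

      cat /proc/mdstat
      watch -n 5 cat /proc/mdstat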

    Read the article

  • Crontab + .sh + php

    - by Kristaps Karlsons
    Hi. I'm trying to call a shell script every 5 minutes, which executes a PHP file as root.

      # crontab -l
      */5 * * * * /home/regularuser/call.sh

    Permissions:

      -rw-rw-rw- 1 root root 162 Jun 6 23:40 call.php
      -rwxr-xr-x 1 root root 66 Jun 7 01:20 call.sh

    call.sh contents:

      #!/bin/bash
      php -q /home/regularuser/call.php
      echo "request processed"

    My problem is that my PHP file doesn't get executed via crontab. However, if I run call.sh by hand, everything works perfectly. I'm new to crontab and shell scripting, so any advice/resources are welcome.
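    A hedged first suspect: cron runs jobs with a minimal environment (typically PATH=/usr/bin:/bin), so a bare "php" that resolves fine in an interactive shell may not resolve under cron. Using the absolute binary path in call.sh avoids the lookup (/usr/bin/php is an assumption; `which php` gives the real location):

      #!/bin/bash
      # Hypothetical revision of call.sh: absolute path instead of relying on PATH
      /usr/bin/php -q /home/regularuser/call.php
      echo "request processed"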

    Read the article

  • Using sudo /etc/init.d/httpd start complains about log file permissions

    - by SCO
    I created a custom log directory with the root account, and chmoded it to 777 temporarily:

      $ ls -la /var/mylogs/log/
      total 16
      drwxrwxrwx 2 root root 4096 Jun 24 06:27 .
      drwxr-xr-x 5 root root 4096 Jun 24 06:25 ..

    When I try to start the service from a user (let's say "myuser", which is in the sudoers file as myuser ALL=(ALL) ALL), it fails because of the permissions:

      $ sudo /etc/init.d/httpd start
      Starting httpd: (13)Permission denied: httpd: could not open error log file /var/mylogs/log/httpd_error.log.
      Unable to open logs

    However, the following is successful:

      $ sudo bash /etc/init.d/httpd start

    So I guess these two methods are not equivalent, although to me doing sudo was the same as logging into the root account and issuing the commands. Any clue? Thank you!
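    A hedged guess, since the directory is world-writable yet the open still fails with permission denied: on a SELinux-enforcing system, executing the init script directly transitions httpd into its confined domain, which may not write to an unlabeled custom log directory, while "sudo bash script" keeps the caller's unconfined domain and masks the problem. If that is the cause, labeling the directory for httpd logs should fix the confined case (semanage and restorecon are from the policycoreutils tools):

      semanage fcontext -a -t httpd_log_t "/var/mylogs/log(/.*)?"
      restorecon -Rv /var/mylogs/log

    Checking getenforce and /var/log/audit/audit.log first would confirm or rule this theory out.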

    Read the article

  • Stop sending packets to private IPs

    - by SlasherZ
    I have a problem: my server got locked down because it was sending packets to private IPs. My question is, what is the best solution to stop that? Here is the log I got from my hosting provider:

      [Mon Jun 2 00:04:36 2014] forward-to-private: IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1454 TOS=0x00 PREC=0x00 TTL=64 ID=58859 DF PROTO=UDP SPT=41366 DPT=41234 LEN=1434
      [Mon Jun 2 00:17:15 2014] forward-to-private: IN=br0 OUT=br0 PHYSIN=vm-44487.0 PHYSOUT=eth0 MAC=78:fe:3d:47:3d:20:00:1c:14:01:4e:cd:08:00 SRC=78.46.198.21 DST=192.168.249.128 LEN=1456 TOS=0x00 PREC=0x00 TTL=64 ID=52234 DF PROTO=UDP SPT=55430 DPT=41234 LEN=1436
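    A hedged sketch of the blunt fix: drop forwarded traffic destined for the RFC 1918 private ranges before it leaves the bridge (the interface name follows the log above; finding which guest or process emits the UDP traffic is still worth doing separately):

      iptables -A FORWARD -o br0 -d 10.0.0.0/8     -j DROP
      iptables -A FORWARD -o br0 -d 172.16.0.0/12  -j DROP
      iptables -A FORWARD -o br0 -d 192.168.0.0/16 -j DROP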

    Read the article

  • Unreadable sectors reported by smartd, is it serious?

    - by stribika
    I have a RAID 5 array of 4 disks. In the last 2 days I began to see these messages in the log:

      Jun 13 23:01:05 localhost smartd[4537]: Device: /dev/sda [SAT], 1 Currently unreadable (pending) sectors
      Jun 13 23:01:05 localhost smartd[4537]: Device: /dev/sdb [SAT], 2 Currently unreadable (pending) sectors

    If I have 2 faulty disks, then the array should not show all disks OK:

      md0 : active raid1 sdd1[3] sdb1[1] sdc1[2] sda1[0]
            64128 blocks [4/4] [UUUU]

    Strangely, there are no other problems, just the log messages. I am worried because sda is new, and I previously had problems with sdb (it completely died, but the guy who sold it to me fixed it somehow). Am I in danger of losing data? What should I do now?
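    A hedged note on why both observations can be true at once: a pending sector only surfaces when something tries to read it, and md marks a member failed only after an actual I/O error, so a couple of unreadable sectors coexist happily with [UUUU]. Checking the raw attribute and forcing the drive to read its whole surface are reasonable next steps (smartctl is from smartmontools):

      smartctl -A /dev/sda | grep -i pending
      smartctl -t long /dev/sda    # results later via: smartctl -l selftest /dev/sda

    On md arrays, a repair pass (echo repair > /sys/block/md0/md/sync_action) will also rewrite unreadable sectors from redundancy.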

    Read the article

  • Subdomain returns error when restarting Apache

    - by xXx
    I'm trying to set up a subdomain on my dedicated server. I made a new DNS rule to point my subdomain to the IP of my server. After reading this (Subdomain on apache) I tried to add new rules in Apache:

      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName tb.mysite.org
          DocumentRoot /home/mysite/wwww/tb/
          <Directory "/home/mysite/wwww/tb/">
              AllowOverride All
              Allow from all
          </Directory>
      </VirtualHost>

    Then I restart Apache, but it returns:

      $ sudo /etc/init.d/apache2 restart
      * Restarting web server apache2
      Warning: DocumentRoot [/home/mysite/wwww/tb/] does not exist
      [Wed Jun 27 10:32:58 2012] [warn] NameVirtualHost *:80 has no VirtualHosts
      ... waiting
      Warning: DocumentRoot [/home/mysite/wwww/tb/] does not exist
      [Wed Jun 27 10:32:59 2012] [warn] NameVirtualHost *:80 has no VirtualHosts

    The tb/ folder exists; I don't know why Apache can't find it. And it says that NameVirtualHost *:80 has no VirtualHosts...
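    A hedged observation from the config as pasted: the DocumentRoot spells "wwww" with four w's, which would explain "does not exist" even though tb/ really is there under a www/ directory. Comparing the two candidate paths directly would settle it:

      ls -d /home/mysite/wwww/tb/ /home/mysite/www/tb/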

    Read the article

  • How does iptables behave on a timezone change?

    - by pradipta
    I have a doubt about how iptables keeps the time info in rules when the timezone is changed. I am using iptables v1.4.8. I blocked one IP with the following details:

      # date
      Thu Jun 6 12:46:42 IST 2013
      # iptables -A INPUT -s 10.0.3.128 -m time --datestart 2013-6-6T12:0:00 --datestop 2013-6-6T13:0:00 -j DROP
      # iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source       destination
      DROP       all  --  10.0.3.128   anywhere      TIME starting from 2013-06-06 12:00:00 until date 2013-06-06 13:00:00
      Chain FORWARD (policy ACCEPT)
      target     prot opt source       destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source       destination

    But after I changed the timezone, the following happened automatically:

      # date
      Thu Jun 6 15:17:48 HKT 2013
      # iptables -L
      Chain INPUT (policy ACCEPT)
      target     prot opt source       destination
      DROP       all  --  10.0.3.128   anywhere      TIME starting from 2013-06-06 14:30:00 until date 2013-06-06 15:30:00
      Chain FORWARD (policy ACCEPT)
      target     prot opt source       destination
      Chain OUTPUT (policy ACCEPT)
      target     prot opt source       destination

    The time values in the rule changed with the timezone. How? Where does iptables keep track of the timezone? Kindly explain this to me.
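    A hedged explanation that fits the numbers shown: the xt_time match stores --datestart/--datestop internally in UTC, and iptables -L renders them back in the current local timezone, so the rule in the kernel never changed, only its display did (12:00 IST = 06:30 UTC = 14:30 HKT, exactly the shift observed). If the intent is for the window to follow the machine's own clock, the match has a flag for that (available in roughly this iptables/kernel generation, though worth verifying locally):

      iptables -A INPUT -s 10.0.3.128 -m time --kerneltz \
        --datestart 2013-06-06T12:00:00 --datestop 2013-06-06T13:00:00 -j DROP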

    Read the article

  • How to migrate/deploy a RESTful WCF web service from IIS 6.0 to IIS 7.0

    - by Chris Lee
    Hi all, I have a WCF RESTful web service. It works fine under VS and IIS 6.0. Now I want to move it to another workstation with IIS 7.0 on it. I tried to copy all the deployment files from IIS 6 to IIS 7, but the service cannot be accessed by other clients; only requests from its own machine succeed. I don't know what's wrong, and I already tried enabling anonymous access. Please help me. Thanks.
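    A hedged first check, since "works locally, fails remotely" on a freshly set-up Windows box is very often the inbound firewall rather than IIS itself (standard netsh syntax on Windows Server 2008 / Windows 7):

      netsh advfirewall firewall add rule name="HTTP inbound" dir=in action=allow protocol=TCP localport=80

    If the firewall is not the cause, the IIS 7 site bindings (hostname or IP restrictions) would be the next place to look.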

    Read the article

  • UpdatePanel, JavaScript postback and changing the querystring at the same time in a SharePoint search page

    - by Lee Dale
    Hi guys, I've been tearing my hair out with this one. Let me see if I can explain: I have a SharePoint results page on which I have a Search Results Core WebPart. I want to change a parameter in the querystring when I post back the page, so that the WebPart returns different results for each parameter; e.g. the querystring interactivemap.aspx?k=Country:Romania will filter the results for Romania. The first issue is that I want to do this with JavaScript, so I call:

      document.getElementById('aspnetForm').action = "interactivemap.aspx?k=Country:" + country;

    Nothing special here, but the reason I need to call this from JavaScript is that there is also a Flash applet on this page, from which the JavaScript calls originate. When the JavaScript calls are made, the page needs to post back but not reload the Flash applet. I turned to ASP.NET AJAX for this, so I wrapped the search results WebPart in an UpdatePanel. Now if I use a button within the UpdatePanel to post back, the UpdatePanel behaves as expected and does a partial render of the search results WebPart without reloading the Flash applet. The problem comes because I need to post back the page from JavaScript. I called __doPostBack(), as I have used this successfully in the past. It works on its own, but fails when I first call the above JavaScript before __doPostBack() (I also tried calling click() on a hidden button). I think the problem is that the ScriptManager does not allow a partial render when the form's post action has changed. My questions are:

    A) Is there some other way to change the search results WebPart parameter without using the querystring? or
    B) Is there a way around changing the querystring when doing an AJAX postback while still getting a partial render?

    The code for the page:

      <asp:Content ContentPlaceHolderID="PlaceHolderFullContent" runat="server">
      function update(country) {
          //__doPostBack('ContentUpdatePanel', '');
          //document.getElementById('aspnetForm').action = "interactivemap.aspx?k=ArticleCountry:" + country;
          document.getElementById('ctl00_PlaceHolderFullContent_UpdateButton').click();
      }
      Romania
      <div class="firstDataTitle">
        <div class="datatabletitleOuterWrapper">
          <div class="datatabletitle">
            <span>Content</span></div>
        </div>
        <div class="datatableWrapper">
          <div class="dataHolderWrapper">
            <div class="datatable">
              <div>
                <div class="searchMain">
                  <div class="searchZoneMain">
                    <asp:UpdatePanel runat="server" id="ContentUpdatePanel" UpdateMode="Conditional">
                      <ContentTemplate>
                        <WebPartPages:webpartzone runat="server" AllowPersonalization="false" title="<%$Resources:sps,LayoutPageZone_BottomZone%>" id="BottomZone" orientation="Vertical" QuickAdd-GroupNames="Search" QuickAdd-ShowListsAndLibraries="false"><ZoneTemplate></ZoneTemplate></WebPartPages:webpartzone>
                        <asp:Button id="UpdateButton" name="UpdateButton" runat="server" Text="Update"/>
                      </ContentTemplate>
                    </asp:UpdatePanel>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
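    A hedged sketch toward question (A): keep the filter out of the querystring entirely by writing it into a hidden field that posts back inside the UpdatePanel, so the form action never changes and the partial render stays valid. The control names here are assumptions, not taken from the original page:

      // JavaScript called from the Flash applet
      function update(country) {
          document.getElementById('CountryField').value = country;
          __doPostBack('UpdateButton', '');
      }

    with an <asp:HiddenField id="CountryField" runat="server" /> inside the UpdatePanel, and the server-side handler reading the field's value to set the WebPart's query (for instance, its FixedQuery) instead of relying on k= in the URL.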

    Read the article

  • Launching waiting for xdebug session 57%

    - by Lee Timms
    Problem: in short, I am getting the "Launching: waiting for XDebug session 57%" hang.

      I am running Eclipse Galileo, build id 20100218-1602, on Windows 7 Ultimate.
      I am running PHP 5.3.8.
      The Zend Extension Build and PHP Extension Build are both API220090626,TS,VC9.

    Solutions tried: I have verified that port 9000 is available for use; I have even tried other ports as a double check that ports were not the issue, setting them in both php.ini and the Eclipse debug configuration. I have set the php.ini file as follows (I have also tried numerous other configurations; none worked):

      zend_extension = "c:/wamp/bin/php/php_xdebug-2.1.3-5.3-vc9.dll"
      [xdebug]
      xdebug.remote_enable=on
      xdebug.remote_host="localhost"
      xdebug.remote_port=9000
      xdebug.remote_handler="dbgp"

    I am at a loss, can anyone help?
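    A hedged addition worth trying: without remote_autostart, Xdebug only opens a DBGp session when it sees the IDE key or the XDEBUG_SESSION cookie/parameter, and a mismatch there is a classic way to leave Eclipse parked partway through the launch. These are real Xdebug 2.1 settings; the key value should be whatever Eclipse is configured to send:

      xdebug.remote_autostart = on
      xdebug.idekey = ECLIPSE_DBGP

    A thread-safety mismatch between PHP and the DLL would be the other usual suspect, but the TS,VC9 build appears to match the "-5.3-vc9" DLL here, assuming WampServer ships the thread-safe PHP.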

    Read the article

  • Do SEO-friendly URLs really affect a page's ranking?

    - by Lee Harold
    SEO-friendly URLs are all the rage these days. But do they actually have a meaningful impact on a page's ranking in Google and other search engines? If so, why? If not, why not? (Note that I would absolutely agree that SEO-friendly URLs are nicer to use for human beings. My question is whether they actually make a difference to the ranking algorithms.)

    Update: As it turns out, the Google post that endorphine points to here has caused tremendous confusion in the SEO community. For a sampling of the discussion, see here, here, and here. Part of the problem is that the Google post is addressing the worst case, where URL rewriting is done poorly and so you'd be better off sticking with a dynamic URL rather than a mangled static "SEO-friendly" URL. There's no question dynamic URLs can be crawled by Google and can achieve high rankings.

    Maybe it would be easier to reframe the question more concretely: given 2 otherwise equivalent pages, which will rank higher for the search "do seo friendly urls really affect page ranking"?

      A) http://stackoverflow.com/questions/505793/do-seo-friendly-urls-really-affect-a-pages-ranking
      or
      B) http://stackoverflow.com?question=505793 (a fake URL for comparison only)

    Read the article
