Search Results

Search found 27819 results on 1113 pages for 'linux intel'.

Page 666/1113 | < Previous Page | 662 663 664 665 666 667 668 669 670 671 672 673  | Next Page >

  • How to activate NVIDIA cards programmatically on new MacBook Pros for CUDA programming?

    - by Carsten Kuckuk
    The new MacBook Pros come with two graphics adapters: the Intel HD Graphics and the NVIDIA GeForce GT 330M. OS X switches back and forth between them depending on the workload, detection of an external monitor, or activation of Rosetta. I want to get my feet wet with CUDA programming, and unfortunately the CUDA SDK doesn't seem to take care of this back-and-forth switching. When the Intel adapter is active, no CUDA device gets detected; when the NVIDIA card is active, it gets detected. So my current work-around is to use the little tool gfxCardStatus (http://codykrieger.com/gfxCardStatus/) to force the card on or off as I need it, but that's not satisfactory. Does anybody here know the Apple-blessed, Apple-recommended way to (1) detect the presence of a CUDA card and (2) activate this card when present?
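
    For reference, a minimal sketch of part (1) using the CUDA runtime API: it reports whether a CUDA-capable device is currently visible, but it does not switch adapters, so it only sees the NVIDIA card while gfxCardStatus (or OS X itself) has it active.

        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int count = 0;
            cudaError_t err = cudaGetDeviceCount(&count);
            if (err != cudaSuccess || count == 0) {
                std::printf("No CUDA device available (%s)\n", cudaGetErrorString(err));
                return 1;   // Intel adapter is active, or no NVIDIA GPU present
            }
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, 0);
            std::printf("%d CUDA device(s); device 0: %s\n", count, prop.name);
            return 0;
        }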

    Read the article

  • IS NULL doesn't work as expected in SQL Server 2000 with no Service Pack installed

    - by user306825
    The following batch executed on different instances of SQL Server 2000 illustrates the problem.

        select @@version
        create table a (a int)
        create table b (b int)
        insert into a(a) values (1)
        insert into a(a) values (2)
        insert into a(a) values (3)
        insert into b(b) values (1)
        insert into b(b) values (2)
        select * from a left outer join (select 1 as test, b from b) as j on j.b = a.a where j.test IS NULL
        drop table a
        drop table b

    Output 1:

        Microsoft SQL Server 2000 - 8.00.194 (Intel X86) Aug 6 2000 00:57:48
        Copyright (c) 1988-2000 Microsoft Corporation
        Developer Edition on Windows NT 6.1 (Build 7600: )

        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)

        a           test        b
        ----------- ----------- -----------

        (0 row(s) affected)

    Output 2:

        Microsoft SQL Server 2000 - 8.00.2039 (Intel X86) May 3 2005 23:18:38
        Copyright (c) 1988-2003 Microsoft Corporation
        Developer Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)
        (1 row(s) affected)

        a           test        b
        ----------- ----------- -----------
        3           NULL        NULL

        (1 row(s) affected)

    If someone encounters the same problem - make sure you have the SP installed!
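
    If patching is not immediately possible, an equivalent way to write the anti-join without filtering on a column from the outer-joined side may sidestep the bug (an untested sketch; applying the Service Pack is still the real fix):

        select a.a, null as test, null as b
        from a
        where not exists (select 1 from b where b.b = a.a)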

    Read the article

  • Disable IPv6 on a specific NIC via PowerShell using a COM object on Windows Server 2008 R2?

    - by user1256194
    I need to script some Windows Server 2008 R2 builds, preferably in PowerShell. I need to disable or uncheck IPv6 on a specific NIC (the same NIC every time); currently I have to set it manually. I do not want to disable IPv6 completely for the entire server, because other things may use it in the future. Is there an object I can reference in a PowerShell command to specify my NIC, "Intel(R) PRO/1000 MT Network Connection", and disable IPv6 on it? Unfortunately, Group Policy is not an option, says the boss. I've tried finding an appropriate WMI object via "PowerShell Scriptomatic" but failed to find any setting that differs between the enabled and disabled state on the Intel NIC. Thanks in advance.
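
    For what it's worth, a sketch: on Server 2008 R2 the per-adapter IPv6 binding can only be toggled through the INetCfg COM interfaces (or Microsoft's nvspbind utility), since the NetAdapter cmdlets shown in the comment below only arrived with Server 2012. Identifying the right adapter by name is straightforward either way:

        # Locate the specific adapter by its connection name (works on 2008 R2)
        $nic = Get-WmiObject Win32_NetworkAdapter |
               Where-Object { $_.Name -eq 'Intel(R) PRO/1000 MT Network Connection' }
        $nic.NetConnectionID   # the "Local Area Connection"-style name

        # On Windows 8 / Server 2012 and later this would be a one-liner:
        # Disable-NetAdapterBinding -Name $nic.NetConnectionID -ComponentID ms_tcpip6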

    Read the article

  • Imported VM gives "failed to open/create network" error

    - by Colleen
    Steps:

    1. Created a VM in Windows.
    2. Partitioned the drive and installed Ubuntu.
    3. Exported the VM I created.
    4. Mounted the Windows drive in Ubuntu.
    5. Imported the VM from the export, in the mounted drive.
    6. Tried to start the VM and got the following error:

        "Failed to open a session for the virtual machine XXXX. Failed to open/create the internal network 'HostInterfaceNetworking-Intel(R) 82579LM Gigabit Network Connection' (you might need to modprobe vboxnetflt to make it accessible) (VERR_INTNET_FLT_IF_NOT_FOUND). Result Code: NS_ERROR_FAILURE (0x80004005) Component: Console Interface: IConsole {1968b7d3-e3bf-4ceb-99e0-cb7c913317bb}"

    Network settings: Adapter 1: PCnet-FAST III (Bridged adapter, Intel(R) 82579LM Gigabit Network Connection)
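
    As the message suggests, the missing piece on the Ubuntu side is usually the VirtualBox bridged-networking kernel module, plus the fact that the Windows host's NIC name ("Intel(R) 82579LM Gigabit Network Connection") does not exist on Linux, so the bridged adapter has to be re-selected in the VM's network settings. A quick check, assuming the VirtualBox kernel modules are installed for the running kernel:

        sudo modprobe vboxnetflt      # load the bridged-networking filter module
        lsmod | grep vboxnet          # confirm vboxnetflt / vboxnetadp are loaded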

    Read the article

  • C++0x optimizing compiler quality

    - by aaa
    Hello. I do some heavy number crunching, and floating-point performance is very important to me. I like the performance of the Intel compiler very much and am quite content with the quality of the assembly it produces. At some point I am thinking of trying C++0x, mainly for the sugar parts, like auto, initializer lists, etc., but also lambdas; at this point I use those features in regular C++ by means of Boost. How good is the assembly that compilers generate for C++0x, specifically the Intel and GCC compilers? Do they produce SSE code? Is performance comparable to regular C++? Are there any benchmarks? My Google search did not reveal much. Thank you.
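
    For what it's worth, a C++0x lambda is just an unnamed function object, so an optimizing compiler inlines it exactly as it would a Boost or hand-written functor; the sugar itself should not cost anything. A minimal illustration of the kind of kernel in question (built with -std=c++0x; compilers typically emit essentially identical code for both versions, and whether it becomes packed SSE depends on flags such as -ffast-math rather than on the lambda):

        #include <cstddef>
        #include <numeric>
        #include <vector>

        // hand-written loop
        double sum_scaled_loop(const std::vector<double>& x, double w) {
            double acc = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i)
                acc += w * x[i];
            return acc;
        }

        // the same kernel written with a C++0x lambda
        double sum_scaled_lambda(const std::vector<double>& x, double w) {
            return std::accumulate(x.begin(), x.end(), 0.0,
                                   [w](double acc, double v) { return acc + w * v; });
        }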

    Read the article

  • yum not working on EC2 Red Hat instance: Cannot retrieve repository metadata

    - by adev3
    For some reason yum has stopped working in my Amazon EC2 instance, located in the EU West sector. There seems to be something wrong with the path of the repo metadata, is this correct? I would be very grateful for any help, as my experience in this field is somewhat limited. Thank you very much. cat /etc/redhat-release: Red Hat Enterprise Linux Server release 6.2 (Santiago) yum repolist: Loaded plugins: amazon-id, rhui-lb, security https://rhui2-cds01.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. https://rhui2-cds02.eu-west-1.aws.ce.redhat.com/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. repo id repo name status rhui-eu-west-1-client-config-server-6 Red Hat Update Infrastructure 2.0 Client Configuration Server 6 0 rhui-eu-west-1-rhel-server-releases Red Hat Enterprise Linux Server 6 (RPMs) 0 rhui-eu-west-1-rhel-server-releases-optional Red Hat Enterprise Linux Server 6 Optional (RPMs) 0 repolist: 0 yum update: (I needed to remove the base URLs below because of ServerFault's restrictions for new users) Loaded plugins: amazon-id, rhui-lb, security [same as base url 1 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. [same as base url 2 above]/pulp/repos//rhui-client-config/rhel/server/6/x86_64/os/repodata/repomd.xml: [Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 401" Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: rhui-eu-west-1-client-config-server-6. Please verify its path and try again

    Read the article

  • Nginx + Nagios: 502 Bad Gateway

    - by MrROY
    I have a fully new install nagios, but I can't access to it. Here's my Nginx config: server{ listen 80; server_name 61.148.45.10; # blahblah # Nagios Monitoring location /nagios3/ { proxy_pass http://127.0.0.1:80; } } Nagios is installed step by step(From this Linode guide): sudo apt-get install -y nagios3 Then I try to visit http://ip-address/nagios3/, but it shows 502 bad gateway. How do I deal with this ? This is my /var/log/syslog: Oct 25 14:18:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;1;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:19:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;1;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:19:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;2;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:20:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;2;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:20:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;SOFT;3;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;SOFT;3;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:21:17 my-server nagios3: SERVICE ALERT: localhost;Disk Space;WARNING;HARD;4;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server nagios3: SERVICE NOTIFICATION: root;localhost;Disk Space;WARNING;notify-service-by-email;DISK WARNING - free space: /boot 43 MB (20% inode=99%): Oct 25 14:21:17 my-server postfix/pickup[24474]: 4F89F394034C: uid=109 from=<nagios> Oct 25 14:21:17 my-server postfix/cleanup[27756]: 4F89F394034C: message-id=<20131025062117.4F89F394034C@my-server> Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: from=<nagios@[email protected]>, size=594, nrcpt=1 (queue active) Oct 25 14:21:17 my-server postfix/local[27758]: 4F89F394034C: to=<root@localhost>, relay=local, delay=0.15, delays=0.11/0/0/0.04, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:21:17 my-server postfix/qmgr[24475]: 4F89F394034C: removed Oct 25 14:22:07 my-server nagios3: SERVICE ALERT: localhost;HTTP;WARNING;HARD;4;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server nagios3: SERVICE NOTIFICATION: root;localhost;HTTP;WARNING;notify-service-by-email;HTTP WARNING: HTTP/1.1 403 Forbidden - 319 bytes in 0.000 second response time Oct 25 14:22:07 my-server postfix/pickup[24474]: 219CA3940381: uid=109 from=<nagios> Oct 25 14:22:07 my-server postfix/cleanup[27756]: 219CA3940381: message-id=<20131025062207.219CA3940381@my-server> Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: from=<nagios@[email protected]>, size=605, nrcpt=1 (queue active) Oct 25 14:22:07 my-server postfix/local[27758]: 219CA3940381: to=<root@localhost>, relay=local, delay=0.12, delays=0.07/0/0/0.05, dsn=2.0.0, status=sent (delivered to mailbox) Oct 25 14:22:07 my-server postfix/qmgr[24475]: 219CA3940381: removed Oct 25 14:39:01 my-server CRON[28242]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! 
-execdir fuser -s {} 2>/dev/null \; -delete) And there're lot of 127.0.0.1 visit in nginx log, but I actually visit from a external ip: 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36" 127.0.0.1 - - [25/Oct/2013:14:21:02 +0800] "GET /nagios3/ HTTP/1.0" 502 575 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/3 0.0.1599.69 Safari/537.36"
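
    One thing worth noting from the log: nginx itself is listening on port 80, so proxy_pass http://127.0.0.1:80 sends each request straight back into nginx, which would explain both the 502 and all the 127.0.0.1 entries in the access log. A sketch of the usual arrangement, assuming Apache (which the nagios3 package configures) is moved to another port such as 8080 in /etc/apache2/ports.conf:

        location /nagios3/ {
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }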

    Read the article

  • CentOS yum problems

    - by Malachi Soord
    I am really new to using linux and have just formatted my centos 5.2 vps and am trying to install links by using the command yum install links. But the following error gets displayed: [root@inverses ~]# yum install links Loading "fastestmirror" plugin Loading mirror speeds from cached hostfile * lxlabsupdate: download.lxlabs.com * lxlabslxupdate: download.lxlabs.com * base: ftp.nluug.nl * updates: distrib-coffee.ipsl.jussieu.fr * addons: mirror.answerstolove.com * extras: distrib-coffee.ipsl.jussieu.fr http://ftp.nluug.nl/ftp/pub/os/Linux/distr/CentOS/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://distrib-coffee.ipsl.jussieu.fr/pub/linux/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://mirror.ukhost4u.com/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centosh2.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://mirror.atrpms.net/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centosf.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centoso3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centosk.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centosv.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. http://centosk3.centos.org/centos/5.2/os/i386/repodata/repomd.xml: [Errno 14] HTTP Error 404: Not Found Trying other mirror. Error: Cannot retrieve repository metadata (repomd.xml) for repository: base. Please verify its path and try again From what I gather after checking out some of the URLs to see if they exist or not is that they require a redirect from .../5.2/.. to just ../5/ Is this a common thing to have to change? and how could I change this? Here is my CentOS-Base.repo http://pastebin.com/m67c1a022
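
    For what it's worth: 5.2 is a retired point release, so the normal mirrors no longer carry it and every repodata URL returns 404. A sketch of a [base] section pointing at the CentOS vault, assuming you really need to stay on 5.2 (the usual alternative is to let $releasever follow the current 5.x release on the regular mirrors):

        [base]
        name=CentOS-5.2 - Base
        baseurl=http://vault.centos.org/5.2/os/$basearch/
        gpgcheck=1
        gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-5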

    Read the article

  • James - mail server configuration help needed

    - by Chaitanya
    Hi, I am trying to set up the James mail server on a Linux machine. The Linux machine has a public static IP address assigned. I installed James and, in config.xml, added the servername as mydomain.com. In the DNS for mydomain.com, I have created an A record, say mx.mydomain.com, which points to the IP address of the above mail server machine. Then I added mx.mydomain.com as the MX record for mydomain.com. In James, I have created a new user, test. Then from Gmail, I sent a mail to [email protected]. The mail never arrives, and it didn't even bounce back. The Linux machine is behind a firewall with only ports 22, 80, and 8080 open for the external network. My question is: do I need to open any other ports on the firewall so that the mail I send from Gmail arrives at James? If it's not a port problem, any views on solving this issue? I don't want to send mail from this server; it's only for receiving mail.
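
    One quick check before digging into James itself: inbound SMTP uses TCP port 25, which is not in the list of open ports (22, 80, 8080), so remote servers such as Gmail's cannot deliver at all until it is opened. From a machine outside the firewall:

        telnet mx.mydomain.com 25    # no connection here means inbound port 25 is blocked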

    Read the article

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get therubyracer gem installed on ubuntu, i get the following errors, anyone can help? ERROR: Error installing therubyracer: ERROR: Failed to build gem native extension. /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb checking for main() in -lpthread... yes creating Makefile make compiling constraints.cc compiling array.cc compiling value.cc compiling invocation.cc compiling primitive.cc compiling trycatch.cc compiling context.cc compiling exception.cc compiling template.cc compiling accessor.cc compiling object.cc compiling script.cc compiling external.cc compiling stack.cc compiling gc.cc compiling backref.cc compiling heap.cc compiling v8.cc compiling constants.cc compiling date.cc compiling function.cc compiling rr.cc compiling message.cc compiling init.cc compiling string.cc compiling handles.cc compiling signature.cc compiling locker.cc linking shared-object v8/init.so g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory make: *** [init.so] Error 1 Update It seems it's trying to use a 32 bit library instead of a 64 bit library, anyone else experience issue trying to install 64 bit applications and packages on Ubuntu, and it trying to install 32 bit version instead (even though the system is a 64 Linux)? Related: https://github.com/cowboyd/therubyracer/issues/262

    Read the article

  • VMware Tools in Ubuntu guest on VMware Server 2 do not build

    - by ulf
    When trying to build the VMware tools in my Ubuntu 9.10 64 bit guest on a VMware Server 2.0.2 host with Debian 5 I'm getting strange errors like: Building the vmmemctl module. Using 2.6.x kernel build system. make: Gehe in Verzeichnis '/tmp/vmware-config8/vmmemctl-only' make -C /lib/modules/2.6.31-19-server/build/include/.. SUBDIRS=$PWD SRCROOT=$PWD/. modules make[1]: Betrete Verzeichnis '/usr/src/linux-headers-2.6.31-19-server' CC [M] /tmp/vmware-config8/vmmemctl-only/backdoorGcc64.o In file included from /tmp/vmware-config8/vmmemctl-only/backdoor.h:29, from /tmp/vmware-config8/vmmemctl-only/backdoorGcc64.c:38: /tmp/vmware-config8/vmmemctl-only/vm_basic_types.h:108:7: warning: "__FreeBSD__" is not defined CC [M] /tmp/vmware-config8/vmmemctl-only/os.o In file included from /tmp/vmware-config8/vmmemctl-only/os.c:51: /tmp/vmware-config8/vmmemctl-only/compat_wait.h:78: error: conflicting types for ‘poll_initwait’ include/linux/poll.h:70: note: previous declaration of ‘poll_initwait’ was here make[2]: *** [/tmp/vmware-config8/vmmemctl-only/os.o] Fehler 1 make[1]: *** [_module_/tmp/vmware-config8/vmmemctl-only] Fehler 2 make[1]: Verlasse Verzeichnis '/usr/src/linux-headers-2.6.31-19-server' make: *** [vmmemctl.ko] Fehler 2 make: Verlasse Verzeichnis '/tmp/vmware-config8/vmmemctl-only' Unable to build the vmmemctl module. I googled half the Internet but couldn't come to a solution. None of the kernel modules seems to build correctly. While googling I read something about a bug in this kernel tree.

    Read the article

  • Can't get PHP mail() working on Ubuntu desktop with sendmail and postfix

    - by EricP
    I'm running Ubuntu 9.10 LAMP and trying to do a simple email test with PHP and I'm not getting any emails sent. mail("[email protected]", "eric-linux test", "test") or die("can't send mail"); I get no errors from PHP when running that script. In my php.ini file is: sendmail_path = /usr/lib/sendmail -t -i $ sudo ps aux | grep sendmail eric 2486 0.0 0.4 8368 2344 pts/0 T 14:52 0:00 sendmail -s “Hello world” [email protected] eric 8747 0.0 0.3 5692 1616 pts/2 T 16:18 0:00 sendmail eric 8749 0.0 0.3 5692 1636 pts/2 T 16:18 0:00 sendmail start eric 9190 0.0 0.3 5692 1636 pts/2 T 19:12 0:00 sendmail start eric 9192 0.0 0.3 5692 1616 pts/2 T 19:12 0:00 sendmail eric 9425 0.0 0.3 5692 1620 pts/1 T 19:37 0:00 sendmail eric 9427 0.0 0.3 6584 1636 pts/1 T 19:37 0:00 sendmail restart eric 9429 0.0 0.3 5692 1636 pts/1 T 19:38 0:00 /usr/lib/sendmail restart eric 9432 0.0 0.1 3040 804 pts/1 R+ 19:38 0:00 grep --color=auto sendmail When I run $ sendmail start it just hangs there doing nothing. I installed postfix also to see if it would help, but it didn't. I tried to see port 25: eric@eric-linux:~$ telnet localhost 25 Trying ::1... Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. 220 eric-linux ESMTP Postfix (Ubuntu) thanks
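
    A side note on the ps output: those sendmail entries are in state T (stopped shell jobs), and "sendmail start" simply waits for a message on stdin rather than starting a service. A quick manual test of the exact binary PHP is configured to use (the recipient address is a placeholder):

        printf 'To: you@example.com\nSubject: test\n\ntest body\n' | /usr/lib/sendmail -t -i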

    Read the article

  • Ubuntu 12.04 crashed while trying to install Ubuntu 14.04, no longer have access

    - by FACE
    Posting from my laptop, as my Compaq desktop crashed. Not very computer savvy, and not sure what info I should provide... but here's a start: 9-year-old desktop Compaq SR 1650NX, AMD Athlon 3500+, probably 1 GB RAM (duh...). Was running 12.04 and was attempting to upgrade to 14.04; not sure if I interrupted it. Pretty much stuck on (constantly redirected to) a page which says "GNU GRUB version 1.99-2"; it offers several choices (as written): Ubuntu, with Linux 3.2.0-67-generic / Ubuntu, with Linux 3.2.0-67-generic (recovery mode) / Previous Linux versions / Memory Test (memtest86+) / Memory Test (memtest86+, serial console 115200). But none of the selections seems to get me anywhere (i.e., I click yes, but can't seem to run any commands - not that I know anything useful); I escape from those pages by repeatedly hitting Ctrl-Alt-Del. Any help would be appreciated; I will provide additional info when requested. Thanks (hopefully) in advance. (If I don't respond immediately, it just means I had to attend to other concerns... I will leave the page open and check back in as often as possible.) FACE

    Read the article

  • Error 2013: Lost connection to MySQL server during query when executing CHECK TABLE FOR UPGRADE

    - by Dean Richardson
    I just upgraded Ubuntu from 11.10 to 12.04. My rails app now returns the (passenger) error "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111) (Mysql2::Error)". I get a similar error when I try to access mysql at the command line on my Ubuntu server using mysql -u root -p. I have mysql-server 5.5 installed. I've checked and mysql is not running. When I try to restart it, it fails. Here are some key lines from the tail of /var/log/syslog after an attempted restart: dean@dgwjasonfried:/etc/mysql$ tail -f /var/log/syslog Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Looking for 'mysqlcheck' as: /usr/bin/mysqlcheck Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' '--host=localhost' '--socket=/var/run/mysqld/mysqld.sock' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: /usr/bin/mysqlcheck: Got error: 2013: Lost connection to MySQL server during query when executing 'CHECK TABLE ... FOR UPGRADE' Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: FATAL ERROR: Upgrade failed Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.assets OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5107]: molex_app_development.ecd_types OK Mar 7 08:55:27 dgwjasonfried /etc/mysql/debian-start[5124]: Checking for insecure root accounts. Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769657] init: mysql main process (5064) terminated with status 1 Mar 7 08:55:27 dgwjasonfried kernel: [ 7551.769697] init: mysql respawning too fast, stopped Here is most of /etc/mysql/my.cnf: Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock Here is entries for some specific programs The following values assume you have at least 32M ram This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] Basic Settings user = mysql pid-file = /var/run/mysqld/mysqld.pid socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp lc-messages-dir = /usr/share/mysql skip-external-locking Instead of skip-networking the default is now to listen only on localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 And here are permissions for var/run/mysqld/mysqld.sock: srwxrwxrwx 1 mysql mysql 0 Mar 7 09:18 mysqld.sock I'd be grateful for any suggestions the community might have. I reviewed the related questions here and attempted some of the fixes offered but to no avail. Thanks! Dean Richardson Update: Thanks to quanta's suggestion, I looked at the /var/log/mysql/error.log file. I found error messages relating to pointers, fatal signals, and more stuff that I really couldn't make much sense of. I also found mysql man page references, however. One suggested that I try starting mysqld with the --innodb_force_recovery=# option, then attempt to dump (or drop) the offending/corrupted database or table. 
I worked through the escalating option levels one-by-one (innodb_force_recovery=1, innodb_force_recovery=2, etc.) This allowed me to successfully run mysql -u root -p from the command line and execute several commands. I was able to run queries on my production database, but any attempt to query, dump, or even drop my development database raised an error and led to me losing the connection to mysql. So I've made progress, but until I'm somehow able to drop or repair my development db I'm still unable to get my app to load. Any further advice or suggestions? Thanks! Dean Update: Right after running sudo mysqld --innodb_force_recover=1 from the command line, the error.log contains this: Right after retrying sudo mysqld --innodb_force_recover=1, The error.log file shows this: 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) Then after mysql -u root -p and mysql> drop database molex_app_development; ERROR 2013 (HY000): Lost connection to MySQL server during query mysql> the error.log contains: dean@dgwjasonfried:/var/log/mysql$ tail -f error.log /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f6a3ff9ecbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f6a1c004bd8): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. 130308 4:55:39 [Note] Plugin 'FEDERATED' is disabled. 130308 4:55:39 InnoDB: The InnoDB memory heap is disabled 130308 4:55:39 InnoDB: Mutexes and rw_locks use GCC atomic builtins 130308 4:55:39 InnoDB: Compressed tables use zlib 1.2.3.4 130308 4:55:39 InnoDB: Initializing buffer pool, size = 128.0M 130308 4:55:39 InnoDB: Completed initialization of buffer pool 130308 4:55:39 InnoDB: highest supported file format is Barracuda. InnoDB: The log sequence number in ibdata files does not match InnoDB: the log sequence number in the ib_logfiles! 130308 4:55:39 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... 
InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... 130308 4:55:40 InnoDB: Waiting for the background threads to start 130308 4:55:41 InnoDB: 1.1.8 started; log sequence number 10259220 130308 4:55:41 InnoDB: !!! innodb_force_recovery is set to 1 !!! 130308 4:55:41 [Note] Server hostname (bind-address): '127.0.0.1'; port: 3306 130308 4:55:41 [Note] - '127.0.0.1' resolves to '127.0.0.1'; 130308 4:55:41 [Note] Server socket created on IP: '127.0.0.1'. 130308 4:55:41 [Note] Event Scheduler: Loaded 0 events 130308 4:55:41 [Note] mysqld: ready for connections. Version: '5.5.29-0ubuntu0.12.04.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu) 130308 4:58:23 [ERROR] Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64). 130308 4:58:23 InnoDB: Assertion failure in thread 140168992810752 in file fsp0fsp.c line 3639 InnoDB: We intentionally generate a memory trap. InnoDB: Submit a detailed bug report to http://bugs.mysql.com. InnoDB: If you get repeated assertion failures or crashes, even InnoDB: immediately after the mysqld startup, there may be InnoDB: corruption in the InnoDB tablespace. Please refer to InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html InnoDB: about forcing recovery. 10:58:23 UTC - mysqld got signal 6 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=16777216 read_buffer_size=131072 max_used_connections=1 max_threads=151 thread_count=1 connection_count=1 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 346681 K bytes of memory Hope that's ok; if not, decrease some variables in the equation. Thread pointer: 0x7f7ba4f6c2f0 Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong... 
stack_bottom = 7f7ba3065e60 thread_stack 0x30000 mysqld(my_print_stacktrace+0x29)[0x7f7ba3609039] mysqld(handle_fatal_signal+0x483)[0x7f7ba34cf9c3] /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0)[0x7f7ba2220cb0] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x35)[0x7f7ba188c425] /lib/x86_64-linux-gnu/libc.so.6(abort+0x17b)[0x7f7ba188fb8b] mysqld(+0x65e0fc)[0x7f7ba37160fc] mysqld(+0x602be6)[0x7f7ba36babe6] mysqld(+0x635006)[0x7f7ba36ed006] mysqld(+0x5d7072)[0x7f7ba368f072] mysqld(+0x5d7b9c)[0x7f7ba368fb9c] mysqld(+0x6a3348)[0x7f7ba375b348] mysqld(+0x6a3887)[0x7f7ba375b887] mysqld(+0x5c6a86)[0x7f7ba367ea86] mysqld(+0x5ae3a7)[0x7f7ba36663a7] mysqld(_Z15ha_delete_tableP3THDP10handlertonPKcS4_S4_b+0x16d)[0x7f7ba34d3ffd] mysqld(_Z23mysql_rm_table_no_locksP3THDP10TABLE_LISTbbbb+0x568)[0x7f7ba3417f78] mysqld(_Z11mysql_rm_dbP3THDPcbb+0x8aa)[0x7f7ba339780a] mysqld(_Z21mysql_execute_commandP3THD+0x394c)[0x7f7ba33b886c] mysqld(_Z11mysql_parseP3THDPcjP12Parser_state+0x10f)[0x7f7ba33bb28f] mysqld(_Z16dispatch_command19enum_server_commandP3THDPcj+0x1380)[0x7f7ba33bc6e0] mysqld(_Z24do_handle_one_connectionP3THD+0x1bd)[0x7f7ba346119d] mysqld(handle_one_connection+0x50)[0x7f7ba3461200] /lib/x86_64-linux-gnu/libpthread.so.0(+0x7e9a)[0x7f7ba2218e9a] /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d)[0x7f7ba1949cbd] Trying to get some variables. Some pointers may be invalid and cause the dump to abort. Query (7f7b7c004b60): is an invalid pointer Connection ID (thread ID): 1 Status: NOT_KILLED The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash. --Dean
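
    One detail in the error log stands out: "Incorrect definition of table mysql.proc: expected column 'comment' at position 15 to have type text, found type char(64)", which is what the system tables look like when mysql_upgrade was never run after the distribution upgrade. A sketch of the usual sequence, assuming the server stays up under --innodb_force_recovery=1 long enough to run it:

        mysql_upgrade -u root -p      # repairs/upgrades the mysql.* system tables
        sudo service mysql restart    # then restart without innodb_force_recovery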

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?

    Read the article

  • Kerberos & signle-sign-on for website

    - by Dylan Klomparens
    I have a website running on a Linux computer using Apache. I've employed mod_auth_kerb for single-sign-on Kerberos authentication against a Windows Active Directory server. In order for Kerberos to work correctly, I've created a service account in Active Directory called dummy. I've generated a keytab for the Linux web server using ktpass.exe on the Windows AD server using this command: ktpass /out C:\krb5.keytab /princ HTTP/[email protected] /mapuser [email protected] /crypto RC4-HMAC-NT /ptype KRB5_NT_PRINCIPAL /pass xxxxxxxxx I can successfully get a ticket from the Linux web server using this command: kinit -k -t /path/to/keytab HTTP/[email protected] ... and view the ticket with klist. I have also configured my web server with these Kerberos properties: <Directory /> AuthType Kerberos AuthName "Example.com Kerberos domain" KrbMethodK5Passwd Off KrbAuthRealms EXAMPLE.COM KrbServiceName HTTP/[email protected] Krb5KeyTab /path/to/keytab Require valid-user SSLRequireSSL <Files wsgi.py> Order deny,allow Allow from all </Files> </Directory> However, when I attempt to log in to the website (from another Desktop with username 'Jeff') my Kerberos credentials are not automatically accepted by the web server. It should grant me access immediately after that, but it does not. The only information I get from the mod_auth_kerb logs is: kerb_authenticate_user entered with user (NULL) and auth_type Kerberos However, more information is revealed when I change the mod_auth_kerb setting KrbMethodK5Passwd to On: [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1939): [client xxx.xxx.xxx.xxx] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1031): [client xxx.xxx.xxx.xxx] Using HTTP/[email protected] as server principal for password verification [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(735): [client xxx.xxx.xxx.xxx] Trying to get TGT for user [email protected] [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(645): [client xxx.xxx.xxx.xxx] Trying to verify authenticity of KDC using principal HTTP/[email protected] [Fri Oct 18 17:26:44 2013] [debug] src/mod_auth_kerb.c(1110): [client xxx.xxx.xxx.xxx] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic What am I missing? I've studied a lot of online tutorials and cannot find a reason why the Kerberos credentials are not allowing access.

    Read the article

  • Apache Won't Restart After Compiling PHP with Postgres

    - by gonzofish
    I've compiled PHP (v5.3.1) with Postgres using the following configure: ./configure \ --build=x86_64-redhat-linux-gnu \ --host=x86_64-redhat-linux-gnu \ --target=x86_64-redhat-linux-gnu \ --program-prefix= \ --prefix=/usr/ \ --exec-prefix=/usr/ \ --bindir=/usr/bin/ \ --sbindir=/usr/sbin/ \ --sysconfdir=/etc \ --datadir=/usr/share \ --includedir=/usr/include/ \ --libdir=/usr/lib64 \ --libexecdir=/usr/libexec \ --localstatedir=/var \ --sharedstatedir=/usr/com \ --mandir=/usr/share/man \ --infodir=/usr/share/info \ --cache-file=../config.cache \ --with-libdir=lib64 \ --with-config-file-path=/etc \ --with-config-file-scan-dir=/etc/php.d \ --with-pic \ --disable-rpath \ --with-pear \ --with-pic \ --with-bz2 \ --with-exec-dir=/usr/bin \ --with-freetype-dir=/usr \ --with-png-dir=/usr \ --with-xpm-dir=/usr \ --enable-gd-native-ttf \ --with-t1lib=/usr \ --without-gdbm \ --with-gettext \ --without-gmp \ --with-iconv \ --with-jpeg-dir=/usr \ --with-openssl \ --with-zlib \ --with-layout=GNU \ --enable-exif \ --enable-ftp \ --enable-magic-quotes \ --enable-sockets \ --enable-sysvsem \ --enable-sysvshm \ --enable-sysvmsg \ --with-kerberos \ --enable-ucd-snmp-hack \ --enable-shmop \ --enable-calendar \ --with-libxml-dir=/usr \ --enable-xml \ --with-system-tzdata \ --with-mime-magic=/usr/share/file/magic \ --with-apxs2=/usr/sbin/apxs \ --with-mysql=/usr/include/mysql \ --without-gd \ --with-dom=/usr/include/libxml2/libxml \ --disable-dba \ --without-unixODBC \ --disable-pdo \ --enable-xmlreader \ --enable-xmlwriter \ --without-sqlite \ --without-sqlite3 \ --disable-phar \ --enable-fileinfo \ --enable-json \ --without-pspell \ --disable-wddx \ --with-curl=/usr/include/curl \ --enable-posix \ --with-mcrypt \ --enable-mbstring \ --with-pgsql=/mnt/mv/pgsql I'm using Postgres 8.4.0 and Apache 2.2.8; I have the following line in my Apache conf file: LoadModule php5_module /usr/lib64/httpd/modules/libphp5.so And when I attempt to restart Apache, I get the following error message: Starting httpd: httpd: Syntax error on line 205 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib64/httpd/modules/libphp5.so into server: /usr/lib64/httpd/modules/libphp5.so: undefined symbol: lo_import_with_oid Now, I know that this is a problem with Postgres with PHP because lo_import_with_oid is a function in the Postgres source which allows the importing of large objects; also, if I remove the --with-pgsql option, PHP and Apache get along great. I've scoured the Internet looking for answers all day, but to no avail. Does anyone have ANY insight into what is causing my problems.
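
    lo_import_with_oid was added to libpq in PostgreSQL 8.4, so the usual cause of this symbol error is that libphp5.so was built against the 8.4 headers under /mnt/mv/pgsql but resolves an older system libpq when Apache loads it. A quick way to check (the libpq path below is a guess; use whatever ldd reports):

        ldd /usr/lib64/httpd/modules/libphp5.so | grep libpq
        nm -D /usr/lib64/libpq.so.5 | grep lo_import_with_oid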

    Read the article

  • Boot iMac into CentOS from external hard drive

    - by user1704978
    I have Centos 6.3 installed on an external Western Digital drive with Firewire and USB interfaces. I want to be able to boot an iMac (2008, 3.06GHz Core 2 Duo) from this disk. The iMac has Mac OS X 10.5.8 and also a Window XP installation. I have tried holding 'T' on bootup for target disk mode but the external disk is ignored (presumably as it's not a Mac OSX image). I created an rEFit boot DVD which when booted in CD mode (holding 'C' on startup) displays three options, Mac OS (on internal drive), Linux and Windows. Selecting the Linux option unfortunately boots the Mac into XP. Three options are only displayed when the external disk is plugged into the Firewire port. If the external disk is plugged into a USB port the Linux option is not displayed and I can only boot into Mac OS X or Windows. This external disk will happily boot a Lenovo T410 laptop into Centos. My questions are: 1) Is it actually possible to boot into Centos on an iMac with an external hard drive. If so how do I achieve this? 2) Why is rEFit apparently booting from the wrong partition?

    Read the article

  • Apache > 2.2.22 rewrite rule not working?

    - by EBAH
    since yesterday I'm trying to figure out how to fix the following: running phpipam (http://www.phpipam.net/) with WAMP (Windows environment). The problem I am facing is related with RewriteRule functionality, so forget phpipam for a moment and concentrate on few lines of code. Here is the directory structure of my test website that emulate the first steps phpipam does (you can download http://goo.gl/ksvuGc): C:\wamp\www\rewrite-tst\ C:\wamp\www\rewrite-tst\.htaccess C:\wamp\www\rewrite-tst\index.php C:\wamp\www\rewrite-tst\install C:\wamp\www\rewrite-tst\install\index.php It seems that the following rewrite rule in .htaccess doesn't work: C:\wamp\www\rewrite-tst\.htaccess # install RewriteRule ^install$ install/ [R] RewriteRule ^install/$ index.php?page=install When opening C:\wamp\www\rewrite-tst\index.php the first step check the URL for "install" argument. Since the URL is: http://localhost/rewrite-tst no arguments are supplied and the browser is redirected to: header("Location: /rewrite-tst/install/") At this point the browser opens the page: C:\wamp\www\rewrite-tst\install\index.php >> http://localhost/rewrite-tst/install Apache, thanks to C:\wamp\www\rewrite-tst.htaccess should intercept this URL and redirect to: http://localhost/rewrite-tst/index.php?page=install Here are my tries: Win Apache 2.2.22: works Win Apache 2.4.4: KO Win Apache 2.4.6: KO In the attached zip file you can also find two traces from apache RewriteLog which I can't understand very well. Why Apache 2.4 doesn't work on Windows? Is it possible that there's a bug on Windows version of Apache (2.4.4 and 2.4.6) or am I wrong someway? Thanks for your help!!! Evan -- UPDATE 12 oct 2013 Now I'm really confused! Working on Linux, Kubuntu 13.04. Linux Apache 2.2.22: works Linux Apache 2.4.6: KO I guess there's something wrong in my rules at this point, or some change happened from Apache 2.2 to 2.4 ...
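
    One 2.2-to-2.4 change that commonly produces exactly this symptom: AllowOverride now defaults to None, so the .htaccess file is silently ignored unless the directory section re-enables it (and mod_rewrite is loaded). A sketch for httpd.conf, with the WAMP document root assumed to be C:/wamp/www:

        LoadModule rewrite_module modules/mod_rewrite.so

        <Directory "C:/wamp/www">
            Options Indexes FollowSymLinks
            AllowOverride All
            Require all granted
        </Directory>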

    Read the article

  • mcelog fails to start on PUIAS 6.4 AMD hardware

    - by Predrag Punosevac
    Folks, I am a total Linux n00b. I am trying to deploy mcelog on one of my computing nodes running PUIAS 6.4 (i86_64) [root@lov3 edac]# uname -a Linux lov3.mylab.org 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 22:40:32 EDT 2013 x86_64 x86_64 x86_64 GNU/Linux a free clone of Red Hat 6.4 on AMD hardware [root@lov3 mcelog]# lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 64 On-line CPU(s) list: 0-63 Thread(s) per core: 2 Core(s) per socket: 8 Socket(s): 4 NUMA node(s): 8 Vendor ID: AuthenticAMD CPU family: 21 Model: 2 Stepping: 0 CPU MHz: 1400.000 BogoMIPS: 4999.30 Virtualization: AMD-V L1d cache: 16K L1i cache: 64K L2 cache: 2048K L3 cache: 6144K NUMA node0 CPU(s): 0-7 NUMA node1 CPU(s): 8-15 NUMA node2 CPU(s): 16-23 NUMA node3 CPU(s): 24-31 NUMA node4 CPU(s): 32-39 NUMA node5 CPU(s): 40-47 NUMA node6 CPU(s): 48-55 NUMA node7 CPU(s): 56-63 My mcelog.conf file is more or less default apart of the fact that I would like to run mcelog as a daemon and to log errors. When I start mcelog [root@lov3 mcelog]# mcelog --config-file mcelog.conf AMD Processor family 21: Please load edac_mce_amd module. However the module is present [root@lov3 mcelog]# locate edac_mce_amd.ko /lib/modules/2.6.32-358.18.1.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko /lib/modules/2.6.32-358.el6.x86_64/kernel/drivers/edac/edac_mce_amd.ko and loaded [root@lov3 edac]# lsmod | grep mce edac_mce_amd 14705 1 amd64_edac_mod Is there anything that I can do to get mcelog working? The only reference I found is this thread http://lists.centos.org/pipermail/centos/2012-November/130226.html
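
    For what it's worth, that message generally means mcelog does not do the decoding for this AMD CPU family itself and defers to the kernel's edac_mce_amd decoder, so the daemon refusing to start is expected on this hardware rather than a configuration problem. Decoded machine checks should then show up in the kernel log instead:

        dmesg | grep -iE 'mce|edac'
        grep -i edac /var/log/messages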

    Read the article

  • Xen kernel can't see 2 of 6 1TB disks, does it have a limitation?

    - by PartySoft
    Linux gentoo-xen 2.6.18-xen-r12 #3 SMP Tue Oct 5 09:28:53 PDT 2010 x86_64 Intel(R) Xeon(R) CPU E5506 @ 2.13GHz GenuineIntel GNU/Linux I have 6 disks of 1 TB and i can't see all of them only 4, can anyone give me an ideea what can i do ? Filesystem Size Used Avail Use% Mounted on rootfs 886G 4.4G 836G 1% / /dev/sda3 886G 4.4G 836G 1% / rc-svcdir 1.0M 44K 980K 5% /lib64/rc/init.d shm 7.9G 0 7.9G 0% /dev/shm /dev/sdb1 917G 200M 871G 1% /home2 /dev/sdc1 917G 200M 871G 1% /home3 /dev/sdd1 917G 200M 871G 1% /home4 The hardware is Dual xeon E5506 processors on a supermicro X8DTL mobo 4.346585] ata3.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.346588] ata3.00: ata3: dev 0 multi count 16 [ 4.352861] ata3.00: configured for UDMA/133 [ 4.352867] scsi3 : ata_piix [ 4.352875] PM: Adding info for No Bus:host3 [ 4.510584] ata4.00: ATA-8, max UDMA/133, 1953525168 sectors: LBA48 NCQ (depth 0/32) [ 4.510587] ata4.00: ata4: dev 0 multi count 16 [ 4.516848] ata4.00: configured for UDMA/133 [ 4.516861] PM: Adding info for No Bus:target2:0:0 [ 4.516905] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.516910] Type: Direct-Access ANSI SCSI revision: 05 [ 4.516920] PM: Adding info for scsi:2:0:0:0 [ 4.517452] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517460] sde: Write Protect is off [ 4.517461] sde: Mode Sense: 00 3a 00 00 [ 4.517478] SCSI device sde: drive cache: write back [ 4.517514] SCSI device sde: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.517521] sde: Write Protect is off [ 4.517522] sde: Mode Sense: 00 3a 00 00 [ 4.517532] SCSI device sde: drive cache: write back [ 4.517534] sde: sde1 [ 4.524551] sd 2:0:0:0: Attached scsi disk sde [ 4.524855] sd 2:0:0:0: Attached scsi generic sg4 type 0 [ 4.524874] PM: Adding info for No Bus:target3:0:0 [ 4.524928] Vendor: ATA Model: SAMSUNG HD103SJ Rev: 1AJ1 [ 4.524933] Type: Direct-Access ANSI SCSI revision: 05 [ 4.524946] PM: Adding info for scsi:3:0:0:0 [ 4.525216] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525227] sdf: Write Protect is off [ 4.525228] sdf: Mode Sense: 00 3a 00 00 [ 4.525242] SCSI device sdf: drive cache: write back [ 4.525280] SCSI device sdf: 1953525168 512-byte hdwr sectors (1000205 MB) [ 4.525286] sdf: Write Protect is off [ 4.525289] sdf: Mode Sense: 00 3a 00 00 [ 4.525301] SCSI device sdf: drive cache: write back [ 4.525302] sdf: sdf1 [ 4.532691] sd 3:0:0:0: Attached scsi disk sdf [ 4.533010] sd 3:0:0:0: Attached scsi generic sg5 type 0 [ 4.977669] scsi: <fdomain> Detection failed (no card) [ 5.030479] GDT-HA: Storage RAID Controller Driver. 
Version: 3.05 [ 5.030635] GDT-HA: Found 0 PCI Storage RAID Controllers [ 5.372350] Fusion MPT base driver 3.04.01 [ 5.372358] Copyright (c) 1999-2005 LSI Logic Corporation [ 5.579176] Fusion MPT SPI Host driver 3.04.01 [ 5.881777] ieee1394: Initialized config rom entry `ip1394' [ 6.166745] ieee1394: sbp2: Driver forced to serialize I/O (serialize_io=1) [ 6.166748] ieee1394: sbp2: Try serialize_io=0 for better performance [ 6.428866] md: md driver 0.90.3 MAX_MD_DEVS=256, MD_SB_DISKS=27 [ 6.428872] md: bitmap version 4.39 [ 6.431518] md: raid0 personality registered for level 0 [ 6.495979] md: raid1 personality registered for level 1 [ 6.570270] raid5: automatically using best checksumming function: generic_sse [ 6.575523] generic_sse: 6608.000 MB/sec [ 6.575526] raid5: using function: generic_sse (6608.000 MB/sec) [ 6.596226] raid6: int64x1 1835 MB/s [ 6.613231] raid6: int64x2 1773 MB/s [ 6.630256] raid6: int64x4 1675 MB/s [ 6.647296] raid6: int64x8 1027 MB/s [ 6.664267] raid6: sse2x1 3578 MB/s [ 6.681268] raid6: sse2x2 4207 MB/s [ 6.698280] raid6: sse2x4 4625 MB/s [ 6.698281] raid6: using algorithm sse2x4 (4625 MB/s) [ 6.698285] md: raid6 personality registered for level 6 [ 6.698286] md: raid5 personality registered for level 5 [ 6.698288] md: raid4 personality registered for level 4 [ 6.781090] md: raid10 personality registered for level 10 [ 7.007043] Intel(R) PRO/1000 Network Driver - version 7.1.9-k4 [ 7.007046] Copyright (c) 1999-2006 Intel Corporation. [ 9.229465] kjournald starting. Commit interval 5 seconds [ 9.229476] EXT3-fs: mounted filesystem with ordered data mode.
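
    Worth noting: the dmesg excerpt actually shows sde and sdf (two more SAMSUNG HD103SJ drives) being detected, so the kernel sees all six disks; df only lists the four that have mounted filesystems. A quick way to confirm what the kernel knows and whether the last two have partitions and filesystems yet:

        cat /proc/partitions
        fdisk -l /dev/sde /dev/sdf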

    Read the article

  • Can't access server on LAN with new router

    - by RMDan
    Earlier this week my roommates decided to change the router we are using for our home network. On the old router I had no problems accessing a laptop running Linux Mint over the network (SSH, FTP, and shared folders). However, I am now not able to connect. I have verified that the IP address of my Linux machine has not changed, and I have no problems connecting to our NAS (WDMYCLOUD). The new router is a D-Link DIR-868L. PuTTY is giving me a "Timed Out" error (it was giving me EHOSTUNREACH before). The connecting laptop is running Windows 8.1, but connecting from my phone via SSH did not work either. More information: I can ping the Linux machine from the router but not from my computer.

        Pinging 192.168.0.111 with 32 bytes of data:
        Reply from 192.168.0.102: Destination host unreachable.
        Reply from 192.168.0.102: Destination host unreachable.
        Reply from 192.168.0.102: Destination host unreachable.
        Reply from 192.168.0.102: Destination host unreachable.

        Ping statistics for 192.168.0.111:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Read the article

  • How to prevent nginx from locking files on a mounted Samba partition in CentOS 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a centos 6.3 virtualbox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is windows 7. The site files nginx is serving are on mounted samba partition, which is a folder on the host Windows system. I.e., inside linux, nginx paths are referring to /home/vhosts and this is mounted from D:\vhosts\ on windows. The samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with real ip, username, password: //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0 I.e. linux/nginx reads from the windows share, and not the opposite. in /etc/samba/smb.conf, I have tried to disable all samba locking features, but it seems to have no effect even after rebooting the virtual machine. locking=no share modes=no oplocks = no level2 oplocks = no kernel oplocks =no I'm receiving "Access is denied" errors in Windows or linux when attempting to overwrite the javascript file in windows that has been accessed at least once with nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories. However, that may be a different issue not related to the use of samba. I'm using samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what is causing the lock and deleted files. When I set this option to open_file_cache off;, My problem is resolved. I will repost as the answer when it allows me to do so.
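
    If disabling the cache outright ever becomes too costly, a short validity window also makes nginx drop the file handles quickly while keeping some caching (a sketch; tune the numbers to taste):

        open_file_cache          max=1000 inactive=5s;
        open_file_cache_valid    10s;
        open_file_cache_min_uses 1;
        open_file_cache_errors   off;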

    Read the article
