Search Results

Search found 11303 results on 453 pages for 'info'.

  • Strange permission errors with Windows Server 2008

    - by Spirit
    I just don't know a better way to describe my issue, which is driving me nuts. I am trying to establish a test domain with virtual machines on a box that has Windows 7 with VMware Workstation installed. The purpose of this domain is to let us try and test different situations before they go into the production network. I built a VM with Windows Server 2008 R2 and I am using that VM as a template, making clones of it for the other servers in the domain. I raised a DC from one clone and a member server from another, and added the server to the domain, following the standard procedure as always (it is not my first domain).

    Then I made an admin account and added it to the Domain Admins and Enterprise Admins groups. That admin has full privileges on the DC — no problem there. But on the other server it has somewhat half the privileges, and I can't log in via RDP. I tried another account; same issue. For example (with "half the privileges"): I can't open the Event Viewer via Start - Administrative Tools - Event Viewer, but I can open it via Server Manager.

    I am going crazy; I haven't experienced anything similar in my three years of experience, and I have already lost three days troubleshooting this. Could this be related to the cloning? Would there be no problems if I made fresh installs of Windows Server 2008? I have raised test domains as VMs on other occasions before without any problems, and making clones on Workstation 7 never caused trouble; this is VMware Workstation 8. Does anyone have any ideas?

    UPDATE: This is the info from the event log when I try to access via RDP:

        An account failed to log on.
        Subject:
            Security ID:    NULL SID
            Account Name:   -
            Account Domain: -
            Logon ID:       0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID:    NULL SID
            Account Name:   pat.coleman
            Account Domain: lab
        Failure Information:
            Failure Reason: Domain sid inconsistent.
            Status:         0xc000006d
            Sub Status:     0xc000019b
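
    The "Domain sid inconsistent" failure status fits the cloning theory raised above: clones taken without generalizing the template share a single machine SID. A minimal check, assuming the Sysinternals PsGetSid tool and the hypothetical host names dc01 and member01:

        psgetsid.exe \\dc01
        psgetsid.exe \\member01
        rem Identical SIDs would confirm the clones were never generalized.
        rem Running "sysprep /generalize /oobe /shutdown" on the template
        rem before cloning is the usual way to avoid this.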

  • Why MySQL sat for 2 minutes doing nothing?

    - by Alex R
    This was a one-time thing, not reproducible... but I saved the SHOW INNODB STATUS output. Can anybody tell what's going on here? The simple insert took almost 3 minutes to complete.

        =====================================
        110201 15:58:10 INNODB MONITOR OUTPUT
        =====================================
        Per second averages calculated from the last 34 seconds
        ----------
        SEMAPHORES
        ----------
        OS WAIT ARRAY INFO: reservation count 11963, signal count 11766
        --Thread 1824 has waited at .\btr\btr0cur.c line 443 for 118.00 seconds the semaphore:
        S-lock on RW-latch at 09D6453C created in file .\buf\buf0buf.c line 550
        a writer (thread id 1824) has reserved it in mode wait exclusive
        number of readers 1, waiters flag 1
        Last time read locked in file .\buf\buf0flu.c line 599
        Last time write locked in file .\btr\btr0cur.c line 443
        Mutex spin waits 0, rounds 527817, OS waits 7133
        RW-shared spins 2532, OS waits 1226; RW-excl spins 1652, OS waits 1118
        ------------
        TRANSACTIONS
        ------------
        Trx id counter 0 95830
        Purge done for trx's n:o < 0 95814 undo n:o < 0 0
        History list length 11
        LIST OF TRANSACTIONS FOR EACH SESSION:
        ---TRANSACTION 0 0, not started, OS thread id 3704
        MySQL thread id 551, query id 2702112 localhost 127.0.0.1 root
        show innodb status
        ---TRANSACTION 0 95829, not started, OS thread id 3132
        MySQL thread id 534, query id 2702020 localhost 127.0.0.1 root
        ---TRANSACTION 0 95828, not started, OS thread id 3152
        MySQL thread id 527, query id 2701973 localhost 127.0.0.1 root
        ---TRANSACTION 0 95827, ACTIVE 118 sec, OS thread id 1824 inserting, thread declared inside InnoDB 500
        mysql tables in use 1, locked 1
        1 lock struct(s), heap size 320, 0 row lock(s)
        MySQL thread id 526, query id 2701972 localhost 127.0.0.1 root update
        INSERT INTO log_searchcriteria (userid,search_criteria,date,search_type) VALUES (NAME_CONST('userid',NULL), NAME_CONST('search_criteria',_latin1' SELECT SQL_CALC_FOUND_ROWS idx_search.CTCX_LATITUDE, idx_search.CTCX_LONGITUDE, idx_search.building_id, idx_search.LN_LIST_NUMBER, idx_search.LP_LIST_PRICE, idx_search.HSN_ADRESS_HOUSE_NUMBER, idx_search.STR_ADDRESS_STREET, idx_search.CP_ADDRESS_COMPASS_POINT, idx_search.UN_UNIT, idx_search.CIT_CITY, idx_search.ZP_ZIP_CODE, idx_search.AR_AREA_NAME, idx_search.BR_BEDROOMS, idx_search.BTH_BATHS, idx_search.ST_STATUS, idx_search.CTCX_STYLE_TYPE, idx_s
        --------
        FILE I/O
        --------
        I/O thread 0 state: wait Windows aio (insert buffer thread)
        I/O thread 1 state: wait Windows aio (log thread)
        I/O thread 2 state: wait Windows aio (read thread)
        I/O thread 3 state: wait Windows aio (write thread)
        Pending normal aio reads: 0, aio writes: 1,
        ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0
        Pending flushes (fsync) log: 0; buffer pool: 0
        151006 OS file reads, 120758 OS file writes, 6844 OS fsyncs
        0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s
        -------------------------------------
        INSERT BUFFER AND ADAPTIVE HASH INDEX
        -------------------------------------
        Ibuf: size 1, free list len 5, seg size 7,
        24664 inserts, 24664 merged recs, 4612 merges
        Hash table size 553253, node heap has 629 buffer(s)
        0.00 hash searches/s, 0.00 non-hash searches/s
        ---
        LOG
        ---
        Log sequence number 5 2318193115
        Log flushed up to   5 2318193115
        Last checkpoint at  5 2318129891
        0 pending log writes, 0 pending chkp writes
        3036 log i/o's done, 0.00 log i/o's/second
        ----------------------
        BUFFER POOL AND MEMORY
        ----------------------
        Total memory allocated 213459462; in additional pool allocated 1720192
        Dictionary memory allocated 240416
        Buffer pool size   8192
        Free buffers       0
        Database pages     7563
        Modified db pages  18
        Pending reads 0
        Pending writes: LRU 0, flush list 18, single page 0
        Pages read 150973, created 28788, written 115137
        0.00 reads/s, 0.00 creates/s, 0.00 writes/s
        No buffer pool page gets since the last printout
        --------------
        ROW OPERATIONS
        --------------
        1 queries inside InnoDB, 0 queries in queue
        1 read views open inside InnoDB
        Main thread id 2992, state: flushing buffer pool pages
        Number of rows inserted 794294, updated 89203, deleted 13698, read 1453084305
        0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s
        ----------------------------
        END OF INNODB MONITOR OUTPUT
        ============================

    Thanks
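
    Reading the status above: the insert sat for 118 seconds on a buffer-pool RW-latch, Free buffers is 0, and the main thread is "flushing buffer pool pages" — all symptoms of buffer-pool pressure. A sketch of a first check (assuming the default 16 KB page size, 8192 pages is only 128 MB of pool):

        mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
        # raising it in my.cnf is the usual first step, e.g.
        #   [mysqld]
        #   innodb_buffer_pool_size = 1G
        # (assumption: the server has RAM to spare for this)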

  • SMTP for multiple domains on virtual interfaces

    - by Pawel Goscicki
    The setup is like this (Ubuntu 9.10):

        eth0    1.1.1.1  name.isp.com
        eth0:0  2.2.2.2  example2.com
        eth0:1  3.3.3.3  example3.com

    example2.com and example3.com are web apps which need to send emails to their users. 2.2.2.2 points to example2.com and vice versa (A/PTR); MX is Google, which handles all incoming mail. The same goes for 3.3.3.3 and example3.com.

    Requirements: local delivery must be disabled (mail must be delivered to the server specified by MX), so that the following works even though there is no local user bob on the machine, only an existing bob email user:

        echo "Test" | mail -s "Test 6" [email protected]

    I also need to be able to specify which IP/domain name the email is delivered from when sending. I fought with sendmail, with not much luck. Here's some debug info:

        sendmail -d0.12 -bt < /dev/null
        Canonical name: name.isp.com
        UUCP nodename: host
        a.k.a.: example2.com
        a.k.a.: example3.com
        ...

    Sendmail always uses the canonical name (taken from eth0); I've found no way for it to select one of the other names. It uses the canonical name when sending email:

        echo -e "To: [email protected]\nSubject: Test\nTest\n" | sendmail -bm -t -v
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 name.isp.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9ubuntu1; Wed, 31 Mar 2010 16:33:55 +0200; (No UCE/UBE) logging access from: localhost(OK)-localhost [127.0.0.1]
        >>> EHLO name.isp.com

    I'm OK with other SMTP solutions. I've looked briefly at nbsmtp, msmtp and nullmailer, but I'm not sure they can deal with disabling local delivery and selecting different domains when sending emails. I also know about spoofing the sender field with mail -a "From: <[email protected]>", but that seems to be a half-solution: mails are still sent from the isp.com domain instead of the proper example2.com, so the PTR records go unused and there's more risk of being flagged as spam/spammer.
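
    For the record, one common way to meet both requirements is Postfix with sender-dependent transports, each bound to one of the aliased IPs — a sketch, not a tested config (the transport names and map path are assumptions, and sender_dependent_default_transport_maps needs a reasonably recent Postfix):

        # main.cf
        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

        # /etc/postfix/sender_transport  (run postmap on it afterwards)
        @example2.com  out2:
        @example3.com  out3:

        # master.cf
        out2 unix - - n - - smtp -o smtp_bind_address=2.2.2.2 -o smtp_helo_name=example2.com
        out3 unix - - n - - smtp -o smtp_bind_address=3.3.3.3 -o smtp_helo_name=example3.com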

  • Increase Timeout for remote sessions in Debian 5 Lenny

    - by Ash
    I always get a remote connection timeout when using PuTTY, and also when I send emails with attachments from a mail server installed on Debian. I'm not sure if this is the firewall or the fresh Debian 5 installation I made. Are there any settings I need to fix after a fresh install? Any input is highly appreciated — this error is pulling my brains out. Thanks.

    Error:

        2011-01-10 15:21:13,454 INFO [btpool0-23://69.19.19.89/service/upload?fmt=extended]
        [[email protected];mid=72;ip=10.10.01.78;ua=Mozilla/5.0 (Windows;; U;; Windows NT 5.2;; en-US;; rv:1.9.2.13) Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729);]
        FileUploadServlet - File upload failed
        org.apache.commons.fileupload.FileUploadBase$IOFileUploadException: Processing of multipart/form-data request failed. timeout
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:367)
            at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at com.zimbra.cs.service.FileUploadServlet.handleMultipartUpload(FileUploadServlet.java:430)
            at com.zimbra.cs.service.FileUploadServlet.doPost(FileUploadServlet.java:412)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
            at com.zimbra.cs.servlet.ZimbraServlet.service(ZimbraServlet.java:181)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
            at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
            at com.zimbra.cs.servlet.SetHeaderFilter.doFilter(SetHeaderFilter.java:79)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.servlet.UserAgentFilter.doFilter(UserAgentFilter.java:81)
            at org.mortbay.servlet.GzipFilter.doFilter(GzipFilter.java:132)
            at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
            at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:388)
            at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:218)
            at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
            at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
            at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:418)
            at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
            at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.rewrite.RewriteHandler.handle(RewriteHandler.java:230)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.handler.DebugHandler.handle(DebugHandler.java:77)
            at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
            at org.mortbay.jetty.Server.handle(Server.java:326)
            at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:543)
            at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:939)
            at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:755)
            at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
            at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:405)
            at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:413)
            at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:451)
        Caused by: org.mortbay.jetty.EofException: timeout
            at org.mortbay.jetty.HttpParser$Input.blockForContent(HttpParser.java:1172)
            at org.mortbay.jetty.HttpParser$Input.read(HttpParser.java:1122)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:977)
            at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:887)
            at java.io.InputStream.read(InputStream.java:85)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:94)
            at org.apache.commons.fileupload.util.Streams.copy(Streams.java:64)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:362)
            ... 33 more
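
    The stack trace is Jetty (Zimbra) timing out while reading a slow multipart upload, and the PuTTY drops look like the same underlying thing: long-lived or slow connections being cut somewhere on the path. One low-risk server-side check is SSH keepalives — a sketch for /etc/ssh/sshd_config (the values are arbitrary):

        ClientAliveInterval 60
        ClientAliveCountMax 3
        # apply with: /etc/init.d/ssh restart

    PuTTY has a matching client-side setting ("Seconds between keepalives", under Connection). If keepalives cure the drops, the culprit is most likely a NAT/firewall idle timeout rather than Debian itself.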

  • How to make local drive available in apache localhost

    - by Ronald Allan
    How can I make my "Drive D:" "Drive E" available in localhost. I'm running apache on my backtrack machine. My default is /var/www/. Every directory I created inside the /var/www/ is available and all working fine. Let's say I created /var/www/PENTEST/ the contents of that PENTEST directory can be accessed through: localhost/PENTEST/ How can I make this work: localhost/media/DATA/ The /media/DATA/ is my DRIVE D: I edited this: ServerAdmin webmaster@localhost DocumentRoot /media/DATA/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /media/DATA/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log # Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> Still not working. I'm getting 404. # # I figured it out. Thank for the post of "RiggsFolly" which can be found here: http://forum.wampserver.com/read.php?2,89163. I just have to change this: ServerAdmin webmaster@localhost DocumentRoot /media/DATA/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /media/DATA/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> Into this: ServerAdmin webmaster@localhost DocumentRoot D:/media/DATA/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory D:/media/DATA/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory>

  • Connecting multiple ColdFusion 10 instances to a single Apache 2.2 server

    - by Adam Cameron
    This is on Windows 7 Home Premium edition. I have two ColdFusion 10 (updater 2) instances: "cfusion" (the default one) and "scratch". I have a single instance of Apache 2.2 running, and within Apache I have set up two virtual hosts, each of which needs to be served by a different ColdFusion instance. Each CF instance serves files fine via Tomcat's internal web server, and Apache serves vanilla HTML files fine too — so both CF instances and both virtual hosts separately work OK.

    I can get wsconfig.exe to connect either one of the CF instances to the Apache server and serve CF files via Apache and that instance. However, I cannot find a way of connecting the second CF instance to Apache as well, so that both instances are connected, each serving one of the virtual hosts. WSConfig doesn't seem to understand the notion of multiple CF instances, and the changes it makes to httpd.conf (via mod_jk.conf) do not seem to be implemented in a way that accommodates multiple CF instances talking to a single Apache instance, or multiple virtual hosts. I freely admit to not being confident enough with how mod_jk (or even really httpd.conf) works to guess whether I can change things to make it work.

    If I try to add the second CF instance using WSConfig, I just get the message "the web server is already configured for ColdFusion". Be that as it may... not for the instance of ColdFusion I want to connect it to! If I remove the existing connector to whichever instance is already connected, I can then connect the other one, no problem. Not that this helps, but it demonstrates that either CF instance can connect to Apache. This all used to be fairly straightforward under older versions of CF and JRun :-(

    The only docs I have found are on the "Connect multiple Apache virtual hosts on a web server to a single ColdFusion server" page, but that specifically deals with a single CF instance; there is no equivalent page for multiple CF instances. I'm hoping I can move some of the mod_jk config into my virtual host entries in httpd-vhosts.conf (this is how it used to work for JRun), but I've no idea what to put where. I think I've covered all the necessary info here; if not, sing out and I'll add more. Thanks.

    PS: I tried to tag this "coldfusion-10", as the answer will differ from previous CF versions, but my rep on this site is too low (odd that it doesn't consider my rep from other Stack Exchange sites...). If someone with sufficient rep can add the tag, that'd be cool; it's probably a valid tag to have. Ta.
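
    For what it's worth, plain mod_jk can be pointed at two Tomcat-based CF instances by hand, bypassing wsconfig — a sketch, with the AJP ports being assumptions (check each instance's runtime/conf/server.xml for the real connector port):

        # workers.properties
        worker.list=cfusion,scratch
        worker.cfusion.type=ajp13
        worker.cfusion.host=localhost
        worker.cfusion.port=8012
        worker.scratch.type=ajp13
        worker.scratch.host=localhost
        worker.scratch.port=8013

    Then, instead of a global JkMount, each <VirtualHost> in httpd-vhosts.conf gets its own mapping — e.g. JkMount /*.cfm cfusion in one and JkMount /*.cfm scratch in the other.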

  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on an x86_64 Ubuntu 12.04 host. I've set up a bridge using the instructions here. However, a ping from the container only reaches the host, not the other machines on the local network. Similarly, only the host, and not the other machines, can see the container OS.

    The host's /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0
            bridge_fd 0
            bridge_maxwait 0

    The container's /etc/network/interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

    And here's the relevant part of the container's config:

        lxc.network.type=veth
        lxc.network.link=br0
        lxc.network.flags=up

    Any ideas what I'm doing wrong?

    Additional info — the output of iptables-save on the host:

        $ sudo iptables-save
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *filter
        :INPUT ACCEPT [6854:721708]
        :FORWARD ACCEPT [4067:538895]
        :OUTPUT ACCEPT [4967:522405]
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *nat
        :PREROUTING ACCEPT [82235:21547307]
        :INPUT ACCEPT [16:1070]
        :OUTPUT ACCEPT [9386:583359]
        :POSTROUTING ACCEPT [14693:1291952]
        -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013

    The output of brctl show on the host:

        $ brctl show
        bridge name  bridge id          STP enabled  interfaces
        br0          8000.080027409684  no           eth0
                                                     vethBkwWyV

    The output of ifconfig br0 on the host:

        br0   Link encap:Ethernet  HWaddr 08:00:27:40:96:84
              inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:232863 errors:0 dropped:0 overruns:0 frame:0
              TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:34437354 (34.4 MB)  TX bytes:198492871 (198.4 MB)

    The output of ifconfig eth0 on the host:

        eth0  Link encap:Ethernet  HWaddr 08:00:27:40:96:84
              inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:299419 errors:0 dropped:0 overruns:0 frame:0
              TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:59077446 (59.0 MB)  TX bytes:372056540 (372.0 MB)

    The output of ifconfig eth0 in the container:

        eth0  Link encap:Ethernet  HWaddr 00:16:3e:74:08:2b
              inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:81 errors:0 dropped:0 overruns:0 frame:0
              TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:8506 (8.5 KB)  TX bytes:9021 (9.0 KB)
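
    One hedged observation on the outputs above: the host's MAC prefix 08:00:27 is the VirtualBox OUI, which suggests this "host" is itself a VirtualBox guest. Bridged VirtualBox NICs drop frames from additional source MACs (such as the container's veth) unless the VM's adapter allows promiscuous mode — which would produce exactly this host-only reachability. A quick way to see where ARP dies, run on the host:

        tcpdump -n -i br0 arp     # does the container's ARP request reach the bridge?
        tcpdump -n -i eth0 arp    # does it make it out of the physical NIC?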

  • RSH between servers not working

    - by churnd
    I have two servers: one CentOS 5.8 and one Solaris 10, both joined to my workplace AD domain via PBIS Open. A user logs into the Linux server and runs an application which issues commands over RSH to the Solaris server (some commands also run on the Linux server, so both are needed). Because of the application these servers are used for (proprietary GE software), the software on the Linux server needs to issue rsh commands to the Solaris server on behalf of the user — the user just runs a script and the rest is automatic.

    However, rsh is not working for domain users. It does work for a local user, so I believe I have the necessary trust settings between the two servers correct. Strangely, I can rlogin as a domain user from the Linux server to the Solaris server, and SSH works too (how I wish I could use it). Some relevant info.

    Via rlogin:

        [user@linux ~]$ rlogin solaris
        connect to address 192.168.1.2 port 543: Connection refused
        Trying krb4 rlogin...
        connect to address 192.168.1.2 port 543: Connection refused
        trying normal rlogin (/usr/bin/rlogin)
        Sun Microsystems Inc.   SunOS 5.10   Generic January 2005
        solaris%

    Via rsh:

        [user@linux ~]$ rsh solaris ls
        connect to address 192.168.1.2 port 544: Connection refused
        Trying krb4 rsh...
        connect to address 192.168.1.2 port 544: Connection refused
        trying normal rsh (/usr/bin/rsh)
        permission denied.
        [user@linux ~]$

    Relevant snippet from /etc/pam.conf on Solaris:

        # rlogin service (explicit because of pam_rhost_auth)
        rlogin  auth sufficient  pam_rhosts_auth.so.1
        rlogin  auth requisite   pam_lsass.so set_default_repository
        rlogin  auth requisite   pam_lsass.so smartcard_prompt try_first_pass
        rlogin  auth requisite   pam_authtok_get.so.1 try_first_pass
        rlogin  auth sufficient  pam_lsass.so try_first_pass
        rlogin  auth required    pam_dhkeys.so.1
        rlogin  auth required    pam_unix_cred.so.1
        rlogin  auth required    pam_unix_auth.so.1

        # Kerberized rlogin service
        krlogin auth required    pam_unix_cred.so.1
        krlogin auth required    pam_krb5.so.1

        # rsh service (explicit because of pam_rhost_auth,
        # and pam_unix_auth for meaningful pam_setcred)
        rsh     auth sufficient  pam_rhosts_auth.so.1
        rsh     auth required    pam_unix_cred.so.1

        # Kerberized rsh service
        krsh    auth required    pam_unix_cred.so.1
        krsh    auth required    pam_krb5.so.1

    I have not really seen anything useful in either system log that seems directly related to the failed login attempt. I tail -f'd /var/adm/messages on Solaris and /var/log/messages on Linux during the failed attempts, and nothing shows up. Maybe I need to be doing something else?
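
    One thing stands out in the pam.conf above: the rlogin stanza pulls in pam_lsass (the PBIS module), but the rsh stanza has only pam_rhosts_auth and pam_unix_cred, so there is no path for a domain account to authenticate once rhosts auth fails. A sketch of an rsh stanza mirroring the working rlogin block — an untested assumption, offered for comparison only:

        rsh  auth sufficient  pam_rhosts_auth.so.1
        rsh  auth requisite   pam_lsass.so set_default_repository
        rsh  auth sufficient  pam_lsass.so try_first_pass
        rsh  auth required    pam_unix_cred.so.1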

  • Compiling PHP with GD crashes with EXC_BREAKPOINT (SIGTRAP) on PPC Mac

    - by Ömer
    First of all, I should say that I have searched the whole Internet for this problem but couldn't find any solution. I have a Mac mini PowerPC (PPC) on which I run the Apache web server (httpd-2.2.22) with PHP (5.4.0), and I do all the configure and compile jobs myself. If I configure with:

        './configure' '--prefix=/usr/local/php5' '--mandir=/usr/share/man' '--infodir=/usr/share/info'
        '--sysconfdir=/etc' '--with-config-file-path=/etc' '--with-zlib' '--with-zlib-dir=/usr'
        '--with-openssl=/usr' '--without-iconv' '--enable-exif' '--enable-ftp' '--enable-mbstring'
        '--enable-mbregex' '--enable-sockets' '--with-mysql=/usr/local/mysql'
        '--with-pdo-mysql=/usr/local/mysql' '--with-mysqli=/usr/local/mysql/bin/mysql_config'
        '--with-apxs2=/usr/local/apache2/bin/apxs' '--with-mcrypt'

    then PHP works flawlessly. But if I add the GD module by appending these options:

        '--with-gd' '--with-jpeg-dir=/usr/local/lib' '--with-freetype-dir=/usr/X11R6'
        '--with-png-dir=/usr/X11R6' '--with-xpm-dir=/usr/X11R6'

    PHP configures and compiles without any errors, but it causes EXC_BREAKPOINT (SIGTRAP) (see the Crash Reporter log below) when I request a page that calls the PHP module. Obviously something related to the GD module is the cause — probably FreeType, since it is present in the log, though of course that is not definite. When PHP (or more accurately, httpd) crashes, the CPU goes to 100% for 10 to 15 seconds until it recovers. I need to use the GD module and keep the Mac mini PowerPC. So, what should I do to solve this problem?

        Process:         httpd [79852]
        Path:            /usr/local/apache2/bin/httpd
        Identifier:      httpd
        Version:         ??? (???)
        Code Type:       PPC (Native)
        Parent Process:  httpd [79846]

        Date/Time:       2013-11-04 15:44:28.444 +0200
        OS Version:      Mac OS X 10.5.8 (9L31a)
        Report Version:  6
        Anonymous UUID:  0178B7F8-2241-43F7-A651-9E7234D41A37

        Exception Type:  EXC_BREAKPOINT (SIGTRAP)
        Exception Codes: 0x0000000000000001, 0x0000000093c11e0c
        Crashed Thread:  0

        Application Specific Information:
        *** single-threaded process forked ***

        Thread 0 Crashed:
        0   com.apple.CoreFoundation        0x93c11e0c __CFRunLoopFindMode + 328
        1   com.apple.CoreFoundation        0x93c13d88 CFRunLoopAddSource + 276
        2   com.apple.DiskArbitration       0x901a6e8c DAApprovalSessionScheduleWithRunLoop + 52
        3   ...ple.CoreServices.CarbonCore  0x9512e67c _FSGetDiskArbSession(__DASession**, __DAApprovalSession**) + 540
        4   ...ple.CoreServices.CarbonCore  0x9512e420 CreateDiskArbDiskForMountPath(char const*) + 84
        5   ...ple.CoreServices.CarbonCore  0x9512d2c8 FSCacheableClient_GetVolumeCachedInfo(char const*, statfs const*, CachedVolumeInfo*, __DADisk*, __DADisk**) + 280
        6   ...ple.CoreServices.CarbonCore  0x9512cca4 MountVolume(char const*, statfs*, unsigned char, unsigned char, __DADisk*, short*) + 352
        7   ...ple.CoreServices.CarbonCore  0x9512ca48 MountInitialVolumes() + 172
        8   ...ple.CoreServices.CarbonCore  0x9512c4d4 INIT_FileManager() + 164
        9   ...ple.CoreServices.CarbonCore  0x9512c390 GetRetainedVolFSVCBByVolumeID(unsigned long) + 48
        10  ...ple.CoreServices.CarbonCore  0x9512adf4 PathGetObjectInfo(char const*, unsigned long, unsigned long, VolumeInfo**, unsigned long*, unsigned long*, char*, unsigned long*, unsigned char*) + 184
        11  ...ple.CoreServices.CarbonCore  0x9512acc4 FSPathMakeRefInternal(unsigned char const*, unsigned long, unsigned long, FSRef*, unsigned char*) + 64
        12  libfreetype.6.dylib             0x0070a0fc FT_New_Face_From_Resource + 56
        13  libfreetype.6.dylib             0x0070a3b0 FT_New_Face + 48
        14  libphp5.so                      0x0118d1a8 fontFetch + 824
        15  libphp5.so                      0x0118edac php_gd_gdCacheGet + 220
        16  libphp5.so                      0x0118d6d8 php_gd_gdImageStringFTEx + 360
        17  libphp5.so                      0x011763c0 php_imagettftext_common + 1504
        18  libphp5.so                      0x01176494 zif_imagefttext + 20
        19  libphp5.so                      0x014b9c68 zend_do_fcall_common_helper_SPEC + 1048
        20  libphp5.so                      0x01452898 _ZEND_DO_FCALL_SPEC_CONST_HANDLER + 440
        21  libphp5.so                      0x014ba878 execute + 776
        22  libphp5.so                      0x013f190c zend_execute_scripts + 316
        23  libphp5.so                      0x013779f4 php_execute_script + 596
        24  libphp5.so                      0x014bbe64 php_handler + 1972
        25  httpd                           0x000020c0 ap_run_handler + 96
        26  httpd                           0x00006ae0 ap_invoke_handler + 224
        27  httpd                           0x000305c4 ap_process_request + 116
        28  httpd                           0x0002c768 ap_process_http_connection + 104
        29  httpd                           0x00012d30 ap_run_process_connection + 96
        30  httpd                           0x00012ecc ap_process_connection + 92
        31  httpd                           0x000373e4 child_main + 1220
        32  httpd                           0x000376a8 make_child + 296
        33  httpd                           0x000377e4 startup_children + 100
        34  httpd                           0x000387d4 ap_mpm_run + 3988
        35  httpd                           0x0000a320 main + 3280
        36  httpd                           0x000019c0 start + 64
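
    Two hedged observations on the crash log: the "Application Specific Information: *** single-threaded process forked ***" line, together with the CoreFoundation frames, suggests the crash fires when FreeType's FT_New_Face wanders into Carbon/CoreFoundation disk-arbitration code inside a forked Apache child — something Mac OS X does not allow. A quick way to test whether the same call survives outside Apache's forked children (the font path is a placeholder):

        php -r '$im = imagecreatetruecolor(100, 40);
                $c  = imagecolorallocate($im, 255, 255, 255);
                imagettftext($im, 12, 0, 5, 25, $c, "/Library/Fonts/Arial.ttf", "test");
                echo "ok\n";'

    If the CLI run is clean, rebuilding against a FreeType compiled without the Carbon/Mac resource-fork font support (so FT_New_Face never touches FSPathMakeRef) is the direction frames 3-13 point at — an assumption, not a verified fix.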

  • Hyper-V causes boot loop/failure on a non-Gigabyte Win8 Pro system

    - by Nick
    Hardware: Intel i7 2600K (not overclocked, SLAT-compatible, virtualization features enabled in the BIOS), Asus Maximus IV Extreme-Z (Z68), 16 GB RAM, 256 GB SSD, other non-trivial working parts.

    Adding Hyper-V is causing a boot loop, ending in an attempt at automatic repair by Windows 8 after the second or third loop. I'm trying to get the Windows Phone 8 SDK installed, and I've narrowed my troubles down to the Hyper-V feature in Windows 8: it is required by the WP8 emulator, and there are no install options to omit it. My first attempt completely borked the OS, as I did not have a recent restore point or system image, so I did a completely clean install and made plenty of backups and restore points. This time I skipped the SDK install and went straight for the Windows feature add-on for Hyper-V; the same behaviour resulted, which confirms that Hyper-V is the issue. I cannot find any hint in the event logs, and cancelling automatic recovery causes the same behaviour to repeat. I don't have any other VM products installed. My only recourse is to use a restore point, try something else, install it again, and see what happens. No luck so far — I'm on my 10th attempt here. Any help would be much appreciated.

    EDIT: I found a collection of tips here: http://social.msdn.microsoft.com/Forums/en-US/wptools/thread/b06cc9f2-aa5e-4cb3-9df1-0c273e1dfd68. I've been attempting various BIOS settings to resolve this, with no luck. Setting "CPUID Limit" to disabled seems to work partly: Windows 8 boots, but no USB devices work at all. Since the MSDN topic lists an issue with USB controllers on Gigabyte boards, I also tried disabling the USB 3.0 controller; that doesn't work either — the USB devices light up, but no input is received by the OS. All of my other BIOS CPU settings are in line with the info in the post. I'm totally stumped.

    BIOS screenshots:
    http://i.imgur.com/yKN5u.png
    http://i.imgur.com/Y9wI4.png
    http://i.imgur.com/F6EuO.png
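
    For anyone else stuck in the same loop: if Windows 8 can still reach the recovery environment's command prompt, the hypervisor launch can be switched off without removing the feature — a sketch, and only a way to regain a bootable system, not a fix for the underlying problem:

        bcdedit /set hypervisorlaunchtype off
        rem re-enable later with: bcdedit /set hypervisorlaunchtype auto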

  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking of EXT4 performance on Compact Flash media. I created an ext4 filesystem with a block size of 65536, but I cannot mount it on ubuntu-10.10-netbook-i386 (which mounts ext4 filesystems with 4096-byte block sizes just fine). According to my reading on ext4, it should allow such large block sizes. I'd like to hear your comments.

        root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3
        Warning: blocksize 65536 not usable on most systems.
        mke2fs 1.41.12 (17-May-2010)
        mkfs.ext4: 65536-byte blocks too big for system (max 4096)
        Proceed anyway? (y,n) y
        Warning: 65536-byte blocks too big for system (max 4096), forced to continue
        Filesystem label=
        OS type: Linux
        Block size=65536 (log=6)
        Fragment size=65536 (log=6)
        Stride=0 blocks, Stripe width=0 blocks
        19968 inodes, 19830 blocks
        991 blocks (5.00%) reserved for the super user
        First data block=0
        1 block group
        65528 blocks per group, 65528 fragments per group
        19968 inodes per group
        Writing inode tables: done
        Creating journal (1024 blocks): done
        Writing superblocks and filesystem accounting information: done

        This filesystem will be automatically checked every 37 mounts or 180 days,
        whichever comes first. Use tune2fs -c or -i to override.

        root@ubuntu:~# tune2fs -l /dev/sda3
        tune2fs 1.41.12 (17-May-2010)
        Filesystem volume name:   <none>
        Last mounted on:          <not available>
        Filesystem UUID:          4cf3f507-e7b4-463c-be11-5b408097099b
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              19968
        Block count:              19830
        Reserved block count:     991
        Free blocks:              18720
        Free inodes:              19957
        First block:              0
        Block size:               65536
        Fragment size:            65536
        Blocks per group:         65528
        Fragments per group:      65528
        Inodes per group:         19968
        Inode blocks per group:   78
        Flex block group size:    16
        Filesystem created:       Sat Feb  5 14:39:55 2011
        Last mount time:          n/a
        Last write time:          Sat Feb  5 14:40:02 2011
        Mount count:              0
        Maximum mount count:      37
        Last checked:             Sat Feb  5 14:39:55 2011
        Check interval:           15552000 (6 months)
        Next check after:         Thu Aug  4 14:39:55 2011
        Lifetime writes:          70 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      afb5b570-9d47-4786-bad2-4aacb3b73516
        Journal backup:           inode blocks

        root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/
        mount: wrong fs type, bad option, bad superblock on /dev/sda3,
               missing codepage or helper program, or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so
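
    For what it's worth, the mount failure is expected: on Linux a filesystem's block size cannot exceed the CPU page size, which is exactly what the mkfs warning ("max 4096") refers to. A quick check on the benchmarking box:

        getconf PAGE_SIZE    # 4096 on i386/x86_64 — the ceiling for a mountable block size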

  • Is Gmail Being Blocked by my ISP?

    - by james
    EDIT: I thought I had pinpointed the problem. The Firefox add-ons page uses https, and Gmail also uses https, so I suspected I was unable to load https pages on this computer. But a bank site that uses https loads just fine. Sigh...

    I asked this over at Super User but they weren't able to help, so I was hoping the sysadmins here would be able to advise as to what's wrong. Although the issue here is with a PC and not a server, it still deals with networking, so I hope it's not too irrelevant.

    The issue: I have a desktop on which I cannot access Gmail, or the YouTube sign-in (I believe that since YouTube is owned by Google, they both use the same sign-in system). On other computers that use the same connection via a wireless router, I can access both Gmail and the YouTube sign-in just fine. On this computer — which doesn't have a wireless card, so I have to connect via an Ethernet cable (through a USB converter, since the Ethernet port doesn't work anymore) — I can access all sites and services, including things like AOL and Hotmail. But when it comes to Gmail, I get complete and utter throttling. I even turned off my AV and firewall momentarily; no luck. The Gmail login page starts to load, and by the midpoint it just stays there, loading and loading and loading... it never ends.

    I tried everything: I reset the modem and router multiple times, and I reinstalled my operating system (from Vista to Windows 7), hoping that a complete reinstall would solve the issue, but no luck. And yes, I am going to call my ISP, but not to solve this issue — to cancel them; I want to upgrade from DSL to cable anyway. I didn't mention my ISP because I'm not sure if that is within the rules (if it's okay, someone let me know and I will).

    P.S. This all happened in one day; before that, Gmail was perfectly accessible on this computer, and I can't remember anything special happening that day. The only thing I can think of is that my ISP, or Google itself, is blocking this computer based on its MAC address, but I don't know if that's even done.

    Additional info:
    PC: Windows 7 Ultimate 32-bit
    Connection type: DSL
    Connecting medium: Ethernet cable via USB converter

    I should mention I can access Gmail and YouTube just fine through an IP proxy service.
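
    Given that the page half-loads and a proxy works, an MTU/fragmentation problem on the DSL line is worth ruling out before assuming deliberate blocking. A sketch from the Windows box (ping's -f flag sets don't-fragment, -l sets the payload size; the interface name below is a placeholder):

        ping -f -l 1472 mail.google.com
        REM if 1472 reports "needs to be fragmented" but e.g. 1400 gets replies, try:
        REM   netsh interface ipv4 set subinterface "Local Area Connection" mtu=1400 store=persistent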

  • Partition table corrupted (USB flash drive)

    - by 13ren
    It's an 8 GB Patriot thumb drive which I've used extensively, with lots of data on it. Today it is detected, but the partition table is gone (EDIT: at least some data is still there — see below).

    EDIT @Sathya (thanks), here's the relevant output from sudo fdisk -l:

        Disk /dev/sdc: 8019 MB, 8019509248 bytes
        247 heads, 62 sectors/track, 1022 cylinders
        Units = cylinders of 15314 * 512 = 7840768 bytes

        Disk /dev/sdc doesn't contain a valid partition table

    So it looks like it is /dev/sdc, with that 8 GB... and no partition table. I tried to mount /dev/sdc (and then ran dmesg | tail):

        /media> sudo mount /dev/sdc mytmp
        mount: wrong fs type, bad option, bad superblock on /dev/sdc,
               missing codepage or other error
               In some cases useful info is found in syslog - try
               dmesg | tail or so

        /media> dmesg | tail
        [   24.300000] sdc: unknown partition table
        [   24.320000] sd 2:0:0:0: Attached scsi removable disk sdc
        [   24.370000] usb-storage: device scan complete
        [   26.870000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [   26.870000] EXT2-fs: group descriptors corrupted!
        [   50.420000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [   50.430000] unhashed dentry being revalidated: .DCOPserver_eeepc-brendanma__0
        [ 5565.470000] EXT2-fs error (device sdc): ext2_check_descriptors: Block bitmap for group 1 not in group (block 0)!
        [ 5565.470000] EXT2-fs: group descriptors corrupted!

    EDIT @Col: results from testdisk:

        Disk /dev/sdc - 8013 MB / 7642 MiB - CHS 1022 247 62
        Current partition structure:
             Partition                  Start        End    Size in sectors
        Partition sector doesn't have the endmark 0xAA55

    After I hit [proceed], it says:

        Structure: Ok.
        Keys A: add partition, L: load backup, Enter: to continue

    The "Structure: Ok." seems reassuring... will "A: add partition" make my old data accessible (if it's still there), or will it make a new, fresh partition? Another option is "[ MBR Code ] Write TestDisk MBR code to first sector" — would it be better to do this?

    EDIT: I found that at least some of my data is still on the flash drive by using the command below and searching for English text in less (like " the "):

        cat /dev/sde | tr -cd '\11\12\40\1540-\176' | less

    (The drive changed from /dev/sdb to /dev/sde because I connected some extra drives today.) I've learnt that /dev/sde1 would be the first partition, while /dev/sde is the whole drive. Because Unix treats these devices just like files, you can use all the ordinary Unix file commands on them, like cat, and then process them like any other stream of data. The tr above removes non-printable characters ("\40" is space, which I wanted to preserve). In less, you can use "/" to search, similar to Vim.

    How can I get my data back (assuming it's still there)? If only the partition table is corrupted, is there a standard "partition recovery tool"? Is there a way to "repartition" without deleting everything?
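
    Before letting TestDisk write anything (whether "add partition" or new MBR code), it's worth imaging the stick first, so that every recovery experiment is reversible — a sketch, assuming ~8 GB of free disk space and that the stick currently appears as /dev/sde:

        sudo dd if=/dev/sde of=~/usb.img bs=1M conv=noerror,sync
        # recovery tools (testdisk, photorec, ...) can then be aimed at usb.img
        # instead of the physical stick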

  • How to get more information from the system crash

    - by viraptor
    I'd like to debug an issue I'm having with a Linux (Debian stable) server, but I'm running out of ideas for how to confirm any diagnosis.

    Some background: the servers are DL160-class boxes with hardware RAID between two disks. They run a lot of services, mostly utilising the network interface and CPU. There are 8 CPUs, and the 7 most CPU-hungry "main" processes are bound to one core each via CPU affinity; other random background scripts are not pinned anywhere. The filesystem writes ~1.5k blocks/s the whole time (going above 2k/s at peak). Normal CPU usage for these servers is ~60% on 7 cores, with minimal usage on the last (whatever's running on shells, usually).

    What actually happens is that the "main" services start using 100% CPU at some point, mostly stuck in kernel time. After a couple of seconds the load average goes over 400 and we lose any way to connect to the box (KVM is on its way, but not there yet). Sometimes we see the kernel report a hung task (but not always):

        [118951.272884] INFO: task zsh:15911 blocked for more than 120 seconds.
        [118951.272955] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [118951.273037] zsh           D 0000000000000000     0 15911      1
        [118951.273093]  ffff8101898c3c48 0000000000000046 0000000000000000 ffffffffa0155e0a
        [118951.273183]  ffff8101a753a080 ffff81021f1c5570 ffff8101a753a308 000000051f0fd740
        [118951.273274]  0000000000000246 0000000000000000 00000000ffffffbd 0000000000000001
        [118951.273335] Call Trace:
        [118951.273424]  [<ffffffffa0155e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [118951.273510]  [<ffffffff804294f6>] schedule_timeout+0x1e/0xad
        [118951.273563]  [<ffffffff8027577c>] __pagevec_free+0x21/0x2e
        [118951.273613]  [<ffffffff80428b0b>] wait_for_common+0xcf/0x13a
        [118951.273692]  [<ffffffff8022c168>] default_wake_function+0x0/0xe
        ....

    This would point at RAID/disk failure; however, sometimes the tasks hang on the kernel's gettsc instead, which would indicate some general weird hardware behaviour. The box also runs MySQL (almost read-only, 99% cache hit), which seems to spawn many more threads during the system problems. During the day it does ~200kq/s (selects) and ~10q/s (writes).

    The host never runs out of memory or swaps, and no OOM reports are spotted. We've got many boxes with similar/same hardware and they all seem to behave this way, but I'm not sure which part fails, so it's probably not a good idea to just grab something more powerful and hope the problem goes away. The applications themselves don't report anything wrong when they're running, and I can run anything safely on the same hardware in an isolated environment.

    What can I do to narrow down the problem? Where else should I look for an explanation?
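
    When the box wedges and remote logins are gone, magic-SysRq dumps are one of the few remaining sources of information, since they go straight to the kernel log. A sketch to prepare ahead of the next episode (run as root; with a serial console or netconsole the output survives even a hard lockup):

        echo 1 > /proc/sys/kernel/sysrq     # enable magic SysRq
        echo w > /proc/sysrq-trigger        # backtraces of blocked (D-state) tasks
        echo l > /proc/sysrq-trigger        # backtraces of all active CPUs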

  • PPTP VPN Not Working - Peer failed CHAP authentication, PTY read or GRE write failed

    - by armani
    Brand-new install of CentOS 6.3, following this guide: http://www.members.optushome.com.au/~wskwok/poptop_ads_howto_1.htm. I got pptpd running [v1.3.4] and the VPN authenticating users against our Active Directory using winbind, smb, etc. All my tests to check whether I'm still authenticated to the AD server pass ["kinit -V [email protected]", "smbclient", "wbinfo -t"].

    VPN users were able to connect for, like... an hour. I connected from my Android phone using domain credentials and saw that I got an IP allocated for internal VPN users (I've since changed the range, but even setting it back to the initial one doesn't work). Ever since then, no matter what settings I try, I pretty consistently get this in /var/log/messages (and the VPN client fails):

        [root@vpn2 ~]# tail /var/log/messages
        Aug 31 15:57:22 vpn2 pppd[18386]: pppd 2.4.5 started by root, uid 0
        Aug 31 15:57:22 vpn2 pppd[18386]: Using interface ppp0
        Aug 31 15:57:22 vpn2 pppd[18386]: Connect: ppp0 <--> /dev/pts/1
        Aug 31 15:57:22 vpn2 pptpd[18385]: GRE: Bad checksum from pppd.
        Aug 31 15:57:24 vpn2 pppd[18386]: Peer armaniadm failed CHAP authentication
        Aug 31 15:57:24 vpn2 pppd[18386]: Connection terminated.
        Aug 31 15:57:24 vpn2 pppd[18386]: Exit.
        Aug 31 15:57:24 vpn2 pptpd[18385]: GRE: read(fd=6,buffer=8059660,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7)
        Aug 31 15:57:24 vpn2 pptpd[18385]: CTRL: Client 208.54.86.242 control connection finished

    Before you go blaming the firewall (all other forum posts I find seem to go there): this VPN server is on our DMZ network. We're using a Juniper SSG-5 gateway, and I've assigned a WAN IP to the VPN box itself, zoned into the DMZ zone. I have full "Any IP / Any Protocol" open traffic rules between the DMZ<--Untrust and DMZ<--Trust zones. I'll limit this later to just the authenticating traffic it needs, but for now I think we can rule out the firewall blocking anything.

    Here's my /etc/pptpd.conf (omitting comments):

        option /etc/ppp/options.pptpd
        logwtmp
        localip [EXTERNAL_IP_ADDRESS]
        remoteip [ANOTHER_EXTERNAL_IP_ADDRESS, AND HAVE TRIED AN ARBITRARY GROUP LIKE 5.5.0.0-100]

    Here's my /etc/ppp/options.pptpd.conf (omitting comments):

        name pptpd
        refuse-pap
        refuse-chap
        refuse-mschap
        require-mschap-v2
        require-mppe-128
        ms-dns 192.168.200.42    # our internal domain controller
        ms-wins 192.168.200.42
        proxyarp
        lock
        nobsdcomp
        novj
        novjccomp
        nologfd
        auth
        nodefaultroute
        plugin winbind.so
        ntlm_auth-helper "/usr/bin/ntlm_auth --helper-protocol=ntlm-server-1"

    Any help is GREATLY appreciated. I can give you any more info you need to know, and it's a new test server, so I can perform any tests/reboots required to get it up and going. Thanks a ton.
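
    One detail worth double-checking before anything else: pptpd.conf points pppd at /etc/ppp/options.pptpd, while the options file above is described as /etc/ppp/options.pptpd.conf. If those really are two different files, pppd may be running without the winbind plugin entirely, which would fail CHAP for every domain user. It's also worth exercising the same helper pppd uses, outside pptpd — a sketch, with the domain name as a placeholder:

        /usr/bin/ntlm_auth --request-nt-key --domain=YOURDOMAIN --username=armaniadm
        # prompts for the password; "NT_STATUS_OK: Success (0x0)" means winbind
        # can authenticate the account, pointing the blame back at pptpd/pppd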

  • Command-line connect to wired network for Ubuntu

    - by Tim
    I'd like to know how to use the command line to connect to a wired network, in general, on Ubuntu 8.10. In my case, I connect a cable to my laptop but it doesn't work with WICD, so I'd like to try the command-line method. Here is the ifconfig of my network adapters:

        $ ifconfig
        eth0      Link encap:Ethernet  HWaddr 00:c0:9f:8d:23:74
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:19 Base address:0x1800

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:4457 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:4457 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:493002 (493.0 KB)  TX bytes:493002 (493.0 KB)

        wlan0     Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1
                  RX packets:1508929 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:768144 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:806027375 (806.0 MB)  TX bytes:78834873 (78.8 MB)

        wlan0:avahi Link encap:Ethernet  HWaddr 00:0e:9b:ab:56:19
                  inet addr:169.254.5.92  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST NOTRAILERS PROMISC ALLMULTI  MTU:576  Metric:1

        wmaster0  Link encap:UNSPEC  HWaddr 00-0E-9B-AB-56-19-00-00-00-00-00-00-00-00-00-00
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    Thanks and regards!

    UPDATE: Tried what oyvindio suggested. Here is the failing output:

        $ sudo dhclient3 eth0
        There is already a pid file /var/run/dhclient.pid with pid 18279
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        mon0: unknown hardware address type 803
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:c0:9f:8d:23:74
        Sending on   LPF/eth0/00:c0:9f:8d:23:74
        Sending on   Socket/fallback
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 15
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 11
        DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 8
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.
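
    One hint in the ifconfig output above: the eth0 stanza has no RUNNING flag, which usually means no link/carrier is detected — worth confirming before fighting DHCP any further. A sketch (mii-tool ships with net-tools; ethtool may need installing):

        sudo mii-tool eth0                          # hope for e.g. "eth0: negotiated 100baseTx-FD, link ok"
        sudo ethtool eth0 | grep "Link detected"
        # if the link is up, retry: sudo dhclient3 eth0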

  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi guys. I recently moved my site to a new server (Apache 2, PHP 5, MySQL 5). The site is an Invision-based forum. Every few posts/topics it just hangs. The data has been written, because if you stop and reload, the post/thread is there. I thought it was a write issue initially, but no: the data is written, yet the page load never completes, and it never leaves the page where the data was input.

    What's the best way to troubleshoot this issue? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being the problem, as the values are still big enough and there is no mention of timeouts in the MySQL log. (For the record, there is nothing in PHP's error log either.) Thanks in advance!

    EDIT: I checked my server-status. It all looked OK, but I had a suspicion I was hitting my ServerLimit, so I doubled that and also enabled keepalives. Will keep an eye on it.

    EDIT 2: It has now been a few days and this is still occurring, but I have more info. Apache is throwing segfaults, but enabling core dumps does not produce them. I have tried disabling modules in Apache, but that just stops things from working. I fear it may actually be DNS-related: if I watch Live HTTP Headers in Firefox, absolutely nothing happens during the "hanging" period; after that, the responses come back fairly promptly.

    UPDATE (05/04): I built the latest versions of Apache and PHP from source — no luck. I then removed those and used the remi repo to update all my packages to the latest stable. The segfaults seem to have stopped, but the hanging continues. Configs are at:
    www.skylinesaustralia.com/php.ini
    www.skylinesaustralia.com/my.cnf
    www.skylinesaustralia.com/httpd.conf

    UPDATE - SOLVED! The issue was a gigantic query cache size in MySQL. It was 2 GB; changing it to 64 MB sorted it. Thanks for all the help, everybody — much appreciated!!
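
    For reference, the fix as a my.cnf fragment. The usual explanation for the symptom is that the query cache is guarded by a single global mutex, and invalidating or pruning a 2 GB cache can stall every query that touches it:

        [mysqld]
        query_cache_size = 64M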

  • Ubuntu Lucid: Erratic screen behaviour after boot

    - by fgysin
    In short: about 50% of the time I have a screwed-up monitor setup after boot; about 50% of the time it is totally correct.

    Now the longer version: I updated my machine from 9.04 to 10.04 (via 9.10). At first I ran into some monitor problems (I have a 3-monitor setup) because of the known xinerama bug in the new X server driver: it messes up behaviour if the mouse goes left of or above screen 0, i.e. I had to make my left-most monitor screen 0. Everything finally worked out fine — I got my 3-monitor setup back, with xinerama enabled to stretch one big desktop over the 3 screens.

    Now the fun part. Every time I start up the machine, only one of the 3 monitors gets a signal and wakes up: it only recognizes the left-most monitor (screen 0) and crams all the desktop stuff into that one screen. If I go into nvidia-settings I only see one physical device, although all 3 are connected and have power. When I look at xorg.conf I can still see my old setup with 3 devices, 3 screens, xinerama active, etc. — but I was totally unable to get the 3 monitors to work (I tried unplugging monitors, reconfiguring the whole nvidia setup, ...).

    But it gets even better: when I restart the machine (i.e. choose the restart option from the Ubuntu menu), it shuts down and tries to restart, but the restart gets stuck after showing the Ubuntu splash screen with the loading bar (the moving-dots thingy), and I am forced to kill the machine by cutting power. After the power cut, the machine boots up normally and suddenly I get my 3-monitor setup back — until the next time I shut down and start up, when it all starts over again and I only have one monitor (see above).

    I really have a hard time seeing where the error is. The restart boot must somehow differ from a "normal" boot, but the fact that it gets stuck, forcing a power cut which then basically triggers a "normal" boot, does not really support this theory...

    My setup (please tell me if you need further info):
    - 3 monitors as 3 screens as one desktop (with xinerama)
    - 2 nvidia cards, where screens 0 and 1 are on card 0 and screen 2 is on card 1
    - Ubuntu 10.04 Lucid Lynx (updated from 9.10, 9.04, ...)

    I would appreciate every idea on the subject; at the moment I really don't have any clue what to do...

  • multiple puppet masters set up using inventory

    - by Oli
    I have managed to set up multiple puppet masters, with one puppet master acting as a CA: clients get their certificate from this CA server but use their designated puppet master for their manifests (see this question for more info: multiple puppet masters). However, there are a couple of things I have had to do to get this working correctly, and one error, which I'll get to.

    First of all, to get the inventory working for a puppet client (PC) connecting to its designated puppet master (PM), I had to copy the CA certs on PM1 into the CA directory on PM2. I ran this command:

        scp [email protected]:/var/lib/puppet/ssl/ca/* [email protected]:/var/lib/puppet/ssl/ca/.

    Once I had done that, I was able to uncomment the SSLCertificateChainFile, SSLCACertificateFile and SSLCARevocationFile sections of my rack.conf virtual host file on PM2, and inventory started to work. Does this sound like an acceptable way to do things?

    Secondly, in the puppet.conf file I set the designated PM server for each client; unless there is a better way, this is how it'll work in my production setup, so PC1 talks to PM1 and PC2 talks to PM2. This is where I have an error. When PC2 first requests a cert from the CA on PM1, the cert appears and I sign it on PM1. When I then run puppet agent --test on PC2 (which has server = PM2 in puppet.conf), I get:

        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: puppet-master2.test.net(10.1.1.161) access to /certificate_revocation_list/ca [find] at :112

    However, if I change PC2's puppet.conf to specify server = PM1 and rerun puppet agent --test, I get no errors. I can then revert the change in puppet.conf back to server = PM2 and everything seems to run normally. Do I have to set up some kind of ProxyPassMatch on PM2 for requests made from clients to /certificate_revocation_list/* and redirect them to PM1? Or how else can I fix this error?

    Cheers, Oli
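
    A sketch of the ProxyPassMatch idea from the question, for the PM2 virtual host — untested, and both the PM1 hostname and Puppet's default 8140 port are assumptions (the leading path segment is the Puppet environment):

        SSLProxyEngine On
        ProxyPassMatch ^/([^/]+/certificate_revocation_list/.*)$ https://puppet-master1.test.net:8140/$1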

  • DVD playback with Windows Media Player 11 works fine, but when copied to HDD and then played back, the audio stutters

    - by stakx
    I have several DVDs with short documentaries on them. Since the notebook I'm using (a Dell Latitude E6400) has only one DVD drive, and I might play back these short movies very often, I thought of copying them to the HDD and playing them back from there. However, I've run into a problem: stuttering audio.

    Problem description: when I play these movies directly from DVD (with Windows Media Player 11 under Windows Vista), everything works fine — smooth video, no significant audio problems (only the occasional click). But as soon as I copy any of these DVDs to the HDD and try to play them back from there (e.g. using the wmpdvd://drive/title/chapter?contentdir=path protocol), I get stuttering audio: for about a third of a second, roughly every 8 seconds, playback sounds like a machine gun.

    I have tried converting (i.e. ripping) the VOB files from the DVD to another format, but that resulted in a noticeable downgrade in picture quality, so I thought it best to keep the files in their original format if possible. Still, I suspect that the stuttering audio is due to some (de)muxing problem, and that changing the file format might help. Only thing is, I don't know how to convert the VOB files to another Windows Media Player-compatible format without quality loss. (Since video playback is fine, I don't think the hardware is too slow for playback.)

    I hope someone can help me, or give me further pointers on things I could try to get HDD playback working without the problem described. Some things I've tried so far, without any success:

    - VOB2MPG, to convert the .vob file to a .mpg file. But that changes only the A/V container, not the content; no re-encoding takes place at all.
    - Re-encoding with MPlayer/MEncoder. Lots of quality loss there, and I frankly haven't got the time to test all possible settings combinations available.
    - Disabling all plug-ins, equalizers, etc. in Windows Media Player.
    - Disabling all hardware acceleration on the audio playback device.

    Further info on the VOB files I'm trying to play back: the video format is MPEG ES, PAL 720x576 pixels @ 24/25 frames per second; the sound stream is uncompressed PCM, 16-bit stereo @ 48 kHz. (Might it help if I somehow re-encoded the sound stream at a lower resolution, or as MP3? If so, how would I do this without changing the video stream?)

    P.S.: I am limited to using Windows Media Player (11). (I previously tried MPlayer, btw, but the video playback quality was surprisingly bad.)
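
    On that last question — re-encoding only the sound while leaving the video untouched is exactly what stream copying is for. A sketch with ffmpeg (assuming a Windows build is available; the file names are placeholders, and AC-3 is chosen because it is a standard DVD audio codec):

        ffmpeg -i VTS_01_1.VOB -c:v copy -c:a ac3 -b:a 192k output.mpg
        REM -c:v copy keeps the MPEG-2 video bit-for-bit; only the PCM audio is re-encoded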

    Read the article

  • I am unable to connect to my netbook from any machine on my network until the netbook has pinged it

    - by Samuel Husky
    I have a rather strange issue with my netbook on my local network. When I try to connect to it in any way from a remote system, the remote system cannot find it. However, if I get the netbook to ping the machine that is trying to connect, the connection mysteriously starts working. Below is the ping test from my main PC to the netbook:

    C:\Users\Sam>ping 192.168.8.102
    Pinging 192.168.8.102 with 32 bytes of data:
    Reply from 192.168.8.100: Destination host unreachable.
    Reply from 192.168.8.100: Destination host unreachable.
    Reply from 192.168.8.100: Destination host unreachable.
    Reply from 192.168.8.100: Destination host unreachable.
    Ping statistics for 192.168.8.102:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),

    Now a ping from the netbook to my main PC:

    sam@malamute ~ $ ping 192.168.8.100
    PING 192.168.8.100 (192.168.8.100) 56(84) bytes of data.
    64 bytes from 192.168.8.100: icmp_req=1 ttl=128 time=2.46 ms
    64 bytes from 192.168.8.100: icmp_req=2 ttl=128 time=0.835 ms
    64 bytes from 192.168.8.100: icmp_req=3 ttl=128 time=1.60 ms
    64 bytes from 192.168.8.100: icmp_req=4 ttl=128 time=1.32 ms
    64 bytes from 192.168.8.100: icmp_req=5 ttl=128 time=1.34 ms
    ^C
    --- 192.168.8.100 ping statistics ---
    5 packets transmitted, 5 received, 0% packet loss, time 4004ms
    rtt min/avg/max/mdev = 0.835/1.514/2.460/0.536 ms

    And the same ping again from the main PC after the netbook has made a connection to it:

    C:\Users\Sam>ping 192.168.8.102
    Pinging 192.168.8.102 with 32 bytes of data:
    Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
    Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
    Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
    Reply from 192.168.8.102: bytes=32 time=1ms TTL=64
    Ping statistics for 192.168.8.102:
        Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
    Approximate round trip times in milli-seconds:
        Minimum = 1ms, Maximum = 1ms, Average = 1ms

    The netbook is running Gentoo and is currently connected via wireless. My main PC is running Windows 7; however, I get the same result no matter what PC I use on this network. Please see this example from a CentOS machine on the same network:

    [root@tiger ~]# ping 192.168.8.102
    PING 192.168.8.102 (192.168.8.102) 56(84) bytes of data.
    From 192.168.8.200 icmp_seq=2 Destination Host Unreachable
    From 192.168.8.200 icmp_seq=3 Destination Host Unreachable
    From 192.168.8.200 icmp_seq=4 Destination Host Unreachable
    --- 192.168.8.102 ping statistics ---
    6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5000ms, pipe 3

    If you need any more information or require logs or config files, please let me know; any assistance is greatly appreciated.

    Additional info: tcpdump on the netbook shows no responses. The same thing happens when booting into Ubuntu from a USB key. There is no issue when using a wired Ethernet connection.
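    This pattern (unreachable until the netbook talks first, then fine for a while) is the classic signature of the netbook not answering ARP requests, often because the wireless card's power-save mode drops broadcast frames. Two hedged things to test; the interface name and MAC address below are placeholders:

    # On the netbook (Gentoo): turn off wireless power management and retest.
    # wlan0 is an assumed interface name; check the real one with: iwconfig
    iwconfig wlan0 power off

    # On the Windows 7 PC, as a temporary workaround, pin a static ARP entry
    # so no ARP broadcast is needed (run from an elevated prompt; the MAC is
    # hypothetical -- read the real one off the netbook with: ip link).
    arp -s 192.168.8.102 00-11-22-33-44-55

    If the iwconfig change fixes it, the setting can be made permanent via the distribution's network scripts.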

    Read the article

  • IIS6 Virtual Directory 500 Error on Remote Share

    - by David Boike
    We have our servers at the server farm in a domain; let's call it LIVE. Our developer computers live in a completely separate corporate domain, miles and miles away; let's call it CORP. We have a large central storage unit (Unix) that houses images and other media needed by many webservers in the server farm. The IIS application pools run as (let's say) LIVE\MediaUser and use those credentials to connect to a central storage share as a virtual directory, retrieve the images, and serve them as if they were local on each server.

    The problem is in development, on my development machine. I log in as CORP\MyName. My IIS 6 application pool runs as Network Service. I can't run it as a user from the LIVE domain because my machine isn't (and cannot be) joined to that domain. I try to create a virtual directory, point it to the same network directory, click Connect As, uncheck the "Always use the authenticated user's credentials when validating access to the network directory" checkbox so that I can enter the login info, enter the credentials for LIVE\MediaUser, click OK, verify the password, etc.

    This doesn't work. I get "HTTP Error 500 - Internal server error" from IIS. The IIS log file reports sc-status = 500, sc-substatus = 16, and sc-win32-status = 1326. The documentation says this means "UNC authorization credentials are incorrect", and the Win32 status means "Logon failure: unknown user name or bad password." This would be all well and good if it were anywhere close to accurate. I double- and triple-checked the credentials and tried multiple known-good logins. The IIS manager allows me to view the file tree in its window; it's only the browser that kicks me out.

    I even tried going to the virtual directory's Directory Security tab and, under Authentication and Access Control, using the same LIVE domain username for the anonymous access credential. No luck.

    I'm not trying to run any ASP, ASP.NET, or other dynamic anything out of the virtual directory. I just want IIS to be able to serve static images, CSS, and JS files. If anyone has some bright ideas I would be most appreciative!
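    One quick sanity check worth running before digging further into IIS: verify from the development machine itself that the LIVE\MediaUser credentials can open the share at all, since a cross-domain trust or firewall issue would produce the same 1326 logon failure. The share path below is a placeholder:

    REM Map the storage share with the exact credentials IIS is being given;
    REM the trailing * makes 'net use' prompt for the password interactively.
    net use \\centralstorage\media /user:LIVE\MediaUser *

    REM Clean up the test mapping afterwards.
    net use \\centralstorage\media /delete

    If this mapping also fails with a logon error, the problem is between the two domains (or in the Unix box's Samba authentication), not in the IIS virtual directory settings.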

    Read the article

  • Seeking (somewhat) better explanations about supporting > 2.1 TB hard drives.

    - by irrational John
    Today while Googling about, I stumbled across posts claiming that Seagate plans to ship a 3 TB drive sometime later in 2010. Unfortunately, the stuff I looked at all seemed to contain tidbits of info which I didn't think fit together properly. (I would link to some examples, but I'm only allowed 1 link per post at the moment.) Now I don't really have any "need" to better understand the underlying tedious details of this. I am just curious. And confused. So ... some questions I'm hoping someone better informed than I might answer.

    The talk about a potential addressing problem in both the hardware and the software confused me. The assertion is that something called Long LBA addressing (LLBA) is needed in the Command Descriptor Block as a way to get around the current limits to access a hard drive bigger than ~2.1 (or ~2.2?) TB. OK, fine. But I thought the last time this problem came up it was solved by extending the length of the LBA field from 28 to 48 bits. (Remember this website? www.48bitlba.com) A 6-byte LBA is clearly large enough, so what's up with this LLBA talk? I thought this was all fixed back by Win XP SP2, if not sooner. And certainly all the hardware should be up to the task, shouldn't it?

    The real problem, as I understand it, with drives much bigger than 2 TB is the 4-byte LBA fields in the Master Boot Record (MBR) used to partition just about all hard drives at the moment. The most likely solution is to migrate to Intel's GUID Partition Table (GPT). A GPT uses 8-byte fields for the LBA. What I don't understand in this context is what the problem is with booting, say, Windows from a 3 TB drive that uses a GPT. Granted, the current PC BIOS wouldn't know how to recognize or work with a GPT, but every GPT comes with a so-called "safety" or "guarding" MBR in sector 0. Apple already uses a hybrid version of the MBR to allow them to boot Windows on their Intel Macs (aka Boot Camp). Couldn't something similar be done to allow the PC BIOS to recognize and boot from a partition in, say, the first 1 GB of a 3 TB or larger drive?

    I've got more questions, such as where 4K sectors fit into all of this. But it's probably time I just shut up and posted this. ;-) -irrational john
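    For reference, the ~2.1 vs. ~2.2 TB ambiguity above is just decimal vs. binary units on the same hard limit: a 4-byte (32-bit) LBA can address 2^32 sectors of 512 bytes each. A one-liner to check the arithmetic, assuming a bash shell:

    # 32-bit LBA x 512-byte sectors = the MBR ceiling
    echo $(( 2**32 * 512 ))   # 2199023255552 bytes = ~2.2 TB (decimal) = exactly 2 TiB (binary)

    The 48-bit ATA LBA fixed the interface limit (2^48 sectors x 512 bytes = 128 PiB), which is why the partition table, not the command set, is the bottleneck this time around.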

    Read the article

  • SSH to an ubuntu machine using avahi

    - by tensaiji
    I have an Ubuntu box that I connect to using Avahi. Connecting to that box works fine for all services (I regularly use AFP, SSH and SMB on it), but I've noticed that whenever I connect to it from a Mac using SSH (and using the ".local" DNS name provided by Avahi, e.g. ssh <hostname>.local), SSH tries to connect using IPv6, which for some reason times out (after two minutes); then it tries IPv4, which connects immediately. I'd like to avoid this timeout, as it's really annoying for me and other users. If SSH tried IPv4 first, or if SSH over IPv6 worked, that would solve the problem, but so far I've been unable to get either to work (the best I've managed is to specify the "-4" option to SSH to stop it from trying IPv6 at all).

    I'm using Ubuntu 10.04. Any solution has to be on the server (not the client), as there are multiple clients connecting. A possible complication might be that my LAN is set up to allow link-local IPv6 addresses only, but I have other servers (running Mac OS) that I can SSH into using IPv6.

    I suspect that the problem could be solved either by preventing Avahi from broadcasting the IPv6 address or by enabling SSH over IPv6, but as far as I can tell Avahi is already configured not to broadcast the IPv6 address, and sshd is configured to allow IPv6 connections!

    Here's my /etc/avahi/avahi-daemon.conf (I don't think I've changed anything from the Ubuntu defaults):

    [server]
    #host-name=foo
    #domain-name=local
    #browse-domains=0pointer.de, zeroconf.org
    use-ipv4=yes
    use-ipv6=no
    #allow-interfaces=eth0
    #deny-interfaces=eth1
    #check-response-ttl=no
    #use-iff-running=no
    #enable-dbus=yes
    #disallow-other-stacks=no
    #allow-point-to-point=no

    [wide-area]
    enable-wide-area=yes

    [publish]
    #disable-publishing=no
    #disable-user-service-publishing=no
    #add-service-cookie=no
    #publish-addresses=yes
    #publish-hinfo=yes
    #publish-workstation=yes
    #publish-domain=yes
    #publish-dns-servers=192.168.50.1, 192.168.50.2
    #publish-resolv-conf-dns-servers=yes
    #publish-aaaa-on-ipv4=yes
    #publish-a-on-ipv6=no

    [reflector]
    #enable-reflector=no
    #reflect-ipv=no

    [rlimits]
    #rlimit-as=
    rlimit-core=0
    rlimit-data=4194304
    rlimit-fsize=0
    rlimit-nofile=300
    rlimit-stack=4194304
    rlimit-nproc=3

    And here's my sshd_config (mainly updated to only allow public/private keys):

    # What ports, IPs and protocols we listen for
    Port 22
    # Use these options to restrict which interfaces/protocols sshd will bind to
    #ListenAddress ::
    #ListenAddress 0.0.0.0
    Protocol 2
    # HostKeys for protocol version 2
    HostKey /etc/ssh/ssh_host_rsa_key
    HostKey /etc/ssh/ssh_host_dsa_key
    #Privilege Separation is turned on for security
    UsePrivilegeSeparation yes
    # Lifetime and size of ephemeral version 1 server key
    KeyRegenerationInterval 3600
    ServerKeyBits 768
    # Logging
    SyslogFacility AUTH
    LogLevel INFO
    # Authentication:
    LoginGraceTime 180
    PermitRootLogin no
    StrictModes yes
    RSAAuthentication yes
    PubkeyAuthentication yes
    #AuthorizedKeysFile %h/.ssh/authorized_keys
    # Don't read the user's ~/.rhosts and ~/.shosts files
    IgnoreRhosts yes
    # For this to work you will also need host keys in /etc/ssh_known_hosts
    RhostsRSAAuthentication no
    # similar for protocol version 2
    HostbasedAuthentication no
    # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
    #IgnoreUserKnownHosts yes
    # To enable empty passwords, change to yes (NOT RECOMMENDED)
    PermitEmptyPasswords no
    # Change to yes to enable challenge-response passwords (beware issues with
    # some PAM modules and threads)
    ChallengeResponseAuthentication no
    # Change to no to disable tunnelled clear text passwords
    PasswordAuthentication no
    AllowGroups sshusers
    # Kerberos options
    #KerberosAuthentication no
    #KerberosGetAFSToken no
    #KerberosOrLocalPasswd yes
    #KerberosTicketCleanup yes
    # GSSAPI options
    #GSSAPIAuthentication no
    #GSSAPICleanupCredentials yes
    X11Forwarding yes
    X11DisplayOffset 10
    PrintMotd no
    PrintLastLog yes
    TCPKeepAlive yes
    #UseLogin no
    MaxStartups 10:30:60
    #Banner /etc/issue.net
    # Allow client to pass locale environment variables
    AcceptEnv LANG LC_*
    Subsystem sftp /usr/lib/openssh/sftp-server
    UsePAM yes

    Does anyone have any ideas that I can try, or has experienced anything similar?
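    One server-side detail that stands out in the avahi-daemon.conf above: use-ipv6=no only stops Avahi from using IPv6 transport; the commented-out default publish-aaaa-on-ipv4=yes means the daemon may still be answering AAAA queries over IPv4, which is what hands the Mac an IPv6 address to try first. A hedged tweak worth testing (the option name is a real Avahi setting; whether it cures this particular timeout is an assumption):

    # /etc/avahi/avahi-daemon.conf
    [publish]
    # Stop advertising AAAA (IPv6) records in responses sent over IPv4,
    # so .local lookups from the Macs resolve to the IPv4 address only.
    publish-aaaa-on-ipv4=no

    Then restart the daemon (sudo service avahi-daemon restart on Ubuntu 10.04) and retry ssh <hostname>.local from a client.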

    Read the article

< Previous Page | 391 392 393 394 395 396 397 398 399 400 401 402  | Next Page >