Search Results

Search found 42307 results on 1693 pages for 'solid works'.

Page 276/1693

  • mount.nfs: access denied by server while mounting (null), can't find any log information

    - by Mark0978
    Two Ubuntu servers: 10.0.8.2 is the client, 192.168.20.58 is the server. Between the 2 machines, ping works, ssh works (in both directions). From 10.0.8.2:

        showmount -e 192.168.20.58
        Export list for 192.168.20.58:
        /imr/nfsshares/foobar 10.0.8.2

        mount.nfs 192.168.20.58:/imr/nfsshares/foobar /var/data/foobar -v
        mount.nfs: access denied by server while mounting (null)

    Found several things online, tried them all and still can't find any log information anywhere. On the server:

        [email protected]:/var/log# cat /etc/hosts.allow
        sendmail: all
        ALL: 10.0.8.2

    /etc/hosts.deny is all comments. How can I get a trail of log statements to figure this out? What does it take to get some logging so I have some idea of WHY it won't mount?

    On the server:

        [email protected]# nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 21:16 CDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.0000060s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  unknown
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  unknown
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 3.81 seconds

    From the client:

        [email protected]:~$ nmap -sR RPC 192.168.20.58

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-07-04 22:14 EDT
        Failed to resolve given hostname/IP: RPC. Note that you can't use '/mask' AND '1-4,7,100-' style IP ranges
        Nmap scan report for 192.168.20.58
        Host is up (0.73s latency).
        Not shown: 988 closed ports
        PORT     STATE SERVICE VERSION
        22/tcp   open  unknown
        80/tcp   open  unknown
        111/tcp  open  rpcbind (rpcbind V2) 2 (rpc #100000)
        139/tcp  open  unknown
        445/tcp  open  unknown
        902/tcp  open  unknown
        2049/tcp open  nfs (nfs V2-4) 2-4 (rpc #100003)
        3000/tcp open  unknown
        5666/tcp open  unknown
        8009/tcp open  unknown
        8222/tcp open  unknown
        8333/tcp open  unknown

        Nmap done: 1 IP address (1 host up) scanned in 191.56 seconds
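
    A hedged debugging sketch (paths and addresses taken from the question, everything else an assumption): the "(null)" in the error usually points back at the server-side export list, and rpcdebug can make nfsd chatty enough to log the refusal to syslog:

        # On the server (192.168.20.58): confirm what is actually exported, then turn on NFS/RPC debugging.
        cat /etc/exports            # expect something like: /imr/nfsshares/foobar 10.0.8.2(rw,sync)
        exportfs -ra                # re-read /etc/exports
        exportfs -v                 # show the live export list with its options
        rpcdebug -m nfsd -s all     # verbose nfsd logging, messages land in /var/log/syslog
        rpcdebug -m rpc -s call
        # ...retry the mount from 10.0.8.2, read /var/log/syslog, then switch debugging back off:
        rpcdebug -m nfsd -c all
        rpcdebug -m rpc -c call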

  • Apache2 with SSL and mod_jk on SUSE Linux Enterprise | Apache always starts SSL disabled

    - by Shaakunthala
    I have installed Apache2 (with mod_ssl enabled) on SUSE Linux Enterprise Server 11 (x86_64), patchlevel 1, using YaST. Once installed, I tested whether everything worked fine so far. SSL also worked fine; just 'apache2ctl start' was enough to get everything working. Then I installed mod_jk and applied the following configuration changes to make it work.

    /etc/sysconfig/apache2 (added JK module):

        APACHE_MODULES="... ... ... ... ...jk"

    /etc/apache2/httpd.conf (included mod_jk.conf):

        Include /etc/apache2/mod_jk.conf

    /etc/apache2/mod_jk.conf (new file):

        JkLogFile /var/log/apache2/mod_jk.log
        JkWorkersFile /etc/apache2/mod_jk/workers.properties
        JkShmFile /etc/apache2/mod_jk/mod_jk.shm
        # Set the jk log level [debug/error/info]
        JkLogLevel info
        # Select the timestamp log format
        JkLogStampFormat "[%a %b %d %H:%M:%S %Y] "

    The mod_jk.log and mod_jk.shm files were also created.

    /etc/apache2/mod_jk/workers.properties (new file):

        worker.list=jira
        worker.jira.type=ajp13
        worker.jira.host=127.0.0.1
        worker.jira.port=8009

    Once everything was done, I restarted Apache using the following command:

        apache2ctl restart

    Then I observed that SSL is not working. When checked with telnet, port 443 is not open. In listen.conf, if I specify port 443 bypassing the 'IfDefine' and 'IfModule' conditions, then SSL works properly. This likely means the 'SSL' flag is not being passed to Apache. I did not make this a persistent change as I thought it might not be the correct practice. I checked /etc/sysconfig/apache2 to see if the flag had been altered, but it is there. Although this flag is enabled, Apache won't start with SSL support:

        APACHE_SERVER_FLAGS="SSL"

    Finally, I had to start Apache using the following command:

        apache2ctl -D SSL -k start

    And my question is: why did Apache (or apache2ctl) fail to start with SSL when I have installed and correctly configured mod_jk, and no other configuration changes were applied? Have I missed anything? Thanks in advance. -- Shaakunthala
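
    A hedged sketch of what typically bites here on SLES (assumption: the stock SUSE tooling, a2enflag and rcapache2, is installed): the flags in /etc/sysconfig/apache2 are only exported by the init script, not by a bare apache2ctl, so restarting through the init script picks SSL up again:

        a2enflag SSL          # persist -D SSL in APACHE_SERVER_FLAGS
        rcapache2 restart     # same as /etc/init.d/apache2 restart; this path sources /etc/sysconfig/apache2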

  • curl FTPS with client certificate to a vsftpd

    - by weeheavy
    I'd like to authenticate FTP clients either via username+password or a client certificate. Only FTPS is allowed. User/password works, but while testing with curl (I don't have another option) and a client certificate, I need to pass a user. Isn't it technically possible to authenticate only by providing a certificate?

    vsftpd.conf:

        passwd_chroot_enable=YES
        chroot_local_user=YES
        ssl_enable=YES
        rsa_cert_file=/usr/local/ssl/certs/vsftpd.pem
        force_local_data_ssl=YES
        force_local_logins_ssl=YES

    Tested with:

        curl -v -k -E client-crt.pem --ftp-ssl-reqd ftp://server:21/testfile

    the output is:

        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS handshake, Server hello (2):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Request CERT (13):
        * SSLv3, TLS handshake, Server finished (14):
        * SSLv3, TLS handshake, CERT (11):
        * SSLv3, TLS handshake, Client key exchange (16):
        * SSLv3, TLS handshake, CERT verify (15):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSLv3, TLS change cipher, Client hello (1):
        * SSLv3, TLS handshake, Finished (20):
        * SSL connection using DES-CBC3-SHA
        * Server certificate:
        * SSL certificate verify result: self signed certificate (18), continuing anyway.
        > USER anonymous
        < 530 Anonymous sessions may not use encryption.
        * Access denied: 530
        * Closing connection #0
        * SSLv3, TLS alert, Client hello (1):
        curl: (67) Access denied: 530

    This is theoretically OK, as I forbid anonymous access. If I specify a user with -u username:pass it works, but it would without a certificate too. The client certificate seems to be OK; it looks like this:

    client-crt.pem:

        -----BEGIN RSA PRIVATE KEY-----
        content
        -----END RSA PRIVATE KEY-----
        -----BEGIN CERTIFICATE-----
        content
        -----END CERTIFICATE-----

    What am I missing? Thanks in advance. (The OS is Solaris 10 SPARC.)
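
    A hedged sketch (the option names are standard vsftpd ones, the CA path is an assumption): vsftpd can require and verify a client certificate, but the FTP protocol still runs a USER/PASS exchange, so curl still needs -u; what the certificate settings add is that password logins without an acceptable certificate get refused:

        # additions to vsftpd.conf
        require_cert=YES
        validate_cert=YES
        ca_certs_file=/usr/local/ssl/certs/ca.pem    # CA that signed client-crt.pem (hypothetical path)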

  • Windows 7 - “The specified network password is not correct.” when the password is in fact correct.

    - by Win7 Home User
    I have had a Samba server set up for some time now. It is a hardware NAS which unfortunately does not provide access to the Samba logs (the exact model of the NAS is called Addonics NAS Adapter). I also have a Windows Vista and a Windows XP machine; from both I am able to map \\192.168.0.20\Smd with no errors (net use l: \\192.168.0.20\Smd works, after asking for my username and password).

    I also bought a brand new computer with Windows 7, and when I try to execute the same exact net use command on it - using the exact same username/password pair - I get a "The specified network password is not correct." message. I also tried mapping from the Windows Explorer menu, and got the same error. I synchronized the clocks of the two machines, tried again... and yet the same error persists.

    So what is really surprising here is that mapping works from the Windows XP and Windows Vista machines, but fails from the Windows 7 machine using the exact same command and username/password. Does anyone have any idea of what could be causing this or how to solve the problem? Thanks
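
    A hedged guess at one common cause (assumption: the NAS firmware only speaks LM/NTLMv1, while Windows 7 ships with stricter NTLM defaults than XP): lowering the LAN Manager authentication level on the Windows 7 box makes it send the older response. From an elevated cmd.exe, then reboot:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f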

  • How to diff a BIOS, or back up / restore all BIOS settings

    - by sfonck
    Hi, I'm using a Dell M90 Precision laptop which has an NVidia Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white'. After restarting the computer, the BIOS and startup screens showed weird green dots and stripes, and a normal startup only shows a black screen... only VGA mode works to display something. I tried removing and reinstalling the correct drivers downloaded from Dell's website - no solution.

    I gave up and reinstalled XP - everything was working perfectly again. Two weeks later - again the white screen - I tried everything again (flashing a new BIOS also - nothing works). Reinstalled XP - everything was working again, so I made a DriveSnapShot image of the partition. Today - again the 'white screen'. OK, no problem... I was thinking all I needed to do was restore the DriveSnapShot backup. A few minutes later the backup was restored... but guess what: the video driver does not work correctly.

    As DriveSnapShot restored the complete partition, exactly as it was at the time everything was working perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and these 'settings' can get overridden by doing a new XP install. I'm out of options. Can somebody help me to find a solution for this problem:

    Is there some way to back up and restore a BIOS after seeing some problems? Is there some way to know what is causing this problem, like a BIOS diff utility? Thanks!

  • Postfix: LDAP not working (warning: dict_ldap_lookup: Search base not found: 32: No such object)

    - by Heinzi
    I set up LDAP access with Postfix.

        ldapsearch -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix -b "ou=users,ou=people,[domain]" -s sub "(&(objectclass=inetOrgPerson)(mail=[mailaddr]))"

    delivers the correct entry. The LDAP config file looks like:

        root@server2:/etc/postfix/ldap# cat mailbox_maps.cf
        server_host = localhost
        search_base = ou=users,ou=people,[domain]
        scope = sub
        bind = yes
        bind_dn = cn=postfix,ou=users,ou=system,[domain]
        bind_pw = postfix
        query_filter = (&(objectclass=inetOrgPerson)(mail=%s))
        result_attribute = uid
        debug_level = 2

    The bind_dn and bind_pw should be the same as I used above with ldapsearch. Nevertheless, calling postmap doesn't work:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        postmap: warning: dict_ldap_lookup: /etc/postfix/ldap/mailbox_maps.cf: Search base 'ou=users,ou=people,[domain]' not found: 32: No such object

    If I change the LDAP configuration so that anonymous users have complete access to LDAP:

        olcAccess: {-1}to * by * read

    then it works:

        root@server2:/etc/postfix/ldap# postmap -q [mailaddr] ldap:/etc/postfix/ldap/mailbox_maps.cf
        [user-id]

    But when I restrict this access to the postfix user:

        olcAccess: {-1}to * by dn="cn=postfix,ou=users,ou=system,[domain]" read by * break

    it doesn't work but produces the error printed above (although ldapsearch works, only postmap doesn't). Why doesn't it work when binding with a postfix DN? I think I set up the LDAP ACL for the postfix user correctly, as the ldapsearch command should prove. What can be the reason for this behaviour?
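
    A hedged debugging sketch (same DNs as above; a simple bind is assumed): error 32 on the search base means the DN postmap binds with cannot even see the base entry, so reproduce postmap's view with an explicit base-scope search rather than the sub-scope one that already works:

        ldapsearch -x -D "cn=postfix,ou=users,ou=system,[domain]" -w postfix \
            -b "ou=users,ou=people,[domain]" -s base "(objectclass=*)" dn

    If that returns nothing while the anonymous-read ACL makes everything work, the ACL granting read to the postfix DN is probably not matching (or is ordered after a rule that ends evaluation) for the ou=users,ou=people entry itself.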

  • Authenticating Windows 7 against MIT Kerberos 5

    - by tommed
    Hi there, I've been wracking my brains trying to get Windows 7 authenticating against an MIT Kerberos 5 realm (which is running on an Arch Linux server).

    I've done the following on the server (aka dc1):

        - Installed and configured an NTP time server
        - Installed and configured DHCP and DNS (set up for the domain tnet.loc)
        - Installed Kerberos from source
        - Set up the database
        - Configured the keytab
        - Set up the ACL file with: *@TNET.LOC *
        - Added a policy for my user and my machine:

            addpol users
            addpol admin
            addpol hosts
            ank -policy users [email protected]
            ank -policy admin tom/[email protected]
            ank -policy hosts host/wdesk3.tnet.loc -pw MYPASSWORDHERE

    I then did the following to the Windows 7 client (aka wdesk3):

        - Made sure the IP address was supplied by my DHCP server and dc1.tnet.loc pings OK
        - Set the Internet time server to my Linux server (aka dc1.tnet.loc)
        - Used ksetup to configure the realm:

            ksetup /SetRealm TNET.LOC
            ksetup /AddKdc dc1.tnet.loc
            ksetup /SetComputerPassword MYPASSWORDHERE
            ksetup /MapUser * *

        - After some googling I found that DES encryption was disabled by Windows 7 by default, and I turned the policy on to support DES encryption over Kerberos
        - Then I rebooted the Windows client

    However, after doing all that I still cannot log in from my Windows client. :(

    Looking at the logs on the server, the request looks fine and everything works great; I think the issue is that the response from the KDC is not recognized by the Windows client, and a generic login error appears: "Login Failure: User name or password is invalid". The log file for the server looks like this (I tail'ed this so I know it's happening when the Windows machine attempts the login): screenshot: http://dl.dropbox.com/u/577250/email/login_attempt.png

    If I supply an invalid realm in the login window I get a completely different error message, so I don't think it's a connection problem from the client to the server? But I can't find any error logs on the Windows machine (anyone know where these are?).

    If I try:

        runas /netonly /user:[email protected] cmd.exe

    everything works (although I don't get anything appearing in the server logs, so I'm wondering if it's not touching the server for this??), but if I run:

        runas /user:[email protected] cmd.exe

    I get the same authentication error.

    Any Kerberos gurus out there who can give me some ideas as to what to try next? Pretty please?
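
    On the "where are the client-side error logs?" part, a hedged sketch (the registry value is a documented one; whether it surfaces this particular failure is an assumption): Windows only writes Kerberos client errors to the System event log once debug logging is switched on. From an elevated cmd.exe on wdesk3, then reboot:

        reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v LogLevel /t REG_DWORD /d 1 /f

    Kerberos events then appear in Event Viewer under Windows Logs > System with a Kerberos source.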

  • How to import certificate for Apache + LDAPS?

    - by user101956
    I am trying to get LDAPS to work through Apache 2.2.17 (Windows Server 2008). If I use ldap (plain text) my configuration works great.

        LDAPTrustedGlobalCert CA_DER C:/wamp/certs/Trusted_Root_Certificate.cer
        LDAPVerifyServerCert Off
        <Location />
          AuthLDAPBindDN "CN=corpsvcatlas,OU=Service Accounts,OU=u00958,OU=00958,DC=hca,DC=corpad,DC=net"
          AuthLDAPBindPassword ..removed..
          AuthLDAPURL "ldaps://gc-hca.corpad.net:3269/dc=hca,dc=corpad,dc=net?sAMAccountName?sub"
          AuthType Basic
          AuthName "USE YOUR WINDOWS ACCOUNT"
          AuthBasicProvider ldap
          AuthUserFile /dev/null
          require valid-user
        </Location>

    I also tried the other encryption choices besides CA_DER just to be safe, with no luck.

    Finally, I also needed this with Apache Tomcat. For Tomcat I used the Tomcat JRE and ran a line like this:

        keytool -import -trustcacerts -keystore cacerts -storepass changeit -noprompt -alias mycert -file Trusted_Root_Certificate.cer

    After doing the above line, LDAPS worked great via Tomcat. This lets me know that my certificate is OK.

    Update: Both ldap modules are turned on, since using ldap instead of ldaps works fine. When I run a git clone this is the error returned:

        C:\Temp>git clone http://eqb9718@localhost/git/Liferay.git
        Cloning into Liferay...
        Password:
        error: The requested URL returned error: 500 while accessing http://eqb9718@localhost/git/Liferay.git/info/refs
        fatal: HTTP request failed

    access.log has this:

        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:12 -0600] "GET /git/Liferay.git/info/refs?service=git-upload-pack HTTP/1.1" 500 535
        127.0.0.1 - eqb9718 [23/Nov/2011:18:25:33 -0600] "GET /git/Liferay.git/info/refs HTTP/1.1" 500 535

    apache_error.log has nothing. Is there any more verbose logging I can turn on, or better tests to do?
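
    A hedged sketch of two checks (the directive and tool names are real; their relevance here is an assumption): mod_ldap only explains an LDAPS failure at debug level, and the JRE keystore import says nothing about the library Apache itself uses, so verify the chain with the same .cer from the command line:

        # httpd.conf - temporary, global (Apache 2.2 has no per-module LogLevel)
        LogLevel debug

        # from any box with OpenSSL available, against the same GC port:
        openssl s_client -connect gc-hca.corpad.net:3269 -CAfile C:/wamp/certs/Trusted_Root_Certificate.cer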

  • How to set up an Apache SSL proxy to OpenERP 7 running in a VM?

    - by Johnbritto
    I have installed OpenERP v7 in an Ubuntu 12.04 virtual machine from Launchpad, i.e. server, web, addons. I configured an SSL reverse proxy on the virtual machine, and my configuration for <VirtualHost *:443> is:

        ServerName openerp.mydomain.net
        ServerAdmin openerp@localhost
        SSLEngine on
        SSLCertificateFile /etc/ssl/openerp/server.crt
        SSLCertificateKeyFile /etc/ssl/openerp/server.key
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
          Order deny,allow
          Allow from all
        </Proxy>
        ProxyVia On
        ProxyPass / http://172.16.150.14:8069/
        ProxyPassReverse / http://172.16.150.14:8069/
        RequestHeader set "X-Forwarded-Proto" "https"
        # Fix IE problem (httpapache proxy dav error 408/409)
        SetEnv proxy-nokeepalive 1
        </VirtualHost>

    On the host, I have configured an Apache reverse proxy for my subdomain in vhost_ssl.conf as:

        SSLEngine On
        SSLProxyEngine On
        ProxyRequests Off
        ProxyPreserveHost On
        <Proxy *>
          Order deny,allow
          Allow from all
        </Proxy>
        ProxyPass / https://172.16.150.14/
        ProxyPassReverse / https://172.16.150.14/
        SetEnv proxy-nokeepalive 1
        <Location />
          Order allow,deny
          Allow from all
        </Location>

    I have set 172.16.150.14 on the netrpc and xmlrpc interfaces in openerp-server.conf.

    Now, when I access https://openerp.mydomain.net from the Firefox and Chrome browsers, I get http://openerp.mydomain.net%2C%20openerp.mydomain.net/?db=testingdb which makes a 404. But when I access the URL from IE 9, the URL https://openerp.mydomain.net works OK. Secondly, if I change the parameter list_db = false, then the links work as expected.

    Kindly let me know what is creating the bottleneck with the URL redirect to http://openerp.mydomain.net, openerp.mydomain.net/?db=testdb on Firefox and Chrome. I am stuck here doing troubleshooting to get the URL to work.
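
    A hedged debugging sketch (hosts and IPs are the ones from the question; the point is only to find which hop mangles the redirect): ask each layer for its Location header directly and compare, since the duplicated hostname has to be introduced either by one of the two ProxyPassReverse layers or by OpenERP's own redirect:

        curl -s -D - -o /dev/null http://172.16.150.14:8069/          # OpenERP itself
        curl -k -s -D - -o /dev/null https://172.16.150.14/           # inner Apache on the VM
        curl -k -s -D - -o /dev/null https://openerp.mydomain.net/    # outer Apache on the host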

  • Problem posting multipart form data using Apache with mod_proxy to a mongrel instance

    - by Ryan E
    I am attempting to simulate my site's production environment as closely as I can on my local machine. This is a Rails site that uses Apache with mod_proxy to forward requests to a Mongrel cluster. On my Mac OS X Leopard machine, I have the default install of Apache running and have configured a vhost to use mod_proxy to forward requests to a local running Mongrel instance on port 3000.

        <Proxy balancer://mongrel_cluster-development>
          BalancerMember http://127.0.0.1:3000
        </Proxy>

    For the most part, this is working fine. I can browse my development site using the ServerName of the vhost I configured and can confirm that requests are being properly forwarded to the Mongrel instance. However, there is a page on the site that has a multipart form that is used to upload an image to the server. When I post this form, there is a delay of about 5 minutes and the browser ultimately returns a Bad Request:

        Your browser sent a request that this server could not understand.

    In the error log for my vhost:

        [Tue Sep 22 09:47:57 2009] [error] (70007)The timeout specified has expired: proxy: prefetch request body failed to 127.0.0.1:3000 (127.0.0.1) from ::1 ()

    This same form works fine if I browse directly to the Mongrel instance (http://127.0.0.1:3000). Anybody have any idea what the problem might be and how to fix it? If there is any important information that I neglected to include, post a comment, and I can add to this question.

    Note: Upon further investigation, this appears to be a problem specific to Safari. The form works fine in Firefox.
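
    A hedged experiment (the environment variable and directive are documented mod_proxy_http knobs in Apache 2.2; whether they cure the Safari case is purely an assumption): "prefetch request body failed" happens while mod_proxy reads the upload before forwarding it, so forcing a spooled Content-Length body and a longer proxy timeout is a cheap thing to try in the vhost:

        <Proxy balancer://mongrel_cluster-development>
          BalancerMember http://127.0.0.1:3000
        </Proxy>
        SetEnv proxy-sendcl 1
        ProxyTimeout 300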

  • kickstart: reference floppy drive via %ksappend or %include

    - by virtualeyes
    I'm having trouble getting %ksappend or %include to work when referencing a local floppy drive.

    Booting off the remote server's CD-ROM drive, I am able to load the CentOS 6 minimal install image and then add ks=hd:fd0/ks-jvm.cfg to the boot params to load the kickstart init file from the floppy disk. That works fine.

    The problem is that I want to load a streamlined generic init file off the floppy and then, within the init, %ksappend or %include specific config files relative to the type of server I'm building (JVM, MySQL, Apache, etc.). I do not have DHCP, and networking needs to be specified statically, so %ksappend and %include both fail when attempting to reference http://some-LAN-IP/foo.cfg since networking has not yet been set up.

    The kickstart setup only works when I glob the entire config into a single file, which is great, but ugly and difficult to maintain when I return later, having forgotten the original setup.

    At this point I'd be happy if I could get %ksappend or %include working with a floppy drive reference in the %post section; that would consolidate a lot of common boilerplate that all kickstarts will rely on (sshd_config, rsync config, resolv.conf, and so on).

    Thanks for providing the magic floppy drive reference that is eluding me!
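
    A hedged sketch of the usual workaround (the %pre-then-%include pattern is standard anaconda behaviour; the device, filesystem type and file names are assumptions): mount the floppy in %pre, copy the role-specific fragment to /tmp, and %include it from there, since %pre runs before the %include paths are resolved:

        %pre
        mkdir -p /mnt/floppy
        mount -t vfat /dev/fd0 /mnt/floppy
        cp /mnt/floppy/common.cfg /tmp/common.cfg
        umount /mnt/floppy
        %end

        %include /tmp/common.cfg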

  • KVM recommendations

    - by alex
    I recently tried out a cheaper KVM solution, a dual-monitor one with a USB hub and audio/mic support. It seemed the perfect KVM. However, these are my experiences. When using my keyboard via the PS/2 connection, none of my multimedia keys work (useful, because it has a volume control and my speakers do not, only on a wireless remote control). I plugged the keyboard in via the USB port - and it seemed to work. However, I believe that to switch the hub from PC to Mac you need to use a keyboard combo, which is only supported when the keyboard is plugged in via PS/2. Sometimes the middle mouse button doesn't work when connected via PS/2. The multi-monitor connection is VGA - I just found out that by the looks of things my Mac Mini outputs DVI Digital only (though this is my fault!). The Mac works with 2 screens, but switching and switching back can cause it not to display on the 2nd screen unless I go detect displays again.

    My question is - is there a KVM out there that supports these features?

        - USB keyboard and mouse inputs with full multimedia keys support
        - Dual monitor DVI connections
        - Hotkey to change PCs, and a physical button
        - USB hub
        - Emulates screens attached when switching to a different computer
        - Works with PC and Mac
        - Audio / microphone support

    Does one exist that won't cost the world? So far the only one that seems to support all this (that I can tell) is this one.

    UPDATE: I ended up buying the Aten CS1642. It's expensive, but it seems to work great!

  • What might cause https failure when not specifying SSL protocol?

    - by user35042
    I have a VBScript program that retrieves a web page from a server not under my control. The URL looks something like https://someserver.xxx/index.html. I use this code to create the object that does the page getting:

        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")

    When I wrote my program it had no problem retrieving this page. Recently, the web server serving this page went through an upgrade. Now my program can no longer fetch the page. Some clues:

    Clue 1. I can fetch the web page if I use a browser (I tried Firefox, IE, and Chrome).

    Clue 2. The VBScript code yields this error: The message received was unexpected or badly formatted.

    Clue 3. I can fetch the web page from the command line in certain cases but not in others:

        curl --sslv3 -v -k 'https://someserver.xxx/index.html'   # WORKS!
        curl --sslv2 -v -k 'https://someserver.xxx/index.html'   # WORKS!
        curl -v -k 'https://someserver.xxx/index.html'           # FAILS
        curl --tlsv1 -v -k 'https://someserver.xxx/index.html'   # FAILS

    In the case where I do not specify a protocol I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * error:14077417:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert illegal parameter
        * Closing connection #0

    In the case where I specify --tlsv1 I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094417:SSL routines:SSL3_READ_BYTES:sslv3 alert illegal parameter
        * Closing connection #0

    A. Does anyone have any suggestions or ideas on what might be going on at the web server end? (I am unable to talk to the admins of the web server to find out what they changed.)

    B. Is there a way I can change my VBScript code to work around this issue? Can the SSL version be forced?
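
    For question B, a hedged sketch (Option(9) is the documented WinHttpRequestOption_SecureProtocols index, with flag values &H08 SSL2, &H20 SSL3, &H80 TLS1; that the upgraded server now only accepts an SSLv3-style hello is an assumption drawn from the curl results):

        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
        objWinHttp.Option(9) = &H20                  ' SSL 3.0 only, mirroring curl --sslv3
        objWinHttp.Open "GET", "https://someserver.xxx/index.html", False
        objWinHttp.Send
        WScript.Echo objWinHttp.ResponseText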

  • Can you use the SQLite 64-bit ODBC driver from PowerShell?

    - by Levin Magruder
    Basically, my question is "Does this combo work for anyone?" and "Can you see what I am doing wrong?"

    I have installed the 64-bit ODBC driver for SQLite, downloaded from this page. I am running PowerShell version 2 on Windows 7. In the ODBC configuration I create a system DSN with the name LoveBoat, pointing at a valid file. I don't have any "real" apps to test whether the ODBC connection works, but a simple program I list below works. However, with PowerShell, which is what I want:

        $x = new-object System.Data.Odbc.OdbcConnection("DSN=LoveBoat")
        $x.open()

    That yields the error:

        Exception calling "Open" with "0" argument(s): "The type initializer for 'System.Transactions.Diagnostics.DiagnosticTrace' threw an exception."
        At line:1 char:8
        + $x.Open <<<< ()
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException

    On the other hand, the test program below runs and prints out the expected data.

        using System.Data.Odbc;
        using System.Data;
        using System;

        public class program
        {
            public static void Main(string[] args)
            {
                OdbcConnection conn = new OdbcConnection(@"DSN=LoveBoat");
                conn.Open();
                OdbcCommand comm = new OdbcCommand();
                comm.CommandText = "SELECT Name From Myfavoritetable";
                comm.Connection = conn;
                OdbcDataReader myReader = comm.ExecuteReader(CommandBehavior.CloseConnection);
                while(myReader.Read())
                {
                    Console.WriteLine(myReader[0]);
                }
            }
        }
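
    A hedged PowerShell port of the working C# test (same DSN and table name, no API beyond System.Data.Odbc): if New-Object succeeds but Open() still throws the DiagnosticTrace error here too, the ODBC driver is probably fine and the problem sits in the .NET configuration that powershell.exe loads:

        $conn = New-Object System.Data.Odbc.OdbcConnection("DSN=LoveBoat")
        $conn.Open()
        $cmd = $conn.CreateCommand()
        $cmd.CommandText = "SELECT Name FROM Myfavoritetable"
        $reader = $cmd.ExecuteReader()
        while ($reader.Read()) { $reader[0] }
        $conn.Close()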

  • Bind9 configured to start at boot, has to be started manually

    - by antik
    I've configured bind9 on my system and it works great when it runs. It's currently configured to be run at runlevel 2 by setting:

        $ sudo update-rc.d bind9 enable 2

    This appears to have done its work:

        $ tree -f /etc/rc?.d | grep -e ".*bind9$"
        |-- /etc/rc0.d/K85bind9 -> ../init.d/bind9
        |-- /etc/rc2.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc3.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc4.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc5.d/S15bind9 -> ../init.d/bind9
        |-- /etc/rc6.d/K85bind9 -> ../init.d/bind9

    Booting the system, I believe I am at runlevel 2:

        $ runlevel
        N 2

    Given the above configuration, when the system is rebooted, bind does not come up. Only on occasion, for some reason, can I resolve hostnames immediately after startup. Far more often than not, however, I cannot. I can interrogate the service's status:

        $ sudo /etc/init.d/bind9 status
         * could not access PID file for bind9

    When the service doesn't start, I can start it successfully via a terminal by issuing

        $ sudo /etc/init.d/bind9 start

    and it works great from then on.

    Loopback configuration:

        $ ifconfig lo
        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1872 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1872 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:220205 (220.2 KB)  TX bytes:220205 (220.2 KB)

    Do I have my startup misconfigured? (I'm used to Gentoo, so Ubuntu's model is still a little new to me.) I'm not seeing any log indication of a failed attempt to start at boot in syslog. Is there someplace else I should be looking? What else should I look into to get bind working at startup?
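
    A hedged debugging sketch (standard BIND/Ubuntu paths; the suspicion that named races something at boot is only a guess): rather than inferring the failure from the missing PID file, search the logs for named and run it once in the foreground against the same config so any startup error prints directly:

        grep named /var/log/syslog /var/log/daemon.log | tail -n 50
        sudo named -g -c /etc/bind/named.conf     # foreground, logs to stderr; Ctrl+C to stop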

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery

    In particular I have been reading:

        - SSD and NTFS Compression Speed Increase?
        - Does NTFS compression slow SSD/flash performance?
        - Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up)

    The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes):

        Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes].

    Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore

    It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing

    The beginning of this post has me a bit timid; it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
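
    On the "can compact really decompress in place?" question, a hedged sketch (the switches are standard compact.exe ones; the folder name is a placeholder): compact both compresses and uncompresses a tree as it runs and reports per-file ratios, which makes a small before/after benchmark possible without touching the whole drive:

        :: compress one tree recursively, ignoring errors on in-use files
        compact /c /s:"C:\Program Files\SomeApp" /i
        :: report compression state and per-file ratios
        compact /s:"C:\Program Files\SomeApp"
        :: uncompress the same tree in place
        compact /u /s:"C:\Program Files\SomeApp" /i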

  • apache vhost not working consistently

    - by petrus
    I have a vhost on my webserver whose sole and unique goal is to return the client IP address:

        petrus@bzn:~$ cat /home/vhosts/domain.org/index.php
        <?php
        echo $_SERVER['REMOTE_ADDR'];
        echo "\n"
        ?>

    This helps me troubleshoot networking issues, especially when NAT is involved. As such, I don't always have domain name resolution, and this service needs to work even if queried by its IP address. I'm using it this way:

        petrus@hive:~$ echo "GET /" | nc 88.191.124.41 80
        191.51.4.55
        petrus@hive:~$ echo "GET /" | nc domain.org 80
        191.51.4.55
        router#more http://88.191.124.41/index.php
        88.191.124.254

    However, I found that it wasn't working from at least one computer:

        petrus@seth:~$ echo "GET /" | nc domain.org 80
        petrus@seth:~$
        petrus@seth:~$ echo "GET /" | nc 88.191.124.41 80
        petrus@seth:~$

    What I checked:

    This is not related to IPv6:

        petrus@seth:~$ echo "GET /" | nc -4 ydct.org 80
        petrus@seth:~$
        petrus@hive:~$ echo "GET /" | nc ydct.org 80
        2a01:e35:ee8c:180:21c:77ff:fe30:9e36

    The netcat version is the same (except platform, i386 vs x64):

        petrus@seth:~$ type nc
        nc est haché (/bin/nc)
        petrus@seth:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@seth:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2010-06-26 14:01 /etc/alternatives/nc -> /bin/nc.openbsd
        petrus@hive:~$ type nc
        nc est haché (/bin/nc)
        petrus@hive:~$ file /bin/nc
        /bin/nc: symbolic link to `/etc/alternatives/nc'
        petrus@hive:~$ ls -l /etc/alternatives/nc
        lrwxrwxrwx 1 root root 15 2011-05-26 01:23 /etc/alternatives/nc -> /bin/nc.openbsd

    It works when used without the pipe:

        petrus@seth:~$ nc domain.org 80
        GET /
        2a01:e35:ee8c:180:221:85ff:fe96:e485

    And the piping works at least with a test service (netcat listening on 1234/tcp and output to stdout):

        petrus@bzn:~$ nc -l -p 1234
        GET /
        petrus@bzn:~$
        petrus@seth:~$ echo "GET /" | nc domain.org 1234
        petrus@seth:~$

    I don't know if this issue is more related to netcat or Apache, but I'd appreciate any pointers to troubleshoot it! The IP addresses have been modified but kept consistent for easy reading. bzn is the server, hive is a working client and seth is the client on which I have the issue.
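
    A hedged sketch of one more check (which nc binary each host really resolves, and how it handles stdin closing, are the assumptions here): take the bare "GET /" HTTP/0.9 one-liner out of the equation by sending a complete HTTP/1.0 request and giving the reply a moment to arrive:

        printf 'GET / HTTP/1.0\r\nHost: domain.org\r\n\r\n' | nc -w 3 88.191.124.41 80

    If this works from seth while the echo form doesn't, the difference is in how the two netcats treat end-of-input rather than in Apache.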

  • How does SELinux affect the /home directory?

    - by Matt Solnit
    Hi everyone. I'm migrating a CentOS 5.3 system from MySQL to PostgreSQL. The way our machine is set up is that the biggest disk partition is mounted to /home. This is out of my control and is managed by the hosting provider. Anyway, we obviously want the database files to be on /home for this reason.

    With MySQL, we did the following:

        - Edited my.cnf and changed the datadir setting to /home/mysql
        - Added a new "File type" policy record (I hope I'm using the right terminology) to set /home/mysql(/.*)? to mysqld_db_t
        - Ran restorecon -R /home/mysql to assign the labels

    and everything was good.

    With PostgreSQL, however, I did the following:

        - Edited /etc/init.d/postgresql and changed the PGDATA and PGLOG variables to /home/pgsql/data and /home/pgsql/pgstartup.log, respectively
        - Added a new policy record to set /home/pgsql/pgstartup.log to postgresql_log_t
        - Added a new policy record to set /home/pgsql/data(/.*)? to postgresql_db_t
        - Ran restorecon -R /home/pgsql to assign the labels

    At this point, I still cannot start PostgreSQL. pgstartup.log says:

        # cat pgstartup.log
        postmaster cannot access the server configuration file "/home/pgsql/data/postgresql.conf": Permission denied

    The weird thing is that I don't see any messages related to this in /var/log/messages or /var/log/secure, but if I turn off SELinux, then everything works. I made sure all the permissions are correct (600 for files and 700 for directories), as well as the ownership (postgres:postgres). Can anyone tell me what I am doing wrong? I'm using the Yum repository from commandprompt.com, version 8.3.7.

    EDIT: The reason my question specifically mentions the /home directory is that if I go through all these steps for any other directory, e.g. /var/lib/pgsql2 or /usr/local/pgsql, then it works as expected.
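
    A hedged debugging sketch (paths from the question; the idea that a parent-directory label is the culprit is an assumption - on the targeted policy /home itself carries a different type than the subtree you relabeled): check the context on every path component, not just the relabeled files, and ask the audit log to explain whatever denial it did manage to record:

        ls -dZ /home /home/pgsql /home/pgsql/data /home/pgsql/data/postgresql.conf
        ausearch -m avc -ts today | audit2why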

  • Windows 7 cannot view FAT32 formatted bootable usb drive

    - by NaimK
    I'm having an issue where, when I run bootsect with a command line of "bootsect.exe /nt52 : /force /mbr", Windows 7 (the computer I'm running bootsect on) can no longer view the contents of the USB drive. Explorer tries to look at it and then fails, and I can't even correctly eject the drive; when I try, it does nothing until I yank it out, and then I get some errors.

    Bootsect reports success on writing the volume and the drive data to make it bootable, but it doesn't boot after copying on the necessary files (files from a created ISO; it works when it is created on XP). But this may be because I'm not following the same instructions as when building it on XP, since some of the commands don't seem to always work correctly. The drive is formatted FAT32 (necessary, I think, because I'm installing a custom version of Win XP Embedded).

    Any ideas? Or perhaps a good or automated way to load a USB drive with a custom version of Win XP and make it bootable from Windows 7? I am having some issues; for instance, "ufdprep.exe" rarely works when I run it from Windows 7, and I don't know why.
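
    A hedged recovery sketch (standard diskpart verbs; the disk number is a placeholder, so confirm it with "list disk" first, and note this wipes the stick): if the boot-sector write left the volume unreadable to Windows 7, rebuilding the stick's partition table and FAT32 filesystem from diskpart usually brings it back, after which the bootsect step can be retried:

        diskpart
        list disk
        select disk 1
        clean
        create partition primary
        select partition 1
        active
        format fs=fat32 quick
        assign
        exit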

  • SSL certificates work fine from command line but fails in script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration. Any ideas?
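
    A hedged sketch of two things to try inside the same script (the library-path diagnosis follows from the warnings above; the variable name is the heirloom-mailx/nail one and the CA bundle path is a typical Linux one, and the recipient is a placeholder, so treat all of those as assumptions): keep the CI daemon's Bitnami OpenSSL out of nail's way, and hand nail an explicit CA bundle so "unable to get local issuer certificate" has something to chain to:

        #!/bin/bash
        # drop the Bitnami libs that the CI server exports into its child processes
        export LD_LIBRARY_PATH=
        # point nail at a CA bundle; this line belongs in ~/.mailrc of the CI user:
        #   set ssl-ca-file=/etc/ssl/certs/ca-certificates.crt
        echo "Build Worked!" | nail -A myisp -s 'Build Success' user@example.com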

  • Hyper V Server 2012 Remote Management Using Workgroup

    - by Chris Kolenko
    I'm trying to remotely manage Hyper-V Server 2012 from a Windows 8 PC; both client and server are in a workgroup. I've spent about 3-4 hours trying to get this working with no luck so far, trying the following:

        - Creating a new administrator on the server with the same details as the client, i.e. username / password.
        - Adding an entry to my hosts file to point to the remote IP by server name.
        - Trying HVRemote.
        - Disabling both firewalls.

    The error that I'm getting is "RPC Service Unavailable". How can I accomplish what I'm trying to do?

    Update: Some of the operations in Hyper-V Manager work, e.g. Virtual Switch works. I can open the New VM wizard. I run into an error when creating a new virtual hard disk, though. I've tried creating a VM without a hard disk, which works. Using the new hard disk wizard does not work either. I still cannot see any virtual machines. RPC server unavailable. Unable to establish communication between 'ServerName' and 'ClientName'.
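
    A hedged sketch of the workgroup plumbing the Hyper-V Manager RPC path usually needs on the client side (the server name is the placeholder from the question; HVRemote covers the server side): cache the server credentials explicitly and trust the host for WinRM, then reopen Hyper-V Manager. From an elevated PowerShell prompt on the Windows 8 machine:

        cmdkey /add:ServerName /user:ServerName\Administrator /pass
        Set-Item WSMan:\localhost\Client\TrustedHosts -Value "ServerName" -Concatenate -Force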

  • ubuntu 9.04 pptp broken after a power failure

    - by kevin42
    I have a small Ubuntu 9.04 router set up as a NAT box and a PPTP server. After a power failure, everything except the PPTP server still works. A Windows client gets to "registering your computer on the network" but then says:

        Error 742: The remote computer does not support the required data encryption type.

    I did some research and I think the problem is with the ppp_mppe module. When I try to run 'modprobe ppp_mppe' it hangs indefinitely. What would cause this hang? Any ideas how I can troubleshoot this further? Thanks for the help!

    UPDATE: I am still having the problem; however, I have found some more information. When the first user tries to connect to pptp, the process list shows modprobe sha1 running, and one instance of modprobe ppp_mppe for each connection attempt. If I killall modprobe at this point, the next connection attempt works, and everything is fine until the next reboot. I'm planning to do a clean install at some point in the future, but I'd really like to get to the real cause of this.
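
    A hedged recovery sketch (plain module-init-tools commands; that the power failure left stale module index files behind is only an assumption): rebuild the dependency files, retry the loads verbosely, and watch the kernel log, since the modprobe sha1 seen in the process list suggests the hang starts in the crypto dependency:

        sudo depmod -a
        sudo modprobe -v sha1
        sudo modprobe -v ppp_mppe
        dmesg | tail -n 20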

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines:

        local-machine: it's my desktop running Ubuntu with GNOME
        remote-machine: it's one virtual machine, also running Ubuntu but without X

    On both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that:

        - /some/file exists on remote-machine.
        - I can SSH normally from remote-machine to local-machine using my SSH key without any problems!
        - I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running in DISPLAY :0 (really, it's gnome-terminal).
        - I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work.

    If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>) it does not work - I get the same error as when running from remote-machine.

    I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
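
    A hedged variation on the same trick (assumptions: gnome-session is the desktop session's long-lived process and myuser owns it): instead of shipping env.txt around, read the live session bus address out of a running desktop process on local-machine at connect time:

        myuser@remote-machine:~$ ssh local-machine 'export $(tr "\0" "\n" < /proc/$(pgrep -n -u myuser gnome-session)/environ | grep "^DBUS_SESSION_BUS_ADDRESS="); DISPLAY=:0.0 gedit sftp://remote-machine/some/file'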

  • Getting 400 Bad Request when requesting by server name on nginx/uwsgi

    - by Marc Hughes
    I'm trying to run 2 different sites on nginx via different ports (they each have a load balancer that points to the appropriate port). The first site works perfectly. The second site...

        - If I access http://localhost:81/ it works correctly
        - If I access http://127.0.0.1:81/ it works correctly
        - If I access the hostname http://THEHOSTNAME:81/ it fails with a 400 error
        - If I access the public IP http://x.x.x.x:81/ it fails with a 400 error

    I've set the error_log to info, but the only lines I get in the log when this happens are:

        ==> /var/log/nginx/access.log <==
        10.183.38.141 - - [24/Aug/2014:21:03:28 +0000] "GET / HTTP/1.1" 400 37 "-" "curl/7.36.0" "-"

        ==> /var/log/nginx/error.log <==
        2014/08/24 21:03:28 [info] 7029#0: *5 client 10.183.38.141 closed keepalive connection

    In my uwsgi log, I only see this:

        [pid: 6870|app: 0|req: 87/92] 10.28.23.224 () {32 vars in 380 bytes} [Sun Aug 24 21:05:21 2014] GET / => generated 26 bytes in 1 msecs (HTTP/1.1 400) 2 headers in 82 bytes (1 switches on core 2)

    What should be my next step in debugging this?
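
    A hedged reading of those logs (the framework is an assumption, since the question never names it): the uwsgi line shows the 400 being generated by the application itself rather than by nginx, and a 400 that only appears for "real" hostnames while localhost and 127.0.0.1 pass is the signature of Django's ALLOWED_HOSTS check. If the app is Django 1.5+, this would be the experiment to run:

        # settings.py - hostnames are the placeholders from the question
        ALLOWED_HOSTS = ['localhost', '127.0.0.1', 'THEHOSTNAME', 'x.x.x.x']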

  • How can I filter /var/adm/wtmpx on Solaris 10?

    - by Yanick Girouard
    Some of our Solaris 10 servers are monitored using SiteScope, which uses Telnet to probe certain ports (SSH is one of them) every few minutes. This is creating an insane number of lines in /var/adm/wtmpx, and eventually makes it so big (2.5G+) that we can no longer run the last command, or the uptime command is unable to accurately show the true uptime of the server. The error we get when trying to run the last command is this:

        /var/adm/wtmpx: Value too large for defined data type

    I have found ways we can clean this accounting log using a cron job (with the command /usr/lib/acct/fwtmp), and this works. This is not the issue. I was wondering if there would be a way to simply prevent connections from the monitoring user (in our case, user monsite) from creating entries in this accounting log at all. Is this possible, and if so, how can I do it? I've looked around and searched Google for a while, but couldn't find an answer to this question.

    NOTE: We are very well aware that the monitoring solution we employ is perhaps not the best one, but we cannot change it at this time. Therefore, suggesting that we change it is not pertinent to this question. If you want to read more on the SiteScope monitoring solution we employ for those servers, please see its documentation here and look for Port Monitor, and Connecting to remote UNIX servers, which explains how it works.
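
    For reference, a hedged sketch of the fwtmp-style cleanup mentioned above (fwtmp and its -ic flag are standard Solaris acct tools; that the monitoring user's records can simply be filtered out of the ASCII form without confusing last/uptime is my assumption):

        # convert to ASCII, drop the monitoring user's records, convert back
        /usr/lib/acct/fwtmp < /var/adm/wtmpx > /tmp/wtmpx.ascii
        grep -v '^monsite ' /tmp/wtmpx.ascii > /tmp/wtmpx.filtered
        /usr/lib/acct/fwtmp -ic < /tmp/wtmpx.filtered > /var/adm/wtmpx
        rm /tmp/wtmpx.ascii /tmp/wtmpx.filtered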
